Europe should make its A.I. regulations more sweeping, prominent experts urge

Key Points
  • More than 50 expert and institutional signatories are urging European lawmakers to include general purpose AI in the bloc's regulations, rather than limiting them to a narrower definition of high-risk AI.
  • Regulators should consider how AI is developed, including how data was collected and who was involved in collecting it and training the technology, says signatory Mehtab Khan.
  • The group suggests that European policymakers take steps to future-proof the legislation, such as by avoiding restricting the rules to certain types of products, like chatbots. And they warn that developers should not be able to shirk liability by pasting on a standard legal disclaimer.
Timnit Gebru in 2018. (Kimberly White | Getty Images)

A group of prominent artificial intelligence experts called on European officials to pursue even broader regulations of the technology in the European Union's AI Act.

In a policy brief released Thursday, more than 50 expert and institutional signatories advocate for the EU to include general purpose AI, or GPAI, in its forthcoming regulations, rather than limiting the rules to a narrower definition of high-risk AI.

The group, which includes institutions like the Mozilla Foundation and experts like Timnit Gebru, says that even though general purpose tools might not be designed with high-risk uses in mind, they could be used in different settings that make them higher risk. The group points to generative AI tools that have risen in popularity over the past few months, like ChatGPT.

Regulators should consider how AI is developed, including how data was collected, who was involved in collecting it and training the technology, and more, according to Mehtab Khan, a signatory and resident fellow and lead at the Yale/Wikimedia Initiative on Intermediaries and Information.

"GPAI should be regulated throughout the product cycle and not just the application layer," Khan said, adding that simple labels for high and low risk "are just inherently not capturing the dynamism" of the technology.

The group suggests that European policymakers take steps to future-proof the legislation, such as by avoiding restricting the rules to certain types of products, like chatbots. And they warn that developers should not be able to shirk liability by pasting on a standard legal disclaimer.

Sarah Myers West, managing director of the AI Now Institute, who helped spearhead the policy brief, said mainstream generative AI tools like ChatGPT rose to prominence after the EU's earlier draft of the legislation was written.

"That sort of wave of attention toward generative AI I think gave this clause greater visibility," Myers West said. "But even before that, there was a wide category of types of artificial intelligence that were not tooled for a particular purpose that would have similarly received this this kind of exemption."

"The EU AI Act is poised to become, as far as we're aware, the first omnibus regulation for artificial intelligence," Myers West said. "And so given that, it's going to become the global precedent. And that's why it's particularly critical that it fields this category of AI well, because it could become the template that others are following."

WATCH: How Nvidia grew from gaming to A.I. giant now powering ChatGPT
