
EU beats Google to the punch in setting strategy for ethical A.I.

Key Points
  • On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence.
  • The Commission's recommendations are another reflection of Europe's efforts to be a leader in regulating big technology companies.
  • The EU is lagging behind North America and Asia in private investments in AI.
Andrus Ansip, vice president for digital at the European Commission (EC), gestures as he speaks during a fireside chat at the CeBIT 2017 tech fair in Hannover, Germany, on Monday, March 20, 2017.

Less than one week after Google scrapped its AI ethics council, the European Union has set out its own guidelines for achieving "trustworthy" artificial intelligence.

On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence, as companies and governments weigh both the benefits and risks of the far-reaching technology.

"The ethical dimension of AI is not a luxury feature or an add-on," said Andrus Ansip, EU vice-president for the digital single market, in a press release Monday. "It is only with trust that our society can fully benefit from technologies."

The EU defines artificial intelligence as systems that show "intelligent behavior," allowing them to analyze their environment and perform tasks with some degree of autonomy. AI is already transforming businesses in a variety of functions, like automating repetitive tasks and analyzing troves of data. But the technology raises a series of ethical questions, such as how to ensure algorithms are programmed without bias and how to hold AI accountable if something goes wrong.

In its release Monday, the European Commission, the executive arm of the European Union, laid out seven key requirements for ethical AI, including maintaining human oversight of the technology, making procedures traceable and putting mechanisms in place to hold systems accountable. The requirements build on a draft set of guidelines released in December of last year.

The Commission's recommendations are another reflection of Europe's efforts to be a leader in regulating big technology companies. The EU implemented landmark data privacy legislation last year, while the Commission has issued record fines on tech giants like Google and Apple.


Part of the EU's push toward regulation is an effort to compensate for its lack of homegrown tech giants and comparatively low tech investment. The Commission said Monday that Europe lags in private investment in AI, with between 2.4 billion and 3.2 billion euros ($2.7 billion to $3.6 billion) invested in the technology in 2016. That compares with as much as $10.9 billion in Asia and $20.9 billion in North America.

The EU's guidelines are among the first government-led efforts to address AI ethics, while companies are also taking initiatives of their own. Two weeks ago, Google launched a board to discuss AI ethics issues, but the tech firm came under fire over the members it selected for the panel. It dissolved the effort after just one week.
