Humanity needs to better prepare for the rise of dangerous artificial intelligence.
So says a report from 26 technology experts at leading artificial intelligence and security organizations, including the non-profit OpenAI, the University of Cambridge's Center for the Study of Existential Risk, the University of Oxford's Future of Humanity Institute, the bipartisan nonprofit Center for a New American Security and the nonprofit Electronic Frontier Foundation, among others. (On Tuesday, OpenAI announced that co-founder Elon Musk would depart its board but continue to donate to and advise the organization.)
The report, titled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," was published Tuesday.
"Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously," the report says, and the experts argue that is cause for concern.
As AI technology continues to become more powerful and plentiful, security attacks will become less expensive and more easily carried out, more precisely targeted and harder to trace, the report says.
"For many decades hype outstripped fact in terms of AI and machine learning. No longer," says Seán Ó hÉigeartaigh, executive director of Cambridge University's Center for the Study of Existential Risk and one of the co-authors, in a written statement.
Such attacks fall into three categories of violations: digital, physical and political, according to the report.
AI will automate tasks involved in digital cyberattacks, making those offensives easier to carry out, larger in scale and more efficient, the report says. The authors also expect new varieties of attacks, such as the use of speech synthesis for impersonation and automated hacking.
In the physical realm, using AI to automate tasks involved in drone and autonomous weapon attacks "may expand the threats associated with these attacks," the report says. Further, the report predicts new attacks that "subvert" the signals to autonomous vehicles, causing them to crash. There could also be weapons such as a "swarm of a thousand micro-drones," reminiscent of the Netflix show "Black Mirror." In one popular episode, bee-like drones go on a murderous rampage.
Thirdly, using AI to automate tasks involved in political security may expand existing surveillance, persuasion and deception threats, the report says. The authors expect new types of attacks based on the ability to analyze human behaviors, moods and beliefs. "These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates," the report says.
To better protect against the rise of ill-intentioned AI, the report says, policymakers ought to work closely with technical specialists to understand the potential applications of machine intelligence. Technical developers, in turn, ought to proactively reach out to appropriate leaders when they recognize that the technology they are developing could have harmful applications.
Further, a set of best practices should be developed for AI, and the community should work to expand the range of people informed about and involved in these discussions, the report says.
"It is often the case that AI systems don't merely reach human levels of performance but significantly surpass it," says Miles Brundage, a research fellow at Oxford University's Future of Humanity Institute and one of the co-authors, in a written statement. "It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labor."
Addressing the future potential of nefarious AI is a serious task, the report warns. "The challenge is daunting and the stakes are high."
Elon Musk has been a vocal proponent of the potential danger of artificial intelligence.
"I have exposure to the most cutting edge AI, and I think people should be really concerned by it," Musk said at the National Governors Association in July. "AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole."
At the time, Musk advocated for preemptive regulatory control.
And in August, Musk tweeted that AI poses "vastly more risk than North Korea."
Currently, not enough is being done to prepare for potential dangers, the report says.
"While many uncertainties remain, it is clear that AI will figure prominently in the security landscape of the future, that opportunities for malicious use abound, and that more can and should be done," the report says.
"As AI systems increase in capability, they will first reach and then exceed human capabilities in many narrow domains, as we have already seen with games like backgammon, chess, Jeopardy!, Dota 2, and Go and are now seeing with important human tasks like investing in the stock market or driving cars. Preparing for the potential malicious uses of AI associated with this transition is an urgent task."
Of course, a future with AI is not all terrifying. The report's authors note that the development of AI has already produced beneficial applications in wide use across society, including automatic speech recognition, machine translation, spam filters and search engines.
The authors also highlight several promising applications of artificial intelligence currently under development, including driverless cars, digital assistants for medical professionals and AI-enabled drones for disaster relief efforts.
"AI can be our friend," Microsoft co-founder Bill Gates said, speaking alongside his wife, Melinda, in a conversation with "Hamilton" composer Lin-Manuel Miranda at Hunter College in New York City earlier in February. "AI is just the latest in technologies that allow us to produce a lot more goods and services with less labor. And overwhelmingly, over the last several hundred years, that has been great for society."
One side effect of the increased productivity resulting from artificial intelligence is more free time, Gates says. "Certainly we can look forward to the idea that vacations will be longer at some point," he told FOX Business Network in January at the World Economic Forum. "The purpose of humanity is not just to sit behind a counter and sell things. More free time is not a terrible thing."