Some ways will be good, some bad, according to Gates.
"The world hasn't had that many technologies that are both promising and dangerous — you know, we had nuclear energy and nuclear weapons," Gates said March 18 at the 2019 Human-Centered Artificial Intelligence Symposium at Stanford University.
"The place that I think this is most concerning is in weapon systems," Gates said at Stanford.
A 2018 report by AI and security technology experts says that digital, physical and political attacks using artificial intelligence could include speech synthesis for impersonation; analysis of human behaviors, moods and beliefs for manipulation; automated hacking; and physical weapons like swarms of micro-drones.
Jeff Bezos has also expressed concerns about killer AI.
"I think autonomous weapons are extremely scary," said Bezos at the George W. Bush Presidential Center's Forum on Leadership in April. The artificial intelligence tech that "we already know and understand are perfectly adequate" to create these kinds of weapons, said Bezos, "and these weapons, some of the ideas that people have for these weapons, are in fact very scary."
Meanwhile, AI also has the potential to do a lot of good for humanity, Gates said, because it can sort vast quantities of data much more proficiently and efficiently than humans.
"When I see it applied to something that without AI, it is just too complex, we never would have seen how that system works, that I feel like, 'Wow, that is a very good thing.'"
For example, the "nature of these technologies to find patterns and insights...is a chance to do something in terms of social science policy, particularly education policy, also, you know, health care quality, health care cost — it's a chance to take systems that are inherently complex in nature," Gates said.
"These systems should help us look not just at correlations but try interventions and see causation, as well. So it's a chance to supercharge the social sciences."