Microsoft, which is investing heavily in AI, is part of a growing number of technology companies calling for regulation around AI.
"I think China cares as deeply about AI ethics as the United States. To assume that somehow the Chinese people and the Chinese government are also not going to worry about the implications of AI run wild would be a problem," Nadella said at the World Economic Forum in Davos.
"And so therefore I think both the United States and China and the European Union having a set of principles that govern what this technology can mean in our societies and the world at large is probably more in need today than it was in the last 30 years."
However, Nadella's comments come as the U.S. increases its scrutiny of Chinese AI firms. Some of China's biggest AI companies were put on a U.S. blacklist after being accused of involvement in human rights abuses against minority Muslims in northwest China.
Meanwhile, China's surveillance firms continue to expand globally as China aims to be the world leader in artificial intelligence by 2030.
Nadella said regulation "does have a real place here," particularly rules at the "time of use" of AI, like facial recognition.
"I think we should be thinking a lot harder around regulation at the time of use. Because facial recognition or object recognition by itself is not good or bad; it is just a technology. It's the use case that's good or bad. So we have to be able to sort of even think about regulation more at the run time, more at the design time," Nadella said.
For its part, China has published "principles" around AI. The state-backed Beijing Academy of Artificial Intelligence released guidelines last year covering the research and development, use, and governance of AI.
"Cooperation should be actively developed to establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global and comprehensive AI governance ecosystem, so as to avoid malicious AI race, to share AI governance experience and to jointly cope with the impact of AI with the philosophy of 'Optimizing Symbiosis,'" the principles said.
Other technology leaders have made similar calls. Google CEO Sundar Pichai recently argued for AI regulation in an op-ed.

"Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it," Pichai wrote in the Financial Times.
"Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities," he added.