A research group made up of academics from across the globe has published a paper arguing that "cross-cultural cooperation" on AI ethics and governance is vital if the technology is to "bring about benefit worldwide."
The experts — from Cambridge University's Leverhulme Centre for the Future of Intelligence, Peking University's Center for Philosophy and the Future of Humanity, and the Beijing Academy of Artificial Intelligence — specifically want to see cooperation across different domains, disciplines, and cultures, as well as different nations.
"Such cooperation will enable advances to be shared across different parts of the world, and will ensure that no part of society is neglected or disproportionately negatively impacted by AI," wrote researcher Jess Whittlestone in a blog post this week that summarizes the paper.
"Without such cooperation, competitive pressures between countries may also lead to underinvestment in safe, ethical, and socially beneficial AI development, increasing the global risks from AI."
AI is poised to change the world in the coming decades as machines become increasingly competent at a range of tasks, from driving cars to discovering new drugs.
But some are concerned that AI could end up being a dangerous technology if it is developed in isolated silos across different labs in different countries.
In the near term, there's a genuine risk that AI could be used in warfare to power autonomous weapons, and in the long term, some have speculated that "superintelligent" machines could decide humans are no longer necessary and wipe them out altogether, though there is no evidence to suggest this will ever happen.
Political and business leaders are aware of the competitive edge that AI stands to give them. However, narratives that frame AI as a race between Eastern and Western nations "threaten to seriously undermine any prospects for international cooperation," according to Whittlestone.
In the paper, the authors suggest that mistrust between regions is one of the biggest barriers to greater cooperation on AI.
"Academic communities are particularly well-suited to building greater mutual understanding between regions and cultures, due to their tradition of free-flowing, international, and intercultural exchange of ideas," wrote Whittlestone.
Seán Ó hÉigeartaigh, another of the paper's authors, told CNBC that he's been trying to bring together people working in machine learning with people working in cybersecurity, policy, law, governance, and risk.
He says the aim is to get them to think through how work being done in one area can affect the challenges people face in another. "By bringing together people who think about these things from different angles, we're able to figure out what might be properly plausible scenarios that are worth trying to mitigate against," he said.