Artificial intelligence has a bias problem and the way to fix it is by making the tech industry in the West "much more diverse", according to the head of AI and machine learning at the World Economic Forum.
Just two to three years ago, there were very few people raising ethical questions around the use of AI, Kay Firth-Butterfield told CNBC at the World Economic Forum's Annual Meeting of the New Champions in Tianjin, China.
But ethical questions have now "come to the fore," she said. "That's partly because we have (the General Data Protection Regulation), obviously, in Europe, thinking about privacy, and also because there have been some obvious problems with some of the AI algorithms."
Theoretically, machines are supposed to be unbiased. But there have been instances in recent years showing that even algorithms can be prejudiced.
A few years ago, Google was criticized after its image recognition algorithm identified African Americans as "gorillas." Earlier this year, a Wired report said that Google had yet to fix the issue, and had simply blocked its image recognition software from identifying gorillas altogether.
"As we've seen more and more of these things crop up, then the ethical debate around artificial intelligence has become much greater," Firth-Butterfield said. "One of the things that we're trying to do at the World Economic Forum is really find a way of ensuring that AI grows exponentially, as it is doing, for the benefit of humanity, whilst mitigating some of these ethical considerations in privacy, bias, transparency and accountability."
Experts have said that biases sometimes creep into programs because the human biases of their creators influenced the algorithms as they were being written.