Artificial intelligence is projected to shape the world's future as everything from cars to legal systems comes to rely on the technology.
Some science fiction has predicted that artificial intelligence could one day take over the world and turn on humans, but experts warn of a far more immediate risk: so-called biased AI. This occurs when programs that are theoretically neutral and without prejudice rely on flawed algorithms or insufficient training data and end up treating certain groups of people unfairly.
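To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration (the synthetic data, the group names, the numbers): a classifier trained mostly on one group's data performs measurably worse on an underrepresented group, even though the learning algorithm itself contains no explicit prejudice.

```python
# Illustrative sketch only, not any real system: a "neutral" classifier
# trained on data where one group is underrepresented tends to make
# more errors for that group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data. Each group's two classes sit around
    # slightly shifted centers, so a single decision boundary cannot
    # fit both groups equally well.
    X0 = rng.normal(loc=[shift, 0.0], scale=1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=[shift + 2.0, 2.0], scale=1.0, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate each group separately on fresh samples of equal size:
# the error rate for the underrepresented group comes out higher.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    err = np.mean(model.predict(Xt) != yt)
    print(f"{name}: error rate {err:.1%}")
```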
Recent cases suggest the concern is already a present-day problem.
In one example, facial recognition technology has made headlines for not being racially inclusive. According to a study by the Massachusetts Institute of Technology, facial recognition software misclassified images of darker-skinned women nearly 35 percent of the time, while the error rate for lighter-skinned men was around 1 percent.
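The kind of analysis behind such a finding can be sketched in a few lines: compute a classifier's error rate separately for each demographic subgroup rather than reporting a single overall number. The subgroup labels and toy predictions below are illustrative, not the study's actual data.

```python
# A hedged sketch of a disaggregated audit: per-subgroup error rates
# for a hypothetical gender classifier. Records are invented examples.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label)."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("darker_female", "female", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]
for group, rate in error_rates_by_group(records).items():
    print(f"{group}: {rate:.0%} error")
```

A single aggregate error rate over all six records would mask the gap entirely, which is why disaggregated reporting matters.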
Bias was also at the center of Google's decision to block gendered pronouns from Smart Compose, one of the company's AI-enabled features.
The potential problems of AI prejudice go much further, though, demonstrating how biases held in the real world can carry over into technology.

