Artificial intelligence is projected to shape the world's future as everything from cars to legal systems embraces truly smart technologies.
Some science fiction has predicted that artificial intelligence could one day take over the world and turn on humans, but experts warn there's a far more immediate risk, so-called biased AI. That is, when programs — which are theoretically neutral and without prejudice — rely on faulty algorithms or insufficient data to develop unfair biases against certain people.
Recent cases show that such a concern may be a problem of the present.
Facial recognition technology, for example, has made headlines for not being racially inclusive. Facial recognition software produced errors on nearly 35 percent of images of darker-skinned women, according to a study by the Massachusetts Institute of Technology. By comparison, lighter-skinned males faced an error rate of only around 1 percent.
Bias was also at the center of Google's decision to block gender-based pronouns from its Smart Compose feature — one of its AI-enabled innovations.
The potential problems of AI prejudice go much further, though, and demonstrate how some of the biases held in the real world can influence technology.
AI programs are made up of algorithms, or a set of rules that help them identify patterns so they can make decisions with little intervention from humans. But algorithms need to be fed data in order to learn those rules — and, sometimes, human prejudices can seep into the platforms.
"Having access to large and diverse data sets helps to train algorithms to maintain the principle of fairness," according to Antony Cook, Microsoft's associate general counsel for Corporate, External and Legal Affairs for Asia.
However, "the issue of bias is not solely addressed by the generation of large amounts of data but also how that data is used by AI systems," he said.
Olly Buston, CEO of the consulting think tank Future Advocacy, explained that machines often reflect human biases.
"For example, if an algorithm used to shortlist people for senior jobs is trained on data that reflects the fact that historically, more senior jobs have been held by men, then the algorithm's future behavior may reflect this, locking in the glass ceiling," said Buston.
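Buston's shortlisting scenario can be sketched in a few lines of Python. This is a hypothetical illustration, not any real hiring system: a naive model that scores candidates by historical base rates simply reproduces whatever imbalance is present in its training data.

```python
from collections import Counter

# Invented historical senior-role records: 90% of past senior hires were men.
history = ["man"] * 90 + ["woman"] * 10

# "Training": estimate a score for each group from past outcomes alone.
counts = Counter(history)
total = sum(counts.values())
learned_score = {group: counts[group] / total for group in counts}

# "Prediction": two equally qualified candidates get very different scores,
# because the model has learned the historical imbalance, not merit.
print(learned_score["man"])    # 0.9
print(learned_score["woman"])  # 0.1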
Experts have called for more diversity in the AI field, saying it would help overcome biases.
"When we're talking about bias, we're worrying first of all about the focus of the people who are creating the algorithms," Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum, told CNBC earlier this year. "We need to make the industry much more diverse in the West."
Stakeholders from various fields need to constantly engage in discussions of what constitutes inclusive AI — a human concern that should not be handled only by experts in technology, said Microsoft's Cook.
A "multi-disciplinary approach" is needed "to make sure that you've got the humanists working with the technologists. That way we'll get the most inclusive AI," he said. "Human decisions are not based on ones and zeros ... (but on) social context and social background."
The debate around the right ethical rules to apply to AI should involve technology companies, governments and civil society, Cook added.
Biased AI can have serious life-altering consequences for individuals.
It was reported in 2016 that the COMPAS program — or Correctional Offender Management Profiling for Alternative Sanctions — used by U.S. judges in some states to help decide parole and other sentencing conditions, had racial biases.
"COMPAS uses machine learning and historical data to predict the probability that a violent criminal will re-offend. Unfortunately it incorrectly predicts black people are more likely to re-offend than they do," according to a paper by Toby Walsh, an artificial intelligence professor at the University of New South Wales.
While biases in AI exist, it is important that certain decisions are not left to software, Walsh told CNBC.
That's especially true when such decisions can directly harm a person's life or liberty, he added.
"If we work hard at finding mathematically precise definitions of ethics, we may be able to deal with bias in AI and so be able to hand over some of these decisions to fairer machines," Walsh said. "But we should never let a machine decide who lives and who dies."
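One example of the kind of "mathematically precise" definition Walsh alludes to is demographic parity, which compares the rates at which two groups receive favorable decisions. A minimal sketch, with invented decision data and an invented tolerance:

```python
def selection_rate(decisions):
    """Fraction of favorable (1) decisions, e.g. 'grant parole'."""
    return sum(decisions) / len(decisions)

# Hypothetical decisions (1 = favorable) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% favorable
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # 30% favorable

# Demographic parity asks for this gap to be small (e.g. under 0.1).
disparity = abs(selection_rate(group_a) - selection_rate(group_b))
print(round(disparity, 2))  # 0.5
```

A gap this large would flag the system for review; other formal criteria, such as comparing false positive rates across groups, target the specific failure reported in the COMPAS case.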
AI software is only as good as the data it is trained to analyze. If a company only plugs in data points about one part of the world, then the resulting program will not be able to function as well in other places.
"There is a risk that an AI that is trained on data from one population will perform less well when applied to data from a different population," Buston said.
For example, he said, there is a chance "some AI apps that are developed in Europe or America will perform less well in Asia."
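The population-shift risk Buston describes can be illustrated with a toy threshold classifier (all numbers invented): a decision boundary tuned on one population's data degrades when applied to a population whose distribution differs.

```python
def accuracy(samples, threshold):
    """samples: (value, label) pairs; predict label 1 when value >= threshold."""
    correct = sum((value >= threshold) == bool(label) for value, label in samples)
    return correct / len(samples)

# "Training" population: positives sit above 5.
pop_a = [(2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1)]
# A different population, where the true boundary sits near 8 instead.
pop_b = [(5, 0), (6, 0), (7, 0), (9, 1), (10, 1), (11, 1)]

threshold = 5  # tuned on pop_a only
print(accuracy(pop_a, threshold))  # 1.0
print(accuracy(pop_b, threshold))  # 0.5
```

The model is perfect on the population it was built for and no better than chance on the other, which is the mechanism behind an app developed in one region underperforming in another.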
Meanwhile, one expert noted that Asian countries' increasing progress in AI means more examples of bias problems are likely to arise from the region.
"So you could imagine, for an example, data that comes from China and India — with a combined population of 2.6 billion people, when that data becomes widely available and used — there will be biases that we might not see in the West but may be very salient or very sensitive in our part of the world," said Eugene Tan Kheng Boon, associate professor of law at Singapore Management University.
— CNBC's Saheli Roy Choudhury contributed to this report.