Google says it won't use its artificial intelligence technology for weapons or surveillance, with a few caveats, according to a list of ethical principles published by CEO Sundar Pichai.
The company will still work with the government and military in other areas, including cybersecurity and training, and it will only avoid surveillance that violates "internationally accepted norms," Pichai writes. Google also won't work on technologies that are likely to cause harm, unless it decides that "the benefits substantially outweigh the risks."
The guidelines come after months of internal controversy stemming from Google's partnership with the Pentagon to use AI to analyze drone footage. Several thousand employees signed a petition urging Pichai to keep Google out of the "business of war" and dozens resigned in protest. Google eventually told employees that it would not renew the contract when it expires next year.
Through the firestorm, Google executives reportedly promised to publish a list of ethical principles to guide the company's future projects. Pichai writes that this document will act as "concrete standards" that inform Google's research, product development and business decisions.
Notably, the standards leave room for Google's Cloud business to bid for contracts with government agencies or the military. In April, Defense One reported that the company was quietly pursuing a large, competitive cloud contract with the Defense Department.
In addition to outlining which AI applications it won't pursue, Google highlighted that it believes that AI should "avoid creating or reinforcing unfair bias" and provide privacy safeguards.
The company also says that it will "work to limit potentially harmful or abusive applications" of its AI technologies. In a previous version of the guidelines, however, the company wrote much more explicitly that it would "reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles."
The company blunted its language because it can't control all aspects of its technology, for example its open source AI software TensorFlow, a company spokesperson said. But it can try to wield its influence in the open source community, and can more directly control other tools, like software development kits, through more restrictive licensing agreements.
Here is the full list of AI applications Google says it will not pursue, as written by Pichai:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.
You can read the full set of principles here.