Google promises not to use A.I. for weapons or surveillance, for the most part

Key Points
  • CEO Sundar Pichai just published Google's list of ethical artificial intelligence principles.
  • He said that Google won't use the tools for weapons or surveillance, with some caveats.


Google says it won't use its artificial intelligence technology for weapons or surveillance, with a few caveats, according to a list of ethical principles published by CEO Sundar Pichai.

The company will still work with the government and military in other areas, including cybersecurity and training, and it will only avoid surveillance that violates "internationally accepted norms," Pichai writes. Google also won't work on technologies that are likely to cause harm, unless it decides that "the benefits substantially outweigh the risks."

The guidelines come after months of internal controversy stemming from Google's partnership with the Pentagon to use AI to analyze drone footage. Several thousand employees signed a petition urging Pichai to keep Google out of the "business of war" and dozens resigned in protest. Google eventually told employees that it would not renew the contract when it expires next year.

Through the firestorm, Google executives reportedly promised to publish a list of ethical principles to guide the company's future projects. Pichai writes that the document will act as "concrete standards" informing Google's research, product development and business decisions.

No Google A.I. in weapons

Notably, the standards leave room for Google's Cloud business to bid for contracts with government agencies or the military. In April, Defense One reported that the company was quietly pursuing a large, competitive cloud contract with the Defense Department.

In addition to outlining which AI applications it won't pursue, Google highlighted that it believes that AI should "avoid creating or reinforcing unfair bias" and provide privacy safeguards.

The company also says that it will "work to limit potentially harmful or abusive applications" of its AI technologies. In a previous version of the guidelines, however, the company wrote much more explicitly that it would "reserve the right to prevent or stop uses of our technology if we become aware of uses that are inconsistent with these principles."

The company blunted its language because it can't control all aspects of its technology, for example its open source AI software TensorFlow, a company spokesperson said. But it can try to wield its influence in the open source community, and can more directly control other tools, like software development kits, through more restrictive licensing agreements.

According to Pichai, here is the full list of applications Google says it will not pursue with its AI technologies:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

You can read the full set of principles here.