How a Pentagon contract became an identity crisis for Google

Scott Shane, Cade Metz and Daisuke Wakabayashi

WASHINGTON — Fei-Fei Li is among the brightest stars in the burgeoning field of artificial intelligence, somehow managing to hold down two demanding jobs simultaneously: head of Stanford University’s A.I. lab and chief scientist for A.I. at Google Cloud, one of the search giant’s most promising enterprises.

Yet last September, when nervous company officials discussed how to speak publicly about Google’s first major A.I. contract with the Pentagon, Dr. Li strongly advised shunning those two potent letters.

“Avoid at ALL COSTS any mention or implication of AI,” she wrote in an email to colleagues reviewed by The New York Times. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”

Dr. Li’s concern about the implications of military contracts for Google has proved prescient. The company’s relationship with the Defense Department since it won a share of the contract for the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, has touched off an existential crisis, according to emails and documents reviewed by The Times as well as interviews with about a dozen current and former Google employees.

It has fractured Google’s work force, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. The dispute has caused grief for some senior Google officials, including Dr. Li, as they try to straddle the gap between scientists with deep moral objections and salespeople salivating over defense contracts.

The advertising model behind Google’s spectacular growth has provoked criticism that it invades web users’ privacy and supports dubious websites, including those peddling false news. Now the company’s path to future growth, via cloud-computing services, has divided the company over its stand on weaponry. To proceed with big defense contracts could drive away brainy experts in artificial intelligence; to reject such work would deprive it of a potentially huge business.

The internal debate over Maven, viewed by both supporters and opponents as opening the door to much bigger defense contracts, generated a petition signed by about 4,000 employees who demanded “a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Executives at DeepMind, an A.I. pioneer based in London that Google acquired in 2014, have said they are completely opposed to military and surveillance work, and employees at the lab have protested the contract. The acquisition agreement between the two companies said DeepMind technology would never be used for military or surveillance purposes.

About a dozen Google employees have resigned over the issue, which was first reported by Gizmodo. One departing engineer petitioned to rename a conference room after Clara Immerwahr, a German chemist who killed herself in 1915 after protesting the use of science in warfare. And “Do the Right Thing” stickers have appeared in Google’s New York City offices, according to company emails viewed by The Times.

Those emails and other internal documents, shared by an employee who opposes Pentagon contracts, show that at least some Google executives anticipated the dissent and negative publicity. But other employees, noting that rivals like Microsoft and Amazon were enthusiastically pursuing lucrative Pentagon work, concluded that such projects were crucial to the company’s growth and nothing to be ashamed of.

Many tech companies have sought military business without roiling their work forces. But Google’s roots and self-image are different.

“We have kind of a mantra of ‘don’t be evil,’ which is to do the best things that we know how for our users, for our customers and for everyone,” Larry Page told Peter Jennings in 2004, when ABC News named Mr. Page and his Google co-founder, Sergey Brin, “People of the Year.”

The clash inside Google was sparked by the possibility that the Maven work might be used for lethal drone targeting. And the discussion is made more urgent by the fact that artificial intelligence, one of Google’s strengths, is expected to play an increasingly central role in warfare.

Jim Mattis, the defense secretary, made a much-publicized visit to Google in August — shortly after stopping in at Amazon — and called for closer cooperation with tech companies.

“I see many of the greatest advances out here on the West Coast in private industry,” he said.

Dr. Li’s comments were part of an email exchange started by Scott Frohman, Google’s head of defense and intelligence sales. Under the header “Communications/PR Request — URGENT,” Mr. Frohman noted that the Maven contract award was imminent and asked for direction on the “burning question” of how to present it to the public.

A number of colleagues weighed in, but generally they deferred to Dr. Li, who was born in China, immigrated to New Jersey with her parents as a 16-year-old who spoke no English and has climbed to the top of the tech world.

Dr. Li said in the email that the final decision would be made by her boss, Diane Greene, the chief executive of Google Cloud. But Dr. Li thought the company should publicize its share of the Maven contract as “a big win for GCP,” Google Cloud Platform.

She also advised being “super careful” in framing the project, noting that she had been speaking publicly on the theme of “Humanistic A.I.,” a topic she would address in a March op-ed for The Times.

“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she wrote in the email.

Asked about her September email, Dr. Li issued a statement: “I believe in human-centered AI to benefit people in positive and benevolent ways. It is deeply against my principles to work on any project that I think is to weaponize AI.”

As it turned out, the company did not publicize Maven. Its work as a subcontractor came to public attention only when employees opposed to it began protesting on Google’s robust internal communications platforms.

The company promised employees that it would produce a set of principles to guide its choices in the ethical minefield of defense and intelligence contracting. Google told The Times on Tuesday that the new artificial intelligence principles under development precluded the use of A.I. in weaponry. But it was unclear how such a prohibition would be applied in practice.

At a companywide meeting last Thursday, Sundar Pichai, the chief executive, said Google wanted to come up with guidelines that “stood the test of time,” according to employees, who said they expect the principles to be announced inside Google in the next few weeks.

The polarized debate about Google and the military may leave out some nuances. Better analysis of drone imagery could reduce civilian casualties by improving operators’ ability to find and recognize terrorists. The Defense Department will hardly abandon its advance into artificial intelligence if Google bows out. And military experts say China and other developed countries are already investing heavily in A.I. for defense.

But skilled technologists who chose Google for its embrace of benign and altruistic goals are appalled that their employer could eventually be associated with more efficient ways to kill.

Google’s unusual culture is reflected in its company message boards and internal social media platforms, which encourage employees to speak out on everything from Google’s cafeteria food to its diversity initiatives. But even within this free-expression workplace, longtime employees said, the Maven project has roiled Google beyond anything in recent memory.

When news of the deal leaked out internally, Ms. Greene spoke at the weekly companywide T.G.I.F. meeting. She explained that the system was not for lethal purposes and that it was a relatively small deal worth “only” $9 million, according to two people familiar with the meeting.

That did little to tamp down the anger, and Google, according to the invitation email, decided to hold a discussion on April 11 representing a “spectrum of viewpoints” involving Ms. Greene; Meredith Whittaker, a Google A.I. researcher who is a leader in the anti-Maven movement; and Vint Cerf, a Google vice president who is considered one of the fathers of the internet for his pioneering technology work at the Defense Department.

Because there was so much interest, the group held the debate three times in one day so that Google employees in different regions around the world could watch on video.

According to employees who watched the discussion, Ms. Greene held firm that Maven was not using A.I. for offensive purposes, while Ms. Whittaker argued that it was hard to draw a line on how the technology would be used.

Last Thursday, Mr. Brin, the company’s co-founder, responded to a question at a companywide meeting about Google’s work on Maven. According to two Google employees, Mr. Brin said he understood the controversy and had discussed the matter extensively with Mr. Page and Mr. Pichai. However, he said he thought that it was better for peace if the world’s militaries were intertwined with international organizations like Google rather than working solely with nationalistic defense contractors.

Google and its parent company, Alphabet, employ many of the world’s top artificial intelligence researchers. Some researchers work inside an A.I. lab called Google Brain in Mountain View, Calif., and others are spread across separate groups, including the cloud computing business overseen by Ms. Greene, who is also an Alphabet board member.

Many of these researchers have recently arrived from the world of academia, and some retain professorships. They include Geoff Hinton, a Briton who helps oversee the Brain lab in Toronto and has been open about his reluctance to work for the United States government. In the late 1980s, Mr. Hinton left the United States for Canada in part because he was reluctant to take funding from the Department of Defense.

Jeff Dean, one of Google’s longest-serving and most revered employees, who now oversees all A.I. work at the company, said at a conference for developers this month that he had signed a letter opposing the use of machine learning for so-called autonomous weapons, which would identify targets and fire without a human pulling the trigger.

DeepMind, the London A.I. lab, is widely considered to be the most important collection of A.I. talent in the world. It now operates as a separate Alphabet company, though the lines between Google and DeepMind are blurred.

DeepMind’s founders have long warned about the dangers of A.I. systems. At least one of the lab’s founders, Mustafa Suleyman, has been involved in policy discussions involving Project Maven with the Google leadership, including Mr. Pichai, according to a person familiar with the discussions.

Certainly, any chance that Google could move quietly into defense work with no public attention is gone. Nor has Dr. Li’s hope to keep A.I. out of the debate proved realistic.

“We can steer the conversation about cloud,” Aileen Black, a Google executive in Washington, cautioned Dr. Li in the September exchange, “but this is an AI specific award.” She added, “I think we need to get ahead of this before it gets framed for us.”