WASHINGTON – Google chief executive Sundar Pichai quietly paid the Pentagon a visit during his trip to Washington last week, seeking to smooth over tensions roughly four months after employee outrage prompted the tech giant to sever a defense contract to analyze drone video, according to two people familiar with the meeting.
Pichai met with a group of civilian and military leaders, mostly from the office of the Undersecretary of Defense for Intelligence, the Defense Department directorate that oversees the artificial-intelligence drone system known as Project Maven, according to the people, who spoke on the condition of anonymity.
Google had worked with the Defense Department to develop Project Maven, which uses AI to automatically tag cars, buildings and other objects in videos recorded by drones flying over conflict zones. But in June, the tech giant said it would not renew its contract following an uprising from employees, who criticized the work as helping the military track and kill with greater efficiency.
A Defense Department spokesperson said, “We do not comment on the details of private meetings. Department leaders routinely meet with industry partners to discuss innovative technologies. These meetings support continuing dialogue aimed at solving future technology challenges.”
A spokeswoman for Google did not immediately respond to a request for comment.
The secrecy surrounding Pichai’s visit highlights one of the tech giant’s most challenging binds: how to retain Silicon Valley workers angered by the moral implications of developing warfare technology while also staying in the running for Washington’s lucrative military contracts. Google has said it would continue to work with defense leaders on “cybersecurity, training, military recruitment, veterans’ health care, and search and rescue,” as Pichai wrote in a blog post this summer. Google also has bid for one of the Pentagon’s most lucrative cloud-computing contracts.
Google’s change of heart over Project Maven, its first big AI partnership with the Pentagon, has become a key source of tension between the tech giant and military officials, who felt that Google should have done a better job communicating that the technology could help keep servicemembers out of harm’s way, according to a source familiar with the work.
“Without a doubt this has caused a lot of consternation inside the DOD,” said Bob Work, the former deputy secretary of defense who helped launch Project Maven last year. “Google created a big moral hazard for itself by saying it doesn’t want to use any of its AI technology to take human life. But they didn’t say anything about the lives that could be saved.”
Google’s decision to terminate its relationship with Project Maven also has drawn sharp rebukes from congressional lawmakers, particularly Republicans, who were the focus of Pichai’s rare, two-day swing through Washington last week.
Still, lawmakers are likely to press Pichai on the matter when he testifies at a yet-unscheduled House hearing expected later this year. In September, Sen. Tom Cotton, R-Ark., blasted Google at a different hearing – where company executives declined to appear – because Google had ceased aiding the government on AI tools “that are designed not just to protect our troops, and help them fight in our country’s wars, but to protect civilians as well.”
Further troubling Cotton and his peers are reports that Google is “working to develop a new search engine that would satisfy the Chinese Communist Party’s censorship standards,” he said at the September hearing. Over the summer, Cotton and three other GOP lawmakers similarly criticized Google for aiding Chinese companies while withdrawing from partnerships with the DOD.
Project Maven marked the first known use of advanced AI in an operational combat zone, inspiring a broader debate over the potential dangers of deploying powerful machine-learning technology and “weaponized AI” into a theater of war.
Project Maven, known officially as the Algorithmic Warfare Cross-Functional Team, relies on the same style of “computer vision” techniques now key to consumer image-recognition software, including Google’s.
Marine Corps Col. Drew Cukor, a Project Maven chief, said last year the AI would complement human analysts in performing the time-consuming task but would “not be selecting a target [in combat] … any time soon.” Military officials say AI-tagged drone footage could offer crucial intelligence needed to pinpoint terrorists and reduce civilian casualties.
Google faced a widespread backlash earlier this year over its involvement in Project Maven, including from more than 3,000 workers who addressed an open letter to Pichai saying “Google should not be in the business of war.” Critics said the AI could be used to target more devastating drone strikes and marked a concerning step toward “killer robots” and other lethally autonomous machines.
In June, Google said it would not extend its 18-month DOD subcontracting deal when it expires in March. It also unveiled a set of AI ethics principles, including an internal ban on developing AI that could be used in weapons or “to cause overall harm.” The guidelines were general and did not detail how they would be enforced in practice.
Gregory Allen, an adjunct fellow for the Washington think tank Center for a New American Security, said Google’s sudden reversal was an embarrassing communications fiasco for an important DOD initiative and threatened to sour the company’s “very promising prior courtship.”
“Google’s credibility as a company you can trust with vital national security work was badly hurt by the Maven pullout,” Allen said.