The hottest trend among Silicon Valley executives is embracing regulation. As long as the rules are written on their terms.
Sundar Pichai, the new chief executive of Google’s parent company Alphabet, was the latest tech titan to jump on board: He wrote in the Financial Times yesterday that “there’s no question in my mind that artificial intelligence needs to be regulated.”
“There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition,” Pichai wrote. “While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.”
The op-ed, which was released the same day that Pichai gave a policy speech in Brussels, signals that Google wants a seat at the table as Europe aims to develop legislation that would address ethics in artificial intelligence. Pichai said he was open to “sensible” policies that take a “proportionate approach” to weigh the potential risks of A.I. against its social benefits. Europe’s privacy law, the General Data Protection Regulation, could serve as a model for A.I.-specific rules, Pichai proposed.
Pichai’s public outreach is a sign of how far the tech industry’s relationship with regulators around the globe has shifted amid the “techlash.” Companies, after years of simply not engaging with regulators, have accepted that regulation is coming no matter what – and are actively working to shape the debate so they can live with the results.
Pichai has plenty of company: Apple chief executive Tim Cook took this tack in influencing the global privacy debate, calling on policymakers to focus on the shady practices of data brokers and free tech services that suck up people’s data. Microsoft President Brad Smith has called for facial recognition regulation to ensure that there are guardrails to limit surveillance or bias. Facebook chief executive Mark Zuckerberg published a carefully crafted op-ed last year, calling for multiple forms of regulation as an alternative to breaking the company up.
Pichai’s call for A.I. regulation was panned for being short on specifics. After all, it’s just a first step to say it’s time for regulation. It’s an entirely different challenge to determine specifically what such regulations should entail.
But Pichai did make clear that one of his top priorities in the A.I. debate is ensuring there’s “international alignment” – and not a patchwork of laws around the world. The European Commission is expected to release a series of proposals to regulate the tech industry, including a white paper due next month on possible rules for A.I., according to the Wall Street Journal. Trump administration officials, meanwhile, have favored taking a lighter-touch regulatory approach when it comes to A.I.
That has so far left companies to develop their own norms and ethical guidelines as they build out the technology. Google published its own A.I. principles in 2018, which bar the company from deploying A.I. for weapons or to violate human rights.
Competition may also be an incentive for Google and other tech titans to ensure governments adopt regulation. After all, companies have a financial motivation to ensure their rivals have to uphold the same ethical guidelines.
Pichai’s warning about facial recognition was timely in light of a New York Times story over the weekend by Kashmir Hill, which revealed that a small start-up has developed a database that can match unknown people to their online photos. The company is already working with 600 law enforcement agencies. Hill wrote that tech companies including Google that were capable of releasing such a tool refrained from doing so because of ethical considerations. In 2011, Google’s then-chairman said it was the one technology the company had resisted because it could be used “in a very bad way.”