AI companies form new safety body, while Congress plays catch-up

President Biden delivers remarks on Artificial Intelligence in the Roosevelt Room of the White House on July 21. (Washington Post photo by Demetrius Freeman)

Leading artificial intelligence companies on Wednesday unveiled plans to launch an industry-led body to develop safety standards for the rapidly advancing technology, outpacing Washington policymakers who are still debating whether the U.S. government needs its own AI regulator.

Google, ChatGPT-maker OpenAI, Microsoft and Anthropic introduced the Frontier Model Forum, which the companies say will advance AI safety research and technical evaluations for the next generation of AI systems, which they predict will be even more powerful than the large language models that currently power chatbots like Bing and Bard. The Forum will also serve as a hub for companies and governments to share information with one another about AI risks.

“It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” said Anna Makanju, OpenAI’s vice president of global affairs, in a statement.

The announcement builds on the companies’ voluntary promises to the White House on Friday to submit their systems to independent tests and develop tools to alert the public when an image or a video was generated by AI.

The industry-led initiative is the latest sign of companies racing ahead of government efforts to craft rules for the development and deployment of AI, as policymakers in Washington begin to grapple with the potential threat of the quickly emerging technology. But some argue this industry involvement presents its own risks. For years, policymakers have said that Silicon Valley can’t be trusted to craft its own guardrails, following more than a decade of privacy lapses, threats to democracies around the world and incidents that endangered children online.

Sen. Josh Hawley (R-Mo.) said during a Tuesday hearing that the AI companies were the same firms that evaded regulatory oversight during social media’s battles with regulators, singling out Google, Meta and Microsoft in particular as AI developers and investors.

“We’re talking about the same people,” Hawley said.

Silicon Valley companies are no strangers to self-regulation. Social media companies established the Global Internet Forum to Counter Terrorism, which they use to share information about terrorist threats on their services. Meta famously launched an Oversight Board, a body of independent experts funded by the company that weighs in on thorny decisions about content moderation, including the company’s controversial decision to ban former president Donald Trump in the aftermath of the Jan. 6 attack.

Yet such industry-led initiatives have come under criticism from consumer advocates, who say they allow companies to farm out accountability to third-party experts. These advocates have also criticized self-regulatory bodies as a diversion for policymakers, one that derails legislation needed to rein in the industry’s abuses.

But government-led regulation isn’t arriving soon. Lawmakers in Congress are in the early stages of crafting a framework to address the threat of AI. Government agencies are beginning to grapple with the nascent technology, and the FTC is in the initial stages of a probe into OpenAI. Policymakers in Europe have been working on their own AI legislation for years, but even their more advanced proposal is years away from coming into force.

In the interim, politicians are pressuring companies to take voluntary steps to mitigate the potential risks of AI, after the recent release of ChatGPT underscored how AI can be abused to supercharge scams, undermine democracies and threaten jobs. All four of the companies behind the Forum participated in the White House’s AI pledge, which also included commitments to share data about the safety of AI systems with the government and outside academics. Some tech executives have also signed on to the E.U. AI Pact, a voluntary agreement with regulators to begin preparing for the bloc’s forthcoming legislation.

The Forum is in the early stages of development, but its organizers say it could feed into existing government initiatives to address AI. In the coming months, the group will establish an advisory board to guide its strategy and priorities. The founding companies will also establish funding, a working group and a charter.

The Forum is open to companies developing powerful models that “exceed the capabilities currently present in the most advanced existing models.” Members must demonstrate a commitment to safety and be willing to work on joint initiatives. Google and Microsoft are already founding members of the Partnership on AI, a nonprofit created in 2016 to develop best practices around the responsible use of AI.

The Forum’s focus on the capabilities of future AI models mirrors statements that OpenAI chief executive Sam Altman made on a high-profile world tour in May and June, where he met with dozens of global leaders. In sold-out auditoriums from Singapore to Jordan, Altman was asked whether his calls for regulation before Congress would help OpenAI and hurt its competitors. In each case, Altman tried to reassure the audience that he did not think there was any need to act now.

“We’re not advocating for regulation of the current models. We’re not advocating to go regulate open source models. We’re not advocating to go stop small companies from developing this. I think that would be a huge mistake,” Altman said in Amman, Jordan, in early June. “What we are saying is if someone – us, somebody else, but probably us – makes a model that is as smart as all of human civilization, has the power of all human civilization I think that deserves some regulation,” he added with a shrug.

At this point in the talk, Altman typically steered the conversation away from the type of red tape and bureaucracy loathed by entrepreneurs and AI enthusiasts who came to hear him speak and toward the idea of a global agency like the International Atomic Energy Agency, the U.N.’s nuclear energy watchdog.

Meta, which earlier this month made its AI model Llama 2 freely available to the public for research and building new products, was notably absent from the Forum. OpenAI spokeswoman Kayla Wood said this was just “an initial launch” and that the Forum welcomes additional companies to join.

Yet some of the Forum’s members have expressed skepticism about Meta’s decision to make its model widely available as “open source” software.

Anthropic chief executive Dario Amodei recently voiced concern about the future of open source, telling the Hard Fork podcast that while open source is generally better for innovation, it can pose “catastrophic dangers” when it comes to advanced AI.

Amodei said on the podcast that he doesn’t object to the current version of Meta’s open source model Llama 2.

“But I’m very concerned about where it’s going,” he said.
