Trump administration pulls back curtain on secretive cybersecurity process

White House cybersecurity coordinator Rob Joyce discusses pressing cybersecurity threats facing the country during a Washington Post Live event in October. (Washington Post)

The White House on Wednesday made public for the first time the rules by which the government decides to disclose or keep secret software flaws that can be turned into cyberweapons – whether by U.S. agencies hacking for foreign intelligence, money-hungry criminals or foreign spies seeking to penetrate American computers.

The move to publish an unclassified charter responds to years of criticism that the process was unnecessarily opaque, fueling suspicion that it cloaked a stockpile of software flaws that the National Security Agency was hoarding to go after foreign targets but that put Americans’ cybersecurity at risk.

“This is a really big improvement and an outstanding process,” said White House cybersecurity coordinator Rob Joyce, who spoke at an Aspen Institute event and issued a blog post on the charter.

By making it public, he said, “we hope to demonstrate to the American people that the federal government is carefully weighing the risks and benefits” of disclosure vs. retention.

The rules are part of the “Vulnerabilities Equities Process,” which the Obama administration revamped in 2014 as a multiagency forum to debate whether and when to inform companies such as Microsoft and Juniper when the government has discovered or bought a software flaw that, if weaponized, could affect the security of their product.

The Trump administration has largely left unchanged the rules under which the government reaches a decision, but it is now disclosing the process itself. Under the VEP, an “equities review board” of at least a dozen national security and civilian agencies will meet monthly – or more often, if a need arises – to discuss newly discovered vulnerabilities. Besides the NSA, the CIA and the FBI, the list includes the Treasury, Commerce and State departments, and the Office of Management and Budget.

The priority is on disclosure, the policy states, to protect core Internet systems, the U.S. economy and critical infrastructure, unless there is “a demonstrable, overriding interest” in using the flaw for intelligence or law enforcement purposes.

The government has long said that it discloses the vast majority – more than 90 percent – of the vulnerabilities it discovers or buys in products from defense contractors or other sellers. In recent years, that has amounted to more than 100 a year, according to people familiar with the process.

But because the process was classified, the National Security Council, which runs the discussion, was never able to reveal any numbers. Now, Joyce said, the number of flaws disclosed and the number retained will be made public in an annual report. A classified version will be sent to Congress, he said.

“This represents a good step forward in transparency and shows the government getting more comfortable and more mature with this process,” said Michael Daniel, who, as Joyce’s predecessor, oversaw the revamped process. Daniel issued the first blog post on the VEP in April 2014 in large part to push back against the misperception that the Heartbleed bug, which sparked fears of a massive security hole in the Internet, had been kept secret by the NSA.

The debate raged anew this year when it became public that the malicious code at the heart of the WannaCry virus that hit computer systems globally was developed by the NSA. The Washington Post reported in May that officials at the agency had years earlier discussed whether the flaw at the base of the tool, EternalBlue, was so dangerous that it should be revealed to Microsoft.

Instead, the agency retained it. In August 2016, a mysterious group calling itself the Shadow Brokers put online a set of “exploits,” or hacking tools, that included EternalBlue. That eventually led the NSA to alert Microsoft, which issued a patch in March. But not enough people and companies applied the patch, especially in Russia, India, Iran, Brazil and other countries across Eastern Europe and Asia, where computers were infected by WannaCry or other malware based on the flaw.

Another major breach occurred in March, when the anti-secrecy group WikiLeaks dumped online a trove of CIA hacking tools.

“A lot of companies whose software was affected were confused,” recalled Ari Schwartz, coordinator of the Coalition for Cybersecurity Policy and Law, which includes such firms as Microsoft, Symantec, Intel and Palo Alto Networks. “They were taken aback that nobody from the CIA had come to them and told them. They found out about it from the press.”

Joyce noted that at times the government has alerted a company to a flaw only to be told, “That’s great, but we’re telling customers that they need to buy this shiny next-generation [device], and so they have no intention of patching their own equipment.” There have also been companies that, when informed of a flaw, responded: “That’s not a flaw. That’s a feature.”

All the government can do at that point, he said, is put out a Department of Homeland Security warning about the software flaws.

Tech companies generally reacted favorably to Wednesday’s move. “Getting the VEP right is critical to fostering trust and cooperation between the tech sector and the government,” said Heather West, senior policy manager at Mozilla, which makes the widely used browser Firefox. “This accomplishes a lot of things we were asking for in terms of reform.”

Former critics of the process also applauded the transparency. “I’m very happy to see that they make clear that the presumption lies in favor of disclosure,” said Michelle Richardson, a security expert at the Center for Democracy & Technology.
