How do you keep dangerous exploit software away from bad guys (and countries) and still let the good guys (security researchers, white-hat pentesters) have it when they need it? It’s never been easy – and it’s even tougher when 41 countries need to agree. They’ve been trying all year… and, for the moment, they’ve just given up.
These 41 countries (including the US, UK, Russia, Canada and Australia) are part of the so-called “Wassenaar Arrangement”. Wassenaar’s goal is to keep advanced weaponry and other high-risk technologies away from repressive regimes, terrorists or nations likely to use them to undermine regional or global security.
As the Associated Press describes it, this “voluntary arrangement relies on unanimous agreement to abide by its rules on export controls for hundreds of items, including arms such as tanks or military aircraft and ‘dual-use’ technologies such as advanced radar that can be used for both peaceful and military means”.
Once countries agree, they’re supposed to build these export restrictions into their own laws. But it’s up to them how they do that.
With us so far? Good. In December 2013, the US and other governments convinced the Wassenaar group to unanimously add “intrusion software” to the list of technologies requiring export controls.
As Ars Technica reports: “Those changes were intended to prevent repressive regimes from gaining access to commercial malware – such as the code sold by the Italy-based Hacking Team to Sudan and the surveillance tools from Blue Coat that were resold to Syria’s Assad regime and used to catch dissident bloggers.” (As Ars reported elsewhere, similar rules had already been implemented for “stingray-type” cellphone interceptors, after governments used them against citizens during the “Arab Spring” uprisings. Pages 4-5 of EFF’s attached testimony are replete with examples of Western infotech being applied to abuse human rights.)
Devil being in the details, and all that, the US Commerce Department’s Bureau of Industry and Security (BIS) tried to translate “intrusion software” into an actual, enforceable regulation. We’ll let Engadget take it from here: “It ignited an online firestorm of meltdowns, freakouts, and vicious infighting within the most respected circles of hacking and computer security. That’s because the new rules change the classification of intrusion software and Internet Protocol (IP) network communications surveillance – setting in motion a legal machine that might see penetration-testing tools, exploits, and zero-days criminalized.”
Dartmouth computer scientist Sergey Bratus told Engadget: “Without a working [exploit], I and my colleagues cannot claim that the security vulnerabilities we write about actually exist… The authors of this regulation may have believed that they were targeting a narrow group of products; as written, their regulation actually targets fundamental security technologies, and the most promising paths of their future development.”
BIS backed down. Then – coaxed gently by 125 congresspeople – the US government concluded that the current Wassenaar language could never be built into a workable US regulation. So the US took the issue back to the Wassenaar negotiating table. That meant trying to convince all 40 of its partners to tweak the language many of them were already integrating into their own export rules.
Their chances looked good for a while, reported Politico. But now it’s evidently all gone sideways, pear-shaped, belly-up (choose your metaphor).
(The US did get one fix. One Wassenaar rule has been tweaked to explicitly focus on attacker code used for botnet command-and-control. So it’s a bit less likely that ordinary cyberdefense tools will get swept up. But, so far, most of the cybersecurity folks who were freaked out a year ago remain freaked out.)
Wassenaar doesn’t affect the creation or use of intrusion software entirely within the US or any other country. It’s about what happens when you move code across borders. Unless and until the Wassenaar rules change, what’s allowed remains deeply uncertain. As The Register says, in some countries, “security researchers will have to go through the tedious process of getting an export license if they want to, say, email a network penetration exploit to a colleague or client overseas to use as part of an audit”. Meanwhile, “computer science students are having to censor their own course work, researchers flying overseas are being very careful, and companies are begging for clear advice as to what they can and cannot sell”.
And the next time researchers must respond quickly to a worldwide zero-day attack, some might think twice about what they can share with foreign colleagues.
If only the attackers faced similar constraints.