US Government Malware Policy Puts Everyone At Risk

Last month, a massive ransomware attack hit computers around the globe, and the government is partly to blame.

The malicious software, known as “WannaCry,” encrypted files on users’ machines, effectively locking them out of their information, and demanded a payment to unlock them. This attack spread rapidly through a vulnerability in a widely deployed component of Microsoft's Windows operating system, and placed hospitals, local governments, banks, small businesses, and more in harm's way.

This happened in no small part because of U.S. government decisions that prioritized offensive capabilities — the ability to execute cyberattacks for intelligence purposes — over the security of the world’s computer systems. The decision to make offensive capabilities the priority is a mistake. And at a minimum, this decision is one that should be reached openly and democratically. A bill has been proposed to try to improve oversight of these offensive capabilities, but oversight alone may not address the risks and perverse incentives created by the way they work. It’s worth unpacking the details of how these dangerous weapons come to be.

Why did it happen?

All complex software has flaws — mistakes in design or implementation — and some of these flaws rise to the level of a vulnerability, where the software can be tricked or forced to do something that it promised its users it would not do.

For example, consider one computer, running a program designed to receive files from other computers over a network. That program has effectively promised that it will do no more than receive files. If it turns out that a bug allows another computer to force that same program to delete unrelated files, or to run arbitrary code, then that flaw is a security vulnerability. The flaw exploited by WannaCry is exactly such a vulnerability in part of Microsoft’s Windows operating system, and it has existed (unknown by most people) for many years, possibly as far back as the year 2000.

When researchers discover a previously unknown bug in a piece of software (often called a “zero day”), they have several options:

  1. They can report the problem to the supplier of the software (Microsoft, in this case).
  2. They can write a simple program to demonstrate the bug (a “proof of concept”) to try to get the software supplier to take the bug report seriously.
  3. If the flawed program is free or open source software, they can develop a fix for the problem and supply it alongside the bug report.
  4. They can announce the problem publicly to bring attention to it, with the goal of increasing pressure to get a fix deployed (or getting people to stop using the vulnerable software at all).
  5. They can try to sell exclusive access to information about the vulnerability on the global market, where governments and other organizations buy this information for offensive use.
  6. They can write a program to aggressively take advantage of the bug (an “exploit”) in the hopes of using it later to attack an adversary who is still using the vulnerable code.

Note that these last two actions (selling information or building exploits) are at odds with the first four. If the flaw gets fixed, exploits aren't as useful and knowledge about the vulnerability isn't as valuable.

Where does the U.S. government fit in?

The NSA didn’t develop the WannaCry ransomware, but they knew about the flaw it used to compromise hundreds of thousands of machines. We don't know how they learned of the vulnerability — whether they purchased knowledge of it from one of the specialized companies that sell the knowledge of software flaws to governments around the world, or from an individual researcher, or whether they discovered it themselves. It is clear, however, that they knew about its existence for many years. At any point after they learned about it, they could have disclosed it to Microsoft, and Microsoft could have released a fix for it. Microsoft releases such fixes, called “patches,” on a roughly monthly basis. But the NSA didn't tell Microsoft about it until early this year.

Instead, at some point after learning of the vulnerability, the NSA developed or purchased an exploit that could take advantage of the vulnerability. This exploit — a weapon made of code, codenamed “ETERNALBLUE,” specific to this particular flaw — allowed the NSA to turn their knowledge of the vulnerability into access to others’ systems. During the years that they had this weapon, the NSA most likely used it against people, organizations, systems, or networks that they considered legitimate targets, such as foreign governments or their agents, or systems those targets might have accessed.

The NSA knew about a disastrous flaw in a widely used piece of software — as well as code to exploit it — for over five years without trying to get it fixed. In the meantime, others may have discovered the same vulnerability and built their own exploits.

Any time the NSA used their exploit against someone, they ran the risk of their target noticing their activity by capturing network traffic — allowing the target to potentially gain knowledge of an incredibly dangerous exploit and the unpatched vulnerability it relied on. Once someone had a copy of the exploit, they would be able to change it to do whatever they wanted by changing its “payload” — the part of the overall malicious software that performs actions on a targeted computer. And this is exactly what we saw happen with the WannaCry ransomware. The NSA payload (a software “Swiss Army knife” codenamed DOUBLEPULSAR that allowed NSA analysts to perform a variety of actions on a target system) was replaced with malware with a very specific purpose: encrypting all of a user’s data and demanding ransom.

At some point, before WannaCry hit the general public, the NSA learned that the weapon they had developed and held internally had leaked. Sometime after that, someone alerted Microsoft of the problem, kicking off Microsoft’s security response processes. Microsoft normally credits security researchers by name or “handle” in their security updates, but in this case, they are not saying who told them. We don't know when the weapon leaked, of course — or whether anyone else had independently discovered the vulnerability and used it (with this particular exploit or another one) to attack other computers. And neither does the NSA. What we do know is that everyone in the world running a Windows operating system was vulnerable for years to anyone who knew about the vulnerability; that the NSA had an opportunity to fix that problem for years; and that they didn't take steps to fix the problem until they realized that their own data systems had been compromised.

A failure of information security

The NSA is ostensibly responsible for protecting the information security of America, while also being responsible for offensive capabilities. “Information Assurance” (securing critical American IT infrastructure) sits next to “Signals Intelligence” (surveillance) and “Computer Network Operations” (hacking/infiltration of others’ networks) right in the Agency’s mission statement. We can see from this fiasco where the priorities of the agency lie.

And the NSA isn’t the only agency that is charged with keeping the public safe yet is putting us all at risk. The FBI also hoards knowledge of vulnerabilities and maintains a stockpile of exploits that take advantage of them. The FBI’s mission statement says that it works “to protect the U.S. from terrorism, espionage, cyberattacks….” Why are these agencies gambling with the safety of public infrastructure?

The societal risks of these electronic exploits and defenses can be seen clearly by drawing a parallel to the balance of risk with biological weapons and public health programs.

If a disease-causing micro-organism is discovered, it takes time to develop a vaccine that prevents it. And once the vaccine is developed, it takes time and logistical work to get the population vaccinated. The same is true for a software vulnerability: it takes time to develop a patch, and time and logistical work to deploy the patch once developed. A vaccination program may not ever be universal, just as a given patch may not ever be deployed across every vulnerable networked computer on the planet.

It’s also possible to take a disease-causing micro-organism and “weaponize” it — for example, by expanding the range of temperatures at which it remains viable, or just by producing delivery “bomblets” capable of spreading it rapidly over an area. These weaponized germs are the equivalent of exploits like ETERNALBLUE. And a vaccinated (or “patched”) population isn't vulnerable to the bioweapon anymore.

Our government agencies are supposed to protect us. They know these vulnerabilities are dangerous. Do we want them to delay the creation of vaccine programs, just so they can have a stockpile of effective weapons to use in the future?

What if the Centers for Disease Control and Prevention were, in addition to its current mandate of protecting “America from health, safety and security threats, both foreign and in the U.S.,” responsible for designing and stockpiling biological weapons for use against foreign adversaries? Is it better or worse for the same agency to be responsible for both defending our society and for keeping it vulnerable? What should happen if some part of the government or an independent researcher discovers a particularly nasty germ — should the CDC be informed? Should a government agency that discovers such a germ be allowed to consider keeping it secret so it can use it against people it thinks are "bad guys" even though the rest of the population is vulnerable as well? What incentive does a safety-minded independent researcher have to share such a scary discovery with the CDC if he or she knows the agency might decide to use the dangerous information offensively instead of to protect the public health?

What if a part of the government were actively weaponizing biological agents, figuring out how to make them disperse more widely, or crafting effective delivery vehicles?

These kinds of weapons cannot be deployed without some risk that they will spread, which is why bioweapons have been prohibited by international convention for over 40 years. Someone exposed to a germ can culture it and produce more of it. Someone exposed to malware can make a copy, inspect it, modify it, and re-deploy it. Should we accept this kind of activity from agencies charged with public safety? Unfortunately, this question has not been publicly and fully debated by Congress, despite the fact that several government agencies stockpile exploits and use them against computers on the public network.

Value judgments that should not be made in secret

Defenders of the FBI and the NSA may claim that offensive measures like ETERNALBLUE are necessary when our government is engaged in espionage and warfare against adversaries who might also possess caches of weaponized exploits for undisclosed vulnerabilities. Even the most strident supporters of these tactics, however, must recognize that in the case of ETERNALBLUE and the underlying vulnerability it exploits, the NSA failed as stewards of America's — and the world's — cybersecurity, by failing to disclose the vulnerability to Microsoft to be fixed until after their fully weaponized exploit had fallen into unknown hands. Moreover, even if failing to disclose a vulnerability is appropriate in a small subset of cases, policy around how these decisions are made should not be developed purely by the executive branch behind closed doors, insulated from public scrutiny and oversight.

A bipartisan group of US Senators has introduced a bill called the Protecting our Ability To Counter Hacking (PATCH) Act, which would create a Vulnerabilities Equities Review Board with representatives from DHS, NSA, and other agencies to assess whether any known vulnerability should be disclosed (so that it can be fixed) or kept secret (thereby leaving our communications systems vulnerable). If the government plans to retain a cache of cyberweapons that may put the public at risk, ensuring that there is a permanent and more transparent deliberative process is certainly a step in the right direction. However, it is only one piece of the cybersecurity puzzle. The government must also take steps to ensure that any such process fully considers the duty to secure our globally shared communications infrastructure, has a strong presumption in favor of timely disclosure, and incentivizes developers to patch known vulnerabilities.

This will not be the last time one of these digital weapons leaks or is stolen, and one way to limit the damage any one of them causes is by shortening the lifetime of the vulnerabilities they rely on.


Blue Russian

It's not just American-produced malware; it's foreign malware that should be feared as well. The ACLU is all for net neutrality, but against state-created malware. Net neutrality allows anonymous transfer of malware code among peers.

What happens when our voting machines are infiltrated by the Russians or any other party and the results changed to favor one candidate over another? Oh wait, this has already happened, as reported by the NSA over the past few days. The "Russian Patriot Hackers," as Putin calls them, hacked network-connected voting machines in multiple locations.

Hmmmm, Trump has Russian connections, and multiple voting locations in swing states are hacked by Russians. Hmmmmm.

But wait, it gets better! If the NSA report turns out to be accurate, the only way they could have intercepted the attack would have been through intercepted data captures such as Stingray, COINTELPRO, and other like programs.

So what do we do? Allow CIA/NSA hackers to collect all data, or let foreign hackers control our supposedly "free" internet?

IT Jeff

I think you miss the point a bit - by neutralizing our cache of exploits we protect American (and global) systems from those exploits. Inherently, doing that would make American systems MORE resilient to foreign intrusion and hacking, not less. The argument put forward by the agencies is that our OFFENSIVE capability is at risk; that is, they feel it is more important that we have the ability to hack Russian/dissident/other nations' systems than it is to protect our own. Bringing net neutrality up says you either need to read more on these issues because you seem a little confused, or that you have an agenda you are more interested in pushing.

Re:IT Jeff

Net neutrality is very much an issue here. It is defined as having the ability to access any content regardless of its source. So "hackers," whether state or privately supported, rely on the ability to access and share data/code freely. All it's gonna take to end that is one huge attack on financial systems where even the backups are destroyed or corrupted. When that happens net neutrality will be dead and the government will start "managing" the internet. They will create roamer/crawler bots that will scan entire servers and legally shut them down if they present a threat. It's coming, get real.


@IT Jeff Why don't people understand what net neutrality is? They have no clue what losing it would do to them.



So PATCH will be looking at vulnerabilities in "our" systems. I assume this means widely used US government systems. These systems are composed of servers manufactured by Cisco, IBM, Microsoft, Apple and others. Most, more than 90%, run some variation of Microsoft software.

So we are giving PATCH, i.e. the NSA, DHS, FBI and others, unlimited access to private corporations' code to look for "holes".

Does anyone see a problem with this? It violates patent law and the privacy laws of corporations. Plus it gives the government the ability to "break" any privacy for private citizens.

The only way the government can safeguard the internet is to work with public corporations to identify weaknesses. But this also means the government will employ full-time hackers to try and break code. So the cycle continues, create and break. Same spy techniques, different people.


The software industry brought this on themselves; after decades of being hacked, security is still an afterthought, or not thought of at all. Companies like Microsoft, Cisco, etc. regularly ship software that they know is full of holes. Then they prosecute security researchers who bring those flaws to their attention. We don't need more regulation, more laws and stiffer penalties; we need greater cooperation between software writers and security researchers.


Each and every official and contractor at the NSA agreed to a supreme loyalty oath to follow the U.S. Constitution and Bill of Rights. There has never been a "terrorism-exemption" from following their oath of office to the U.S. Constitution - a wartime charter designed to be followed during wartime.

Maybe the ACLU should support "oath of office training" at the NSA with an annual refresher exam for all officials and contractors - especially for top management at the NSA.


Maybe the most powerful "check & balance", created by the Framers of the Constitution, against tyranny is the supreme "Oath of Office" primarily listed under Article VI of the U.S. Constitution.

It actually empowers "subordinates" in the government to "check & balance" supervisors, in their own agencies, that are disloyal to their own oath of office and disloyal to the American people. It empowers subordinates to refuse illegal and disloyal orders from superiors.


Oaths are pointless! They only serve as an oral contract between the employer and employee, in this case the government. Oaths do not cover every situation and circumstance, nor do they prevent any attempt by the oath taker to do "wrong". Oaths are a useless tool of a bygone era. People are motivated by various things. An oath is low on the list and worthless. They are only used when prosecuting someone for allegedly breaking an oath. But this isn't even the case, because Trump is still president and he's broken his oath!

