I Quit My Job to Protest My Company’s Work on Building Killer Robots

When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place.

We founded Clarifai 4 Good, where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took its social responsibility seriously.

I never could have predicted that two years later, I would have to quit this job on moral grounds. And I certainly never thought it would happen over building weapons that escalate and shift the paradigm of war.

Some background: In 2014, Stephen Hawking and Elon Musk led an effort with thousands of AI researchers to collectively pledge never to contribute research to the development of lethal autonomous weapons systems — weapons that could seek out a target and end a life without a human in the decision-making loop. The researchers argued that the technology to create such weapons was just around the corner and would be disastrous for humanity.

I signed that pledge, and in January of this year, I wrote a letter to the CEO of Clarifai, Matt Zeiler, asking him to sign it, too. I was not at all expecting his response. As The New York Times reported on Friday, Matt called a companywide meeting and announced that he was totally willing to sell autonomous weapons technology to our government.

I could not abide being part of that, so I quit. My objections were not based just on a fear that killer robots are “just around the corner.” They already exist and are used in combat today.

Now, don’t go running for the underground bunker just yet. We’re not talking about something like the Terminator or HAL from “2001: A Space Odyssey.” A scenario like that would require something like artificial general intelligence, or AGI — basically, a sentient being.

In my opinion, that won’t happen in my lifetime. On the other end of the spectrum, there are some people who describe things like landmines or homing missiles as “semiautonomous.” That’s not what I’m talking about either.

The core issue is whether a robot should be able to select and acquire its own target from a list of potential ones and attack that target without a human approving each kill. One example of a fully autonomous weapon that’s in use today is the Israeli Harpy 2 drone (or Harop), which seeks out enemy radar signals on its own. If it finds a signal, the drone goes into a kamikaze dive and blows up its target.

Fortunately, there are only a few of these kinds of machines in operation today, and as of right now, they usually operate with a human as the one who decides whether to pull the trigger. Those supporting the creation of autonomous weapons systems would prefer that human to be not “in the loop” but “on the loop” — supervising the quick work of the robot in selecting and destroying targets, but not having to approve every last kill.

When presented with the Harop, a lot of people look at it and say, “It’s scary, but it doesn’t genuinely freak me out.” But imagine a drone acquiring a target with a technology like face recognition. Imagine this: You’re walking down the street when a drone pops into your field of vision, scans your face, and makes a decision about whether you get to live or die.

Suddenly, the question — “Where in the decision loop does the human belong?” — becomes a deadly serious one.

What the generals are thinking

On the battlefield, human-controlled drones already play a critical role in surveillance and target location. If you add machine learning to the mix, you’re looking at a system that can sift through exponentially increasing numbers of potential threats over a vast area.

But there are vast technical challenges with streaming high-definition video halfway around the world. Say you’re a remote drone pilot and you’ve just found a terrorist about to do something bad. You’re authorized to stop them, and all of a sudden, you lose your video feed. Even if it’s just for a few seconds, by the time the drone recovers, it might be too late.

What if you can’t stream at all? Signal jamming is pretty common in warfare today. Your person in the loop is now completely useless.

That’s where generals get to thinking: Wouldn’t it be great if you didn’t have to depend on a video link at all? What if you could program your drone to be self-contained? Give it clear instructions, and just press Go.

That’s the argument for autonomous weapons: Machine learning will make war more efficient. Plus there’s the fact that Russia and China are already working on this technology, so we might as well do the same.

Sounds reasonable, right?

Okay, here are six reasons why killer robots are genuinely terrifying

There are a number of reasons why we shouldn’t accept these arguments:

1. Accidents. Predictive technologies like face recognition or object localization are guaranteed to have error rates, meaning a case of mistaken identity can be deadly. Often these technologies fail disproportionately on people with darker skin tones or certain facial features, meaning their lives would be doubly subject to this threat.

Also, drones go rogue sometimes. It doesn’t happen often, but software always has bugs. Imagine a self-contained, solar-powered drone that has instructions to find a certain individual whose face is programmed into its memory. Now imagine it rejecting your command to shut it down.

2. Hacking. If your killer robot has a way to receive commands at all (for example, by executing a “kill switch” to turn it off), it is vulnerable to hacking. That means a powerful swarm of drone weapons could be turned off — or turned against us.

3. The “black box” problem. AI has an “explainability” problem. Your algorithm did XYZ, and everyone wants to know why, but because of the way machine learning works, even its programmers often can’t know why an algorithm reached the outcome it did. It’s a black box. Now, when you enter the realm of autonomous weapons and ask, “Why did you kill that person?” the complete lack of an answer simply will not do — morally, legally, or practically.

4. Morality & Context. A robot doesn’t have moral context to prioritize one kind of life over another. A robot will only see that you’re carrying a weapon and “know” that its mission is to shoot with deadly force. It should not be news that terrorists often exploit locals and innocents. In such scenarios, a soldier will be able to use their human, moral judgment in deciding how to react — and can be held accountable for those decisions. The best object localization software today is able to look at a video and say, “I found a person.” That’s all. It can’t tell whether that person was somehow coerced into doing work for the enemy.

5. War at Machine Speed. How long does it take you to multiply 35 by 12? A machine can do thousands of such calculations in the time it takes us to blink. If a machine is programmed to make quick decisions about how and when to fire a weapon, it’s going to do it in ways we humans can’t even anticipate. Early experiments with swarm technology have shown that no matter how you structure the inter-drone communications, the outcomes are different every time. The humans simply press the button, watch the fight, and wait for it to be over so that they can try to understand the what, when, and why of it.

Add 3D printing to the mix, and now it’s cheap and easy to create an army of millions of tiny (but lethal) robots, each one thousands of times faster than a human being. Such a swarm could overwhelm a city in minutes. There would be no way for humans to defend themselves against an enemy of that scale or speed — or even to understand what’s happening.

6. Escalation. Autonomous drones would further distance the trigger-pullers from the violence itself and generally make killing more cost-free for governments. If you don’t have to put your soldiers in harm’s way, it becomes that much easier to decide to take lives. This distance also puts up a psychological barrier between the humans dispatching the drones and their targets.

Humans actually find it very difficult to kill, even in military combat. In his book “Men Against Fire,” S.L.A. Marshall claimed that only a small fraction of American riflemen in World War II actually fired their weapons at the enemy with intent to kill. Think about firing squads. Why would you have seven people line up to shoot a single person? It’s to protect the shooters’ psychological safety, of course. No one will know whose bullet it truly was that did the deed.

If you turn your robot on and it decides to kill a child, was it really you who destroyed that life?

There is still time to ask our government to agree to ban this technology outright

In the end, there are many companies out there working to “democratize” powerful technology like face recognition and object localization. But these technologies are “dual-use,” meaning they can be used not only for everyday civilian purposes but also for targeting people with killer drones.

Project Maven, for instance, is a Defense Department contract that’s currently being worked on at Microsoft and Amazon (as well as in startups like Clarifai). Google employees were successful in persuading the company to walk away from the contract because they feared it would be used to this end. Project Maven might just be about “counting things” as the Pentagon claims. It might also be a targeting system for autonomous killer drones, and there is absolutely no way for a tech worker to tell.

With so many tech companies participating in work that contributes to the reality of killer robots in our future, it’s important to remember that major powers won’t be the only ones to have autonomous drones. Even if killer robots won’t be the Gatling gun of World War III, they will do a lot of damage to populations living in countries all around the world.

We must remind our government that humanity has been successful in instituting international norms that condemn the use of chemical and biological weapons. When the stakes are this high and the people of the world object, there are steps that governments can take to prevent mass killings. We can do the same thing with autonomous weapons systems, but the time to act is now.

Official Defense Department policy currently states that there must be a “human in the loop” for every kill decision, but that policy is under debate right now, and there’s a loophole in it that would allow an autonomous weapon to be approved. We must work together to ensure this loophole is closed.

That’s why I refuse to work for any company that participates in Project Maven or that otherwise contributes dual-use research to the Pentagon. Policymakers need to promise us that they will stop the pursuit of lethal autonomous weapons systems once and for all.

I don’t regret my time at Clarifai. I do miss everyone I left behind. I truly hope that the industry changes course and agrees to take responsibility for its work to ensure that the things we build in the private sector won’t be used for killing people. More importantly, I hope our government begins working internationally to ensure that autonomous weapons are banned to the same degree as biological ones.

Comments

Ms. Gloria Anasyrma

Human life will eventually come to an end on the planet Earth. Why worry about the details of how?
It's going to happen one way or another: global warming, nuclear winter, a massive asteroid strike, or maybe a robot rebellion. So don't quit your job; just keep on working at whatever you do and enjoy life while you still have it.

Anonymous

Nihilism anyone? It's free.

Anonymous

This is why they want control of the schools. Indoctrinate the kids to accept their fates and just do as they are told.

Dr. Joseph Owen...

This is a very dangerous position to take. Our beliefs create the possibilities our future holds. If we believe it is no use to try to stop evil, then we are choosing to let evil win. Historians, like myself, know that the future is open. Nothing is inevitable. To say humans will disappear is to be at fault for the genocide such beliefs could create.

Ms. Gloria Anasyrma

Thank you for your response, Dr., but try to tell it to a dinosaur like a Smegmasaurus.

Anonymous

We should start war crime indictments of relevant drone pilots ASAP. It would create a deterrent effect to minimize future war crimes. Prior to 9/11, wartime authorities were limited to actual battlefields. Today the US defines the entire “globe” as a battlefield, far from any actual physical battlefield. We need a Nuremberg-style war crimes court.

Anonymous

Thank you for a well-argued case against the use of this kind of technology. It’s frightening how few people are paying attention to anything going on in the world that is unrelated to the drama surrounding the President and the White House.

As surely as these technologies exist or are being pursued, different forms of electromagnetic energy are being used against innocent people through directed energy weapons. This has been happening across the US and the world for decades, but either organizations like the ACLU did not believe these weapons existed, even though many are old technologies(!), or they were afraid to investigate. Since the attacks on US and Canadian diplomats to Cuba and China, these issues can no longer be ignored. Please help raise awareness about the existence and use against civilians of these nightmare technologies. FreedomForTargetedIndividuals.org

Anonymous

Please google the NY Times article on "Targeted Individuals". This is a mental health issue exacerbated by the Internet connecting groups of people who need help.

Anonymous

Another strong counterargument to the idea that China or Russia might be developing these things is that they quite frankly don't have the technical expertise to build anything like this. China is step by step recreating the Soviet space program from the 1980s, which is great, but step two after that isn't killer robots of any level of sophistication we would need to worry about; it's refined domestic surveillance systems. The Russian economy is in a continuous, seemingly systemic state of contraction as the never-ending brain drain crisis there enters its, what, fourth decade? Then, on top of that, the Russian space industry is essentially on the road to collapse once NASA no longer needs to use it for launches, essentially the last thing propping it up. That whole hypersonic rocket thing turned out to be a complete bluff designed to make it look like they still have the vaguest idea what they're doing. Russia's economy and enormous social issues, and China's preference for internal stability and friendly trading relations, make a new Red Army made of steel marching across the world essentially impossible. We don't need this, and you shouldn't help prop up the for-profit industry of death that certain elements around the world are trying to perpetuate for their own benefit. You did the right thing.

Jon

Autonomous drones that can be used to harm or kill humans are orders of magnitude less complex than a space program. This technology is something China prioritizes. I think you're wrong to dismiss them as not being up to the task. This is something a dedicated individual software developer could do on their own. China is certainly capable.
