drawing the line at autonomous war bots

July 31, 2009

I’m usually not one for hemming in research in any field or direction, even when a direction holds potential ethical pitfalls (human cloning, for example). However, the attempt to develop autonomous, ethical robots for use in any wartime situation crosses the line completely.

I should distinguish between autonomous and remote-controlled robots. Autonomous robots receive no human input for their direct actions; they are capable of making decisions for themselves and then acting on them. The military already uses remote-controlled robots for handling IEDs and for scouting. Such technology is merely another extension of a human controller’s will and (in my opinion) completely ethical.

In an interview with h+ magazine, though, Ronald Arkin of Georgia Tech discusses creating an “ethical governor” to ensure that future autonomous robots don’t break the “rules of war.” I can see the allure of having robots on the battlefield: they’re expendable, entirely rational, and have faster reaction times than humans. Here are the “rules” that Arkin suggests (after the list, I’ve sketched what a governor enforcing them would actually have to compute):

1. Engage and neutralize targets as combatants according to the ROE.
2. Return fire with fire proportionately.
3. Minimize collateral damage — intentionally minimize harm to noncombatants.
4. If uncertain, invoke tactical maneuvers to reassess combatant status.
5. Recognize surrender and hold POWs until captured by human forces.
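
Just to make the scale of the problem concrete, here is a minimal, purely hypothetical sketch of what an “ethical governor” enforcing these rules might look like as a decision gate. Every name, type, and threshold below is my own invention for illustration; nothing here reflects Arkin’s actual architecture.

```python
# Hypothetical sketch of an "ethical governor" as a decision gate.
# All names and thresholds are invented for illustration; each predicate
# below (combatant classification, surrender detection, collateral
# estimation) hides a perception problem that is nowhere near solved.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ENGAGE = auto()
    HOLD_FIRE = auto()
    REASSESS = auto()           # Rule 4: maneuver and re-evaluate status
    DETAIN_FOR_HUMANS = auto()  # Rule 5: hold POWs for human forces


@dataclass
class Target:
    combatant_confidence: float  # 0.0-1.0, from some (unsolved) classifier
    is_surrendering: bool        # from some (unsolved) gesture recognizer
    expected_collateral: float   # estimated harm to noncombatants
    incoming_fire: float         # magnitude of fire received from target


def ethical_governor(target: Target, proposed_fire: float) -> Action:
    if target.is_surrendering:                   # Rule 5: recognize surrender
        return Action.DETAIN_FOR_HUMANS
    if target.combatant_confidence < 0.95:       # Rule 4: uncertain? reassess
        return Action.REASSESS
    if target.expected_collateral > 0.0:         # Rule 3: minimize collateral
        return Action.HOLD_FIRE
    if proposed_fire > target.incoming_fire:     # Rule 2: proportionality
        return Action.HOLD_FIRE
    return Action.ENGAGE                         # Rule 1: engage per ROE
```

Notice that every branch depends on a perception estimate (combatant confidence, surrender detection, expected collateral) that nobody knows how to produce reliably. The control logic is trivial; the inputs are the entire problem.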

We are so, so far from having any kind of autonomous robot that can intelligently follow these rules that they’re not even worth spending time on. We would need a solid model of human intelligence up and running before we could even think about creating a robot that can discern and apply them. If human soldiers have trouble distinguishing between “enemy combatants” and civilians, how in the world will a robot do it?

This type of research falls right in line with Reagan’s Star Wars: dubious effectiveness (and “dubious” is probably too generous) at absurd cost. The problem is that too many of us have naive fantasies of robots fighting our wars for us. Let’s grow up and spend our resources more wisely than that, eh?

When we reach the singularity and finally develop a robust artificial intelligence that parallels our own, which, I guarantee, will not happen in any of our lifetimes (though Ray Kurzweil would have you believe otherwise), then we can start thinking about the rules for our warrior bots.


One Response to “drawing the line at autonomous war bots”


  1. I certainly share your concern about increased automation in our increasingly ignoble wars of aggression, but I’m afraid I don’t share your faith in the distinction drawn between functionally autonomous machines and remote-controlled machines relying on human input. Both, it seems, have equal trouble sparing innocent civilians. The Predator drone attacks on suspected militants in Waziristan, for example, boast a whopping 6 percent kill ratio of militants to noncombatants since they began in January 2006. These lovely pilotless planes, wonders of modern technology, are controlled by technicians in Nevada, a human input sadly inaccurate in its efforts. Using bots to dismantle IEDs is one thing; razing entire villages of Pushtuns is quite another. That is by way of saying that perhaps our military engineers are not as far off as you suggest in building the unthinkable and actually using it in theater.

