Tuesday, February 26, 2013

Autonomous Weapons

Progress in the fields of robotics, systems engineering, manufacturing, energy production, and even computer engineering and computer science is nothing short of spectacular.  Our everyday lives reap the rewards of an ever-improving lifestyle afforded by automation.

We can watch amazing 3D animated movies from Pixar/Disney, we can install and use prosthetic limbs, we have smartphones, and many other wondrous applications of technology.  There is one other industry that also benefits from the direct application of technology: the war industry.

What does this mean and how does it impact civilization?

For as long as humans have existed, we have been territorial, so much so that we have gotten into conflicts with one another for various reasons throughout the ages.  At the end of the day, though, there was almost always some winner and some loser.

We started with bare fists, then moved on to tools and machines.  This last step brought a whole new, and potentially devastating, aspect to war and conflict: to what extent can we damage and destroy each other, and potentially the planet itself?

These days, it is common to hear about collateral damage - the "accidental" damage caused to unintended targets (usually civilians).  This almost always means accidental casualties in the form of human lives, or damage to unrelated structures and/or services.  With the use of technology, our destructive capacity has increased well beyond where it stood in humanity's early days.

During recent conflicts, a spotlight has been put on covert operations ("blowback"), shows of force, nuclear deterrence, terrorism, and a whole lot of other scary stuff.  One thing that made traditional war mongering unappealing to a domestic populace was the political cost of human lives -- no politician wants to be seen as the person who killed off a generation of their own "sons and daughters" in some foreign war.  While the ideas of patriotism hold true in any population supporting the current aristocracy, death brings a quick reminder that any leader is working on borrowed time before a population grows tired of their war endeavors.

Fortunately for the political machine, at least in the United States, the civilian population is somewhat removed from, and apathetic toward, the political throes of unending war (though they certainly carry the burden of funding said war efforts).  The US has been involved in the Middle East in combat/support roles nearly non-stop for over two decades, and a lot of regular people just don't care -- unless their friends and family are dying or getting maimed.

Even saying what exactly we're doing over there is tough, because it all depends on who's defining "combat".  What does that mean?  If we listen to politicians, they claim combat is over and now we're there to be everyone's friends.  The reality is, combat still happens between our guys and their guys every day.  Who's the winner, who's the loser?  War has changed, just as our weapons have.  The line has blurred between an enemy combatant and a kid holding a machine gun.  Who's defending what?  Why are we there anyways?

I ask these questions because in the realm of "Autonomous Weapons", these sorts of questions could be decided by software.  Currently, many machines are remote controlled, still directly influenced by some sort of human "pilot".  They may be Predator drones or mine-clearing robots; it doesn't matter, a human is still controlling their actions directly and would be more or less responsible for their actions or misdeeds.  What would happen if an automated weapon performed a misdeed?  Who's to blame, the software programmer?

Our human pilots [attempt] to follow certain laws of combat (which also get blurry), or at least certain orders from their chain of command (in the military, anyways).  This doesn't even touch on the current privatization of our military capacities to mercenaries ("privatized security firms"), as that brings a whole other aspect to war worthy of its own tirade.

War is also a blame game these days: who can shirk responsibility onto someone else the longest.  This is the world of politics, and politics is what runs the war game.  People will believe what THEY want us to believe, because we don't know enough about what happens in the world to know any better.

So, let's get back to the topic at hand.  With all that said, how do we define an automated weapon?  One that can "think" on its own?  As a computer scientist, I deal with software in many aspects.  Software would be responsible for the decision-making process of any such automated weapon.  In academia, I saw many complicated proofs that basically asserted that computers (the "brains" of any automated weapon) operate with logic, yet still require constant input, either from the environment or from a programmer, to assert facts about the world before they can make "logical" connections.
Now, I'm also a fan of philosophy and will admit that humans almost never act in a 100% logical fashion, and this is what sets "US" apart from "THEM".  What we're basically proposing here is that a killer robot would be able to understand abstract ideas and make automated decisions without the need for human input -- and then we put that same robot into a position of lethal authority over perceived enemy combatants.
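
To make that point concrete, here's a toy sketch in Python (my own illustration, not anything from a real system).  The weapon names and target fields are made up; the point is that every "fact" the software reasons over had to be asserted by a programmer ahead of time, and nothing outside those assertions exists to it.

# Toy rule-based "combatant" classifier.  Every fact it reasons over was
# hard-coded by a programmer; nothing here is learned or understood.
KNOWN_WEAPONS = {"rifle", "rpg"}  # hypothetical, hand-asserted facts

def classify(target):
    # target is a dict describing what the sensors report
    if target.get("carrying") in KNOWN_WEAPONS:
        return "combatant"
    return "unknown"

print(classify({"carrying": "rifle"}))   # -> combatant
print(classify({"carrying": "camera"}))  # -> unknown, even with the same silhouette

The program only "knows" what someone bothered to type in; everything else is invisible to it.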

No problem, we think; we'll just program Asimov's Three Laws of Robotics into all death-dealing machines and we'll be OK!

Wrong.

Technically speaking, an automated weapon doesn't even have to be complicated.  It could just be a gun connected to a motion detector that fires when something moves.  There is no complicated AI here, just an "if then else" type of situation (the same construct practically all software uses).  The pseudocode would be:
if (motionDetected AND hasHumanForm(target)) fireLasers(target)
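
For the curious, a slightly less hand-wavy version of that one-liner might look like the Python below.  It's a minimal sketch under heavy assumptions: read_motion_sensor, looks_human, and fire_at are hypothetical placeholders for whatever hardware API a real system would expose.  The point is only that the entire "decision" is a single conditional.

import time

def read_motion_sensor():
    # Hypothetical placeholder: return a detected target, or None.
    return None

def looks_human(target):
    # Hypothetical placeholder: a crude shape check standing in for vision.
    return False

def fire_at(target):
    # Hypothetical placeholder: actuate the weapon.
    print("firing at", target)

def sentry_loop():
    # Would run forever if called; shown here as a sketch only.
    while True:
        target = read_motion_sensor()
        # The entire "decision" is one if-statement -- no judgment, no context.
        if target is not None and looks_human(target):
            fire_at(target)
        time.sleep(0.1)  # poll the sensor ten times a second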

At any rate, what all the excitement is about is essentially replacing actual ground soldiers with robots.  How could this happen?  You can bet your money (and everyone else's money, in the form of taxes) that our government is looking for a way to make it happen.  An automated military would save a lot of political face by removing the human cost of war, meaning we would see a lot more inclination to use force to get our way.  It would be somewhat easy to just crank out 100,000 deathbots in a factory and deploy them to, say, Africa, to claim all their diamond mines, because we can.  No soldiers needed anymore, and because nobody is dying (none of our guys, anyways), nobody cares!

One of the other "benefits" of deathbots would be that we would no longer need such a large standing military.  Think of all the money saved in health care costs, life insurance payouts, training costs, etc.!  In fact, this would be so amazing that we wouldn't need veterans' benefits anymore, because the veteran population would dwindle to nothing through attrition!  And because enlistment rates would plummet, we really wouldn't need things like the GI Bill (which was arguably a major factor in creating the current middle class in the USA)... the list could go on.

I would like to say I'm joking, but these would be perceived benefits, depending on who you are in the political machine.

In my next rant about this, I'll go into the software aspects of how automated deathbots would theoretically work, or not work.  Until then, enjoy Asimov's Three Laws of Robotics (a rule 0 exists as well):

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Oh yeah, and here's an article if you haven't gotten enough reading yet.

http://www.guardian.co.uk/technology/2013/feb/23/stop-killer-robots


