Monday 18 February 2013

Game of Drones


I was quite struck last week by the video we watched of Big Dog, the robot developed by Boston Dynamics with DARPA funding to act as a pack mule for armed forces in combat. Big Dog is a sign of the major strides being made in robotics, and all of us who had not seen it in action were suitably impressed, not least by the way Big Dog seemed to be perfectly in tune with the background music.

When I went to look it up again on YouTube during the week, the video I found did not have any background music, and so I heard its actual sound, which, to quote Dylan Moran, was a little “like a typewriter eating tinfoil being kicked down the stairs”. One of the top comments on the video posed the question: “Where have I heard that sound before? Oh yeah – my deepest darkest nightmares”.

That comment really struck a chord with me. For me it showcased a fear of new technology: that these advances could become a Frankenstein’s monster we can no longer control. Are these fears legitimate, or are they just another example of the fear of change and the unknown? Many people feared air travel for a long time, and at the time of the Hindenburg disaster, for example, a strong case could have been made for ending the aviation industry. But today these incidents are long forgotten, and the vast majority of people would agree that widespread, efficient, safe and affordable air travel has made the world a far better and more accessible place.

When we were told that Big Dog was funded by DARPA, it was quickly pointed out as evidence that they have done at least some good in their research efforts. Impressive as Big Dog is, the cynical side of me thinks of the 3,000–5,000 people reported by the Bureau of Investigative Journalism to have been killed in Pakistan, Yemen and Somalia in recent years by unmanned drones like the one in the video above. This technology seems to be leading ultimately to the development of unmanned, fully autonomous weapons systems, as outlined here. Human Rights Watch has called for pre-emptive bans on any such weapons “before killer robots cross the line from science fiction to feasibility”.

Such weapons truly are the stuff of nightmares, and all of this raises a fundamental question for the scientific community. These advances are both frightening and fascinating, so are all such advances to be seen as positive according to scientific principles? It’s a difficult question, and I’m interested to hear the perspectives of those with more expertise in Artificial Intelligence and Robotics.

9 comments:

  1. I frequently have a look at DARPA's latest inventions as they're always innovative and awfully evil. A few examples:
    - Laser-guided bullet: http://www.bbc.co.uk/news/technology-16810107
    - Electromagnetic weapons: http://en.wikipedia.org/wiki/Electromagnetic_weapon
    - The controversial "flesh-eating" robot: http://www.tomsguide.com/us/Military-Robot-EATR-Flesh-Eating,news-4249.html

    I think the fear of robots is totally legitimate, as those machines are so complex that we cannot establish their intentions with a simple look at them. Moreover, I think most people are unconsciously afraid of the fact that robots have no feelings and are therefore able to be entirely rational, which can make us feel inferior. Artificial Intelligence is a field that is still quite young and, as with all new sciences, people tend to think it is magical when they do not understand it.

  2. I want to begin by quoting the Frenchman: "I think the fear of robots is totally legitimate" :P

    Anyway, I think the other side of this argument must also be considered. For example, one of the original purposes of robotic drones was the eradication of minefields; these drones have saved countless lives, and even more limbs. True, given time (and perhaps Borg intervention) they could rebel and things could get VERY bad for humanity, and I suppose fear of that situation is fair. However, just because it is possible does not mean I'll lose sleep over it.

    Replies
    1. Ruairi's point isn't that such robots might rebel and attack their masters, but rather that the technology itself has horrific aspects. The horror is that self-sufficient, unmanned robots might exist and be deployed at all, capable of wandering around extinguishing life without concern or thought: and this could certainly happen without any "rebellion".

      Human Rights Watch, as linked to by Ruairi, put it well:

      "fully autonomous weapons [..] would inherently lack human qualities that provide legal and non-legal checks on the killing of civilians"

  3. I suppose another risk, rather than robot rebellion, is that autonomous weapons will simply make military interventions very cheap, in terms of casualties, for the side that possesses them. This would lower inhibitions against belligerence, particularly on the part of democratic regimes that require popular support for legitimacy. If, for example, there are no more Young American Boys coming home in body bags thanks to autonomous weapons, then why not send a few terminators or drones in to blow up the bad guys in their spider holes whenever a (possibly dodgy) tip is received? Will the voters really care? I suppose something like this lies behind what is happening in Pakistan right now.

    Not sure I fully buy Human Rights Watch's argument though. 'Human qualities' may provide checks on the killing of civilians sometimes, but aren't they also the stimulus for hot-headed massacres and other forms of unbridled savagery?

  4. Yes, I'm not really concerned with a robot rebellion (robotic vacuum cleaners could equally rebel), and I'm not trying to stoke a debate on American foreign policy either. The question is whether we should always support scientific advances. In this example the consequences of these advances look potentially horrific, but should we remain open to the possibility of positive spin-offs, like the minefield-clearing drones Travis alluded to? What if the money being spent on these projects by DARPA leads to some kind of breakthrough in robotics that could revolutionise crop production, healthcare or some other noble cause?

  5. Indeed, the military funds a vast amount of American research. My own PhD was funded by the American Navy, under a project concerned with the analysis of temporal patterns. No possible military exploitation followed, as far as I could see. A relatively small amount of what they fund is related to combat and weaponry, and that is, unsurprisingly, classified. Most of the research they fund is public, however.

  6. All technological advances have a dark side, unfortunately; such is the twisted ingenuity of mankind.
    It is worthwhile to attempt to foresee and pre-emptively arrest the more insidious aspects of research before they do serious harm, as HRW attempt to do.

    As Hugh says above, their argument in this case may be suspect: it might be said to boil down to a request that, if people are to be killed, they be killed because some human agent really wanted them dead, rather than because something was programmed to kill them.

  7. I think that drone autonomy, while chilling, is unlikely. The future I predict is more along the lines of militarised forces fighting unmanned wars, where the winner is ultimately the side that is better equipped technologically and can afford to endure longer. Would drones vs. drones be a more civil form of altercation? More of a return to the economic wars of the past.

  8. Here is an article I came across today in which NY Mayor Bloomberg talks about the possibility of "domestic drones":

    http://news.yahoo.com/blogs/ticket/bloomberg-calls-domestic-drones-scary-inevitable-154106406--politics.html
