Tuesday 31 January 2012

Why would robots be evil?

This question was prompted by this link forwarded to me by a friend. The link itself is a short film about a robot-unfriendly world in which a shopkeeper is beset by a diminutive robot thief. It's really quite sweet and worth a look, but it only scratches the surface when it comes to answering the question of the title.

It's a question of ethics relating to non-human actors, although in the case of robots it's more of a grey area between the automated machinery of today and hypothetical thinking machines. Robot ethics is something I've often thought about after reading so much of the likes of Isaac Asimov. While I have some strong views on the subject, they are necessarily quite complicated, and I haven't fully explored them in text before.

The robots that exist today aren't really that relevant to discussions of machine morality yet. As far as I know most physical robots do not display emergent behaviour, and a large amount of time is dedicated to minimizing the chances of it altogether. They are programmed to behave in a very specific manner, generally acting as dumb automatons with as much free will as any other simple tool, say an electric drill or chainsaw. They can certainly be dangerous, but responsibility for their actions rests entirely on their operators and programmers. They can be programmed to have very complex responses though, such as making the finest measurements of their environments, allowing an artificial hand to grasp an egg without cracking it or an industrial robot to come to a sudden stop if it detects a person entering its zone of operation. More impressive still are the likes of self-driving cars, which are stunning in their ability to identify things in their environment almost as well as a human driver (probably better soon).

While impressive, these skills are limiters on their actions in order to make them safer. These small abilities of awareness still leave them as automated machines, and they have to be meticulously coded beforehand in an attempt to foresee all eventualities. If such a robot did cause harm to a person, it would be due to a failing on the programmer's part, not the machine itself.
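To make that concrete, here is a minimal sketch of what such a hard-coded safety limiter might look like. Everything in it is invented for illustration (the sensor function, the threshold, the control loop); I'm not writing against any real robot API:

```python
# A minimal sketch of a hard-coded safety limiter; the sensor and
# motor calls here are invented for illustration, not a real robot API.

SAFE_DISTANCE_M = 1.5  # threshold fixed by the programmer in advance


def read_proximity_sensor(tick):
    # Stand-in for a real sensor: simulate a person walking closer.
    return 4.0 - 0.5 * tick


def control_loop():
    for tick in range(10):
        distance = read_proximity_sensor(tick)
        if distance < SAFE_DISTANCE_M:
            # The robot has no idea *why* it must stop; the rule was
            # written long before runtime by a human who foresaw
            # this particular eventuality.
            print(f"tick {tick}: object at {distance:.1f} m, halting")
            return
        print(f"tick {tick}: clear at {distance:.1f} m, continuing task")


control_loop()
```

The point is that the "awareness" lives entirely in a rule a person wrote; the machine is no wiser for obeying it.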

Physical robots cannot be allowed the opportunity to do unexpected things, because they are not yet intelligent enough to know fully what they are capable of and what the consequences might be. Self-awareness is needed before a robot can be trusted to work with no boundaries. It needs to know the limits of its data-gathering abilities and to be suspicious of uncertain information, because a false negative could be fatal. The physical world is so complex that the amount of information a robot would have to be aware of is staggering, even more so if its actions are wholly deterministic and dependent on its code. Someone has to write out the method to identify different materials and the procedure to handle them.
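As a toy illustration of what that hand-coding looks like in practice (the materials, grip forces and confidence threshold below are all made-up numbers), every material and every doubtful sensor reading has to be anticipated explicitly:

```python
# Toy illustration of hand-coded material handling. The materials,
# grip forces and confidence threshold are all invented numbers.

GRIP_FORCE_N = {"egg": 1.0, "steel": 40.0, "glass": 5.0}
MIN_CONFIDENCE = 0.9  # below this, treat the identification as suspect


def handle_object(material, confidence):
    if confidence < MIN_CONFIDENCE or material not in GRIP_FORCE_N:
        # Uncertain or unknown: the safe default has to be coded in,
        # because acting on a false identification could be destructive.
        print(f"refusing to grasp: {material!r} at {confidence:.0%} confidence")
        return
    print(f"grasping {material} with {GRIP_FORCE_N[material]} N of force")


handle_object("egg", 0.97)      # known material, confident reading
handle_object("egg", 0.60)      # too uncertain: refuse
handle_object("ceramic", 0.95)  # a material nobody wrote a rule for
```

Scale that dictionary up to every object a robot might meet in the world and the staggering size of the problem becomes obvious.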

One place where these issues do not have to be taken into consideration is artificial environments simulated on a computer. This kind of practice is fascinating: it allows a robotic intelligence to grow and learn without needing an expensive physical body with a multitude of fine sensors and motors. All sorts of cool things are being done with this: robotic children who learn from their actions, natural selection applied to robots with evolving attributes to tackle simulated tasks, artificial intelligences able to best skilled humans at games or just hold a conversation. I imagine that any robot complex enough to be described as thinking will be produced in a virtual environment.
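To give a flavour of the natural-selection approach, here is a deliberately toy sketch: each "robot" is reduced to a single number (its attribute), and fitness is simply closeness to some target behaviour. Real evolutionary robotics evolves whole controllers, but the keep-the-fittest-and-mutate loop is the same shape:

```python
import random

# Deliberately toy evolutionary sketch: each "robot" is one number
# (its attribute) and fitness is closeness to a target behaviour.
TARGET = 42.0


def fitness(attribute):
    return -abs(attribute - TARGET)  # higher is better


population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(50):
    # Keep the fittest half, refill with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    children = [a + random.gauss(0, 1.0) for a in survivors]
    population = survivors + children

print(f"best attribute after evolution: {max(population, key=fitness):.2f}")
```

Nobody writes the winning behaviour down; it is selected for, which is exactly what makes simulated worlds such a cheap nursery.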

One advantage of a simulated environment is that it can be customized to have as little or as much detail as the programmer/user likes. It would be a lot easier to teach a robot to walk if the environment were simple flat shapes. This also means no worries over imperfect sensors: all the data about the robot's surroundings can be given to exact precision by the program running the simulation. Perhaps robots will become able to operate in the real world through our slowly improving ability to simulate the world, until the two are near indistinguishable. This seems a more feasible challenge than starting with physical objects, mostly due to the opportunity to introduce complexity slowly.
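The sensor point is worth making concrete. In a simulation the program *is* the environment, so it can hand the robot exact ground truth; a physical sensor only ever offers a noisy estimate of the same quantity. A made-up side-by-side illustration:

```python
import random

# Made-up illustration: a simulator knows every position exactly,
# while a physical sensor returns only a noisy estimate of it.


class SimulatedWorld:
    def __init__(self):
        self.obstacle_x = 3.2  # the simulator knows this exactly

    def sense_obstacle(self):
        return self.obstacle_x  # perfect, free ground truth


class PhysicalSensor:
    def __init__(self, true_x):
        self.true_x = true_x

    def sense_obstacle(self):
        return self.true_x + random.gauss(0, 0.3)  # noisy estimate


sim = SimulatedWorld()
real = PhysicalSensor(true_x=3.2)
print("simulated reading:", sim.sense_obstacle())
print("physical readings:", [round(real.sense_obstacle(), 2) for _ in range(3)])
```

Dialling noise like that up gradually is one way the gap between simulation and reality could be closed.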

I may be muddying the line between robot and artificial intelligence here; in my opinion the two are effectively the same, only the former often has a discrete physical form. Even then you could hypothetically have a robotic worker who completes a task without any real form, say a banker or factory administrator existing simply as a process on a mainframe. To act in a morally good or bad way, I would posit that a robot would require an advanced artificial intelligence, not necessarily linked to a physical form.

One area in which robots are unquestionably used in an environment filled with ethical implications is unmanned drones in warfare. Most uses of autonomous drones have been for surveillance, though I think there have been instances of using them as weapons platforms. As far as I know, the ones used to harm humans are all remotely operated, so the morality of the action rests on the operator. There isn't anything stopping militaries from using autonomous systems which acquire and dispatch targets independently. I don't like it, but I imagine that when, not if, it becomes possible, responsibility will lie with the programmer of the identification code and the soldiers who activate it. The robot will still be a directed tool.

Well, on that dark note I thought I'd stop for now. I've only considered current-day robotics and argued that today's robots don't yet have the intellect to commit moral acts. That leads nicely onto the next subject: what happens when, or if, they do? I have just as many thoughts about that.


2 comments:

  1. http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/

  2. Wow, really good link there Aveek, might have to investigate more by Colin Allen. He seems to know much more than I do, so much more that I might need to read up a bit more on the topic before I write the next part.
