September 02, 2004

On I, Robot and Machine Intelligence

Steamrollering into the cyberpunk forum.... Hello!

It is common in a sci-fi storyline for robots to 'break their programming'. The recent film I, Robot presumed to twist this conceit by having the robots do evil things without breaking their programming, much along the lines of the Djinn in the trashy horror film "Wishmaster" (where an evil genie deliberately misinterprets the wishes of his masters, who invariably end up mutilated in some humorous manner). For me, I, Robot was much less fun (in this respect), and served to propagate yet more tedious misconceptions about the nature of both machine and human intelligence. Programming a robot is not as easy as you think.

I will attempt here to use I, Robot to illustrate some misconceptions about the required nature of workable machine intelligence...

Human beings ARE robots. The most hi-tech robots in existence (as far as any of us humans are aware). Even the stupid ones. We've been incrementally designed by the effects of evolution on DNA, constructed from our genetic code via RNA. (Yes, I am very aware that 'evolution' is a problem word, abused and distorted all the time.) Forget your normal associations. Animals are robots constructed from organic matter (carbon is a very useful material - much better than metal, and nature discovered this long, long ago).

Think of a fly trying to get out of a window with a narrow opening. You can see the simplicity of its built-in algorithms. It's pretty much:

# The fly's entire escape plan - roughly, in Python:
while away_from_light and not dead:
    fly_towards_light()

If it weren't for turbulence and other subtle complexities of the physical world it would never get out of that damn window.

This is how all of the computers that we use in everyday life work. Every condition and possibility (or method of describing ranges of possibilities) must be prescribed, or the computer can't do the job, or it gets stuck in a loop (which is what is usually happening when you see "Not Responding" in Windows). If a possible error has not been predicted (e.g. "glass_invented"), then the computer/robot will get stuck.
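To make that concrete, here's a toy sketch - every name and obstacle is invented, nothing here is from the film - of what 'every possibility must be prescribed' actually means. The loop has a branch for each obstacle its programmer predicted, and no way at all to cope with one they didn't:

def perceive():
    return "glass"   # an obstacle the programmer never predicted

def escape(max_steps=3):
    # Bounded here so the sketch terminates; the real fly loops forever.
    for _ in range(max_steps):
        obstacle = perceive()
        if obstacle == "wall":
            print("turn around")
        elif obstacle == "rain":
            print("take shelter")
        else:
            # "glass" was never prescribed, so it falls through to the
            # only default behaviour there is:
            print("fly towards light")   # ...and hit the pane, again

escape()

Nothing inside the program can notice that its default branch is failing; that diagnosis would have to come from outside the loop.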

Intelligence could be defined as an ability to 'step outside the system'. If you were stuck in an underground tunnel and wanted to escape, you might well start a go_towards_light() loop, but if the light turned out to be coming from a tiny grating that you couldn't get through then you'd break out of that algorithm, or modify its structure [e.g. go_towards_light(where_source_size_is_bigger_than_self)]. A classical computer is incapable of making this sort of modification to its own algorithms.
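Here's a rough sketch of that move, with made-up data (a 'light' here is just a name, a distance and a source width). The point isn't the code itself, which is still perfectly classical - it just dramatises the revision:

def escape_tunnel(lights, fits_through):
    # First pass: the naive go_towards_light() loop - head for the
    # nearest light source, full stop.
    target = min(lights, key=lambda l: l["distance"])
    if fits_through(target):
        return target
    # 'Stepping outside the system': the algorithm failed, so revise
    # its structure - only chase lights whose source is bigger than us.
    viable = [l for l in lights if fits_through(l)]
    return min(viable, key=lambda l: l["distance"]) if viable else None

lights = [{"name": "grating", "distance": 3, "width": 0.1},
          {"name": "tunnel mouth", "distance": 90, "width": 2.0}]
fits = lambda l: l["width"] > 0.5    # bigger than self (0.5, say)
print(escape_tunnel(lights, fits))   # -> the tunnel mouth, not the grating

Of course, written down like this the revision is just another pre-programmed branch - which is exactly the point: a classical machine can only 'step outside' a system if its programmer already stepped outside it first.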

A cursory glance at the robots in I, Robot, even the OLD ones, shows that they are not based on classical computing. They can learn and execute commands that have not been hard-coded into their operating systems. The robot running to fetch a bag for its owner could not have had all the procedures pre-programmed for THAT EXACT TASK.

More to the point: the ability to understand plain English and respond correctly - the ability to act on language-based (as opposed to button-pressing) instructions - is a plain sign of some sort of intelligence. You need the fuzziest of fuzzy logic and the most dynamic of learning structures to cope with the massive variety of accents and unfamiliar words, and to know what questions to ask when you don't understand - or you'll go straight back to the shop as your owner asks for her money back.
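To see why, here's a deliberately rigid strawman (the command table and phrasings are invented): a button-pressing-style 'parser' that matches utterances exactly against a table of prescribed commands. Anything off-script gets nothing:

def rigid_parser(utterance):
    # Exact string match against prescribed commands - no fuzz at all.
    COMMANDS = {"fetch my bag": "fetch_bag", "stop": "halt"}
    return COMMANDS.get(utterance)   # anything unanticipated returns None

print(rigid_parser("fetch my bag"))       # -> 'fetch_bag'
print(rigid_parser("grab me bag, love"))  # -> None: back to the shop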

If the holographic recording of the dead scientist ("You must ask the right questions") represents a classical computer (it could have conceivably been pre-programmed to expect Will Smith's voice and ways of asking questions, though it would still have needed a certain amount of fuzzy logic to parse the language so easily), then the millions of robots living in the homes of all kinds of people represent this second-generation approach to computing. They are intelligent, in the same way that humans (and dogs and dolphins) are intelligent: they can learn from their environment, and they can step-outside-the-system.

You can't program a human with a set of static algorithms. Just try telling a child to repeatedly attempt an impossible task - they'll give up very soon.

Therefore, the implementation of Asimov's 3 laws of robotics is FAR FROM TRIVIAL.

So we have to look at HOW HUMANS ARE PROGRAMMED. I posit that we are programmed THROUGH OUR EMOTIONS. To program a second-generation robot, you have to make it feel pain when it tries to disobey one of your rules. This is necessarily a fuzzy method of programming. You can anticipate a much wider range of possibilities by tapping into your robot's perceptual system. The robot perceives that it might be killing a human and the pain-response is triggered. This is how humans work. We know the ambiguities of the perception of wrong-doing only too well.
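A toy sketch of what that might look like - every name, weight and number here is invented, and real emotional programming would be vastly messier. The rule isn't a list of forbidden actions; it's a hard-wired pain term attached to the robot's own (fuzzy) perception, weighed against whatever the robot has learned to value:

def pain(percepts):
    # The fuzzy part: a graded response to perceived harm, not a
    # lookup of prescribed situations.
    return 100.0 * percepts.get("might_harm_human", 0.0)

def choose(actions, perceive, learned_value):
    best, best_score = None, float("-inf")
    for action in actions:
        percepts = perceive(action)      # what doing this might look like
        score = learned_value(action) - pain(percepts)
        if score > best_score:
            best, best_score = action, score
    return best

# Invented usage: swinging the arm is valuable, but it is *perceived*
# as 0.7 likely to harm someone, so the pain term vetoes it.
perceive = lambda a: {"might_harm_human": 0.7 if a == "swing_arm" else 0.0}
learned_value = lambda a: {"swing_arm": 10.0, "wait": 1.0}[a]
print(choose(["swing_arm", "wait"], perceive, learned_value))  # -> 'wait'

Note that the pain term never enumerates situations: anything the perceptual system reads as possible harm gets penalised, however novel the situation.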

I, Robot implicitly assumes, by making Sonny such a sympathetic character, that the set of morals and emotions that human beings have is in some way special - a natural outcome of any instance of intelligence. This is not the case. Our emotions and morals are installed primarily by our DNA, albeit heavily filtered by our (cultural and physical) environment. Schizophrenic tendencies are genetic (by which I mean varying degrees of psychotic, pathological and paranoid tendencies, as well as creative and depressive mentalities). Interestingly, it is by reference to rare schizoid, 'malfunctioning' humans that Will Smith's character defines his own humanity (in the slightly contrived conversation with Sonny in the police station).

But our programming is contingent on its benefits for the replication of DNA in its environment. Our moral philosophies bounce around inside a limited set of possibilities, limited by our pre-programmed feelings that respond to learned perceptual responses. The real range of possible emotions is unlimited. We could just as easily (even ACCIDENTALLY) program one of our second-generation machines with feelings of pleasure on killing a human (in much the way that a malfunctioning human - a psychopath - might take pleasure in such an activity), but only through emotional programming.

Think how impossible it would be to program a robot arm in a factory with the hopelessly abstract constraint that it 'is not to kill any human'. You'd need light sensors, sound sensors - all manner of perceptual systems - then you'd have to program it with responses to each combination of stimuli, and even then it would still get it wrong. You err on the side of caution and it stops all the time. You take an optimistic approach and some stupid factory worker creeps up on it and gets sliced in half before the machine knows what has happened...
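A sketch of that bind, with weights and thresholds I've simply made up: the arm fuses its noisy sensor readings into one 'human nearby?' score and compares it to a cut-off - and no value of the cut-off avoids both failure modes:

def human_probability(light, sound, motion):
    # Crude sensor fusion - the weights are invented for illustration.
    return 0.5 * motion + 0.3 * sound + 0.2 * light

def arm_step(readings, threshold):
    p = human_probability(*readings)
    return "HALT" if p >= threshold else "CUT"

shadow  = (0.6, 0.1, 0.1)   # a shadow flickers: probably nothing
creeper = (0.1, 0.2, 0.4)   # a quiet worker edging closer

print(arm_step(shadow, 0.15), arm_step(creeper, 0.15))  # cautious: HALT, HALT
print(arm_step(creeper, 0.40))                          # optimistic: CUT (!)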

Coming soon: "When the Humans Break Their Programming"

Posted by smunk at September 2, 2004 02:14 PM
Comments

Mike, welcome, this is brilliant.... exactly why I asked you to join da Kru....

I accept everything you are saying about the difficulties of programming the NS-4s and NS-5s, but does the film really suggest that this is an easy matter? Isn't the robot psychiatrist's role partly to facilitate machine learning?

After all, the whole point of the denouement with VIKI etc. presupposes that she at least has learned - even though she is constrained by the three laws herself.

I suppose what I'm suggesting is that the laws aren't programmes as such - more parameters or something???

Posted by: mark at September 2, 2004 03:31 PM

it's good to have some hard science at k-punk at last.

Posted by: singularity at September 4, 2004 01:08 PM