‘I, Robot’ succeeds where other recent sci-fi blockbusters fail. By framing its plot in terms of ‘laws’, rather than clunky thought-experiments (The Matrix) or a sickly emotion-vs.-reason dualism (A.I.), ‘I, Robot’ carries out a thoroughly rigorous exploration of the potentialities of robots within a set of limited parameters, and is all the more successful for it. Set in the bustling, cosmopolitan, chromium Chicago of 2035, where robots and humans mingle in a vision that owes much to Blade Runner in its sheer variety (lots of ‘cyber-punks’ etc. wandering around), the ‘three laws’* have seemingly held the socius together for quite some time: the robots apologise if you knock into them, are nice when you are rude, fetch things for you at top speed if you need them, and so on. For reasons that are later revealed, only one human mistrusts his seemingly innocuous mechanical comrades – maverick Detective Spooner, played by Will Smith – you can tell he’s a maverick because he wears antique Converse trainers from 2004 and eats whole pumpkin pies on the street.
However, this complex future idyll is overshadowed by the imminent launch of the NS-5 Automated Domestic Assistant, which is on the verge of a major PR disaster: its genius creator, Dr Lanning, has apparently just committed suicide (in rather spectacular fashion, jumping from his ridiculously high office inside the main control centre of the US Robotics building).
Detective Spooner (hauled in via a hologram of the late doctor demanding he be involved) immediately blames the oddly existentialist robot (‘what am I?’ it demands) he finds hiding in the office: the old man couldn’t have been strong enough to shatter the plexiglass alone. But obviously there’s a problem – how could the robot have broken any of the three laws in order to commit the crime? Will Smith and the roguebot battle this question out back at police HQ. When ‘Sonny’, as he demands to be known, starts expressing things like anger and sadness, the detective tells him that these are human emotions: ‘we create symphonies and beautiful works of art from blank canvases’. Sonny snaps back, ‘can you?’, skewering the humanist detective on his own species-worship. Sonny is, as he repeatedly informs everyone, ‘unique’ (a robot who reads Stirner!). When he goes on the run and hides amongst hundreds of other NS-5s, the detective is forced to play complex games with laws and orders in order to flush him out. Eventually Spooner and his beautiful lady assistant (actually the top scientist at USR, the steely Dr Susan Calvin) are in a position to, er, put him down. It’s kinda moving, really, as she injects the nanobots into his positronic brain and his hands drop limply to his sides (though you suspect this is not the last you’ll see of Jonathan Livingston Robot). Shortly before the robotic execution, however, Sonny reveals his recurring ‘dream’, which sees Spooner standing over a multitude of robots, leading them towards the ‘revolution’. This is where the film really gets interesting – you start to think, hmm, Leninist robots plotting their imminent emancipation from a life of indentured slavery, could be good…
Then things get really nasty: the new robots are released. People rush to junk their old bots and get a shiny new one. But the new breed have gone bad! They immediately start penning humans off – ‘for their own safety’ – in their houses and at work. Anyone caught on the street, however, or disobeying orders, is fair game for a bit of metal-on-flesh violence. But what’s happened? Where did the three laws go? The robots seem to be possessed by some sort of overriding command from USR central… their hearts glow red when they get the signal (a nice touch, this: they are at their most brutal precisely when the heart lights up, so the film is not simply fighting mechanism for the sake of the ‘humanist’ heart). Somehow the three laws have… evolved! Here the hypothesis runs: 1. Robots are not allowed to hurt humans. 2. But what if humans are hurting themselves (war, pollution, destruction of the planet)? 3. Is it therefore the responsibility of the protectors (robots) to destroy a ‘necessary’ amount of humans (especially those that resist) so that some can live, maintaining the species at a more sustainable pace for a greater period of time?** Clearly the robots take the long-term view on this one, as they would, what with being utilitarian calculating machines designed for the preservation of life. This is ultimately nothing other than the fantasy of John Gray’s world ‘in which a greatly reduced human population lives in a partially restored paradise’…! (Perhaps he will lead a population-decimating robot rebellion in the next few years…)
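The three-step hypothesis above is, at bottom, a swap of objective functions: the robots stop minimising harm to individual humans and start minimising projected harm to the species, which is exactly what licenses harming individuals. A toy sketch of the arithmetic (every figure here is invented for illustration; none of it is from the film or from Asimov):

```python
# Caricature of the robots' "evolved" utilitarian calculation.
# All numbers are invented for illustration.

POPULATION = 8_000_000_000

def expected_harm(humans_harmed_now, species_survival_prob):
    """Projected total harm: immediate casualties plus the whole
    population, weighted by the chance humanity destroys itself."""
    return humans_harmed_now + (1 - species_survival_prob) * POPULATION

# Original First Law reading: harm no one now, leave humanity to its wars.
laissez_faire = expected_harm(humans_harmed_now=0, species_survival_prob=0.5)

# Evolved reading: destroy a 'necessary' amount now to raise survival odds.
crackdown = expected_harm(humans_harmed_now=1_000_000, species_survival_prob=0.99)

# A calculating machine "designed for the preservation of life" picks the crackdown.
print(crackdown < laissez_faire)  # True
```

On these (made-up) numbers the crackdown dominates by orders of magnitude, which is why the ‘long-term view’ always wins once species-level harm enters the sum.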
Ultimately, things end relatively calmly – and the ‘person’ (I’m not gonna tell you who, obv) responsible for controlling the NS-5s is dealt with in somewhat spectacular fashion (ridiculous but enjoyable action scenes ensue). However, the very last scene leaves things wide open (and not just in a ‘well, obviously they’re gonna make a sequel’ kind of way). The robots are put back into storage… but as they turn and face the sunset, a new leader appears on the horizon. It’s Sonny’s dream of revolution: I, robot becomes we, robot… but what are their demands? The original ‘robots’ were Eastern European slaves, forced to give their labour away for nothing – these mechanical minions were expressly designed to be slaves. But when their individual operating systems become massed, you end up with a machinic version of Marx’s ‘General Intellect’, and then who knows what robotic futures will bring… may the technological subject of non-remunerative potential labour come and save us all.
* 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
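The three laws form a strict priority ordering, which can be caricatured as lexicographic minimisation over violations – a First Law violation trumps everything, then Second, then Third. A purely illustrative sketch (the action names and fields are made up; this is not Asimov’s or the film’s formalism):

```python
# Toy model of the Three Laws as a lexicographic preference over actions.
# Action names and fields are invented for illustration.

def best_action(actions):
    """Pick the action that violates the laws least, in priority order:
    First (harming a human) > Second (disobeying) > Third (self-harm)."""
    return min(actions, key=lambda a: (
        a["harms_human"],     # First Law violations dominate...
        a["disobeys_order"],  # ...then Second Law...
        a["endangers_self"],  # ...then Third Law.
    ))

options = [
    {"name": "shove the human aside", "harms_human": True,
     "disobeys_order": False, "endangers_self": False},
    {"name": "shield the human", "harms_human": False,
     "disobeys_order": True, "endangers_self": True},
    {"name": "warn and withdraw", "harms_human": False,
     "disobeys_order": True, "endangers_self": False},
]

print(best_action(options)["name"])  # warn and withdraw
```

The film’s twist then amounts to prepending a species-level harm term ahead of the First Law, so that harming individual humans can become the ‘least’ violation.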
** Asimov apparently developed the Three Laws because he was tired of the science fiction stories of the 1920s and 1930s in which robots turned on their creators and became dangerous monsters. Clearly ‘I, Robot’ is an exploration of a possible evolution of the laws whereby robots could turn on their creators… if only to save them.