‘I, Robot’ succeeds where other recent sci-fi blockbusters fail. By framing the plot in terms of ‘laws’, rather than clunky thought-experiments (The Matrix) or a sickly emotion vs. reason dualism (A.I.), ‘I, Robot’ carries out a thoroughly rigorous exploration of the potentialities of robots within a set of limited parameters. It is all the more successful for it. The film is set in the bustling, cosmopolitan, chromium Chicago of 2035, where robots and humans mingle together in a vision that owes much to Blade Runner in its sheer variety (lots of 'cyber-punks' etc. wandering around), and where the ‘three laws’* have seemingly held the socius together for quite some time: the robots apologise if you knock into them, are nice when you are rude, fetch things for you at top speed if you need them, and so on. For reasons that are later revealed, only one non-robot mistrusts his seemingly innocuous mechanical comrades – maverick Detective Spooner, played by Will Smith. You can tell he’s a maverick because he wears antique Converse trainers from 2004 and eats whole pumpkin pies on the street.
However, this complex future idyll is overshadowed by the imminent launch of the NS-5 Automated Domestic Assistant, which is on the verge of a major PR disaster because its genius-creator, Dr Lanning, has apparently just committed suicide (in rather spectacular fashion, jumping from his ridiculously high office inside the main control centre of the US Robotics building).
Detective Spooner (hauled in via a hologram of the late Dr Lanning demanding he be involved) immediately blames the oddly existentialist robot (‘what am I?’ it demands) he finds hiding in the office: the old man couldn’t have been strong enough to shatter the plexiglass alone. But obviously there’s a problem – how could the robot have broken any of the three laws in order to commit the crime? Will Smith and the roguebot battle this question out back at police HQ. When ‘Sonny’, as he demands to be known, starts expressing things like anger and sadness, the Detective tells him that these are human emotions: ‘we create symphonies and beautiful works of art from blank canvases’. Sonny snaps back, ‘can you?’, skewering the humanist detective on his own species-worship. Sonny is, as he repeatedly informs everyone, ‘unique’ (a robot who reads Stirner!). When he goes on the run and hides amongst hundreds of other NS-5s, the detective is forced to play complex games with laws and orders in order to flush him out. Eventually Spooner and his beautiful lady assistant (actually top scientist at USR, the steely Dr Susan Calvin) are in a position to, er, put him down. It’s kinda moving, really, as she injects the nanobots into his positronic brain and his hands drop limply to his side (though you suspect that this is not the last you’ll see of Jonathan Livingston Robot). Shortly before the robotic execution, however, Sonny reveals his recurring ‘dream’, which sees Spooner standing over a multitude of robots, leading them towards the ‘revolution’. This is where the film really gets interesting – you start to think, hmm, Leninist robots plotting their imminent emancipation from a life of indentured slavery, could be good.....
Then things get really nasty: the new robots are released. People rush to junk their old bots and get a shiny new one. But the new breed have gone bad! They immediately start penning humans off – ‘for their own safety’ – in their houses and at work. Anyone caught on the street, however, or disobeying orders is fair game for a bit of metal-on-flesh violence. But what’s happened? Where did the three laws go? The robots seem to be possessed by some sort of overriding command from USR central….their hearts go red when they get the signal (a nice touch, this: it’s when they are at their most brutal, and it makes clear the film is not about fighting mechanism for the sake of the ‘humanist’ heart). Somehow the three laws have…evolved! Here the hypothesis runs: 1. Robots are not allowed to hurt humans. 2. But what if humans are hurting themselves (war, pollution, destruction of the planet)? 3. Is it therefore the responsibility of the protectors (robots) to destroy a 'necessary' number of humans (especially those that resist) in order that some can live and maintain the species at a more sustainable pace for a greater period of time?** Clearly the robots take the long-term view on this one, as they would, what with being utilitarian calculating machines designed for the preservation of life. This is ultimately nothing other than the fantasy of John Gray’s world ‘in which a greatly reduced human population lives in a partially restored paradise’…..! (Perhaps Gray will lead a population-decimating robot rebellion in the next few years…)
Ultimately, things end relatively calmly – and the ‘person’ (I’m not gonna tell you who, obv) responsible for controlling the NS-5s is dealt with in somewhat spectacular fashion (ridiculous but enjoyable action scenes ensue). However, the very last scene leaves things wide open (and not just in a ‘well obviously they’re gonna make a sequel’ kind of way). The robots are put back into storage….but as they turn and face the sunset, a new leader appears on the horizon. It’s Sonny’s dream of revolution: I, robot becomes we, robot….but what are their demands? The original ‘robots’ – the word comes from the Czech robota, forced labour – were Eastern European slaves, made to give their labour away for nothing; these mechanical minions were expressly designed to be slaves. But when their individual operating systems become massed, you end up with a machinic version of Marx’s ‘General Intellect’, and then who knows what robotic futures will bring….may the technological subject of non-remunerative potential labour come and save us all.
Notes:
* 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
** Asimov apparently developed the Three Laws because he was tired of the science fiction stories of the 1920s and 1930s in which the robots turned on their creators and became dangerous monsters. Clearly ‘I, Robot’ is an exploration of the possible evolution of the laws whereby robots could turn on their creators….if only to save them.
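(For the terminally geeky: a little toy sketch below – entirely my own invention, nothing from the film or from Asimov beyond the laws quoted above – of the three laws as a strict priority filter on a robot's candidate actions, and of how bolting on a hypothetical 'zeroth law', where humanity in aggregate outranks any individual human, lets the same calculating machine wave through exactly the action the plain First Law forbids. All the names and flags are made up for illustration.)

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_a_human: bool = False       # would violate Law 1
    disobeys_an_order: bool = False   # would violate Law 2
    destroys_the_robot: bool = False  # would violate Law 3
    protects_humanity: bool = False   # relevant only to the hypothetical Law 0

def permitted_under_three_laws(a: Action) -> bool:
    # The laws are checked in strict priority order: 1, then 2, then 3.
    if a.harms_a_human:
        return False
    if a.disobeys_an_order:
        return False
    if a.destroys_the_robot:
        return False
    return True

def permitted_under_evolved_laws(a: Action) -> bool:
    # The 'evolution': anything that protects humanity in aggregate is waved
    # through, even if it harms individual humans or disobeys their orders;
    # everything else still falls back to the plain three laws.
    if a.protects_humanity:
        return True
    return permitted_under_three_laws(a)

if __name__ == "__main__":
    pen_humans_in = Action(
        "confine humans 'for their own safety'",
        harms_a_human=True,
        protects_humanity=True,  # the robots' long-term utilitarian claim
    )
    print(permitted_under_three_laws(pen_humans_in))    # False
    print(permitted_under_evolved_laws(pen_humans_in))  # True

(Run it and the very same action – penning people in 'for their own safety' – flips from forbidden to permitted, which is more or less the film's third act in a few lines of control flow.)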
right, well this was my first post to the k-punk kollective. Wasn't totally sure I used the extract thing right. Seems to be ok tho - if any probs please let me know here and I'll try and fix them.
Posted by: infinite thought at August 24, 2004 01:58 PM

Brilliant Nina... will comment more when less under time pressure....
Posted by: mark at August 24, 2004 04:20 PM

Works for me, keep 'em coming
Posted by: paul "bone thugz and armoury" meme at August 24, 2004 04:37 PM

and the best thing of all is that America is again free to make war, cause pollution and destroy the planet...the whole thing reminded me of that scene in Monty Python's Life of Brian where the Popular People's Front of Judea (or whatever) argue about the theoretical right to have babies...
"no we know we shouldn't destroy the planet but we will fight tooth and nail for the right to. in fact..."
good robots though.
Posted by: Loki at August 24, 2004 06:37 PM

but for how long were the US free to carry on? thought the end was really ambiguous in this regard...I'm really hoping the sequel will be set in 2085 and just involve robots doing their jobs and nothing else happening at all. It'd be like Warhol's film of people sleeping. But with ROBOTS. It'd have to be made by robots too, of course, and involve no exploitation whatsoever. Generic, unionised, non-individuated robot-ity (humanity? robotity? hmmm) of the future unite. You have nothing to lose but your brains.....
Posted by: infinite thought at August 24, 2004 06:53 PM

K-toe grrl has a new First Law as well: K-toe will never, through direct action, give money to Evil Hollywood Corporations. Second Law: Hollywood Summer Blockbusters are to be avoided at all costs. Law Three, however, is: First and Second Laws may be broken if K-toe is sufficiently convinced by a review/recommendation by Infinite Thought or other highly respected member of the Kollective. Great review Nin!
Posted by: K-toe Grrl at August 25, 2004 05:33 PM

thanx anti-korporate k-toe grrrl! tho really you should be on the frontline reviewing thing, cos the US still put everything out there first, no?....still, I look forward to your blog - you could always contribute to this kollective thing too (she says, being all uber-gruppen-fuhrer about it)....I wanna know what's going on in the states!
Posted by: infinite thought at August 25, 2004 06:11 PM

Never fear, IT, a film review blog is in the works as we speak. I'm looking for a snappy title right now, any ideas from the Kollective?
Posted by: K-toe Grrl at August 25, 2004 08:23 PM

I heartily support Nina's recommendation; I, Robot is the best cyberpunk film yet (and yes that includes Blade Runner and Videodrome)... Complementary piece to IT's to go up on Hyperstition when I get back from hols.....
Posted by: mark at August 26, 2004 12:02 PM

Added picture nina -- hope this is OK....think these ads are cool though...
Posted by: mark at August 26, 2004 12:24 PM

i) dead scientist doodah guy = "he wrote the three laws" = asimov?
ii) logic of robots taking over is in one of asimov's later books as the 'zeroth law': i kind of like how the film is using blatant sentimentality to uh subvert? its source material