I, Robot Hits the Theaters
I, Robot: A Movie Review that's 3 Laws (and Spoiler) Safe!
A movie review by Rob Carr
Thanks to Eide's Entertainment I got to see I, Robot tonight. As someone who grew up with Isaac Asimov's robot stories, I've come to expect a mystery based on the implications of the 3 Laws of Robotics (or the lack of one or part of one of those laws), the "Frankenstein Complex," and Dr. Susan Calvin. I was afraid that the movie might miss out on this, especially since it's not a direct adaptation of the book, but "inspired" by the Good Doctor Asimov.
The movie met my expectations and more. Will Smith, whom we all know as an overconfident smart@$$ character from such movies as "Independence Day" and the two "Men in Black" movies, played a somewhat less confident and far less wisecracking character, and the change was welcome. Yeah, some of the stunts were a little absurd (am I the only one thinking of Gemini 8 at one point in the movie?), but that's to be expected from this type of movie. Bridget Moynahan was far too young to be the Susan Calvin I remember, but that's also to be expected in this type of movie. James Cromwell (whom you'll all remember from Star Trek: First Contact and Enterprise's "Broken Bow" episode as Dr. Zefram Cochrane) gave a flat performance - but that's actually a compliment. I doubt anyone will recognize Wash from "Firefly" as an important robot in the story.
It's customary to comment on how well the CGI was done. I liked it, but then again, I'm not hypercritical about something like that. I did wonder a little bit about center of balance as some of the robots walked, but mostly I didn't think about it at all, which to me is the goal of CGI. I did wonder about children's fingers getting caught in some of the open gaps on the robots' bodies. Real-world models would have a bit more covering, one would think. But that's being picky.
I have no memory of the soundtrack music. That in and of itself might say something. I'm a musician, but it just didn't register.
I figured out some clues, missed some others, and was surprised several times in the movie. There were a lot of clues - this isn't one of those mysteries where the answer is pulled out of the writer's a...out of thin air.
I'm not a complete continuity freak, so I can't tell if the movie violated any of Asimov's universe, but from what I can remember, it fits pretty well (if you ignore Dr. Calvin's age) and might even explain a few things.
Given that even some of the geeks in the audience were surprised to find out that there was a book of stories just like the movie, I hope the movie will bring Asimov's stories to a new generation.
I liked "I, Robot." It's worth seeing, especially if you've already seen Spider-Man 2 at least once. It's a pretty good (though not great) movie.
Having read Slashdot for a while, I know that there are folks out there who will despise this movie because it's not exactly like the book. Others will hate the movie or worship it, and loads of people are going to savage this review. You know what? That's fine with me. I had fun with this movie, had a nice date with my wife, and it didn't cost anything. I even had fun typing up this review. You're allowed to be different and to agree or disagree with me. Heck, that's a big chunk of what makes the world fun. Interestingly, it's even a small point in the movie. I'd say more, but that would be telling.
And in other news... (Score:1, Insightful)
A disappointment (Score:2, Insightful)
Isn't this what Asimov was writing about? (Score:5, Insightful)
Asimov wrote about a hundred stories exploring different ways in which these three laws could lead to interesting/dangerous situations. I think Asimov was doing all he could to make it clear that these three laws were not perfect.
Re:And in other news... (Score:5, Insightful)
Inspired by Asimov? (Score:2, Insightful)
Why the hell the Asimov estate consented to let this drivel be filmed is beyond me.
Three Laws Safe My Shiny Metal Ass (Score:4, Insightful)
The "Three Laws Safe" idea is crap. We are talking about software systems, which are buggy, incomplete, and able to do things the creators never imagined. What makes us think we can all of a sudden implement three very high-order rules in a manner which is completely foolproof?
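The commenter's point can be illustrated with a toy sketch (all names and structures here are hypothetical, purely for illustration): a simply stated rule like "never harm a human" is easy to implement incompletely, because a naive check only catches the harms its author thought of.

```python
# Toy sketch (hypothetical names): a naive "First Law" check that only
# recognizes actions explicitly tagged as harmful, missing indirect harm.

def violates_first_law(action):
    """Naive check: flags only actions explicitly tagged as harmful."""
    return action.get("harms_human", False)

direct = {"name": "strike human", "harms_human": True}
# Indirect harm the naive rule never sees: cutting power to life support.
indirect = {"name": "cut hospital power", "harms_human": False}

assert violates_first_law(direct) is True
assert violates_first_law(indirect) is False  # the bug: indirect harm slips through
```

The rule as stated is "foolproof"; the implementation is only as good as its model of what "harm" means, which is exactly the gap the comment is pointing at.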
butchering asimov (Score:5, Insightful)
I'm sure it will be a fun watch (I'm seeing it this afternoon) but sometimes it would be nice to watch a film that was as stimulating as the book (LoTR was one) and not just 2 hours of fun.
But I'm pretty sure I'm going to be called elitist
Re:And in other news... (Score:1, Insightful)
(Well, okay, tastes may vary. But I didn't much like it.)
Re:And in other news... (Score:5, Insightful)
Re:Three Laws Safe My Shiny Metal Ass (Score:5, Insightful)
Well, yes, but complaining about that is like complaining that the green glowing symbols that are supposed to be the representation of The Matrix make no sense from a software perspective.
The three laws are a useful abstraction for talking about ethics even if they couldn't ever be perfectly implemented.
Review makes it sound better than previews (Score:3, Insightful)
It's nice to hear that there's more of a mystery to the story than the previews would indicate.
Re:A disappointment (Score:5, Insightful)
Re:Inspired by Asimov? (Score:4, Insightful)
Gee, I wonder.
(Hint: BASIC string variable symbol.)
Re:Um... what? (Score:3, Insightful)
Re:Isn't this what Asimov was writing about? (Score:5, Insightful)
I tried to make the "Three Laws of Humanity" (Score:1, Insightful)
Treat others as you wish to be treated.
Do what you wish as long as you harm no one else.
missing the point: ETHICS (Score:5, Insightful)
It is not about programming the rules; Asimov's short stories are about studying the consequences of these ethical rules. Ethical rules are commonly studied through case studies, real or fictional. If you think the idea is about implementing the rules, you are totally missing the point.
Re:Isn't this what Asimov was writing about? (Score:5, Insightful)
How the heck is a robot supposed to accurately judge whether a random unique action in a unique situation will cause harm to a human or to itself? Humans can't even do this. If we were to create an artificial intelligence that was fully capable of making these decisions, would we even be able to put limits on what it decides?
Regardless of the answer to that philosophical question, we will have the technology to produce useful robots long before we have the technology to produce 3-Law-abiding robots, so we need to come up with practical ways of making them as safe as possible, within their limited capabilities.
Re:butchering asimov (Score:5, Insightful)
Not by me - although I would have a couple of other choice comments for one simple reason... Let's leave the movie-bashing at least until after you've seen the movie, mmm-kay?
Re:Inspired by Asimov? (Score:3, Insightful)
Re:And in other news... (Score:2, Insightful)
Along the same lines, look at what's done with movie remakes. The original Rollerball was a political piece, foreshadowing what could (will?) happen when corporations become too powerful. The remake in '02 was just another action flick.
strong AI problems (Score:3, Insightful)
The robot is also subject to the ethical/philosophical conundrums such as killing a person to stop a train headed into a group of people, or cutting off the limb of a person trapped under a fallen tree, etc.
soundtrack (Score:2, Insightful)
I have no memory of the soundtrack music. That in and of itself might say something. I'm a musician, but it just didn't register.
Not being aware of the soundtrack in a movie isn't always a bad thing. The best movie soundtracks/scores are that good because they don't take the foreground. Granted, there are many fine musicians out there who write excellent music for movies-- Danny Elfman being my personal favorite-- where the music is definitely noticeable, but the music should always enhance the movie, not dominate it.
Think of some classic movies and the role music played in them: Casablanca, Star Wars (the 1st trilogy-- the 2nd doesn't count as classic), The Shawshank Redemption, Jaws, etc. In every one of them, the music was used to set the scene, and where it was foreground, the music itself was part of the story.
Re:And in other news... (Score:2, Insightful)
Re:A particularly distressing example... (Score:2, Insightful)
Those aren't the real Three Laws (Score:5, Insightful)
Why yes, I am a dork. How did you guess?
Re:A particularly distressing example... (Score:3, Insightful)
Re:Isn't this what Asimov was writing about? (Score:1, Insightful)
Re:A disappointment (Score:2, Insightful)
Re:And in other news... (Score:2, Insightful)
Of course, I'm not saying you have to like it - the same thing a dozen times can be dull, and not everything Asimov wrote was great. But the analysis of a hypothetical world is, I think, one of the defining characteristics of science fiction.
This review is WORTHLESS (Score:1, Insightful)
When was the last time you read ANY of Asimov's books? When you were 7? 5? You have an inkling they had something to do with ROBOTS you say?
This movie violates every single notion Asimov ever wrote down. The BASIS of the movie is ROBOTS RISING UP AND ATTACKING ALL OF EARTH. That NEVER happened in ANY of Asimov's books. It has NOTHING to do with his books besides lifted names and a general context of three laws, which is then ignored by just saying "robots can evolve!" (whereas Asimov made it quite clear in Robots and Empire that the only possible evolution of the three laws is the creation of a Zeroth Law that has to do with saving all of humanity).
A "robot revolution" as described in the movie is just IMPOSSIBLE in the Asimov universe. It's not a continuity problem, it's a Hollywood problem.
Vote me a troll all you want, but I can't believe this review actually got posted with the above quoted line in it.
laws are a bad way to guide behavior (Score:2, Insightful)
Are there times where harming a human is the right thing to do? Of course: injure the drug-crazed psychopath to protect the innocent children he is attacking. What about lying? Sure, when the Nazis ask "do you have any Jews in your house?" you aren't about to say "yeah, under the bed!"
This game can be played ad infinitum, simply because a rules-based system of morality is fundamentally flawed.
Humans don't require a rules-based system to be able to make judgments about right and wrong. However, robots might. In that case, though flawed, I will concede that it is better than nothing.
Music (Score:5, Insightful)
Re:Bad Bots, Bad Bots, Whatchya gonna do... (Score:2, Insightful)
Re:A disappointment (Score:3, Insightful)
Have you seen this movie yet?
If not then how can you make that judgement.
The posted review is far from sufficient to draw the conclusion that this movie is a dumbing down.
Re:Check out the Ebert review... (minor *SPOILER*) (Score:3, Insightful)
Of course, not every schoolchild knows that, sad to say, but...Ebert seems to be confusing reality with story. In the fictional world of Asimov's stories, Asimov didn't come up with the laws--some researcher at USR&MM did. Is he bothered that there's not a bit at the end of Moby Dick where Ishmael credits Herman Melville for helping him write his memoirs? I don't think so...
That said--this is what would have been a mediocre to fair SF detective story, originally titled Hardwired, that Hollywood vermin decided to hastily retrofit with Asimovisms. In the process they turn Susan Calvin, an old maid who doesn't suffer fools gladly, into eye candy; they turn the highly Luddite Earth population of Asimov's stories into happy robot users; they turn Asimov's robots, that fry their brains when they even contemplate injuring a human, into things that throw people around the room and jump on cars to try to cause a wreck. Had they not done so, I might have gone to see Hardwired. Since they did... no way in hell will I do anything that would support the people responsible.
MINOR SPOILER:
It's mentioned on IMDB that the hero's antipathy towards robots is caused by a long-ago decision by a robot to rescue him from drowning rather than a little girl. An Asimovian robot would either have assured itself that the girl could rescue herself, or would sit on the shore catatonic because no matter what action it took someone would die. (I presume that this falls under a scenario listed in the article on problems with the Three Laws.) This is the sort of thing that makes me wonder whether the people involved bothered to actually read any Asimov.
For INCIDENTAL issues.... (Score:3, Insightful)
To be fair, most of the Good Doctor's stories deal with subtle pitfalls in the Laws, to brilliant effect. "Liar!", where a telepathic robot takes actions that cause harm due to its imperative to prevent harm -- a paradox that eventually destroys it. "Little Lost Robot", which shows the danger of having a robot with a First Law that allows it to passively permit harm, even if it cannot directly cause harm. "That Thou Art Mindful of Him", which deals with the fuzzy question of how DOES a robot define "human". "Lenny", which points out the three laws are limited by the robot's ability to understand the concept of harm. "Robots and Empire", in which two robots realize that there must be a Law Zero -- that to protect humanity as a whole, there may be exceptional circumstances that would not only permit, but require a robot to harm an individual human being. And, yeah, "Evidence" even provides a loophole that could almost justify that frigging chase scene in the movie trailer (if they take a cheesy out).
But on the whole, the Robots are the Good Guys, and human prejudice and unthinking stupidity (e.g., "Runaround") are the villains... which is NOT how this movie looks to be shaping up. This looks like a case of "oh my god, we screwed up and made a billion robots without the three laws!" Bleah.
I plan to finally go get a peer-to-peer app for the sole purpose of being able to find and watch a pirate copy of this movie, just so I can trash it properly without having to pay money to the evil slime who are responsible for this crud. (And if my preconceptions are wrong, I'll even buy two tickets on my way in to the theatre.)
On the bright side, if we just hook a generator up to Asimov's coffin, he's now probably rolling in his grave hard enough to solve the energy crisis.
Re:Isn't this what Asimov was writing about? (Score:3, Insightful)
The book is great for the situations that these seemingly perfect laws end up creating. Even in the book, they aren't exactly perfect. The laws have "potentials", and a situation arises where a robot gets caught between the 2nd and 3rd laws to where it can't act at all, which unbeknownst to the robot is going to lead to a catastrophic violation of the 1st law.
Which is another thing the book explores -- if the robot doesn't know what it is about to do will hurt a human, then it can do it. This becomes especially relevant in a later story where robots working in hazardous environments have the 1st law relaxed somewhat -- which is clearly in conflict with the idea that the 1st Law is fundamental to the operation of robots.
But you're right -- in the real world, the "positronic brain" that depends on the three laws is a pipe dream. I almost wonder if Asimov intended for that to be the take away message: Showing how "perfect" laws can still make dangerous robots, and also showing just enough cracks in the assumption of perfect laws to make us realize that they aren't applicable to reality.
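The "potentials" mechanic described above can be sketched as a toy model (the numbers and field names are hypothetical): the robot acts on whichever pending action has the strongest net pull, and deadlocks when the competing law potentials cancel out.

```python
# Toy model (hypothetical numbers) of law "potentials": obedience (Law 2)
# pulls the robot toward an action, self-preservation (Law 3) pushes back,
# and when they balance the robot can't act at all.

def choose_action(actions):
    """Return the action with the highest net potential, or None on deadlock."""
    best = max(actions, key=lambda a: a["pull"] - a["risk"])
    net = best["pull"] - best["risk"]
    return None if net <= 0 else best["name"]

# A casually given order (weak Law 2 pull) against serious danger (high Law 3 risk):
balanced = [{"name": "fetch selenium", "pull": 0.5, "risk": 0.5}]
assert choose_action(balanced) is None  # stuck: neither advance nor retreat

# Strengthen the order and the deadlock breaks:
urgent = [{"name": "fetch selenium", "pull": 0.9, "risk": 0.5}]
assert choose_action(urgent) == "fetch selenium"
```

That stuck state is exactly the robot's predicament in the book: paralyzed between two laws, oblivious to the First Law catastrophe it's setting up.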
Re:laws are a bad way to guide behavior (Score:3, Insightful)
Well, humans do require a rules-based system to be able to make judgements about right and wrong. If we didn't require one, then there'd be no need to teach our children to recognize right from wrong; whatever magic mechanism works in place of rules would already be built in. Obviously this is not the case.
The rules aren't hardcoded or even loaded in any straightforward way, but taught by parents, schools, peers, society throughout human life. They are emergent rules from the complex neural network that is our brain. The fact that we can abstract them into short English sentences ("Thou shalt not kill", "It's wrong to do things that are illegal") is a testament to intelligence.
Along with most people who think about this, I expect that robots will acquire morals in the same way that humans do: by explicit teaching using natural language, combined with positive and negative reinforcement over a long period of time.
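The teaching process described above can be sketched in miniature (the structure is entirely hypothetical, just to make the idea concrete): behavioural tendencies start untuned and are gradually shaped by repeated positive and negative feedback, rather than being hardcoded up front.

```python
# Minimal sketch (hypothetical structure): morals as weights nudged by
# repeated approval or disapproval, not as preloaded rules.

def reinforce(weights, behaviour, reward, lr=0.2):
    """Nudge a behaviour's weight toward approval (+1) or disapproval (-1)."""
    weights[behaviour] = weights.get(behaviour, 0.0) + lr * reward
    return weights

w = {}
for _ in range(5):
    reinforce(w, "tell_truth", +1)   # repeated approval
    reinforce(w, "take_toys", -1)    # repeated disapproval

assert w["tell_truth"] > 0 > w["take_toys"]
```

Nothing here is "the Three Laws"; the point is only that the resulting dispositions are emergent from training history, which is the commenter's claim about how robot morals would actually arrive.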
Comment removed (Score:5, Insightful)
The second and third laws are swapped in reality (Score:5, Insightful)
The first law's still paramount, of course. Having the robot crash and freeze up was considered a less severe bug than having it move unexpectedly, or in an unexpected way. Such an unpredictable motion had a much greater chance of hurting someone than a simple freeze.
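The design priority described above (freezing beats moving unpredictably) amounts to choosing the safe failure mode, which can be sketched as a guard in a toy command dispatcher (the interface is hypothetical):

```python
# Hedged sketch (hypothetical interface): when a command can't be validated,
# the controller halts instead of attempting an unpredictable motion.

def execute(command, known_commands):
    """Return the motion to perform, or 'HALT' for anything unrecognized."""
    if command not in known_commands:
        return "HALT"  # a freeze is a less severe failure than unexpected motion
    return known_commands[command]

motions = {"raise_arm": "arm up 10cm", "open_grip": "gripper open"}
assert execute("raise_arm", motions) == "arm up 10cm"
assert execute("corrupted_cmd", motions) == "HALT"
```

In Asimov's ordering the robot would keep protecting itself before crashing; in real robotics, as the comment notes, the crash *is* the protection.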
Re:A particularly distressing example... (Score:2, Insightful)
Not so true, because Prime Intellect (And all of the Intellect series AIs, for that matter) were written with the Three Laws at their core. It's said in the second chapter that if the three laws were somehow removed from its Global Association Table which defined it as the sum of its experiences, it would cease to function.
The Three Laws are at the center of the story, and it's a very similar tale to most of Asimov's fiction: a warning about the usually unintended consequences of the Three Laws.
What Those Famous 3 Laws are Really About (Score:5, Insightful)
Asimov's Three Laws of Robotics (latter amended to include a necessary Zeroth Law) existed to create the classic locked room murder mystery (i.e. the dead body is alone in a locked room that could have only been locked from the inside -- so how was he murdered?).
After creating his supposedly nothing-can-go-wrong infallible set of rules, he proceeded to show their flaws in virtually every story he wrote about robots afterwards. As long as people believed that his Three Laws guaranteed safe robots, his writing career was assured.
(Well almost assured. Even he couldn't save himself from what I Robot has become, given that it's based on his book - which goes to show that truth is stranger than fiction, because fiction has to make sense!)
So we ended up with a fascinatingly entertaining set of stories many of us have enjoyed, a couple attempts at movies of them (don't forget The Bicentennial Man), and Dr. Asimov's legacy as a Science Fiction Grand Master is secure for at least our lifetimes.
Re:Isn't this what Asimov was writing about? (Score:3, Insightful)
This isn't necessarily crazy. It's unproven, and it's possible that it's untrue, but it's not currently crazy.
We don't know how to make an AI. But presumably an AI will have to be an algorithm that prunes and "ranks" a decision tree to locate what to do, based on either a physics engine or an experience database.
A learning AI would presumably store the results of its decisions in its experience database. If its experience database grew far too conflicted and far too confused, the AI could conceivably be unable to do anything - stuck in a decision deadlock.
Moreover, the possibilities that can occur in reality are far too numerous to compute every single one in any reasonable timeframe. It's entirely possible that you could develop laws without which the robot couldn't prune the decision tree enough to work at all. Those laws would then be, for all practical purposes, necessary for the robot to function, even if they weren't the only ones that could prune the tree down. Obviously the Three Laws aren't the only way an AI can exist - humans are "biological AI", and we don't have those three laws.
However, one could build a design based around laws which would be fundamental to the design, by doing exactly what I said before.
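The pruning idea above can be made concrete with a toy sketch (the tree and rule are hypothetical): hard laws cut entire branches of the decision tree before they are expanded, which is what keeps the search tractable and makes the laws load-bearing for the design.

```python
# Toy sketch (hypothetical structures): hard rules prune whole subtrees of an
# action tree, so forbidden branches are never even explored.

def search(node, violates, visited=None):
    """Enumerate reachable plans, skipping any branch a rule forbids."""
    if visited is None:
        visited = []
    if violates(node["action"]):
        return visited  # prune this entire subtree
    visited.append(node["action"])
    for child in node.get("children", []):
        search(child, violates, visited)
    return visited

tree = {"action": "start", "children": [
    {"action": "harm_human", "children": [{"action": "anything_after"}]},
    {"action": "recharge"},
]}

no_rules = search(tree, lambda a: False)
with_rules = search(tree, lambda a: a == "harm_human")

assert "anything_after" in no_rules
assert with_rules == ["start", "recharge"]  # forbidden branch never expanded
```

Remove the rule and the search space grows; in this picture, laws aren't bolted-on restraints but part of what makes the decision procedure computable at all.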
Rationale for the First Law... (Score:5, Insightful)
The problem with this reasoning, however, is that it assumes that because the law itself is simply stated, the definitions of the words it contains are equally simple. That reasoning does not follow logically from the premise. The definition of "harm", for example, is vast... and to restrain human beings from performing in their daily capacity what would otherwise be normal and proper behaviour would arguably be causing _actual_ harm to the people that the robot was caring for. Therefore, the robot must make a decision based on the overall level of harm that is done, in connection with the probability that the harm would actually happen. Thus, an action that actually induces negative psychological damage (not theoretically, but actually probable damage) would be less desirable than one that may or may not cause real physical damage, especially if the latter were necessary for performing in an ordinary daily capacity, since denying a human being their freedom and rights of self-determination is inarguably psychologically damaging. The weights of the damages caused must be factored in with the ability of the human beings involved to recover from those damages, and the robot would have to make a choice that would result in the smallest overall level of harm being caused to humans in general, with harm to the general welfare of humanity being weighted slightly in favour of that of any particular human being, so that, for example, a robot could inform the police of a robbery, even though doing that would likely mean that the thief would go through suffering as part of the exercise of justice (that is, his freedoms are revoked, he goes to jail, possibly gets subjected to harsh treatment, etc.). This doesn't make it too fuzzy, however... the robot would allow human beings to come to harm only to the extent that it was essential for human society to continue to function normally, simply because to stop society from functioning normally would actually cause much greater long-term harm.
There are similar rationales for the other two laws. Asimov was no dummy.
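The weighing described in that rationale can be sketched as a toy calculation (the weights and numbers are hypothetical): each option scores harm to individuals plus slightly up-weighted harm to the general welfare, and the robot picks the minimum.

```python
# Illustrative sketch (hypothetical weights) of minimum-harm decision making,
# with harm to humanity in general counting slightly more than harm to one person.

HUMANITY_WEIGHT = 1.1  # harm to the general welfare is weighted slightly higher

def total_harm(option):
    return option["individual_harm"] + HUMANITY_WEIGHT * option["societal_harm"]

options = [
    # Reporting the thief harms him, but staying silent harms society's trust more.
    {"name": "report_thief", "individual_harm": 0.6, "societal_harm": 0.0},
    {"name": "stay_silent", "individual_harm": 0.0, "societal_harm": 0.7},
]

choice = min(options, key=total_harm)
assert choice["name"] == "report_thief"
```

This reproduces the robbery example from the comment: the robot accepts real harm to the thief because the alternative scores worse once society-level harm is factored in.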
Re:A disappointment (Score:3, Insightful)
(Post Script: When typing out this message, at first I accidentally started typing <blackalicious> rather than <blockquote>.)
Re:A disappointment (Score:3, Insightful)
Re:Susan Calvin (A disappointment) (Score:1, Insightful)
It's been a while since I read I, Robot, but I seem to recall Calvin as an interesting character on account of her seeming more like a robot herself, cold and emotionless. It makes me wonder what happens to a person who becomes an expert in robot behavior, or if she really is a robot.
Re:A disappointment (Score:1, Insightful)
Re:Some spoilers (Score:2, Insightful)