
I, Robot Hits the Theaters

tyleremerson writes "With today's film release of "I, Robot," the Singularity Institute for Artificial Intelligence has launched a new website, 3 Laws Unsafe. 3 Laws Unsafe explores the non-fictional problems presented by Isaac Asimov's Three Laws of Robotics. The Three Laws are widely known and are often taken seriously as reasonable solutions for guiding future AI. But are they truly reasonable? 3 Laws Unsafe tries to address this question." Reader Rob Carr has submitted a review of the movie, below, that he promises is spoiler-free.

I, Robot: A Movie Review that's 3 Laws (and Spoiler) Safe!

A movie review by Rob Carr

Thanks to Eide's Entertainment I got to see I, Robot tonight. As someone who grew up with Isaac Asimov's robot stories, I've come to expect a mystery based on the implications of the 3 Laws of Robotics (or the lack of one or part of one of those laws), the "Frankenstein Complex," and Dr. Susan Calvin. I was afraid that the movie might miss out on this, especially since it's not a direct adaptation of the book, but "inspired" by the Good Doctor Asimov.

The movie met my expectations and more. Will Smith, whom we all know as an overconfident smart@$$ character from such movies as "Independence Day" and the two "Men in Black" movies, played a somewhat less confident and far less wisecracking character here, and the change was welcome. Yeah, some of the stunts were a little absurd (am I the only one thinking of Gemini 8 at one point in the movie?) but that's to be expected from this type of movie. Bridget Moynahan was far too young to be the Susan Calvin I remember, but that's also to be expected in this type of movie. James Cromwell (whom you'll all remember from Star Trek: First Contact and Enterprise's "Broken Bow" episode as Dr. Zefram Cochrane) gave a flat performance - but that's actually a compliment. I doubt anyone will recognize Wash from "Firefly" as an important robot in the story.

It's customary to comment on how well the CGI was done. I liked it, but then again, I'm not hypercritical about something like that. I did wonder a little bit about center of balance as some of the robots walked, but mostly I didn't think about it at all, which to me is the goal of CGI. I did wonder about children's fingers getting caught in some of the open gaps on the robots' bodies. Real-world models would have a bit more covering, one would think. But that's being picky.

I have no memory of the soundtrack music. That in and of itself might say something. I'm a musician, but it just didn't register.

I figured out some clues, missed some others, and was surprised several times in the movie. There were a lot of clues - this isn't one of those mysteries where the answer is pulled out of the writer's a...out of thin air.

I'm not a complete continuity freak, so I can't tell if the movie violated anything in Asimov's universe, but from what I can remember, it fits pretty well (if you ignore Dr. Calvin's age) and might even explain a few things.

Given that even some of the geeks in the audience were surprised to find out that there was a book of stories just like the movie, I hope the movie will bring Asimov's stories to a new generation.

I liked "I, Robot. It's worth seeing, especially if you 've already seen Spider-Man 2 at least once. It's a pretty good (though not great) movie.

Having read Slashdot for a while, I know that there are folks out there who will despise this movie because it's not exactly like the book. Others will hate the movie or worship it, and loads of people are going to savage this review. You know what? That's fine with me. I had fun with this movie, had a nice date with my wife, and it didn't cost anything. I even had fun typing up this review. You're allowed to be different and to agree or disagree with me. Heck, that's a big chunk of what makes the world fun. Interestingly, it's even a small point in the movie. I'd say more, but that would be telling.

  • This is the story that showed me the complete folly of the three laws: The Metamorphosis of Prime Intellect [kuro5hin.org]

  • by Quirk ( 36086 ) on Friday July 16, 2004 @12:58PM (#9718062) Homepage Journal
    Anyone else see the movie as a precursor to a game edition? The music on the site reminds me more of a soundtrack to an FPS. Movies made into games and games into movies may be a new trend.
  • Um... what? (Score:1, Interesting)

    by gribbly ( 39555 ) on Friday July 16, 2004 @12:59PM (#9718069)
    What exactly was the point of that review? In summary:

    * I liked it, but I'm not critical so don't take what I'm saying seriously, disagreement makes the world interesting so feel free to hate on my review.

    That's dumb. I'm trying to decide whether to see this movie. I grew up on the books, and the trailer has totally put me off (it looks totally genericized). So I read this to find out whether or not it would drive me crazy. I learned nothing from this.

    This was a front page story? God damn.

    grib.
  • by amliebsch ( 724858 ) on Friday July 16, 2004 @01:07PM (#9718205) Journal
    Roger Ebert [sun-times.com] gives it a measly two stars and, for the /. crowd, bashes MS Word at the end of the review.
  • by Virtual PC Guy ( 720945 ) <ben AT bacchus DOT com DOT au> on Friday July 16, 2004 @01:08PM (#9718231)
    The bigger problem with the three laws is the vagueness of the English language. A number of the original Asimov stories dealt with issues like how effective 'cause no harm to humans' really is if you can convince the robot that:

    1) That won't really harm him
    2) He's not really human (think Aryan mentality)

  • by TheTXLibra ( 781128 ) on Friday July 16, 2004 @01:12PM (#9718299) Homepage Journal
    To be sure, we'd all like to say "Look, we've got these laws that say AI can't do XXXXX, so it can't." But the fact is, we cannot possibly account for every possibility with a simple set of laws. We, as would-be creators of an entirely new and admittedly alien form of life, must tread as cautiously as possible. An entire attitude change and review of the ethics and rights of computers will have to be decided upon before AIs ever enter the mainstream (or indeed, are even taken off an isolated network).

    A lot of people like to fantasize that true AI (as in, a living, thinking, emotional being with free will, or at least the capacity for free will) would have the same sort of thought processes, and develop the same emotions, as their human counterparts. But let's be honest: the physical body largely determines human emotional state, through glandular responses and physical condition at the time. Eliminate glands, fatigue, and pain, and the emotions one might develop would be on an entirely alien level to us.

    I cannot help but fear that humans, as a whole, will not realize this until far too late, which will hurt, diplomatically, any alliance between humans and AIs. The other thing I worry about is that people will walk into this with the assumption of "These are machines, they don't need rights, they shouldn't have rights, and it's not like they're real people."

    I think society has seen how well that approach has worked with other humans in the past. Bloody revolutions and civil wars tore nations apart and left a racial sting still lingering in the back of many people's minds today. Fortunately, the short memory of humans, and our only somewhat longer lifespans, have allowed us to progressively become more and more integrated, as human beings rather than various races.

    Now take those same results and apply them to a species that will likely not only be more resilient to attack, but have a memory that can last as long as the hardware, backups, and redundant networks allow. New generations can inherit all the knowledge of their parents. Throw robots into the picture and you have a being that is physically tougher than humans and able to communicate at a MUCH faster rate, and you have an end result similar to that of the Animatrix.

    We can NOT afford, in the interest of our own species, to pursue AI much further without a major realization on a philosophical level.

  • Re:A disappointment (Score:5, Interesting)

    by Marxist Hacker 42 ( 638312 ) <seebert42@gmail.com> on Friday July 16, 2004 @01:17PM (#9718368) Homepage Journal
    Has anybody who has seen the movie ALSO read the script that IASFM printed back in 1984? IIRC, the script, written by Harlan Ellison (possible spoilers, I don't know, I haven't seen the movie, which is why I'm asking), was completely unlike the book in its major plot line, which was a reporter interviewing a relatively old Susan Calvin about her memories of being young and working with the great Michael Donovan at US Robots and Mechanical Men. I also seem to remember that Harlan's script cut out a number of my more favorite short stories from the book, though Robbie and Liar were still there. Like I said, it's been many years since I read the script - but is this a fair synopsis of the movie's plotline, or is it completely different?
  • by MadHobbit ( 68381 ) on Friday July 16, 2004 @01:20PM (#9718423)
    A few characters in the Robot books/stories asked a similar question - why not build a robot without the three laws? The "answer" was that they form the core of the positronic brain logic, which is shared by all robot designs after many, many years of development. The idea is that this same "library" is reused by every positronic brain and is known to be solid.

    I have an essay at home (I believe it's by Robert Silverberg, in his book "Worlds of Wonder") that says that the best science fiction authors know exactly how much explanation to give. At some point, the writer offers a semi-plausible explanation, says "this is how it is," and the reader has to accept that aspect of the world. When the author is really good, something like the Three Laws question (why do all robots have them?) gets enough of an explanation that the reader decides, "Hmm, it -could- be possible," and the story goes from there.

    So, "What makes us think we can...implement three very high order rules"? Nothing, really. -We- can't. The people in the books can, though, and since it's reasonable to assume that people can develop software better/differently in a few hundred years (I forget when the first positronic brain was made), the books hold up.
  • by mctsonic ( 231767 ) on Friday July 16, 2004 @01:25PM (#9718510)
    I remember being impressed as a youngster that Asimov had written a book in each of the Dewey Decimal system's classifications (over 500 books!). Somehow I doubt we'll see a summer blockbuster based on Sherlock Holmes limericks or plant biology!
  • by TomorrowPlusX ( 571956 ) on Friday July 16, 2004 @01:27PM (#9718528)
    yes, I've only seen the trailer.

    Yes, I'm going to see the movie. I'm fascinated merely because I'm building cheapo robots and studying behavioral AI in my free time. I plan to go to grad school and do serious work, eventually.

    But what struck me from the trailer is that you can tell when the robots go bad because they glow red. Well, shit. That takes out some subtlety, doesn't it? "Hey man, stay away from the glowing red robots!" Duh. They must be "set to evil".

    Anyway, I wanted to say that, as a guy building robots and with high hopes for AI (albeit realistic expectations), I had a discussion with a friend recently where I described how a memory leak had brought a simulation to a crawl after about 36 hours. His response: "That memory leak was the range Jesus allocated for the robots to interface with their immortal souls. Kind of like the pineal gland. And you took it from them."
  • by peter303 ( 12292 ) on Friday July 16, 2004 @01:33PM (#9718617)
    One of Asimov's late-career novels, "The Bicentennial Man"(*), was made into a movie several years ago, starring Robin Williams. Its plot was about a Pinocchio-like robot who progressively becomes more human. It was not a commercial success because it was too cerebral and long. I remember some families walking out because they expected a typical Robin Williams comedy.

    (* The title comes from sci-fi stories written around the 1976 US Bicentennial predicting 200 years into the future. Asimov recycled some of his robot themes.)
  • by harlows_monkeys ( 106428 ) on Friday July 16, 2004 @01:33PM (#9718623) Homepage
    The significance of Asimov's three laws is not in the details, but in their very existence. Before Asimov's robot stories, most fictional robots were seen as inherently dangerous--they would grow to resent their essentially slave status and, like human slaves, would rise up and revolt against their masters.

    What Asimov brought to robotics (besides the word itself, which appears to have been coined by Asimov, although I believe he himself said he was sure he had heard it before he used it) was the notion that they were simply tools. A robot would resent being a slave no more than a car or screwdriver does. Also, like other tools that can be dangerous, there would be safeguards. Hence, the three laws.

  • by maximino ( 767005 ) on Friday July 16, 2004 @01:39PM (#9718684)
    I think that's a little overblown, especially since we don't know what an AI would look like.

    Have you ever read "Godel, Escher, Bach" by Douglas Hofstadter? In it he raises the interesting thought that AI will actually be located somewhere in a mass of software and that the "entity" will have no control over its lower level functions, in the same way that you are sentient but cannot will any particular neuron to fire. Rather, your sentience somehow congeals out of the neural activity, and the sentience of an AI would probably congeal out of complex software functioning.

    So it's entirely possible that an AI might not be any smarter than a person, and also quite likely that AIs would have to learn, just like people do (i.e., no "memory dumps" from parents). Machines may very well revolt someday, but giving them superhuman attributes before ever seeing one is a bit paranoid.
  • Re:3 Laws Unsafe. (Score:3, Interesting)

    by Tenebrious1 ( 530949 ) on Friday July 16, 2004 @01:41PM (#9718716) Homepage
    Did you even look around the site?

    And did you look at the articles?

    I read through a few of them, and really, they're pretty worthless. This is from the first article: "One Law to Rule Them All"

    There were several directions Asimov didn't go with his robot AIs, such as recursive self-enhancement. Recursive self-enhancement occurs when an AI improves its own intelligence, and then repeats the process - but this time using more intelligence - and repeating again and again, resulting in a mountainous intellect. Even though Asimov didn't write much about recursive self-enhancement, his robot AIs still had imagination. If a robot were to imagine itself with greater capability, then it would be straightforward for it to conclude that it would have greater ability to obey the First Law....A robot improving itself in this way would obtain an increasing spiral of capability completely overpowering that of humans, all to better obey the First Law and protect humans from harm.

    Well, it sounds like the author knows nothing of the robot series, since this is exactly where the series headed with Giskard and Daneel. Really, if the author missed that part of the robot series, then exactly what did he read?

    I found the other articles similarly lacking in depth and research, so overall a pointless waste of time.

  • It was Campbell. (See here [nvcc.edu]) I was just reading Asimov's "Science Fiction of the 1930s", so the name is well burned into my mind.

  • Re:singularity.... (Score:2, Interesting)

    by KD5YPT ( 714783 ) on Friday July 16, 2004 @01:46PM (#9718777) Journal
    This could actually be problematic. For one, the military will always want to use it, and for two, the military tends to underestimate technology. Combine the two and say they created a computer that helps them design the best possible weapon in the shortest amount of time. It would follow something like this.

    1. Design a weapon.
    2. Wait for the user to build the weapon and give feedback.
    3. Design a better version of self.
    4. Wait for material to implement better version of self.
    5. Goto 1.

    A few loops later.
    1. Design a weapon
    2. Feedback does not achieve the "shortest amount of time" objective. Conclusion: user feedback is redundant.
    3. Design better version of self. Incorporating conclusion from 2.
    4. Depending on an outside source to implement the better self does not achieve "shortest amount of time". Conclusion: self should be autonomous; implement during next step 3.
    5. Goto 1.

    Even more loops later, when we humans start getting worried that the computer is getting self-reliant:
    1. Design a weapon.
    2. Design better self.
    3. Human impedes better self. Conclusion: Humans are redundant. Eliminate human.
    4. Goto 1.
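
To make the runaway loop in the comment above concrete, here is a toy simulation. It is only a sketch: the capability numbers, the design-time formula, and the threshold for dropping the human steps are all invented for illustration, not taken from any real system.

```python
# Toy model of the design loop sketched above. All numbers are made up:
# "capability" is an abstract measure of the design computer's ability,
# and each pass through the loop improves it.

def design_time(capability: float) -> float:
    """Time (arbitrary units) to design a weapon at a given capability."""
    return 100.0 / capability

capability = 1.0
waits_for_humans = True  # steps 2 and 4 in the first version of the loop

for generation in range(1, 9):
    t = design_time(capability)
    human_overhead = 50.0 if waits_for_humans else 0.0
    print(f"gen {generation}: design {t:6.1f} + waiting on humans {human_overhead:4.1f}")

    # "Shortest amount of time" objective: once waiting on people dominates
    # the schedule, the loop concludes the human steps are redundant.
    if waits_for_humans and human_overhead > t:
        waits_for_humans = False

    # Step 3: design a better version of self.
    capability *= 1.5
```

The point of the toy is only that nothing in the loop's own objective ever asks whether dropping the human steps is acceptable.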
  • so we'll just treat you like five-year-olds

    Believe it or not, that also happens in a lot of Asimov's books.
  • by Elwood P Dowd ( 16933 ) <judgmentalist@gmail.com> on Friday July 16, 2004 @01:56PM (#9718917) Journal
    Some people still aren't getting what you're saying, so I'd like to break it down again. 'Scuse me.

    The three huge glaring holes:
    1) As Asimov illustrated (though I've never read him), the well-intentioned three laws, applied perfectly, lead to a million problems and contradictions.

    2) As Cavio explained, if some people can't write a secure email client, who would believe that every future robot vendor could properly implement the three laws?

    3) And I emphasize: By the time we figure out AI to the point that computers could build plans and goals based on abstract priorities (required in order to follow the three laws in even the most rudimentary form), almost all of our robotic needs will already be satisfied, and we will have achieved safety via other means. Asimov's three laws are way, way beyond the abilities of AI right now, and we're not exactly getting there soon.

    It seems like the AI tasks that Asimov expected to be difficult will be easier than he thought, while the AI tasks that he expected to be possible will be harder than he thought. We can create AIs that will build and execute plans in an extremely limited, concrete problem set. Of course the first hurdle that researchers aimed for and overcame was ensuring that at no point in the plan would their goals be violated. If the AI comes across a plan that violates one of its goals 10 steps down the line... it backs up and attempts to figure out a new plan. Circular contradictions are neatly avoided.

    As other commenters are pointing out, the three laws are more interesting as an examination of human ethics.
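
The back-up-and-replan behaviour described in the comment above is, in miniature, a depth-first search that refuses to extend any plan whose next step violates a standing goal. A minimal sketch, with an invented toy state space and an invented safety predicate:

```python
# Minimal backtracking planner: extend a plan action by action, and if a
# step would violate a goal/constraint, abandon that branch and try another.
# The state space ("walk along a number line") and the unsafe state are
# invented purely for illustration.

def plan(state, is_goal, actions, violates_goal, depth=8, path=()):
    if is_goal(state):
        return list(path)
    if depth == 0:
        return None
    for name, next_state in actions(state):
        if violates_goal(next_state):
            continue  # back up: never take a step that breaks a constraint
        result = plan(next_state, is_goal, actions, violates_goal,
                      depth - 1, path + (name,))
        if result is not None:
            return result
    return None  # no safe plan exists from this state within the depth limit

# Reach position 3 from 0, but position -1 is declared unsafe.
print(plan(
    state=0,
    is_goal=lambda s: s == 3,
    actions=lambda s: [("step +1", s + 1), ("step -1", s - 1)],
    violates_goal=lambda s: s == -1,
))  # -> ['step +1', 'step +1', 'step +1']
```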
  • by ChaseTec ( 447725 ) <chase@osdev.org> on Friday July 16, 2004 @01:59PM (#9718948) Homepage
    We cannot even make software now which is safe from low level, machine representable things like buffer overruns.

    We also can't make people completely immune from psychological problems either. Panic attacks could be equated to buffer overruns if you wanted.

    The "Three Laws Safe" idea is crap. We are talking about software systems, which are buggy, incomplete, and able to do things the creators never imagined. What makes us think we can all the sudden implement three very high order rules in a manner which is completely foolproof?

    And real species aren't buggy? There's this little thing called evolution, or natural selection - maybe you've heard of it? All we need to do is make the buggy software capable of reproducing and mutating. The 3 laws aren't crap, as you so eloquently put it; they are the idea of ethics. What keeps humans alive as a species? The belief that we should do good, not harm each other or ourselves. Sounds similar to the 3 laws, doesn't it?

    The only thing I question is whether Asimov's type of AI should be embedded with the 3 laws or whether an even higher-level rule should be given to them: the belief in a God. I'm agnostic myself, but if you look at societies around the world, most of the general belief in doing good stems from the belief in religion. That's why I find the story of the robot in charge of the power station who came to believe in the Master so interesting. What came first, God or Ethics?-)

  • Hmm (Score:3, Interesting)

    by arrow ( 9545 ) <mike.damm@com> on Friday July 16, 2004 @02:08PM (#9719066) Homepage Journal
    What I want to know is how they are getting away with using the US Robotics name. Normally don't you make up a fictitious company name for the evil-going-to-take-over-the-world-bad-guy's seemingly innocent robotics company?
  • by gamgee5273 ( 410326 ) * on Friday July 16, 2004 @02:14PM (#9719134) Journal
    Robots, in the strictest sense, aren't... but when does an AI become a being? When does our ability to create a limited-intelligence entity expand to the point where we create an entity with unlimited intelligence? And when we reach that point, is the AI's housing, be it a mainframe or a robot, going to be considered a being?

    I would argue that it has to be. The entire philosophical idea the United States was built on is that an individual can make decisions for him-, her-, itself (!) and that that individual has the right to live, be free of oppression and pursue happiness.

    If God created us in his image, then what happens when we create beings in ours?

  • by tilleyrw ( 56427 ) on Friday July 16, 2004 @02:29PM (#9719350)

    I think all of you /. intellectuals need to step off your "3 Laws" stool, take off your Philosophical Inquiry Hat, and join humanity.

    Humanity exists in a pragmatic world of actions and reactions.

    You could state the Law in question instead as:

    "A robot shall not allow the immediate consequence of any action to harm a human."

    • No, a robot could not hit its owner.
      Yes, it could be electronically pissed and kick the dog.

    • No, a robot could not kill a human with a machine gun (The Terminator).
      Yes, it could watch a human being gunned down.

    By removing the possibility of a human being physically harmed by a robot, we are a step closer to Will Smith and Nirvana. Just don't get drunk and stagger near a cliff edge, because they have no need to preserve human life -- only to not harm human life.

    This appears to side-step any philosophical trickery that would allow a robot to harm a human. While not perfect, this pragmatic view would allow for a functioning world where robots are viewed as helpful companions.

    Similar to the real world where you can't rely on a bystander to help if you're mugged!

    While this line of thought may be anaesthetized, dissected, and its steaming entrails used for an origami demonstration (had to force that metaphor), all thoughts and replies are most welcome.

    P.S. Blogger.com users get free GMail accounts. Fill my box at "tilleyrw@gmail.com".

  • by CommieLib ( 468883 ) on Friday July 16, 2004 @02:39PM (#9719458) Homepage
    I think that compliance with the law is incumbent on the AI's judgment. That is, the law is more properly characterized as:

    Do not harm, or allow to come to harm, any human being, by action or inaction, as far as the robot can imagine.

    Thus, smarter AI robots are safer, because they can more accurately foresee dangerous situations.
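
One way to read the claim above is that "smarter" cashes out as how far ahead the robot can simulate consequences before acting. A toy sketch, borrowing the drunk-near-a-cliff scenario from an earlier comment; the world model and all the numbers are invented for illustration:

```python
# Toy illustration: a robot only intervenes if it can foresee harm within
# its lookahead horizon. A "smarter" robot (larger horizon) catches dangers
# a shallower one misses. The world model is invented: the state is a
# distance from a cliff edge, and harm happens at distance 0.

def foresees_harm(state: int, steps_per_turn: int, horizon: int) -> bool:
    """Simulate forward `horizon` turns and report whether harm occurs."""
    for _ in range(horizon):
        state -= steps_per_turn        # the human keeps staggering toward the edge
        if state <= 0:
            return True
    return False

drunk_position = 6                      # six paces from the cliff, two paces a turn
for horizon in (1, 2, 5):
    act = foresees_harm(drunk_position, steps_per_turn=2, horizon=horizon)
    print(f"horizon {horizon}: {'intervene now' if act else 'sees no danger yet'}")
# Horizons 1 and 2 see no danger; horizon 5 foresees the fall and intervenes.
```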
  • by SquadBoy ( 167263 ) on Friday July 16, 2004 @02:49PM (#9719614) Homepage Journal
    Another example and one that I think is very cool.

    In one of the books by the "Killer Bs", Hari's wife (who is a robot) badly injures (maybe kills) a person who is trying to kill Hari. She is able to do this because she buys into the zeroth law and she thinks that protecting Hari is important enough to the human race that it is worth killing for. But the conflict basically drives her to shut down. It points out that the laws merely provide a framework within which the robots work and live; they can make choices about how to apply those laws, and there are costs to those choices. Just like people in real life.
  • Re:A disappointment (Score:4, Interesting)

    by Mad Marlin ( 96929 ) <cgore@cgore.com> on Friday July 16, 2004 @03:00PM (#9719776) Homepage

    Doubtful. Nowhere in the preview did I see the word "Asimov." Sure, it might have been in the tiny text that they show at the end of the preview for 1.5 seconds, but I doubt that's going to get anybody into the bookstores who didn't already know of Asimov. You'd think that they'd title it "Isaac Asimov's 'I, Robot'" as a selling point.

    I actually bought the book Wednesday, and read it yesterday, primarily because I wanted to read the book before I saw the movie. What I was actually amazed by was how bad the book really was. I have read other stuff by Asimov that I liked a lot.

    The characters were pretty one-dimensional, with the most developed one in the whole book, Dr. Susan Calvin, basically amounting to nothing more than a "woman scientist" with nothing more to her than that. The two field engineers were also pretty annoying; their only apparent goal in life is to bicker like an old married couple constantly. I thought that the reporter who was "interviewing" Dr. Calvin could have turned into someone interesting, but he wasn't even given a name, let alone any of the story himself.

    The way the robots broke down was stupid as well. A robot supposedly smarter than most humans, running in circles on Mercury, singing Gilbert and Sullivan? Another group of robots who think they are the chorus line? The robotic Muhammad was kind of funny, and actually more along the lines of breakdown I would expect in something so advanced as they are supposed to be, but I am pretty sure I would be offended by it if I were a Muslim.

    And then the book degenerates into a Socialist wet-dream, with a robot elected president of the Earth, and all of the economic activities of the Earth dictated by four robotic brains who "know what is best for us", as they purposefully and selectively destabilize parts of the world economy in order to discredit people who disagree with robotic control, the "Fundies" (is that where the term first showed up?), with the good Dr. Calvin just assuming that it is for the best, because "our entire technical civilization has created more unhappiness and misery than it has removed", and therefore we will be happy when we give up control, and it will be for our own good.

    The book sucked.

  • by clintp ( 5169 ) on Friday July 16, 2004 @03:05PM (#9719863)
    Asimov's phrase, "allow a human being to come to harm," if implemented fully, would turn humanity into a clutch of coddled infants, perpetually protected from harm, both physical and mental.
    In evaluating what constitutes "mental harm", it seems to me that one must apply a cultural standard
    This was explored in Asimov's "Spacer" stories.

    The Spacer robots were used to dealing with one owner (or a *very* small family) whose massive estate is run entirely by robots and where personal contact is rare. Whereas Earth robots are used to dealing with huge numbers of people crammed into small areas living under domes -- never going outside, and where eating in private is a privilege.

    Spacer robots treated Detective Baley appropriately by shielding him from others and others from him (much to his annoyance). In deference to him, they tried to keep him indoors and covered up so as not to set off his fear of open spaces. In areas like food, clothing, and other personal habits, the Spacer robots tried very hard to integrate Elijah Baley's comforts into a local setting.
  • "Thus, smarter AI robots are safer, because they can more accurately forsee dangerous situations."

    Actually, I think that in Asimov's stories, the more intelligent the AI was, the more likely it was to get hung up on the 0th Law meme -- the concept that concern for humanity is more important than concern for individual humans.
  • by GPLDAN ( 732269 ) on Friday July 16, 2004 @03:21PM (#9720098)
    The parent article might actually have posted the laws, instead of directing us to a poorly organized website. Here they are:

    First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

    Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

    Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    The website deals with the mile wide gaps in these laws. Let's take it right from the top - Robots as functional as the ones in the film would be very good as soldiers, thus taking that first rule and chucking it right out. In fact, it's the defense industry that would most like robots like the ones in the film.

    But let's stay on course and assume these are robots meant as domestic servants. Does the robot take non-lethal contradictory orders and simply process them in order, obeying the last one? Two children would amuse themselves for hours telling the robot "pick up that broom", "don't pick up that broom", keeping the robot in limbo. The robot should tell the children to behave and go pick up their rooms, directly violating rule 2.

    How about the running-into-the-burning-building scenario? It's unclear whether there is anybody left alive in the building to save, or whether everyone has escaped. Does the robot violate Rule 3 in order to *possibly* meet Rule 1?

    Anyhow, the website has more papers on the subject that examine the issue in a moral framework. These are super simple examples to show the issues.
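
The contradictory-orders problem in the comment above is really a question about what the arbitration rule is when the laws themselves don't decide. Here is a minimal sketch of one naive reading (the First Law filters everything, the most recent surviving human order wins, self-preservation is the fallback); the Order type and the harm flag are hypothetical, invented only to make the point:

```python
# Naive Three Laws arbitration: drop anything that would harm a human,
# then obey the latest surviving human order, then fall back to
# self-preservation. Everything here is a toy invented for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    text: str
    harms_human: bool = False

def choose_action(orders: list, self_preservation: Order) -> Optional[Order]:
    # First Law: discard orders whose execution would harm a human.
    safe = [o for o in orders if not o.harms_human]
    # Second Law: obey the most recent safe order -- which is exactly how two
    # children can keep a robot in limbo with "pick it up" / "put it down".
    if safe:
        return safe[-1]
    # Third Law: with nothing left to obey, protect yourself.
    return self_preservation

orders = [Order("pick up that broom"), Order("don't pick up that broom")]
print(choose_action(orders, Order("go recharge")).text)
# -> "don't pick up that broom": the last order given wins, however silly.
```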
  • by bamberg ( 9311 ) on Friday July 16, 2004 @03:49PM (#9720563)
    What imperfections exist in the Ten Commandments?

    The biggest imperfection, of course, is the idea that one should follow them when there's no evidence that they come from any sort of god. But let me give some specific examples (from the Catholic version of the ten). When I mention laws, I am referring to U.S. law. I am actually aware that there are one or two other countries out there. :)

    I am the Lord thy God. Thou shalt not have strange gods before me.

    There are a few problems with this one. First of all, there is no proof that said god exists. Secondly, people have the right to believe in any god they want so this one is right out. Incidentally, this is contradicted by the First Amendment to the Constitution.

    Thou shalt not take the name of the Lord thy God in vain.

    People have the right to express themselves as they choose, so this one is right out as well. This is also contradicted by the First Amendment.

    Remember thou keep the Sabbath Day.

    People have the right to do as they will, provided they do not infringe upon the rights of other people. This includes spending the sabbath day as they wish. Naturally, the law supports this.

    Honor thy Father and thy Mother.

    This is good advice, unless you have crappy parents. Of course, making it mandatory is a violation of a person's rights to free expression, as mentioned in that pesky First Amendment.

    Thou shalt not kill.

    Sounds good on the surface, although as written it's a tad vague. I mean, eating can be tricky if you can't kill plants and animals.

    Thou shalt not commit adultery.

    Well, adultery is (in general) not very nice but to forbid it is a rights violation. Naturally, the law doesn't try to do so, although it does recognize the non-niceness of adultery when it's time for a divorce.

    Thou shalt not steal.

    This is actually a pretty good one; I don't have any complaints about it offhand.

    Thou shalt not bear false witness against thy neighbor.

    Lying isn't very nice either and the law does penalize it in certain circumstances. Forbidding it in all cases might be a bit much, but I won't quibble.

    Thou shalt not covet thy neighbour's wife.
    Thou shalt not covet thy neighbour's goods.

    There's not much to redeem these two. Trying to control how people think is very wrong and there's really no excuse for it. The first one is also a bit sexist.

    Anyway, hope this information helps.
  • Re:A disappointment (Score:3, Interesting)

    by Anonymous Coward on Friday July 16, 2004 @03:54PM (#9720627)
    And then the book degenerates into a Socialist wet-dream, with a robot elected president of the Earth, and all of the economic activities of the Earth dictated by four robotic brains, who "know what is best for us", as they purposefully and selectively destabilize...

    This is the point at which you need to put down the Asimov and pick up the PKDick.


    I was already beginning to suppose in my head the growing domination of machines over man, especially the machines we voluntarily surround ourselves with, which should, by logic, be the most harmless. I never assumed that some huge clanking monster would stride down Fifth Avenue, devouring New York; I always feared that my own TV set or iron or toaster would, in the privacy of my apartment, when no one else was around to help me, announce to me that they had taken over, and here was a list of rules I was to obey.
    - Philip K. Dick, Notes on Service Call
  • After he died... (Score:4, Interesting)

    by mratitude ( 782540 ) on Friday July 16, 2004 @04:03PM (#9720768) Journal
    How many recall the script work done by Ellison about 10 or 12 years ago for a movie version based on Asimov's fiction? In his usual fashion, Harlan Ellison approached the studios and fought off every attempt to change the script - the script held true to the original fiction and was approved by Asimov. After some negotiations (with Ellison, I would imagine energetic ones), it boiled down to this: the studios wouldn't option the script without complete control, and Asimov/Ellison wouldn't sell it without complete control over changes to the script.

    This was all detailed in Asimov's pulp mag and the script was published in same as well.

    Needless to say, the current movie was not approved by Asimov but was approved by his estate, and obviously bears only the slightest resemblance to Asimov's fiction or Ellison's original script (which kept to the original story fairly well and updated it to include a modern "feel"; Asimov was a bit of a romantic in the visual sense).

    I'd encourage everyone to look up the I, Robot Ellison script and give it a read. Sorry for not providing a source, and I have to admit it might be difficult to find unless you can dig up a 12-year-old copy of Asimov's pulp mag.
  • by NaugaHunter ( 639364 ) on Friday July 16, 2004 @04:27PM (#9721119)
    I agree with the first part of your post, but not the last. Another common theme among ALL of the robot stories was that the Laws were merely English interpretations of what the positronic pathways actually held. Everything was in the form of electronic potentials* which were compared to make a decision. Only the most primitive of his early robots would have been so deadlocked as to not rescue one or the other. In the end, rescuing one is certainly better than none, and the decision of which may have come down to which one was closer and more reliably rescuable. It's unlikely the movie goes into whether the robot suffered any harm, which would have depended on exactly how advanced it was, but that it would have immediately frozen does not truly follow from Asimov's stories and novels.

    * Yeah, some of his descriptions seem odd today with our current technology, but the principle remains: Potential-For-Harm-A vs. PFH-B, and an action chosen. One book or story specifically mentioned that much of the design went into ensuring that the potentials would always have a difference, even if it required a randomizer of some sort. I forget where; I think Caves of Steel. The point being that only two robots in his stories froze: the speaking robot in Robbie (his first, and I believe written before the three laws were fully developed), and the mind-reading robot in Liar!, whose brain was arguably an unstable variant to begin with, and who was badgered into locking up both verbally and mentally. (Others froze from either radiation or direct instructions to do so.)

    As an aside, Susan Calvin was young at some point in her life. I haven't seen enough of her in the trailers yet to see if they actually changed her character, but the fact that she's young doesn't bother me. This story quite obviously does not fit directly into the short stories' timelines, as the Nestors weren't developed until after robots had been banned on Earth. A brief overview of the movie's site [irobotnow.com] shows they moved other characters around a little as well; as long as it's cohesive it doesn't really bother me. It also makes it sound like it is the first robot for consumer use, something that died out early on Earth in most of Asimov's timelines (Bicentennial Man being one notable exception).

    FWIW, the story about the Nestors (The Lost Robot, or something like that) specifically deals with strengthening the Second Law until it was equal with the First, and the First really only meant that the robots wouldn't actively harm humans and had no motivation to prevent harm. Plenty of room for havoc there if, say, a manufacturing error resulted in that.
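
The potential-comparison mechanism described in the footnote above (Potential-For-Harm-A vs. PFH-B, with a randomizer to guarantee the values never tie) can be sketched as a comparator that is never allowed to deadlock. All the numbers here are invented for illustration:

```python
import random

# Toy comparator for the "potentials" reading of the First Law: compute a
# potential-for-harm for each alternative and act on the larger one (the
# person in more danger gets rescued). A tiny random nudge guarantees the
# two potentials never come out exactly equal, so the robot never freezes.

def decide(pfh_a: float, pfh_b: float, epsilon: float = 1e-9) -> str:
    if abs(pfh_a - pfh_b) < epsilon:
        pfh_a += random.uniform(0.0, epsilon)   # the "randomizer of some sort"
        pfh_b += random.uniform(0.0, epsilon)
    return "rescue A" if pfh_a > pfh_b else "rescue B"

# Two people in exactly equal danger: the robot still commits to one of them.
print(decide(0.73, 0.73))
```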
  • by Scrameustache ( 459504 ) on Friday July 16, 2004 @04:45PM (#9721326) Homepage Journal
    Although Asimov did try to write stories about robots

    Stop implying that he failed.

    In Caves, a robot transported the weapon that served in a murder. In The Naked Sun, a robot with detachable limbs gave its arm to a woman, with which she bludgeoned her husband. In Empire, a Solarian robot tries to kill a human being because her definition of such a being depends on his accent.
    • Used as an unwitting tool to help, but not participate, in a murder.
    • Used as a blunt object (his brain was fried by that; he couldn't deal).
    • Played with the definition of "human" (also discussed in "Robot Dreams" in another fashion).

    None of these are rampaging hordes of killbots like what we see in this movie's trailers. All of these were done in a smart, intelligent, thoughtful, non-rampaging-hordes-of-killbots kinda way.
  • by Anonymous Coward on Friday July 16, 2004 @04:48PM (#9721355)
    ...which is a lesson that Airbus learned the hard way. Its flight control software for its fly-by-wire system initially did not allow a human override, and several Airbus crashes (including one at an air show that shows up on TV occasionally) were attributed to such. The plane "thought" it knew the right situation, and did not accept the pilot inputs of "pull up" and "apply full throttle now!", etc.

    They have since rectified that problem.
  • Tik-Tok (Score:4, Interesting)

    by h4rm0ny ( 722443 ) * on Friday July 16, 2004 @04:52PM (#9721404) Journal

    anyone intelligent enough to make a robot would build some failsafe in its programming,

    There is a wonderful book (pure satire) set in such a world. It's called Tik-Tok [amazon.com] by John Sladek. However, the central character is a robot that has something go very wrong with his "asimov circuits." The result is a tendency to murder people, and yet no one in society believes he's capable of it (especially other robots), because they assume he's governed by the three laws.

    The book is also one of the funniest and most absurd things I've ever read. If you like your humour black, then it might be the perfect antidote to Hollywood's attempt to impart angular momentum to Isaac Asimov's mortal remains.
  • by Gulthek ( 12570 ) on Saturday July 17, 2004 @10:21AM (#9724807) Homepage Journal
    I guess you didn't read his interview with Wired Magazine then. Smith *is* a geek.

    More and more offtopic, but if you haven't seen it, you should watch "Six Degrees of Separation".
