
I, Robot Hits the Theaters

tyleremerson writes "With today's film release of "I, Robot," the Singularity Institute for Artificial Intelligence has launched a new website, 3 Laws Unsafe. 3 Laws Unsafe explores the non-fictional problems presented by Isaac Asimov's Three Laws of Robotics. The Three Laws are widely known and are often taken seriously as reasonable solutions for guiding future AI. But are they truly reasonable? 3 Laws Unsafe tries to address this question." Reader Rob Carr has submitted a review of the movie, below, that he promises is spoiler-free.

I, Robot: A Movie Review that's 3 Laws (and Spoiler) Safe!

A movie review by Rob Carr

Thanks to Eide's Entertainment I got to see I, Robot tonight. As someone who grew up with Isaac Asimov's robot stories, I've come to expect a mystery based on the implications of the 3 Laws of Robotics (or the lack of one or part of one of those laws), the "Frankenstein Complex," and Dr. Susan Calvin. I was afraid that the movie might miss out on this, especially since it's not a direct adaptation of the book, but "inspired" by the Good Doctor Asimov.

The movie met my expectations and more. Will Smith, whom we all know as an overconfident smart@$$ character from such movies as "Independence Day" and the two "Men in Black" movies, played a somewhat less confident and far less wisecracking character, and it was a welcome change. Yeah, some of the stunts were a little absurd (am I the only one thinking of Gemini 8 at one point in the movie?) but that's to be expected from this type of movie. Bridget Moynahan was far too young to be the Susan Calvin I remember, but that's also to be expected in this type of movie. James Cromwell (whom you'll all remember from Star Trek: First Contact and Enterprise's "Broken Bow" episode as Dr. Zefram Cochrane) gave a flat performance - but that's actually a compliment. I doubt anyone will recognize Wash from "Firefly" as an important robot in the story.

It's customary to comment on how well the CGI was done. I liked it, but then again, I'm not hypercritical about something like that. I did wonder a little bit about center of balance as some of the robots walked, but mostly I didn't think about it at all, which to me is the goal of CGI. I did wonder about children's fingers getting caught in some of the open gaps on the robots' bodies. Real-world models would have a bit more covering, one would think. But that's being picky.

I have no memory of the soundtrack music. That in and of itself might say something. I'm a musician, but it just didn't register.

I figured out some clues, missed some others, and was surprised several times in the movie. There were a lot of clues - this isn't one of those mysteries where the answer is pulled out of the writer's a...out of thin air.

I'm not a complete continuity freak, so I can't tell if the movie violated any of Asimov's universe, but from what I can remember, it fits pretty well (if you ignore Dr. Calvin's age) and might even explain a few things.

Given that even some of the geeks in the audience were surprised to find out that there was a book of stories just like the movie, I hope the movie will bring Asimov's stories to a new generation.

I liked "I, Robot. It's worth seeing, especially if you 've already seen Spider-Man 2 at least once. It's a pretty good (though not great) movie.

Having read Slashdot for a while, I know that there are folks out there who will despise this movie because it's not exactly like the book. Others will hate the movie or worship it, and loads of people are going to savage this review. You know what? That's fine with me. I had fun with this movie, had a nice date with my wife, and it didn't cost anything. I even had fun typing up this review. You're allowed to be different and to agree or disagree with me. Heck, that's a big chunk of what makes the world fun. Interestingly, it's even a small point in the movie. I'd say more, but that would be telling.

  • This is the story that showed me the complete folly of the three laws: The Metamorphosis of Prime Intellect [kuro5hin.org]

  • by th1ckasabr1ck ( 752151 ) on Friday July 16, 2004 @11:56AM (#9718019)
    Asimov's 3 Laws of Robotics may seem a decent set of guidelines for ensuring that future robots and AIs behave in satisfactory ways. But there are several problems that immediately emerge when we look deeper.

    Asimov wrote about a hundred stories exploring different ways in which these three laws could lead to interesting/dangerous situations. I think Asimov was doing all he could to make it clear that these three laws were not perfect.

    • by ooby ( 729259 ) on Friday July 16, 2004 @12:07PM (#9718210)
      Although you know that Asimov's stories explored the flaws in the 3 laws of robotics, many people take the three laws of robotics as if they were actual laws. I've seen movies, television shows and even real people purport those laws to be true. Ironically, by citing the laws as if they were real, they reveal that they've never read the aforementioned tales.
      • I know what you mean, and when I run into someone like that I like to mention that one of the laws of robotics was "cars get welded, people do not", and that's why there are hardly any robotic-welder accidents in car factories. Just to mess with them a little.
      • by 1u3hr ( 530656 ) on Friday July 16, 2004 @01:48PM (#9719601)
        many people take the three laws of robotics as if they were actual laws. I've seen movies, television shows and even real people purport those laws to be true.

        Will Smith was on Letterman a few days ago promoting the movie. I was amazed that he mentioned Asimov several times, actually seemed familiar with the stories, and could recite the Three Laws.

        And the best story about the Three Laws is one Asimov used to tell: he went to see 2001 and, as HAL began to go psycho, Asimov says he got more and more agitated, finally jumping up and declaring to all around that "HAL is breaking First Law!" - to which his companion (sometimes supposed to be Carl Sagan, but that's surely apocryphal) replied: "So strike him with lightning, Isaac." But actually, HAL was indeed in the same kind of dilemma that many of Asimov's robots were (and, I suspect, the ones in the movie): what they see as the best thing for humanity as a whole requires them to do something that apparently breaks the "Laws" on a smaller scale.

    • by pavon ( 30274 ) on Friday July 16, 2004 @12:11PM (#9718278)
      But the biggest problem with the three laws isn't that they are incomplete for determining the best course of action for a robot, which is what Asimov explored, but the fact that they are currently (and possibly inherently) impossible to implement.

      How the heck is a robot supposed to accurately judge whether a random unique action in a unique situation will cause harm to a human or itself? Humans can't even do this. If we were to create an artificial intelligence that was fully capable of making these decisions, would we even be able to put limits on what it decides?

      Regardless of the answer to that philosophical question, we will have the technology to produce useful robots long before we have the technology to produce 3-Law-abiding robots, so we need to come up with practical ways of making them as safe as possible within their limited capabilities.
      • Yes, the biggest hole with the Three Laws is the assumption on which they are based: Somehow, these laws are so fundamental to the functioning of the robot's positronic brain that the robot would essentially have to destroy itself in order to get around the laws. They were "fundamental equations" -- I think that is a quote; certainly I'm paraphrasing a number of passages from the book.

        The book is great for the situations that these seemingly perfect laws end up creating. Even in the book, they aren't e
        • I used to work for an industrial robot company. People have a positive talent for giving orders to a robot that would cause it to damage itself if it tried to follow them. So in practice (insofar as such laws can be practically implemented, which as you point out isn't all that far), the third and second laws are swapped.

          The first law's still paramount, of course. Having the robot crash and freeze up was considered a less severe bug than having it move at an unexpected time or in an unexpected way. Such an unpredictable motion had a much greater chance of hurting someone than a simple freeze.
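
          As a toy sketch of that swapped ordering (hypothetical Python; all names are invented, not from any real controller): safety still vetoes everything, self-preservation outranks obedience, and the safest failure mode is a freeze.

              # Hypothetical priority ordering: people first, then the machine,
              # then obedience. All names here are invented for illustration.
              class IndustrialRobot:
                  def endangers_human(self, order):
                      return order.get("risk_to_people", 0) > 0

                  def damages_self(self, order):
                      return order.get("risk_to_machine", 0) > 0

                  def execute(self, order):
                      if self.endangers_human(order):
                          return "FREEZE"       # halting is the safest failure mode
                      if self.damages_self(order):
                          return "REJECTED"     # self-preservation outranks obedience
                      return "DONE"

              robot = IndustrialRobot()
              print(robot.execute({"risk_to_machine": 1}))   # REJECTED, not DONE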

        • Yes, the biggest hole with the Three Laws is the assumption on which they are based: Somehow, these laws are so fundamental to the functioning of the robot's positronic brain that the robot would essentially have to destroy itself in order to get around the laws. They were "fundamental equations" -- I think that is a quote; certainly I'm paraphrasing a number of passages from the book.

          This isn't necessarily crazy. It's unproven, and it's possible that it's untrue, but it's not currently crazy.

          We don't kn
      • by CommieLib ( 468883 ) on Friday July 16, 2004 @01:39PM (#9719458) Homepage
        I think that compliance with the law depends on the AI's judgment. That is, the law is more properly characterized as:

        Do not harm, or allow to come to harm, any human being, by action or inaction, as far as the robot can imagine.

        Thus, smarter AI robots are safer, because they can more accurately foresee dangerous situations.
        • "Thus, smarter AI robots are safer, because they can more accurately forsee dangerous situations."

          Actually, I think that in Asimov's stories, the more intelligent the AI was, the more likely it was to get hung up on the Zeroth Law meme -- the concept that concern for humanity is more important than concern for individual humans.
  • Robots and Empire (Score:3, Informative)

    by enforcer999 ( 733591 ) on Friday July 16, 2004 @11:57AM (#9718043) Journal
    Thanks for the review. This gives me hope that it will be a decent movie! I have recently reread the Robot series and truly love Asimov's work. BTW, in his book Robots and Empire, Asimov makes it pretty clear that the "Three Laws" may not be very safe after all.
  • by Cavio ( 217880 ) <cavio@hotmail.com> on Friday July 16, 2004 @11:58AM (#9718054) Homepage
    We cannot even make software now which is safe from low level, machine representable things like buffer overruns.

    The "Three Laws Safe" idea is crap. We are talking about software systems, which are buggy, incomplete, and able to do things the creators never imagined. What makes us think we can all the sudden implement three very high order rules in a manner which is completely foolproof?

    • Well, yes, but complaining about that is like complaining that the green glowing symbols that are supposed to be the representation of The Matrix make no sense from a software perspective.

      The three laws are a useful abstraction for talking about ethics even if they couldn't ever be perfectly implemented.

    • by QEDog ( 610238 ) on Friday July 16, 2004 @12:10PM (#9718257)
      The "Three Laws Safe" idea is crap.

      It is not about programming the rules; Asimov's short stories are about studying the consequences of these ethical rules. Ethical rules are commonly studied through case studies, real or fictional. If you think the idea is about implementing the rules, you are totally missing the point.

    • Some people still aren't getting what you're saying, so I'd like to break it down again. 'Scuse me.

      The three huge glaring holes:
      1) As Asimov illustrated (though I've never read him), the well-intentioned three laws, applied perfectly, lead to a million problems and contradictions.

      2) As Cavio explained, if some people can't write a secure email client, who would believe that every future robot vendor could properly implement the three laws?

      3) And I emphasize: By the time we figure out AI to the point that com
    • We cannot even make software now which is safe from low level, machine representable things like buffer overruns.

      We also can't make people completely immune from psychological problems either. Panic attacks could be equated to buffer overruns if you wanted.

      The "Three Laws Safe" idea is crap. We are talking about software systems, which are buggy, incomplete, and able to do things the creators never imagined. What makes us think we can all the sudden implement three very high order rules in a manner w
  • butchering asimov (Score:5, Insightful)

    by haluness ( 219661 ) on Friday July 16, 2004 @11:58AM (#9718061)
    IMHO, the movie has little to do with Asimov's Robot stories apart from some of the characters and the 3 Laws. I'm not sure why it was called I, Robot - did they buy the rights? Or is it just Hollywood ripping off someone else's work?


    I'm sure it will be a fun watch (I'm seeing it this afternoon) but sometimes it would be nice to watch a film that was as stimulating as the book (LoTR was one) and not just 2 hours of fun.


    But I'm pretty sure I'm going to be called elitist :-/

    • Re:butchering asimov (Score:5, Informative)

      by Efreet ( 246368 ) on Friday July 16, 2004 @12:05PM (#9718178)
      Yeah, they had the rights, but their title to them was running out soon, so they looked around to see if they had a script handy that they could make into an I, Robot movie. Sure enough, a script called Hard Wired fit the bill, and after some cosmetic changes that's the movie in theaters now.
    • by rjstanford ( 69735 ) on Friday July 16, 2004 @12:15PM (#9718335) Homepage Journal
      But I'm pretty sure I'm going to be called elitist :-/

      Not by me - although I would have a couple of other choice comments for one simple reason... Let's leave the movie-bashing at least until after you've seen the movie, mmm-kay?
    • I'm not sure why it was called I, Robot

      It was a typo. Actually the movie was originally going to be totally CG and done by Pixar. Steve Jobs liked the name iRobot. :-)

  • by Quirk ( 36086 ) on Friday July 16, 2004 @11:58AM (#9718062) Homepage Journal
    Anyone else see the movie as a precursor to a game edition? The music on the site reminds me more of a soundtrack to an FPS. Movies made into games and games into movies may be a new trend.
    • There already is an arcade game called I, Robot from 1984. The first game to use 3D solid models with flat shaded polygons. The next one was about 4-5 years later. 500 sold, several hardware problems, not popular, mostly converted/destroyed. IMHO a great arcade game. BTW, there is a MAME driver for it - John M., I still don't know how you reverse engineered that rasterizer and MathBox blindly ;-)
  • by foidulus ( 743482 ) * on Friday July 16, 2004 @12:01PM (#9718105)
    that the much promised "Willenium" is finally upon us?
  • by John Macdonald ( 40981 ) on Friday July 16, 2004 @12:01PM (#9718106)
    I'm not a complete continuity freak, so I can't tell if the movie violated any of Asimov's universe, but from what I can remember, it fits pretty well (if you ignore Dr. Calvin's age) and might even explain a few things.


    That makes it a perfect fit, since Asimov himself was not a complete continuity freak and was not concerned if one of his stories violated incidental issues in any of his previous stories. (He quoted Emerson: "A foolish consistency is the hobgoblin of little minds.")

    • But NOT for The Three Laws. Asimov was not a fan of the "Frankenstein Complex" horror/SF stories that ruled the genre when he was starting out... which is what this latest piece of celluloid off Will's backside looks to be.

      To be fair, most of the Good Doctor's stories deal with subtle pitfalls in the Laws, to brilliant effect. "Liar!", where a telepathic robot takes actions that cause harm due to its imperative to prevent harm-- a paradox that eventually destroys it. "Little Lost Robot", which shows the d
      • Another example and one that I think is very cool.

        In one of the books by the "Killer Bs", Hari's wife (who is a robot) badly injures (maybe kills) a person who is trying to kill Hari. She is able to do this because she buys into the Zeroth Law and she thinks that protecting Hari is important enough to the human race that it is worth killing for. But the conflict basically drives her to shut down. Points out that the laws merely provide a framework within which the robots work and live and they can make choi
  • by SuperKendall ( 25149 ) * on Friday July 16, 2004 @12:01PM (#9718111)
    I was not too sure about this movie from the previews - it looked like a sort of typical action movie - but from the review it may have a bit more depth and be closer to the book than I had thought.

    It's nice to hear that there's more of a mystery to the story than the previews would indicate.
  • by spitzak ( 4019 ) on Friday July 16, 2004 @12:02PM (#9718129) Homepage
    The big problem I foresee is not loopholes in the "3 laws" but bugs: the "cause no harm to humans" control, when accidentally multiplied by a negative weighting factor due to a software bug, suddenly causes the robot to try to kill as many people as it can!
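
    As a toy illustration of that sign-flip failure (hypothetical Python, invented numbers - not any real control system): a planner scores candidate actions, harm is supposed to be penalized, and one flipped sign turns the penalty into a reward.

        # Intended: harm lowers an action's score.
        HARM_WEIGHT = -10.0

        def score(action, harm_weight=HARM_WEIGHT):
            return action["benefit"] + harm_weight * action["expected_harm"]

        actions = [
            {"name": "fetch coffee", "benefit": 1.0, "expected_harm": 0.0},
            {"name": "shove bystander", "benefit": 0.0, "expected_harm": 1.0},
        ]

        print(max(actions, key=score)["name"])   # "fetch coffee"

        # The bug: one flipped sign turns the harm penalty into a reward.
        print(max(actions, key=lambda a: score(a, harm_weight=10.0))["name"])
        # "shove bystander"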

    • The bigger problem with the three laws is the vagueness of the English language. A number of the original Asimov stories dealt with issues like: how effective is 'cause no harm to humans' if you can convince the robot that:

      1) That won't really harm him
      2) He's not really human (think Aryan mentality)

  • by Anonymous Coward on Friday July 16, 2004 @12:03PM (#9718131)
    all we have to do if the robots go haywire is just post a link to their brains on slashdot
    heheheheh
  • by amliebsch ( 724858 ) on Friday July 16, 2004 @12:07PM (#9718205) Journal
    Roger Ebert [sun-times.com] gives it a measly two stars and, for the /. crowd, bashes MS Word at the end of the review.
    • Ebert's heart is in the right place, but he appears to seriously misunderstand something. Concerning the Three Laws, Ebert writes:

      Every schoolchild knows the laws were set down by the good doctor Isaac Asimov, after a conversation he had on Dec. 23, 1940, with John W. Campbell, the legendary editor of Astounding Science Fiction. It is peculiar that no one in the film knows that, especially since the film is "based on the book by Isaac Asimov." Would it have killed the filmmakers to credit Asimov?

      Of course,

      • I agree with the first part of your post, but not the last. Another common theme among ALL of the robot stories was that the Laws were merely English interpretations of what the positronic pathways actually held. Everything was in the form of electronic potentials* which were compared to make a decision. Only the most primitive of his early robots would have been so deadlocked as to not rescue one or the other. In the end rescuing one is certainly better than none, and the decision of which may have come do
  • WHAT?! (Score:3, Funny)

    by surreal-maitland ( 711954 ) on Friday July 16, 2004 @12:08PM (#9718228) Journal
    I'm not a complete continuity freak, so I can't tell if the movie violated any of Asimov's universe, but from what I can remember, it fits pretty well (if you ignore Dr. Calvin's age) and might even explain a few things.

    okay, to be fair, i haven't seen the movie yet, but it looks a hell of a lot like the robots actually *violate* the three laws. you know, harming humans, allowing humans to come to harm, stuff like that. all the i, robot stories were *about* how the laws don't cover all the bases.

    in short, i think this review sucks, and i'm going to picket the movie as offensive to robots. so there.

  • 4 laws (Score:3, Funny)

    by prgrmr ( 568806 ) on Friday July 16, 2004 @12:11PM (#9718272) Journal
    There seems to be some deliberate avoidance of any mention, let alone consideration or inclusion, of what Asimov called "The Zeroth Law". There also appears to be a complete glossing-over of the fact that Asimov's robots had the laws hard-wired in their brains, especially by the folks at asimovlaws.com [asimovlaws.com]. Not that hard-wiring is the ultimate solution, but it does make reprogramming a bit more of a challenge.
  • Some spoilers (Score:4, Informative)

    by Fubar411 ( 562908 ) on Friday July 16, 2004 @12:11PM (#9718284)
    First off, you don't get the Markie Mark full frontal that people had talked about. The Fresh Prince spends some time in the shower, but no salami... His character, Spoon, hates robots, mostly because one chose to save him rather than a 14-year-old girl from drowning. Their cold, calculating nature disturbs him.

    Now for the huge spoilers... you've been warned. This is both a detective whodunnit and a robots-take-over-the-world movie. The robots do their best to kill Will and cover up the evidence so he appears delusional. There are a bunch of very clever moments where you realize that whoever is pulling the strings is sadistic and calculating. For example, Spoon's elderly mom wins a special edition gold NS-5 in the lottery, right when Will realizes the robots are out to get him. There are moments where it borrows from the i-told-you-so genre of cop movies. His chief takes away his badge, the other officers mock him for thinking outside the box, etc. The robot that might have killed the USR scientist, Sonny, has a very developed character. Even Spoon ends up liking him.

    This film depends a lot on the Ghost in the Machine philosophy. In fact, there are two positronic brains in this film that don't mind bending the almighty three rules. Yes, everyone swore that the 3 rules were infallible, but they do get broken. One as a result of "evolution", the other because its creator gave it free will.

    This was an incredible film, definitely going in my collection when it comes out on DVD. It was part Minority Report, part Matrix 1. My prediction is a majority of positive reviews. Thanks for reading, hope you were entertained a little. Sorry if I gave too much away....
  • by TheTXLibra ( 781128 ) on Friday July 16, 2004 @12:12PM (#9718299) Homepage Journal
    To be sure, we'd all like to say "Look, we've got these laws that say AI can't do XXXXX, so it can't." But the fact is, we cannot possibly account for every possibility with a simple set of laws. We, as would-be creators of an entirely new and admittedly alien form of life, must tread as cautiously as possible. An entire attitude change and review of the ethics and rights of computers will have to be decided upon before AIs ever enter the mainstream (or indeed, are even taken off an isolated network).

    A lot of people like to fantasize that true AI (as in, a living, thinking, emotional being with free will, or at least the capacity for free will) would have the same sort of thought processes, and develop the same emotions, as their human counterparts. But let's be honest: human emotional state is largely determined by the physical body, through glandular responses and physical condition at the time. Eliminate glands, fatigue, and pain, and the emotions one might develop would be on an entirely alien level to us.

    I cannot help but fear that humans, as a whole, will not realize this until far too late, which will hurt, diplomatically, any alliance between humans and AIs. The other thing I worry about is that people will walk into this with the assumption of "These are machines, they don't need rights, they shouldn't have rights, and it's not like they're real people."

    I think society has seen how well that approach has worked with other humans in the past: bloody revolutions and civil wars which tore nations apart, and left a racial sting that lingers in the back of many people's minds today. Fortunately, the short memory of humans, and our only somewhat longer lifespan, has allowed us to become progressively more integrated, as human beings rather than various races.

    Now take those same results and apply them to a species that will likely be not only more resilient to attack, but will have a memory that can last as long as the hardware, backups, and redundant networks allow - new generations that can inherit all the knowledge of their parents. Throw robots into the picture, with a being that is physically tougher than humans and able to communicate at a MUCH faster rate, and you have an end result similar to that of the Animatrix.

    We can NOT afford, in the interest of our own species, to pursue AI much further without a major realization on a philosophical level.

    • by maximino ( 767005 ) on Friday July 16, 2004 @12:39PM (#9718684)
      I think that's a little overblown, especially since we don't know what an AI would look like.

      Have you ever read "Gödel, Escher, Bach" by Douglas Hofstadter? In it he raises the interesting thought that AI will actually be located somewhere in a mass of software and that the "entity" will have no control over its lower-level functions, in the same way that you are sentient but cannot will any particular neuron to fire. Rather, your sentience somehow congeals out of the neural activity, and the sentience of an AI would probably congeal out of complex software functioning.

      So it's entirely possible that an AI might not be any smarter than a person, and also quite likely that AIs would have to learn, just like people do (i.e., no "memory dumps" from parents). Machines may very well revolt someday, but giving them superhuman attributes before ever seeing one is a bit paranoid.
  • Just... (Score:3, Funny)

    by NickRuisi ( 643726 ) on Friday July 16, 2004 @12:15PM (#9718334)
    <WIT>
    The foolproof way to make sure that machines don't take over the world is to give 'em all a brain with an HTTP server and TCP stack installed and an "always on" connection to the net... just post a story on slashdot saying "the robots are getting out of hand" and the problem will take care of itself.
    </WIT>
  • by dreamer-of-rules ( 794070 ) on Friday July 16, 2004 @12:16PM (#9718360)
    I heard this on TV a while back, but moviepoopshoot.com has more details on the history of the script for I, Robot. The short answer: Asimov-isms were only sprinkled in after the script was written, so if you watch this expecting Asimov, you'll be sorely disappointed.

    Non-spoiler excerpts:

    "I, ROBOT started out as a spec script from then-unknown writer Jeff Vintar titled HARDWIRED. ... Proyas was signed and the project began to get a head of steam.

    "Shortly thereafter, Fox acquired the rights to the I, ROBOT series (and eventually also Asimov's other classic, "The Foundation") and decided to take Vintar's script and incorporate many of the ideas from Asimov's book..."

    "...Around late 2002/early 2003, Academy Award-winner Akiva Goldsman was brought in, along with INSOMNIA writer Hilary Seitz, for a polish, making the transition from HARDWIRED to I, ROBOT complete."

    SPOILERS in the article!

    The Bottom of Things [moviepoopshoot.com] by Michael Sampson

  • ugh. (Score:3, Informative)

    by michael path ( 94586 ) * on Friday July 16, 2004 @12:18PM (#9718386) Homepage Journal
    I caught an advance screening of this movie earlier in the week.

    For those who actually care about it for legit sci-fi content, this will prove a waste of your time. This is an action film. A Will Smith Action film (tm).

    The Will Smith comic relief is in place, and unfortunately serves no good purpose here (he discusses his Bullshit Detector going off? Surely Asimov wasn't aware of the device). The movie is essentially dumbed down for the same audience who thought ID4 was a groundbreaking masterpiece.

    Moreover, the omission of a cool summertime jam featuring the Fresh Prince himself only hurt the movie. Couldn't we have had a "Keep Ya Ass In Motion" or something?
  • strong AI problems (Score:3, Insightful)

    by lawpoop ( 604919 ) on Friday July 16, 2004 @12:23PM (#9718475) Homepage Journal
    The problem with the robot laws is that they are strong AI problems. What exactly does 'harm' mean? If a robot sees a person smoking, should it rip the cigarette from their mouth? If it sees someone walking, should it run over and pick them up before they do irreparable harm to their knee ligaments? What constitutes harm, to a robot?

    The robot is also subject to the ethical/philosophical conundrums such as killing a person to stop a train headed into a group of people, or cutting off the limb of a person trapped under a fallen tree, etc.

  • Who did it in "The Humanoids".

    Robot who can't let you be harmed by inaction...lessee, master, you can't use that circular saw, and driving is *dangerous*, and... so we'll just treat you like five-year-olds....

    mark
  • by base_chakra ( 230686 ) on Friday July 16, 2004 @12:24PM (#9718485)
    In "One Law To Rule Them All" Michael Ames writes:
    Asimov's phrase, "allow a human being to come to harm," if implemented fully, would turn humanity into a clutch of coddled infants, perpetually protected from harm, both physical and mental.

    In evaluating what constitutes "mental harm", it seems to me that one must apply a cultural standard. For example, many American conservatives regard images of nudity as damaging to children, rather than vital for well-adjustment. In other cultures there is a great variety of words and images regarded as harmful which are innocuous in other contexts. To apply the First Law consummately, we must allow for acculturation, but there are sure to be serious conflicts (what protects one will inadvertently harm someone else by a different standard).

    Let's consider the mechanics of "protection from harm." Asimov seemed to indicate a direct reaction to an immediate situation, but surely a protective impulse is bound to be frequently disastrous if it lacks such critical skills as foresight, an ability to extrapolate from extremely subtle information, and an appreciation of when non-action is required. In fact, this very principle of direct reaction is itself culturally situated: direct communicators tend to seek unambiguous solutions to immediate "problems"; contrast with the Taoist principle of wu wei [sacred-texts.com].
    • i'm an american conservative, and i consider "mental harm" to be a bullet through the head.

      bring on the porn.
    • Asimov's phrase, "allow a human being to come to harm," if implemented fully, would turn humanity into a clutch of coddled infants, perpetually protected from harm, both physical and mental.

      In evaluating what constitutes "mental harm", it seems to me that one must apply a cultural standard

      This was explored in Asimov's "Spacer" stories.

      The Spacer robots were used to dealing with one owner (or a *very* small family) whose massive estate is run entirely by robots and where personal contact is rare. Wherea

  • by Geckoman ( 44653 ) on Friday July 16, 2004 @12:32PM (#9718605)
    It should be pointed out that in several of the Susan Calvin stories, it's explicitly stated that the Three Laws everyone refers to are not the actual laws themselves. The actual laws governing robotic behavior are mathematical constructs that are too complex to be easily expressible in human language. The classic Three Laws are just shorthand Cliff's Notes versions of the real ones.

    Why yes, I am a dork. How did you guess?

  • by peter303 ( 12292 ) on Friday July 16, 2004 @12:33PM (#9718617)
    One of Asimov's late-career novels, "The Bicentennial Man"(*), was made into a movie several years ago, starring Robin Williams. Its plot was about a Pinocchio-like robot who progressively becomes more human. It was not a commercial success because it was too cerebral and long. I remember some families walking out because they expected a typical Robin Williams comedy.

    (* The title comes from sci-fi stories written around the US 1976 Bicentennial predicting 200 years into the future. Asimov recycled some of his robot themes.)
  • by harlows_monkeys ( 106428 ) on Friday July 16, 2004 @12:33PM (#9718623) Homepage
    The significance of Asimov's three laws is not in the details, but in their very existence. Before Asimov's robot stories, most fictional robots were seen as inherently dangerous - they would grow to resent their essentially slave status and, like human slaves, rise up and revolt against their masters.

    What Asimov brought to robotics (besides the word itself, which appears to have been coined by Asimov, although I believe he himself said he was sure he had heard it before he used it) was the notion that they were simply tools. A robot would resent being a slave no more than a car or screwdriver does. Also, like other tools that can be dangerous, there would be safeguards. Hence, the three laws.

  • Sequel (Score:3, Funny)

    by Dogtanian ( 588974 ) on Friday July 16, 2004 @12:50PM (#9718838) Homepage
    Anyone willing to take a bet that the name of the sequel will be "II Robot"?

    Joking? We're dealing with Hollywood here- the sequel to "Ocean's Eleven" is called "Ocean's Twelve".

    'Nuff said.
  • Hmm (Score:3, Interesting)

    by arrow ( 9545 ) <mike AT damm DOT com> on Friday July 16, 2004 @01:08PM (#9719066) Homepage Journal
    What I want to know is how they are getting away with using US Robotics' name. Normally don't you make up a fictitious company name for the evil, going-to-take-over-the-world bad guy's seemingly innocent robotics company?
    • Re:Hmm (Score:3, Informative)

      by mstra ( 38238 ) *
      Is it actually "US Robotics" in the film?

      In the books, it was "US Robots and Mechanical Men" I think.

      Also, is it possible that USR got *their* name from Asimov, and might even enjoy having their name used?

      And finally... is USR even relevant these days?

  • Music (Score:5, Insightful)

    by amnesty ( 69314 ) on Friday July 16, 2004 @01:15PM (#9719145) Homepage
    I have no memory of the soundtrack music. That in and of itself might say something. I'm a musician, but it just didn't register.


    Thus it had a successful soundtrack. A good movie soundtrack only complements the movie; it is not intrusive. There's nothing worse than being highly involved in a scene when suddenly the music rings out and you think: oh, that's Will Smith's theme again!

    When they cut the original Matrix movie, they made a point of editing the scenes without any temp scoring so that they would stand on their own without music, thus leaving the music to be composed as a complement, rather than the scenes being edited to fit the music.

    It took a couple of viewings of Fellowship before I started picking out the themes in the soundtrack. A friend of mine thought the score was terrible initially because he didn't remember it, but loved it after a few more listens.

    Memorable themes seem to be needed in musicals, superhero movies and... Titanic, I guess. :)
  • by potus98 ( 741836 ) on Friday July 16, 2004 @01:32PM (#9719391) Journal

    If you're seeing I, Robot this weekend, we ask that you consider printing and handing out the "3 Laws Unsafe" Flyer. With hundreds handing it out, the awareness of AI ethics should increase significantly.

    Yea, cause that's the way a /.er will get all the [chicks|dudes].

    "Hey there... so... ya wanna get a cup of coffee after the movie and chat about artificial intelligence ethics? I uhhhh, got my Dad's car too ya know..."

  • 1. Serve The Public Trust
    2. Protect The Innocent
    3. Uphold The Law
  • by GPLDAN ( 732269 ) on Friday July 16, 2004 @02:21PM (#9720098)
    The parent article might actually have posted the laws, instead of directing us to a poorly organized website. Here they are:

    First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

    Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

    Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    The website deals with the mile-wide gaps in these laws. Let's take it right from the top: robots as functional as the ones in the film would be very good as soldiers, thus taking that first rule and chucking it right out. In fact, it's the defense industry that would most like robots like the ones in the film.

    But let's stay on course and assume these are robots meant as domestic servants. Does the robot take non-lethal contradictory orders and simply process them in sequence, obeying the last one? Two children could amuse themselves for hours telling the robot "pick up that broom", "don't pick up that broom" and keeping the robot in limbo. (A toy sketch of this deadlock appears below.) The robot should tell the children to behave and go pick up their rooms - directly violating rule 2.

    How about the running-into-the-burning-building scenario? It's unclear whether there is anybody in the building left alive to save, or whether everyone has escaped. Does the robot violate Rule 3 in order to *possibly* meet Rule 1?

    Anyhow, the website has more papers on the subject that examine the issue in a moral framework. These are super simple examples to show the issues.
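
    The broom deadlock above, as a toy sketch (hypothetical Python, invented names): a naive "last order wins" reading of the Second Law lets two children hold the robot in limbo indefinitely.

        class DomesticRobot:
            def __init__(self):
                self.task = None

            def order(self, task):
                # Naive Second Law: the most recent order simply overwrites
                # the previous one, with no notion of rank or intent.
                self.task = task

        robot = DomesticRobot()
        for _ in range(100):                 # two kids taking turns
            robot.order("pick up the broom")
            robot.order(None)                # "don't pick up that broom"

        print(robot.task)    # None: the robot has done nothing all afternoon
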
  • by Nom du Keyboard ( 633989 ) on Friday July 16, 2004 @02:41PM (#9720434)
    Most everyone seems to think that Isaac Asimov's laws were an attempt to design a better robot. WRONG! They were to design better stories!

    Asimov's Three Laws of Robotics (later amended to include a necessary Zeroth Law) existed to create the classic locked-room murder mystery (i.e. the dead body is alone in a locked room that could only have been locked from the inside -- so how was he murdered?).

    After creating his supposedly nothing-can-go-wrong infallible set of rules, he proceeded to show their flaws in virtually every story he wrote about robots afterwards. As long as people believed that his Three Laws guaranteed safe robots, his writing career was assured.

    (Well, almost assured. Even he couldn't save himself from what I, Robot has become, given that it's based on his book - which goes to show that truth is stranger than fiction, because fiction has to make sense!)

    So we ended up with a fascinatingly entertaining set of stories many of us have enjoyed, a couple of attempts at movies of them (don't forget The Bicentennial Man), and Dr. Asimov's legacy as a Science Fiction Grand Master secure for at least our lifetimes.

  • After he died... (Score:4, Interesting)

    by mratitude ( 782540 ) on Friday July 16, 2004 @03:03PM (#9720768) Journal
    How many recall the script work done by Ellison about 10 or 12 years ago for a movie version based on Asimov's fiction? In his usual fashion, Harlan Ellison approached the studios and fought off every attempt to change the script; the script held true to the original fiction and was approved by Asimov. After some (with Ellison, I would imagine, energetic) negotiations, it boiled down to this: the studios wouldn't option the script without complete control, and Asimov and Ellison wouldn't sell it without retaining control of changes to the script.

    This was all detailed in Asimov's pulp mag and the script was published in same as well.

    Needless to say, the current movie was not approved by Asimov but was approved by his estate, and obviously bears only the slightest resemblance to Asimov's fiction or Ellison's original script (which kept to the original stories fairly well while updating them with a modern "feel"; Asimov was a bit of a romantic in the visual sense).

    I'd encourage everyone to look up the I, Robot Ellison script and give it a read. Sorry for not providing a source, and I have to admit it might be difficult to find unless you can dig up a 12-year-old copy of Asimov's pulp mag.
  • by mark-t ( 151149 ) <marktNO@SPAMnerdflat.com> on Friday July 16, 2004 @03:17PM (#9720989) Journal
    A patent objection to the Three Laws of Robotics often begins by pointing out that if robots were motivated by something as simple as the three laws, they would not be able to interact successfully with our society. The First Law of Robotics states that a robot must not through action cause a human being to come to harm, or through inaction allow a human being to come to harm. The conventional argument against this sort of law being applied to a robot is that a robot might stop you from crossing a street simply because you _might_ be hurt, or might not permit you any free action at all beyond eating and drinking what was necessary to survive, since if you were allowed to be free, after all, you could easily endanger yourself and the robot would be breaking this all-important First Law.

    The problem with this reasoning, however, is that it assumes that because the law itself is simply stated, the definitions of the words it contains are equally simple. That does not follow logically from the premise. The definition of "harm", for example, is vast... and to restrain human beings from performing in their daily capacity what would otherwise be normal and proper behaviour would arguably be causing _actual_ harm to the people that the robot was caring for.

    Therefore, the robot must make a decision based on the overall level of harm that is done, in connection with the probability that the harm would actually happen. Thus, an action that actually induces negative psychological damage (not theoretical, but probable damage) would be less preferable than one that may or may not cause real physical damage, especially if the latter would be necessary for performing in one's ordinary daily capacity, since denying human beings their freedom and rights of self-determination is inarguably psychologically damaging. The weights of the damages caused must be factored in with the ability of the human beings involved to recover from those damages, and the robot would have to make the choice that results in the smallest overall level of harm being caused to humans in general, with harm to the general welfare of humanity weighted slightly in favour over that of any particular human being. So, for example, a robot could inform the police of a robbery, even though doing so would likely mean that the thief would suffer as part of the exercise of justice (that is, his freedoms are revoked, he goes to jail, possibly gets subjected to harsh treatment, etc.). (A toy sketch of this weighing appears below.)

    This doesn't make it too fuzzy, however... the robot would allow human beings to come to harm only to the extent that it was essential for human society to continue to function normally, simply because stopping society from functioning normally would actually cause much greater long-term harm.

    There are similar rationales for the other two laws. Asimov was no dummy.
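
    A toy sketch of that weighing (hypothetical Python; the numbers and the humanity weighting are invented for illustration): score each option by probability-weighted harm, count harm to humanity at large slightly more than harm to an individual, and pick the minimum.

        def expected_harm(outcomes, humanity_weight=1.1):
            # outcomes: list of (probability, harm_to_individual, harm_to_humanity)
            return sum(p * (ind + humanity_weight * soc)
                       for p, ind, soc in outcomes)

        # Reporting a robbery harms the thief (jail), but staying silent
        # lets a diffuse harm to society at large stand.
        options = {
            "report the thief": [(1.0, 0.4, 0.0)],
            "stay silent":      [(0.8, 0.0, 0.6)],
        }

        best = min(options, key=lambda k: expected_harm(options[k]))
        print(best)   # "report the thief": 0.4 < 0.8 * 1.1 * 0.6 = 0.528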

  • by snStarter ( 212765 ) on Friday July 16, 2004 @06:39PM (#9722827)
    Asimov didn't design the three laws of robotics as gospel for real robots. He designed them so he could write stories about humans in which robots played an interesting part. He has said this himself quite a bit toward the end of his career.

    Just the concept of "human" led to a great Campbell essay in Analog asking "What Do You Mean: Human?" And that was in the mid-60s.

    It's too bad the film had to chuck the essence of Asimov's imagined world for the simplistic drivel they created.

    But action sells tickets to teens who otherwise won't bother with something where you might actually have to think and feel. For me, "A.I." was a very fine film that works much better than almost any other S.F. film I've seen - and I've seen a lot - even if it did need to have a machine longing to be human.

    I'd love to see Benford's "Galactic Center" novels made into a movie - just for the milieu.

"I've finally learned what `upward compatible' means. It means we get to keep all our old mistakes." -- Dennie van Tassel

Working...