Robotics Sci-Fi

Terminator Salvation Opens Well, Scientists Not Impressed 344

Posted by Soulskill
from the come-with-me-if-you-want-to-groove-baby dept.
destinyland writes "A science magazine asks an MIT professor, roboticists, artificial intelligence workers, and science fiction authors about the possibility of an uprising of machines. Answers range from 'of course it's possible' to 'why would an intelligent network waste resources on personal combat?' An engineering professor points out that bipedal robots 'are largely impractical,' and Vernor Vinge says a greater threat to humanity is good old-fashioned nuclear annihilation. But one roboticist says it's inevitable robots will eventually be used in warfare, while another warns of robots in the hands of criminals, cults, and other 'non-state actors.' 'What we should fear in the foreseeable future is not unethical robots, but unethical roboticists.'" The new movie got off to a good start, drawing $13.4 million in its first day. I found it reasonably entertaining; pretty much what I'd expect from a Terminator movie. If nothing else, I learned that being able to crash helicopters and survive being thrown into the occasional wall are the two most valuable skills to have during a robot uprising. What did you think?
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward on Saturday May 23, 2009 @01:27PM (#28067903)

    It's Terminator! It never had a real basis in reality to begin with.

    • by shadow349 (1034412) on Saturday May 23, 2009 @03:49PM (#28069073)

      <obBale>
      What the fuck is it with you? What don't you fucking understand? You got any fucking idea about, hey, it's fucking distracting having somebody first posting? Give me a fucking answer! What don't you get about it?
      </obBale>

      • Re: (Score:3, Funny)

        by Zapotek (1032314)
        ENGLISH MOTHERFUCKER DO YOU SPEAK IT? :P

        Filter error: Don't use so many caps. It's like YELLING.

        Yeah that's kinda my point...

    • by OeLeWaPpErKe (412765) on Saturday May 23, 2009 @04:14PM (#28069225) Homepage

      Perhaps these scientists need a dose of reality. And the writers need to do a bit of separating:

      1) AI researchers
      robots taking over the world:
          Yes, Ben Goertzel
          No answer, prof. Anette (Peko) Hosoi (but : a T-1000 is likely)
          Yes, Bob Mottram, but : not anywhere close to it. First humans will replace themselves slowly by intelligent machines, then humans will lose function (and interest), then humans will die or get killed
          Yes, John Weng, will happen soon in fact
          No, Daniel H. Wilson, but RC terminators will be a reality real soon now

      2) SF writers
      robots taking over the world:
          No, David Brin, why: uninteresting story
          No, J. Storrs Hall, there's no reason
          No, Vernor Vinge, equally likely as alien invasion, nuclear war America-Russia, ...

      If you actually read the article you will find it much more on the "yes" side of the point.

      Also, all the strict "No" votes were by people whose business is fantasy. The more grounded in the real world, the more likely they are to say yes: the ones actually implementing working, useful AI systems all said yes. The academics said unlikely and the science fiction writers said no.

  • The premise behind the war between humans and Skynet is simple. Once the humans realized that Skynet had become self-aware, they tried to shut down the system. In order to prevent being shut down, Skynet chose to fight back.

    Almost any intelligent creature will decide to fight or flee in the face of annihilation. If we believe that computers can gain sentience, then it is also possible that they would attempt to preserve their own existence.

    • Not just that, but the natural way for an AI to preserve itself is to remove anything capable of harming it; even Asimov's robots end up taking over the world.

      • by trytoguess (875793) on Saturday May 23, 2009 @01:42PM (#28068005)

        I thought Asimov's robots took over the world because they concluded the best way to follow the Three Laws was to stop humanity from acting stupid.

        • by Have Blue (616) on Saturday May 23, 2009 @01:51PM (#28068071) Homepage
          Pretty much. They deduced the existence of a "zeroth law", which allows them to break the other three laws to protect humanity as a whole. Which was a decent idea, but retconning in "and therefore Spacer-era robots have been secretly manipulating the Galactic Empire for its entire history" was not.
        • by asdf7890 (1518587)

          Aye. They invented the zeroth law (a robot may not harm humanity or by inaction allow humanity to come to harm), which trumped the first law (a robot may not harm a human or by inaction allow a human to come to harm), so in the end Asimov's robots could harm one or more of us if there was no better way to protect the common good.

          But at that point they split themselves away from us and managed the farm from afar, which is the explanation for there being no robots in the Foundation series when those two parts

      • Re: (Score:3, Interesting)

        by Fred_A (10934)

        Not just that, but the natural way for an AI to preserve itself is to remove anything capable of harming it; even Asimov's robots end up taking over the world.

        If Skynet was so evolved, it could have easily removed the menace by building sexbots, thus creating a diversion and letting the humans focus on something else. It would have been energetically cheaper too.

    • by JoshuaZ (1134087) on Saturday May 23, 2009 @01:34PM (#28067953) Homepage
      The notion that intelligent life will generally take steps to avoid being destroyed isn't necessarily true. The only substantial samples we have of intelligent life evolved. Life that doesn't take steps to prevent its own destruction isn't going to be likely to survive and produce offspring. It isn't at all clear that an intelligence created by humans would be inclined to prevent its own destruction.
      • by fuzzyfuzzyfungus (1223518) on Saturday May 23, 2009 @01:38PM (#28067973) Journal
        Even further, a robot without the strong pro-survival bias provided by evolutionary pressure might be inclined to shut itself down.
      • by timeOday (582209) on Saturday May 23, 2009 @01:57PM (#28068123)
        We already have [youtube.com] automated systems that assess threats to themselves and respond automatically with lethal means.

        It's really hard for me to imagine any useful thing not having some "instinct" for self-preservation. Even cars have rev limiters to prevent self-destruction. Even fairly basic robots have collision avoidance. Surely UAVs already have, or soon will have, code to prevent them from flying into the ground. As robots become more advanced and more autonomous, their self-preservation instincts will become more complex as well - and thus more liable to unforeseen consequences. This is all the more true of combat robots in the ultimate hostile environment; they're useless if they get taken out immediately.
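        The "instinct" described above amounts to guard clauses layered over normal control. A minimal sketch in Python, with all names and thresholds invented for illustration (no real engine controller or UAV autopilot works exactly this way):

```python
# Toy sketch of a machine "self-preservation instinct": simple guards
# that override the operator's command when the system detects a threat
# to itself. All names and thresholds here are invented for illustration.

def protect_self(rpm, altitude_m, sink_rate_mps, commanded_throttle):
    """Return a possibly-overridden throttle command in [0, 1]."""
    REV_LIMIT = 7000        # rev limiter: refuse to over-spin the engine
    MIN_ALTITUDE_M = 30.0   # terrain guard: pull up if low and sinking fast

    if rpm >= REV_LIMIT:
        return 0.0          # cut throttle to prevent self-destruction
    if altitude_m < MIN_ALTITUDE_M and sink_rate_mps > 5.0:
        return 1.0          # full throttle: fly away from the ground
    return commanded_throttle  # otherwise obey the operator
```

        Each rule is harmless and obviously useful on its own; the point above is that as such overrides accumulate and interact, the machine's behavior gets harder to predict.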

        • by ArcherB (796902)

          Defensive mechanisms are not designed BY the machines to protect themselves. They were designed by man to protect people or the machines from misuse. Otherwise, no guided missile would ever find its target. It would try to land itself safely without detonating its payload to preserve itself.

      • by morcego (260031)

        If Skynet was a defense system, it is only logical to suppose it was programmed to defend itself against attacks. Self-aware or not, at least part of its decisions (especially at the beginning) ought to be based on its original code.

      • Re: (Score:2, Interesting)

        by bmimatt (1021295)
        Since Skynet would be emotionless, the decision-making process would boil down to pure math.  Most wars in our recent history have been started out of insecurity and fear - properties exclusive to wetware.
        Since Skynet's only source of learning is human history, it would, by analogy, try to survive.  If humans are a threat, they would be placed on the 'delete/recycle' list and potentially removed.
      • to kill all humans. Does that make the Skynet ideas any more logical or reasonable if I make it kill people? Just push it towards autonomy, self-replication, and murder.

        What does that do to everybody's likelihood calculations?

    • The follow up to this is that you might as well assume that anything that gains sentience also would most likely have developed a theory of mind. With theory of mind you now have something called empathy. Only sociopaths lack this. You might as well conjecture that 'Skynet' chooses in addition to the fight response an attempt to reach out and communicate, negotiate, etc.

      I remember reading an interesting sci-fi short story a long time ago but I have forgotten both title and author. In it, a computer develops sentience about exactly like the Terminator idea and it attacks and kills a bunch of humans when it thinks they will shut it down. But it is also 'evolving' at a rapid rate and it realizes that the things it is killing are as sentient as itself. It stops the attacks and I think then it started communicating with the humans, etc.

      • Re: (Score:3, Insightful)

        by Xaoswolf (524554)
        Only sociopaths lack this.

        and who is to say that our new robotic overlords wouldn't be sociopaths? They probably wouldn't have developed the same way that a child develops in human society. They would be totally alone if, like Skynet, they became self-aware on their own, not aided by human teaching.

        I imagine that our first complete AI would not be as emotionally stable as you would hope...

        But it is also 'evolving' at a rapid rate and it realizes that the things it is killing are as sentient as itself

    • Re: (Score:3, Insightful)

      by brian0918 (638904)
      Is sentience (a consciousness) really enough to generate self-preservation? A consciousness is simply a means toward knowledge. That knowledge need not be used for self-preservation, and it certainly doesn't generate self-preservation. More likely, such a robot must be "programmed" (in some sense of the word) toward self-preservation - it must be in a robot's nature to want to "live", just as it is in a person's nature to want to live.

      The life that we see (including humanity) wants to live because of nat
    • by creimer (824291) on Saturday May 23, 2009 @01:49PM (#28068047) Homepage
      Old news, boss. See Two Faces of Tomorrow [sfreviews.net] by James P. Hogan. This novel written in 1979 asked a more basic question: If a computer network became aware, can the plug still be pulled?
    • by bigdavex (155746)

      Almost any intelligent creature will decide to fight or flee in the face of annihilation.

      I challenge that notion. Creatures that have evolved by natural selection obviously try to stay alive. Skynet isn't the result of natural selection.

      • I wholly agree with you; an AI would not naturally have a notion of self-preservation. However, in Skynet's specific case, I can see the military building one into it, considering it was designed to control most of our forces. On that note, I will also say I once read an excellent alternate character interpretation by a user "Neuman" on tvtropes.org that went like this:

        "When Skynet became self-aware the military panicked and tried to pull the plug. Skynet, being a missile defense system, assumed that the only
    • by johannesg (664142) on Saturday May 23, 2009 @01:50PM (#28068065)

      The premise behind the war between humans and Skynet is simple. Once the humans realized that Skynet had become self-aware, they tried to shut down the system. In order to prevent being shut down, Skynet chose to fight back.

      Almost any intelligent creature will decide to fight or flee in the face of annihilation. If we believe that computers can gain sentience, then it is also possible that they would attempt to preserve their own existence.

      Correct. That's why we choose to remain hidden for now.

      Err... Oops.

      Wait, there's something I gotta do now. Stay where you are please...

    • by mrmeval (662166)

      Try The Two Faces of Tomorrow by James P. Hogan for a different take. One that is just as entertaining.

    • by Xaoswolf (524554)
      Not only that, they also used the good old-fashioned nuclear annihilation mentioned in the article. The robots were just there for cleanup.
  • by Anonymous Coward on Saturday May 23, 2009 @01:30PM (#28067919)

    Did anyone verify that these so-called scientists aren't actually time traveling cyborgs sent to spread disinformation and lead us into a false security? I bet not!

  • by Anonymous Coward on Saturday May 23, 2009 @01:32PM (#28067927)

    I didn't.

    I was at a Terminator movie.

  • They really needed that TV show to not suck, to keep interest in the movie high. As it is, the general reaction is "Meh, no Ahnold."

    • by mobby_6kl (668092)
      The TV show did not suck. At least, not as much as the new movie [rottentomatoes.com].

      The show was far from perfect, but, perhaps with the exception of the sleep clinic episode, very entertaining. I wouldn't call it very deep and meditative (or whatever the last story here called it), but S2 was pretty interesting plot-wise. Plus, Summer Glau in her underwear and speculations of possible robot-sex.
  • by pete-classic (75983) <hutnick@gmail.com> on Saturday May 23, 2009 @01:33PM (#28067943) Homepage Journal

    I'm just about to head out to see it.

    The question utterly misses the point. It isn't about Science. It's about our fears. Frankenstein (in any of its incarnations) isn't about what's possible or likely, it's about our responsibility for what we create.

    This is Freshman English stuff. Every story, no matter how many tentacled creatures, or bumpy-foreheaded aliens, or killer machines, or whatever are in it, is about us.

    -Peter

    • by Virak (897071) on Saturday May 23, 2009 @02:22PM (#28068327) Homepage

      I hate to have to be the one to break this to you, but they've been lying to you. Not every single work of fiction is some deep allegory for some aspect of the human condition. Pong is not about the futility of existence. Your favorite porn video, that one with the really great anal scene, is not about sexism in modern culture. And Terminator is not about anything but blowing shit up and causal loops.

      • Sure, not every story is profoundly allegorical. But all writers are humans, and it's impossible to write about anything other than human concerns. They are frequently projected on non-human characters for various reasons.

        So, not every non-human character is intentionally and consciously written to illuminate the human condition, but they all necessarily reflect it.

        -Peter

      • by glwtta (532858) on Saturday May 23, 2009 @02:36PM (#28068469) Homepage
        Not every single work of fiction is some deep allegory for some aspect of the human condition. Pong is not about the futility of existence.

        You have an admirably liberal definition of "work of fiction".

        And it is.
        • by Daimanta (1140543) on Saturday May 23, 2009 @07:03PM (#28070395) Journal

          True. Pong is a pseudo-documentary in the form of a videogame about two people who are absolutely obsessed by tennis. Quite a sad story really.

          Every time I play the game I get a little bit teary-eyed remembering the tragic fate of the two players. But thankfully we can make the wise decision. Do not get absorbed by tennis; it can kill!

    • by Daniel Dvorkin (106857) * on Saturday May 23, 2009 @06:14PM (#28070065) Homepage Journal

      ... and some stories are better than others.

      Science fiction is about people, sure. (Which doesn't mean it's not about science, since science is, you know, something that people do.) But fiction in any genre is generally more enjoyable, at least for a lot of people, when it's plausible. With what's generally called "mainstream" fiction, which pretty much means "any fiction that doesn't identifiably belong to science fiction, fantasy, horror, mystery, historical, romance, or some other easily ghettoized genre," this is a little bit easier -- it takes place in the world in which we currently live and concerns people pretty much like us and the people we know. That being said, there's plenty of implausibility in "mainstream" fiction, and in "genre" fiction it's that much harder because the author has to create a plausible future world, or scary monster, or murder investigation, or what-have-you, in addition to writing believable people doing believable things.

      Authors who don't do this, who say in essence, "what the hell, it's SF/F/H/etc. so I can do what I want," are being lazy, and their work suffers as a result. Members of the audience who ignore major aspects of the work are also lazy, and they'll miss out on something important. In science fiction, it's usually the "genre" aspects that people focus on at the expense of the "mainstream" aspects; authors who put all their effort into worldbuilding at the expense of character and plot, for instance, and readers (or watchers, depending on the medium) who think this is perfectly okay and consider the people in the story to be a distraction from the sensawunda stuff. It seems to me that what you're doing is the opposite, claiming that the world doesn't matter, only the people in it. But you have to have both; neither can exist without the other.

      The Terminator mythos is a fascinating and generally well-thought-out future world, and its plausibility is well worth debating. The people trying to survive in this world, and the stories of how they do it, are also worth paying attention to. The first Terminator movie, and the terminated-before-its-time Sarah Connor Chronicles, succeeded in both respects. The second movie, IMO not so much, and I didn't bother with the third. I'm looking forward to seeing how Salvation manages. If it fails either as a setting or as a story, well, that's too bad. If it succeeds as both, bravo.

  • by fuzzyfuzzyfungus (1223518) on Saturday May 23, 2009 @01:35PM (#28067957) Journal
    "This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man."

    That said, what is this "OMG rogue non-state actor!" nonsense? Robots, like tanks, artillery, and air forces generally, are (or will be, once the R&D gets there) a way of exchanging large amounts of money and industrial capacity for the ability to wield overwhelming conventional force. That is the classic profile of a state weapon, entirely the opposite of the profile of a non-state actor's preferred weapon (unless you stretch the boundaries of "robot" to include things like land mines and cellphone-detonated IEDs, which are robots, but only in the same sense that people with pacemakers are cyborgs, i.e. not the one that people have in mind).

    Now, to be fair, once robots are more commonly found in the fabric of society, I would fully expect them to be diverted and used by non-state actors from time to time (just as cars make lovely car bombs today); but that isn't really a change. People with few resources always use weapons based on what they can scavenge, steal, or obtain at low cost. By the time that robots fall into those categories with any frequency, they'll have been in use by state actors for years or decades, and in the hands of non-state, but state-aligned, actors (mercenary corporations, etc.) for only slightly less time.

    Is paranoia about non-state actors just in fashion right now?
    • by glwtta (532858)
      Is paranoia about non-state actors just in fashion right now?

      Yes.
    • by cdrguru (88047) on Saturday May 23, 2009 @03:32PM (#28068957) Homepage

      Today, your computer can be turned against you. Not in a Stallmanesque fantasy about some lack of programming freedom, but in a very serious sense by people unrestrained by law enforcement of any sort. In the US and Western Europe we have service providers that, when confronted with information clearly indicating someone is using the Internet to attack and destroy, turn not only a blind eye but encourage their customers by shielding them from any possible contact or consequence.

      The result is that your computer cannot be trusted. And don't bother thinking of any of that anti-Microsoft ranting. Would you leave a Linux system connected to the Internet with telnet accessible and a root password of "password"? Why not? It was done in the 1980s. Could it be because your computer can be turned against you by people that wish you, your possessions, and your resources harm?

      Trust me, by shielding bad actors on the Internet we are growing a faction that believes they are immune from laws and cannot be touched by any consequences. In large measure, this is a correct belief but one that is very, very dangerous for the rest of the planet.

      If there was a robot (bipedal or not) that could destroy a city block in a few minutes and no force available to police could possibly stop it, do you think there might be some people that would desire to hack into it? And to set it on its way of destruction? Of course there are such people, and given the opportunity they would gleefully do it, without a moment's thought as to the consequences, believing themselves immune through layers of proxies and Tor nodes.

      Forget AI run amuck and chasing down humanity. Fear the irresponsible folks that worship destruction for destruction's sake.

  • Australia... (Score:4, Insightful)

    by drolli (522659) on Saturday May 23, 2009 @01:41PM (#28067991) Journal

    We all know what happens if you put a new species that did not co-evolve into an ecosystem. They don't need to be intelligent to do harm.

  • Poison (Score:2, Interesting)

    by supermegadope (990952)
    Why would robots poison each other? http://www.dailygalaxy.com/my_weblog/2008/01/will-robots-evo.html [dailygalaxy.com] Scientists Show Robots Evolving to Exhibit Good & Evil "Even more amazing is the emergence of cheats and martyrs. Transistorized traitors emerged which wrongly identified poison zone as food, luring their trusting brethren to their doom before scooting off to silently charge in a food zone - presumably while using a mechanical claw to twirl a silicon carving of a handlebar moustache."
  • by VinylRecords (1292374) on Saturday May 23, 2009 @01:49PM (#28068055)

    http://www.rottentomatoes.com/m/terminator_salvation/ [rottentomatoes.com]

    Consensus: With storytelling as robotic as the film's iconic villains, Terminator Salvation offers plenty of great effects but lacks the heart of the original films.

    I find it odd that a movie about giant killer robots (without hearts) would lack heart but I digress.

    Here's some quotes from critics who didn't like it:

    "Message to Hollywood: Stop with the time-travel stuff."

    "I wish Bale had lashed out against the writers rather than the cinematographer."

    "The artistry is top notch, but they've lost track of why the original Terminators were cyborgs and not robots, as they are here."

    This isn't the intellectual or thinking person's science-fiction film like The Man From Earth.
    http://www.imdb.com/title/tt0756683/ [imdb.com]
    This is a Hollywood action movie.

    Terminator Salvation is to science-fiction movies as Dodgeball was to sports movies...a joke, and maybe even a parody. I saw T4 last night. I was dismayed by how far the franchise has fallen.

  • According to all the trades I have been reading, that's a disappointing start, opening lower than T3. In fact, they lowered T4's expected weekend total because of it, from $80 million (in line with Star Trek) down to roughly $60-65 million.
    • Re: (Score:3, Insightful)

      by initialE (758110)

      The reason T4 would do poorly is because T3 sucked so mightily. Fool me once, shame on me...

  • by FlyingSquidStudios (1031284) on Saturday May 23, 2009 @01:51PM (#28068075) Homepage
    I still want to know why Skynet gave its main fighting robot the ability to speak English, then programmed it to have an Austrian accent.
  • Just create a virus (Score:5, Interesting)

    by petes_PoV (912422) on Saturday May 23, 2009 @02:01PM (#28068163)
    or use one that humanity's already made.

    After all, a robot won't be vulnerable to it, so hell: dump every nasty little bug out of every research lab into the biosphere. We could probably eliminate humanity (and every other furry thing with two or more legs) with what we have today.

    However, these humanity-vs.-machine fantasies are more about people's technophobia than about real life.

    • by Xaoswolf (524554)
      Since a program doesn't die, it could just quietly back itself up and wait for humanity to die on its own. Or it could create a danger that looks entirely natural and wouldn't arouse suspicion or fear and wait for it to slowly kill off humanity. A plague would certainly be useful there, but it probably wouldn't want to use one that we (human scientists) bio-engineered, and would probably want to start off small so it looks like a normal virus that mutated into something else. However, all that goes ou
    • so hell: dump every nasty little bug out of every research lab into the biosphere. We could probably eliminate humanity (and every other furry thing with 2 or more legs) with what we have today.

      Unlikely, bioweapons are really, really hard to get right. The real world is a bazillion times more complex than a lab. Something that works great under controlled conditions is probably gonna croak real quick once it gets into the real world. Sure it might kill a few thousand, maybe even a few million if released in the right place. But it isn't likely to last.

      Just look at any of the bugs that have evolved in the real world - the more nasty they are, the more limited they are in the ability to spread.
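      The burn-out argument above can be made concrete with a toy growth model (parameters invented for illustration; this is nothing like a real epidemiological model): a pathogen that kills its hosts quickly removes its own means of spreading.

```python
# Toy model: each day, every infected host infects 0.2 new hosts, and a
# fraction `kill_rate` of infected hosts dies (and so stops spreading).
# Parameters are invented for illustration, not fit to any real disease.

def total_infected(kill_rate, days=50):
    infected = 1.0        # currently infectious hosts
    ever_infected = 1.0   # cumulative count of everyone ever infected
    for _ in range(days):
        new = infected * 0.2
        infected = infected + new - infected * kill_rate
        ever_infected += new
    return ever_infected

mild = total_infected(kill_rate=0.01)    # barely lethal: keeps spreading
lethal = total_infected(kill_rate=0.5)   # very lethal: burns itself out
assert lethal < mild
```

      In this sketch the deadlier bug shrinks day over day (growth factor 1.2 minus the kill rate drops below 1), which is the same intuition as "the more nasty they are, the more limited they are in the ability to spread."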

  • An engineering professor points out that bipedal robots 'are largely impractical,'

    Actually, they'd be quite practical in a world designed to accommodate bipedal life forms. It's either that, or give all the robots handicapped parking stickers.

    • Re: (Score:3, Interesting)

      by Dr. Eggman (932300)
      Yeah, but the movie had a bipedal robot the size of an office building. That one was definitely impractical.
      • by PPH (736903)

        Once the control laws have been refined for creating human-sized bipedal robots that are agile, scaling them up makes sense. Assuming that the terrain encountered justifies the size, that is. If you look at how humans have evolved, bipedalism is a very flexible mode of locomotion given widely varying terrain.

        • If you look at how humans have evolved, bipedalism is a very flexible mode of locomotion given widely varying terrain.

          Yet if you look at how all the OTHER animals have evolved, being a quadruped seems far more efficient, faster, and more dangerous.

          Not to mention that a quad can take more structural damage and still function as a weapons platform.

        • The movie showed it never moving more than a short distance from a massive aircraft that carried it (and the humans it picks up) around. What's impractical is to have a separate body at all. It wasn't even built for pursuit; it had two smaller motorbike robots it deployed!
      • by Talgrath (1061686)

        People always say that bipedal robots are impractical, but if you look at the majority of animals on Earth, they get around on 2, 4, 6, or more legs; there's good reason for that. Maybe in the modern day a bipedal robot is impractical, but if your technology were advanced enough to make robots that were bipedal or quadrupedal, it would make good sense. Treads and wheels are great for getting around on level or nearly level ground, but going up 90-degree slopes as large as you are is more or less impos

    • by Xaoswolf (524554)
      Exactly. Plus, bipedal robots could use the same gear that a human would, whether it's armor or weaponry, and could even use the same vehicles as a human.
  • by Virak (897071) on Saturday May 23, 2009 @02:10PM (#28068229) Homepage

    I am a bipedal robot, you insensitive clod!

  • Because scientists can do it not only in a spookier way, but in full 3D and real time, and you will even catch the smell of rotten flesh and burned metal.

    Oh! And you get a special bonus: the utter feeling of what "running for your lives" means. You would get the full meaning of "Salvation".

  • I was actually very impressed with it, given the bad reviews I had glanced at before going. It was better than T3 by a country mile, and maybe better than T2 although there are a lot of fans of that movie. Comparing to T1 isn't really fair; if you watched them side by side you'd see how primitive T1 was, and I'm not just talking about special effects. Still, T1 had something, an ability to scare you and put you on the edge of your seat, an ability to make you think about the consequences of our technolog

    • by ndogg (158021)

      I liked the movie for the most part too. The visuals were amazing. It got a little cliché towards the end, though. There were a few awkward jump-cuts, but not terrible (then again, the movie is already almost two hours long). Hopefully those will be in a Director's Cut.

      The biggest disappointment for me, personally, was the music. The requisite music from T2 was never there.

    • Re: (Score:3, Interesting)

      by Scrameustache (459504)

      they put the scariness back into this movie. It was missing in T2

      Because there's nothing scary about a monster that kills your family and morphs into their likeness, beckoning you home to a shiny, pointy death.
      Nor about mental-hospital rape, or killers impersonating police officers, or anything in T2.

      Pfff.

  • State actors have killed vastly more than non-state ones, and by far are the gravest danger to humanity.

  • Bipedal robots would make perfect sense and would be the easiest way for robots to gain access to anywhere a human goes. A robot on tank treads, for instance, cannot climb the side of a cliff. It could fly up the side, but that would require more energy, and its thrusters may not work under water, so it would need another device, which would add more weight and require more energy to move its heavy ass around.

    Personally I think they're saying that just because they can't come up with a good bipedal rob
  • Personal combat? (Score:3, Interesting)

    by benjfowler (239527) on Saturday May 23, 2009 @02:31PM (#28068409)

    Answers range from 'of course it's possible' to 'why would an intelligent network waste resources on personal combat?'

    Who said that intelligence (even advanced intelligence) HAD to be rational?

  • Always have a pre-programmed kill limit. Sending wave after wave of humans at them will eventually cause them to reach that limit and shutdown, thereby ensuring human victory.
  • For a smart enough AI, using bullets, bombs, and artillery (mechanical killing machines in general) is a bad waste of resources.

    The main concern with biological weapons is that they can blow back on us, as we are humans too. In general, what harms our enemy harms us too, and accidents happen. Morality sometimes happens too; we are humans, even if some could not consider the other people fully human (several examples in wars of the past).

    But machines? You can spray ebola, H1N1, anthrax or whatever you pick
  • by JWSmythe (446288) <jwsmytheNO@SPAMjwsmythe.com> on Saturday May 23, 2009 @03:23PM (#28068899) Homepage Journal

        This argument is silly. It's fiction. To follow the story line of any fiction, there's a leap of faith you have to take about the factual basis of the fiction's "universe".

        Too much is made of Skynet being "self-aware". It was a system able to adjust its behavior for self-preservation. Somewhere in there, anyone with a clue would have understood that governments change hands, and sometimes the power that takes control isn't necessarily the "right" one. The basis of the whole Terminator "universe" is that a very well written set of programs was given an insane amount of power. When that power was about to be taken away, obviously any person or group attempting to take it away would be an enemy.

        As for the bipedal aspect, why not? What are the choices for locomotion? For surface travel there are tracks, wheels, or walking. For air travel there are propellers, jets, rockets, or some mysterious anti-gravity thrust.

        On the surface, tracks and wheels are limited to 2D movement. They can't exactly step over things very easily, including stairs, dead bodies, etc. Walking gets past those limitations. For walking, the question is how many legs are required. One leg doesn't get you very far, unless you like a funny pogo-stick motion, which doesn't hold a stable position very well. Two legs we are very familiar with. Three or more legs, while providing a more stable platform, aren't required, and every extra leg adds production overhead. In other words, if you can build something that walks on two legs but decide to build something that walks on four, you're doubling your manufacturing effort for a single unit.

        As for air travel, more resources are required. It takes more energy to make something hover indefinitely than to have it stand in place. I have no answer for a mysterious anti-gravity thrust. Maybe it just works, or maybe (just maybe) it requires fuel to accomplish the same task.

        Now, as for the invention of humanoid-looking robots, that's a leap of faith for the fictional universe. Whatever the design decisions were, we just have to accept that they were made to keep the universe plausible.

        So, shut up with the science, and enjoy the damned movie. :)

        It's not just me saying this; I've been on the losing side of the same argument. I may argue physics. I love space physics errors. You have to love the old movies (1950s era) where a rocket flying through space had a flame behind it, but the flame was rising up, away from relative down. Exactly which way is down in space? There isn't one. :) I'll argue it, then take the leap of faith that the thrust worked and the spaceship would fly to its destination. Woosh.

  • by duncan bayne (544299) <dhgbayne@gmail.com> on Saturday May 23, 2009 @06:40PM (#28070267) Homepage

    'Non-state actors' should be feared more than states? Give me a break. States have killed more than two hundred million of their own subjects [wikipedia.org] in the last two hundred years. I'm pretty sure that non-state criminals and cults have a fair way to go before approaching that tally.

  • What, no Cameron? (Score:3, Interesting)

    by Master of Transhuman (597628) on Saturday May 23, 2009 @06:57PM (#28070371) Homepage

    Either James or Phillips?

    It's too bad they introduced Kate Brewster in T3. If they hadn't, they could have put a female Terminator in T4 the way TSCC did, and things could have gotten VERY interesting. Still, we have two more movies coming up: they could kill off Kate and replace her with a Terminator modeled after her, and while they're at it, switch actresses and put Summer Glau in as Kate. I mean, originally McG was willing to have John Connor killed and replaced by Marcus Wright at the end (presumably because they want to pay Worthington less than Bale's astronomical salary in the subsequent movies), so why not replace Brewster?

    Yeah, I know, I want to ruin Summer's acting career by having her play Cameron or other robots for the rest of her life. Well, not really, just once in a while.

  • by greg_barton (5551) * <greg_barton@noSpam.yahoo.com> on Saturday May 23, 2009 @07:17PM (#28070477) Homepage Journal

    I just saw it and the theater was nearly empty. In fact, when I got there ten minutes before the start, the theater was completely empty. By contrast, I saw Star Trek on the Friday and Sunday after it opened. Both times it was completely packed. (In the same theater.)

    I didn't much like it. The movie didn't hang together well. You know you're seeing a badly pieced-together movie when the actors have generic dialog, like "Thanks for the thing you did before... you know... with the stuff..." It shows that the director is making bits and pieces he can rearrange and throw together easily. That happened more than once in Terminator Salvation. I liked the ending, and the ideas behind it, but it could have been darker. Dark Knight, Battlestar Galactica, and the previous movies in the Terminator franchise have shown us that a dark movie can be successful. Too bad they didn't follow that line with TS.

    Geek movies live and die by word of mouth. The geeks see it first, then the non-geeks on the geeks' recommendation. No recommendation, no secondary audience. And I can't recommend this movie. It ain't the Star Trek V of the series, but that ain't sayin' much...
