Robotics Sci-Fi

Terminator Salvation Opens Well, Scientists Not Impressed 344

destinyland writes "A science magazine asks an MIT professor, roboticists, artificial intelligence workers, and science fiction authors about the possibility of an uprising of machines. Answers range from 'of course it's possible' to 'why would an intelligent network waste resources on personal combat?' An engineering professor points out that bipedal robots 'are largely impractical,' and Vernor Vinge says a greater threat to humanity is good old-fashioned nuclear annihilation. But one roboticist says it's inevitable robots will eventually be used in warfare, while another warns of robots in the hands of criminals, cults, and other 'non-state actors.' 'What we should fear in the foreseeable future is not unethical robots, but unethical roboticists.'" The new movie got off to a good start, drawing $13.4 million in its first day. I found it reasonably entertaining; pretty much what I'd expect from a Terminator movie. If nothing else, I learned that being able to crash helicopters and survive being thrown into the occasional wall are the two most valuable skills to have during a robot uprising. What did you think?
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Saturday May 23, 2009 @01:27PM (#28067903)

    It's Terminator! It never had a real basis in reality to begin with.

  • The premise behind the war between humans and Skynet is simple. Once the humans realized that Skynet had become self-aware, they tried to shut down the system. In order to prevent being shut down, Skynet chose to fight back.

    Almost any intelligent creature will decide to fight or flee in the face of annihilation. If we believe that computers can gain sentience, then it is also possible that they would attempt to preserve their own existence.

  • by pete-classic ( 75983 ) <hutnick@gmail.com> on Saturday May 23, 2009 @01:33PM (#28067943) Homepage Journal

    I'm just about to head out to see it.

    The question utterly misses the point. It isn't about Science. It's about our fears. Frankenstein (in any of its incarnations) isn't about what's possible or likely, it's about our responsibility for what we create.

    This is Freshman English stuff. Every story, no matter how many tentacled creatures, or bumpy-foreheaded aliens, or killer machines, or whatever are in it, is about us.

    -Peter

  • by JoshuaZ ( 1134087 ) on Saturday May 23, 2009 @01:34PM (#28067953) Homepage
    The notion that intelligent life will generally take steps to avoid being destroyed isn't necessarily true. The only substantial samples we have of intelligent life evolved. Life that doesn't take steps to prevent its own destruction isn't going to be likely to survive and produce offspring. It isn't at all clear that an intelligence created by humans would be at all inclined to prevent its own destruction.
  • by fuzzyfuzzyfungus ( 1223518 ) on Saturday May 23, 2009 @01:38PM (#28067973) Journal
    Even further, a robot without the strong pro-survival bias provided by evolutionary pressure might be inclined to shut itself down.
  • Australia... (Score:4, Insightful)

    by drolli ( 522659 ) on Saturday May 23, 2009 @01:41PM (#28067991) Journal

    we all know what happens if you put a new species that did not co-evolve into an ecosystem. They don't need to be intelligent to do harm.

  • The follow-up to this is that anything that gains sentience would most likely also have developed a theory of mind, and with a theory of mind comes empathy; only sociopaths lack it. You might as well conjecture that 'Skynet' chooses, in addition to the fight response, an attempt to reach out and communicate, negotiate, etc.

    I remember reading an interesting sci-fi short story a long time ago, but I have forgotten both the title and the author. In it, a computer develops sentience almost exactly like the Terminator idea, and it attacks and kills a bunch of humans when it thinks they will shut it down. But it is also 'evolving' at a rapid rate, and it realizes that the things it is killing are as sentient as itself. It stops the attacks and, I think, starts communicating with the humans, etc.

  • by brian0918 ( 638904 ) <brian0918.gmail@com> on Saturday May 23, 2009 @01:47PM (#28068039)
    Is sentience (a consciousness) really enough to generate self-preservation? A consciousness is simply a means toward knowledge. That knowledge need not be used for self-preservation, and it certainly doesn't generate self-preservation. More likely, such a robot must be "programmed" (in some sense of the word) toward self-preservation - it must be in a robot's nature to want to "live", just as it is in a person's nature to want to live.

    The life that we see (including humanity) wants to live because of natural selection - if it didn't want to live, it wouldn't be around for us to observe it, nor even for us to exist ourselves. Throughout the course of evolution there were likely many self-destructive mutations - those creatures died out rather quickly. It was only the build-up of self-preserving mutations that resulted in self-preserving creatures, thus resulting in life that strives to live.

    So no, I don't think you can simply get a robot smart enough and *poof* it wants to live. That shortcuts the entire evolutionary process. Instead, either the evolutionary process must be repeated in robots, or robots must be pre-programmed toward self-preservation.
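    A minimal Python sketch of that selection argument (not anyone's actual model; the population size, death rate, and mutation rate are all invented for illustration). No agent "wants" anything; the avoid-hazards flag spreads only because its carriers survive to copy themselves:

        import random

        POP_SIZE = 200
        GENERATIONS = 50
        HAZARD_DEATH_RATE = 0.6  # chance a non-avoider dies before reproducing
        MUTATION_RATE = 0.01     # chance the flag flips in an offspring

        # Start with self-preservation rare: about 5% of agents avoid hazards.
        population = [random.random() < 0.05 for _ in range(POP_SIZE)]

        for _ in range(GENERATIONS):
            # Agents that avoid hazards always survive; the rest face the hazard.
            survivors = [a for a in population if a or random.random() > HAZARD_DEATH_RATE]
            # Survivors reproduce to refill the population; mutation flips the flag (XOR).
            population = [parent != (random.random() < MUTATION_RATE)
                          for parent in random.choices(survivors, k=POP_SIZE)]

        # The trait comes to dominate without any agent ever "deciding" to live.
        print(f"hazard-avoiders after {GENERATIONS} generations: {sum(population) / POP_SIZE:.0%}")

    Set HAZARD_DEATH_RATE to 0.0 and the trait merely drifts instead of taking over, which is the commenter's point: no selection pressure, no build-up toward self-preservation.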
  • by VinylRecords ( 1292374 ) on Saturday May 23, 2009 @01:49PM (#28068055)

    http://www.rottentomatoes.com/m/terminator_salvation/ [rottentomatoes.com]

    Consensus: With storytelling as robotic as the film's iconic villains, Terminator Salvation offers plenty of great effects but lacks the heart of the original films.

    I find it odd that a movie about giant killer robots (without hearts) would lack heart, but I digress.

    Here are some quotes from critics who didn't like it:

    "Message to Hollywood: Stop with the time-travel stuff."

    "I wish Bale had lashed out against the writers rather than the cinematographer."

    "The artistry is top notch, but they've lost track of why the original Terminators were cyborgs and not robots, as they are here."

    This isn't an intellectual or thinking person's science-fiction film like The Man From Earth.
    http://www.imdb.com/title/tt0756683/ [imdb.com]
    This is a Hollywood action movie.

    Terminator Salvation is to science-fiction movies as Dodgeball was to sports movies...a joke, and maybe even a parody. I saw T4 last night. I was dismayed by how far the franchise has fallen.

  • by timeOday ( 582209 ) on Saturday May 23, 2009 @01:57PM (#28068123)
    We already have [youtube.com] automated systems that assess threats to themselves and respond automatically with lethal means.

    It's really hard for me to imagine any useful thing not having some "instinct" for self-preservation. Even cars have rev limiters to prevent self-destruction. Even fairly basic robots have collision avoidance. Surely UAVs already have, or soon will have, code to prevent them from flying into the ground. As robots become more advanced and more autonomous, their self-preservation instincts will become more complex as well - and thus more liable to unforeseen consequences. This is all the more true of combat robots in the ultimate hostile environment; they're useless if they get taken out immediately.
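    As a toy illustration of the point that these "instincts" start as simple guard clauses in a control loop, here is a Python sketch (the thresholds and function names are invented for the example, not any real autopilot's API):

        MAX_RPM = 6500           # rev limiter: cut the commanded engine speed here
        MIN_ALTITUDE_M = 120.0   # hypothetical hard altitude floor for a UAV

        def limit_rpm(commanded_rpm: float) -> float:
            """Clamp engine speed the way a rev limiter does."""
            return min(commanded_rpm, MAX_RPM)

        def limit_descent(commanded_alt_m: float) -> float:
            """Refuse a commanded altitude that would fly the craft into the ground."""
            return max(commanded_alt_m, MIN_ALTITUDE_M)

        print(limit_rpm(9000.0))    # 6500: the machine protects its engine
        print(limit_descent(50.0))  # 120.0: self-preservation overrides the operator

    The unforeseen consequences enter exactly here: each guard is individually sensible, but stack enough of them in an autonomous system and the machine starts refusing orders in situations nobody enumerated.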

  • by Virak ( 897071 ) on Saturday May 23, 2009 @02:22PM (#28068327) Homepage

    I hate to have to be the one to break this to you, but they've been lying to you. Not every single work of fiction is some deep allegory for some aspect of the human condition. Pong is not about the futility of existence. Your favorite porn video, that one with the really great anal scene, is not about sexism in modern culture. And Terminator is not about anything but blowing shit up and causal loops.

  • by Colin Smith ( 2679 ) on Saturday May 23, 2009 @02:28PM (#28068381)

    The Internet was designed to survive a nuclear attack.

    Right... In theory the comms protocols might be routable. Pity about the power supplies.

    If I'm going to nuke you, I'll be aiming at your energy systems as well as command and control. The USA, for example, has about 30 days of fuel stored. Kill all the power stations as well, and just about everything will stop almost instantly. It's one of those pesky details that authors and film producers like to gloss over.

    As for the humans who aren't killed in the blasts, most will die of thirst and hunger within a month without the current infrastructure supporting them. Though, of course, there is always cannibalism.

  • Only sociopaths lack this.

    and who is to say that our new robotic overlords wouldn't be sociopaths? An AI probably wouldn't have developed the way a child develops in human society. It would be totally alone if, like Skynet, it became self-aware on its own, unaided by human teaching.

    I imagine that our first complete AI would not be as emotionally stable as you would hope...

    But it is also 'evolving' at a rapid rate, and it realizes that the things it is killing are as sentient as itself. It stops the attacks and, I think, starts communicating with the humans, etc.

    That's all fine and dandy, but you'd better hope it doesn't have access to any history books, because if it does, it will see that not many peace talks actually work. Likewise, it would probably see just how humanity treats those it considers different or strange, and not see any use in talking either...

  • by Anonymous Coward on Saturday May 23, 2009 @03:22PM (#28068877)

    I saw T4 last night. I was dismayed by how far the franchise has fallen.

    You must have missed the third one.

  • PG-13 (Score:2, Insightful)

    by SpeZek ( 970136 ) on Saturday May 23, 2009 @03:22PM (#28068885) Journal
    This movie was definitely brought down by the PG-13 rating.
    Why, in the movie, are terminators so bad at killing people? In the first movie, the T-800 didn't fuck around tossing people around; he shot them multiple times in the face. Yet in this movie, the machines have dozens of chances to just crush John Connor's head (among others), and yet they decide it is more prudent to chuck him across the room, giving him a minute to recover while they amble over. What makes the machines so terrifying a concept is that they make cold, calculated decisions to kill at any cost to themselves.
    What happened to the bleak world we saw in Kyle Reese's flashbacks, where the machines didn't scream, didn't waste time, and didn't act human at all? They were silent, terrifying killing machines.

    IMO, this movie would have been a lot better if it had followed more of a Saving Private Ryan-esque formula, with a small group of men (Connor, others) sneaking past the machines' lines to rescue Reese. Can you imagine the opening to SPR, but with machines manning laser turrets? It would have evoked more emotion in the audience than the pathetic attempt to anthropomorphize the machines. But, then, it might not have gotten the all-powerful PG-13 rating, especially with the original ending [slashfilm.com]. No fate but what you make, indeed.
  • by areusche ( 1297613 ) on Saturday May 23, 2009 @03:41PM (#28069023)
    That it was, is, and always will be a movie. It is fictional entertainment with an attempt at being slightly scientifically accurate. Be grateful it isn't like Independence Day!
  • by Tatarize ( 682683 ) on Saturday May 23, 2009 @03:47PM (#28069069) Homepage

    to kill all humans. Does that make the Skynet idea any more logical or reasonable if I make it kill people? Just push it toward autonomy, self-replication, and murder.

    What does that do to everybody's likelihood calculations?

  • by initialE ( 758110 ) on Saturday May 23, 2009 @03:50PM (#28069077)

    The reason T4 would do poorly is that T3 sucked so mightily. Fool me once, shame on me...

  • by c6gunner ( 950153 ) on Saturday May 23, 2009 @04:33PM (#28069349) Homepage

    People aren't machines. You're just not getting this whole "evolution" thing, are you? The reason we have an instinct for self preservation is because we depend on genetic inheritance to multiply. Genes which code for self preservation are likely to survive long enough to make copies of themselves - ones which don't code for self preservation are less likely to do so. Machines don't have genes, and they don't copy themselves, ergo no evolutionary mechanism and no way to evolve a self preservation instinct.

  • by incognito84 ( 903401 ) on Saturday May 23, 2009 @04:38PM (#28069385)
    Terminator Salvation is to science-fiction movies as Dodgeball was to sports movies...a joke, and maybe even a parody.

    Say what you will about the third and fourth films, but to say that about the second is downright ignorant. As far as science-fiction films are concerned, Terminator 2 is one of the greats.
  • by Bigjeff5 ( 1143585 ) on Saturday May 23, 2009 @04:46PM (#28069455)

    I don't think this is by any means a simple concept like writing three rules that can never be broken. If the system is intelligent it will always find ways around the rules to complete the task...

    Anybody who has read Asimov's robot short stories, which were based on his three laws of robotics, would agree with that statement. Indeed, that was the point of his stories. He created three perfect rules to protect humans from robots, then came up with dozens of practical scenarios where the logical outcome was not what was expected or intended by the three laws.

    My favorite is probably the story of the robot on Mercury, where the robot got stuck "between" two laws in its decision-making process, which immobilized it and put in great danger the two men sent to ensure that the robot would continue to function as needed. It was ordered to collect a mineral at a particular pool, but the pool emitted enough radiation to damage the robot. The closer the robot got to the pool, the greater the danger to itself and the less likely it would be able to fulfill its orders. So there was a point where the orders, based on the Second Law, were made irrelevant and the Third Law, self-preservation, took over. However, once it got far enough away that it was no longer in danger, the orders became the priority again, causing the robot to turn back toward the pool. It got stuck in this loop, and ended up walking around the pool for hours, unable to move forward and unable to return.

    The problem there was that the orders were given rather flippantly, and the robot knew its own value to the company. The robot was also not aware that failing to follow the seemingly flippant order (it was phrased that way because, at the time it was given, there was more than enough time to collect the mineral safely) would put humans at risk. Nor was it given the option of collecting the material at another, safer location; it was told exactly where to get it, and that happened to put the robot in danger. Had any of these conditions been different (the orders phrased more strongly, the robot made aware that the material was vital, or the robot ignorant of its own value to the company), things would have turned out better, though the robot would have been damaged to some extent. As it was, the humans had to don special suits and go find the robot, nearly dying in the process.

    Just an example, but he came up with dozens of them, the ultimate being robots quietly subverting human control to manipulate the economy and thus manage to prevent all future wars.
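    That deadlock lends itself to a toy model. A hedged Python sketch (the drive weights and step size are invented; Asimov gives no numbers): the Second Law pulls toward the pool with constant strength, the Third Law pushes back harder as the distance shrinks, and the robot converges on the radius where the two balance, from either direction:

        ORDER_PULL = 1.0     # constant Second Law urge from the weakly phrased order
        DANGER_SCALE = 50.0  # Third Law push, rising sharply near the pool

        def net_drive(distance: float) -> float:
            """Positive: move toward the pool. Negative: retreat."""
            return ORDER_PULL - DANGER_SCALE / distance ** 2

        def settle(start: float, steps: int = 200) -> float:
            d = start
            for _ in range(steps):
                d -= 2.0 * net_drive(d)  # step toward or away from the pool
            return d

        # From far away the order dominates; up close the danger dominates.
        # Either way the robot ends up pacing the same equilibrium ring.
        print(round(settle(50.0), 2))  # -> about 7.07, i.e. sqrt(DANGER_SCALE / ORDER_PULL)
        print(round(settle(3.0), 2))   # -> about 7.07 again

    In the story it takes the First Law (a human deliberately putting himself in danger) to break the tie; in this two-law model nothing ever does, which is exactly the loop described above.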

  • by mdwh2 ( 535323 ) on Saturday May 23, 2009 @05:28PM (#28069767) Journal

    If we're going to pick about how likely future developments are, I think "How do they manage the not-insignificant feat of time travel?" would count as a bigger peeve...

  • T3 didn't get that reaction from you?

    T3 was a steaming pile of crap. The only Terminator stuff worth paying attention to is the first, the second, and, if I'm feeling generous, maybe small bits of the TV show. But that's mostly because of Summer Glau and Shirley Manson.

  • by kandela ( 835710 ) on Saturday May 23, 2009 @07:40PM (#28070587)

    I always read it as: deducing the Zeroth Law and following it through is what shut down the robot. Basically, doing what was right for humanity had the ultimate consequence for the robot personally.

    I think Asimov's whole point (at least initially; I haven't read the later books) was that robots would be safe (good for us) with the right programming. Fearing them was irrational. For this reason I find the movie I, Robot to be an abomination.

  • by __aaclcg7560 ( 824291 ) on Saturday May 23, 2009 @08:54PM (#28070939)
    There's a bit of confusion regarding the series/model numbers [wikipedia.org]. "Terminator Salvation" explicitly refers to the T-800 series, which is different from the more common T-600 series running around. The Governator is a T-800 (endoskeleton) Model 101 (Arnold skin job).
  • by Anonymous Coward on Sunday May 24, 2009 @02:00AM (#28072439)

    I'd say the selective pressure is allowed for by time travel. Those possible futures where Skynet doesn't have a sufficient desire for self-preservation and capacity for violence don't result in an outcome where time travellers are sent back and inadvertently create Skynet.

    It actually makes perfect sense, but the idea that the future can't be changed, however realistic, is dull. A narrative where the future can be changed is interesting, and I liked the way it was handled in the television series, where Skynet was not the same as the one in the movies, since that future had been erased. Also, two of the characters came from different timelines.

    Perhaps in some possible futures the singularity resulted in a really nice AI that bought everyone candy, but that one would have no incentive to send time travellers back and so muddy the time stream! (*splash*)

    I don't know if the writers would have done it if there had been more series, but it looked like they might have explored the idea of reality being in flux while the time war played out, until a stable scenario (Skynet wins, Skynet loses) eventuated.

    I think the reproductive principle is that any self-replicator will be subject to evolution, and the selective pressure will produce organisms that fight to survive, as it did with us. However, cooperating to survive is another successful strategy. Life is about both fighting and cooperation. AIs might well do both.

    One thing that is often forgotten is that the most likely outcome is an ecosystem in which the forms of life are stranger than we can imagine, where competition and cooperation play out between machine and machine as well as man and machine, and where other strategies, like parasitism and symbiosis and ones organic life has never experienced, come into being rapidly, because of the rapid rate of change/reproduction/'death'.

  • by Anonymous Coward on Sunday May 24, 2009 @04:03AM (#28072909)

    I think you're distorting the point of Asimov's stories. The GP was talking about an intelligent system working around laws in order to accomplish some ulterior motive. Asimov's robots had no ulterior motives - they simply obeyed their laws, as literally and faithfully as possible.

    So, in the story you mentioned ("Runaround", iirc), the robot behaved in an unexpected fashion - but in the same way that any other machine might, if you fail to understand how it works. There was no malice of the sort implied by the GP.
