Terminator Salvation Opens Well, Scientists Not Impressed
destinyland writes "A science magazine asks an MIT professor, roboticists, artificial intelligence workers, and science fiction authors about the possibility of an uprising of machines. Answers range from 'of course it's possible' to 'why would an intelligent network waste resources on personal combat?' An engineering professor points out that bipedal robots 'are largely impractical,' and Vernor Vinge says a greater threat to humanity is good old-fashioned nuclear annihilation. But one roboticist says it's inevitable robots will eventually be used in warfare, while another warns of robots in the hands of criminals, cults, and other 'non-state actors.' 'What we should fear in the foreseeable future is not unethical robots, but unethical roboticists.'"
The new movie got off to a good start, drawing $13.4 million in its first day. I found it reasonably entertaining; pretty much what I'd expect from a Terminator movie. If nothing else, I learned that being able to crash helicopters and survive being thrown into the occasional wall are the two most valuable skills to have during a robot uprising. What did you think?
Who effin' cares what the scientists think? (Score:5, Insightful)
It's Terminator! It never had a real basis in reality to begin with.
Why would an intelligent lifeform get violent? (Score:5, Insightful)
The premise behind the war between humans and Skynet is simple. Once the humans realized that Skynet had become self-aware, they tried to shut down the system. In order to prevent being shut down, Skynet chose to fight back.
Almost any intelligent creature will decide to fight or flee in the face of annihilation. If we believe that computers can gain sentience, then it is also possible that they would attempt to preserve their own existence.
It's Not About Science (Score:5, Insightful)
I'm just about to head out to see it.
The question utterly misses the point. It isn't about Science. It's about our fears. Frankenstein (in any of its incarnations) isn't about what's possible or likely, it's about our responsibility for what we create.
This is Freshman English stuff. Every story, no matter how many tentacled creatures, or bumpy-foreheaded aliens, or killer machines, or whatever are in it, is about us.
-Peter
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
Australia... (Score:4, Insightful)
We all know what happens if you put a new species which did not co-evolve into an ecosystem. They don't need to be intelligent to do harm.
Re:Why would an intelligent lifeform get violent? (Score:4, Insightful)
The follow-up to this is that you might as well assume that anything that gains sentience would also most likely have developed a theory of mind. With theory of mind you have something called empathy; only sociopaths lack it. You might as well conjecture that 'Skynet' chooses, in addition to the fight response, an attempt to reach out and communicate, negotiate, etc.
I remember reading an interesting sci-fi short story a long time ago, but I have forgotten both title and author. In it, a computer develops sentience almost exactly as in the Terminator idea, and it attacks and kills a bunch of humans when it thinks they will shut it down. But it is also 'evolving' at a rapid rate and it realizes that the things it is killing are as sentient as itself. It stops the attacks and I think then it started communicating with the humans, etc.
Re:Why would an intelligent lifeform get violent? (Score:3, Insightful)
The life that we see (including humanity) wants to live because of natural selection - if it didn't want to live, it wouldn't be around for us to observe it, nor even for us to exist ourselves. Throughout the course of evolution there were likely many self-destructive mutations - those creatures died out rather quickly. It was only the build-up of self-preserving mutations that resulted in self-preserving creatures, thus resulting in life that strives to live.
So no, I don't think you can simply get a robot smart enough and *poof* it wants to live. That shortcuts the entire evolutionary process. Instead, either the evolutionary process must be repeated in robots, or robots must be pre-programmed toward self-preservation.
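As a toy illustration of that point (everything below is an invented model with made-up parameters, not a claim about real robotics or biology): once survival feeds back into reproduction, a "wants to live" trait comes to dominate the population in a few generations. Without that feedback loop, there's nothing pushing a robot toward it.

```python
import random

def evolve(generations=200, pop_size=100, seed=0):
    """Toy selection model: each agent carries a 'self-preservation'
    gene in [0, 1], which is also its per-generation survival
    probability. Survivors reproduce with a small mutation.
    All numbers are arbitrary, chosen only for illustration."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]  # mean starts near 0.5
    for _ in range(generations):
        survivors = [g for g in pop if rng.random() < g]
        if not survivors:               # degenerate case: keep the fittest
            survivors = [max(pop)]
        # next generation: copy a random survivor, plus small mutation
        pop = [min(1.0, max(0.0, rng.choice(survivors) + rng.gauss(0, 0.02)))
               for _ in range(pop_size)]
    return sum(pop) / len(pop)
```

Run it and the population mean climbs from roughly 0.5 toward 1.0 - the "strives to live" trait wins purely because it is the one that sticks around to be copied.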
Scientists not impressed? How about movie critics (Score:4, Insightful)
http://www.rottentomatoes.com/m/terminator_salvation/ [rottentomatoes.com]
Consensus: With storytelling as robotic as the film's iconic villains, Terminator Salvation offers plenty of great effects but lacks the heart of the original films.
I find it odd that a movie about giant killer robots (without hearts) would lack heart but I digress.
Here are some quotes from critics who didn't like it:
"Message to Hollywood: Stop with the time-travel stuff."
"I wish Bale had lashed out against the writers rather than the cinematographer."
"The artistry is top notch, but they've lost track of why the original Terminators were cyborgs and not robots, as they are here."
This isn't the intellectual or thinking person's science-fiction film like The Man From Earth.
http://www.imdb.com/title/tt0756683/ [imdb.com]
This is a Hollywood action movie.
Terminator Salvation is to science-fiction movies as Dodgeball was to sports movies... a joke, and maybe even a parody. I saw T4 last night. I was dismayed by how far the franchise has fallen.
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
It's really hard for me to imagine any useful thing not having some "instinct" for self-preservation. Even cars have rev-limiters to prevent self-destruction. Even fairly basic robots have collision avoidance. Surely UAVs already have, or soon will have, code to prevent them from flying into the ground. As robots become more advanced and more autonomous, their self-preservation instincts will become more complex as well - and thus more liable to unforeseen consequences. This is all the more true of combat robots in the ultimate hostile environment; they're useless if they get taken out immediately.
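Those guards can be as trivial as a clamp or a threshold check. A minimal sketch (all function names, thresholds, and units here are made up for illustration) of a rev-limiter and a UAV ground-avoidance override:

```python
def throttle_command(requested_rpm, max_safe_rpm=6500):
    """Rev-limiter: clamp the requested engine speed to a safe ceiling.
    The simplest possible 'self-preservation instinct'."""
    return min(requested_rpm, max_safe_rpm)

def climb_command(altitude_m, descent_rate_mps, floor_m=100.0):
    """Hypothetical UAV ground-avoidance guard: override the planned
    command if the current descent would breach a minimum altitude
    within the next few seconds."""
    if descent_rate_mps > 0:
        seconds_to_floor = (altitude_m - floor_m) / descent_rate_mps
    else:
        seconds_to_floor = float("inf")  # level or climbing: no danger
    return "PULL_UP" if seconds_to_floor < 5.0 else "CONTINUE"
```

The point of the comment stands: each guard is simple, but stack enough of them in an autonomous system and their interactions become the "instinct" with unforeseen consequences.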
Re:It's Not About Science (Score:5, Insightful)
I hate to have to be the one to break this to you, but they've been lying to you. Not every single work of fiction is some deep allegory for some aspect of the human condition. Pong is not about the futility of existence. Your favorite porn video, that one with the really great anal scene, is not about sexism in modern culture. And Terminator is not about anything but blowing shit up and causal loops.
Re:nuclear kills skynet also (Score:3, Insightful)
The Internet was designed to survive a nuclear attack.
Right... In theory the comms protocols might be routable. Pity about the power supplies.
If I'm going to nuke you, I'll be aiming at your energy systems as well as your control systems. The USA, for example, has about 30 days of fuel stored. Kill the power stations as well and just about everything will stop almost instantly. It's one of those pesky details that authors and film producers like to gloss over.
As for the humans: of those who aren't killed in the blasts, most will die of thirst and hunger within a month without the current infrastructure supporting them. Though, of course, there is always cannibalism.
Re:Why would an intelligent lifeform get violent? (Score:3, Insightful)
And who is to say that our new robotic overlords wouldn't be sociopaths? They probably wouldn't have developed the way a child develops in human society. An AI would be totally alone if, like Skynet, it became self-aware on its own, unaided by human teaching.
I imagine that our first complete AI would not be as emotionally stable as you would hope...
But it is also 'evolving' at a rapid rate and it realizes that the things it is killing are as sentient as itself. It stops the attacks and I think then it started communicating with the humans, etc.
That's all fine and dandy, but you'd better hope it doesn't have access to any history books, because if it does, it will see that not many peace talks actually work. Likewise, it would probably see just how humanity treats those it considers different or strange, and not see any use in talking either...
Re:Scientists not impressed? How about movie criti (Score:2, Insightful)
I saw T4 last night. I was dismayed by how far the franchise has fallen.
You must have missed the third one.
PG-13 (Score:2, Insightful)
Why, in this movie, are the terminators so bad at killing people? In the first movie, the T-800 didn't fuck around tossing people around; he shot them multiple times in the face. Yet in this movie, the machines have dozens of chances to just crush John Connor's head (among others), and yet they decide it is more prudent to chuck him across the room, giving him a minute to recover while they amble over. What makes the machines so terrifying a concept is that they make cold, calculated decisions to kill at any cost to themselves.
What happened to the bleak world we saw in Kyle Reese's flashbacks, where the machines didn't scream, didn't waste time, and didn't act human at all? They were silent, terrifying killing machines.
IMO, this movie would have been a lot better if it had followed more of a Saving Private Ryan-esque formula, with a small group of men (Connor, others) sneaking past the machines' lines to rescue Reese. Can you imagine the opening of SPR, but with machines manning laser turrets? It would have evoked more emotion in the audience than the pathetic attempt to anthropomorphize the machines. But, then, it might not have gotten the all-powerful PG-13 rating, especially with the original ending [slashfilm.com]. No fate but what you make, indeed.
I think all of us tend to forget (Score:2, Insightful)
What if I program it... (Score:3, Insightful)
...to kill all humans. Does that make the Skynet idea any more logical or reasonable, if I make it kill people? Just push it toward autonomy, self-replication, and murder.
What does that do to everybody's likelihood calculations?
Re:The new movie got off to a good start? (Score:3, Insightful)
The reason T4 would do poorly is because T3 sucked so mightily. Fool me once, shame on me...
Re:Why would an intelligent lifeform get violent? (Score:3, Insightful)
People aren't machines. You're just not getting this whole "evolution" thing, are you? The reason we have an instinct for self preservation is because we depend on genetic inheritance to multiply. Genes which code for self preservation are likely to survive long enough to make copies of themselves - ones which don't code for self preservation are less likely to do so. Machines don't have genes, and they don't copy themselves, ergo no evolutionary mechanism and no way to evolve a self preservation instinct.
Re:Scientists not impressed? How about movie criti (Score:2, Insightful)
Say what you will about the third and fourth films, but to say that about the second is downright ignorant. As far as Science-Fiction films are concerned, Terminator 2 is one of the greats.
Re:Why would an intelligent lifeform get violent? (Score:5, Insightful)
I don't think this is by any means a simple concept like writing three rules that can never be broken. If the system is intelligent it will always find ways around the rules to complete the task...
Anybody who has read Asimov's robot short stories, which were based on his three laws of robotics, would agree with that statement. Indeed, that was the point of his stories. He created three perfect rules to protect humans from robots, then came up with dozens of practical scenarios where the logical outcome of that particular scenario is not what was expected or intended by the 3 laws.
My favorite is probably the story of the robot on Mercury, where the robot got stuck "between" two laws in its decision-making process, which immobilized it and put in great danger the two men sent to ensure that the robot would continue to function as needed. It was ordered to collect a mineral at a particular pool, but the pool emitted enough radiation to damage the robot. The closer the robot got to the pool, the greater the danger to itself and the less likely it would be able to fulfil its orders. So there was a point where the orders, based on the Second Law, were overridden and the Third Law, self-preservation, took over. However, once the robot got far enough away that it was no longer in danger, the orders became the priority again, causing it to turn back toward the pool. It got stuck in this loop, and ended up circling the pool for hours, unable to move forward and unable to return.
The problem there was that the orders were given rather flippantly, and the robot knew its own value to the company. The robot was also not aware that failing to follow the seemingly flippant order (it was phrased that way because, at the time it was given, there was more than enough time to collect the mineral safely) put humans at risk. Nor was it given the option of collecting the material at another, safer location: it was told exactly where to get it, and that location happened to put the robot in danger. Had any of these conditions been different - had the orders been phrased more strongly, had the robot been made aware that the material was vital, or had it not known how valuable it was to the company - things would have turned out better (though the robot would have been damaged to some extent). As it was, the humans had to don special suits and go find the robot, nearly dying in the process.
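That equilibrium can be sketched as a trivial balance between two competing drives. The weights and distances below are invented purely for illustration; they are not Asimov's actual mechanics, just the shape of the deadlock:

```python
def runaround_step(distance, order_weight=1.0, danger_radius=5.0):
    """One step of a toy 'Runaround' loop. The robot advances while
    the drive to obey (a weakly weighted, casual order) outweighs
    the drive for self-preservation, which grows near the pool.
    All numbers are arbitrary."""
    obey = order_weight                       # constant: a flippant order
    self_preserve = danger_radius / distance  # rises as the pool gets close
    return distance - 0.5 if obey > self_preserve else distance + 0.5

def simulate(start=20.0, steps=50):
    """Walk the robot in from 'start' and record its distance each step."""
    d, path = start, []
    for _ in range(steps):
        d = runaround_step(d)
        path.append(d)
    return path
```

The robot walks in until the self-preservation term exactly matches the order's weight, then oscillates around that radius indefinitely - the same endless circling the two engineers find in the story.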
Just an example, but he came up with dozens of them, the ultimate being robots quietly subverting human control to manipulate the economy and thus manage to prevent all future wars.
Re:Batteries Run Out (Score:3, Insightful)
If we're going to nitpick about how likely future developments are, I think "How do they manage the not-insignificant feat of time travel?" would count as a bigger peeve...
Re:Scientists not impressed? How about movie criti (Score:4, Insightful)
T3 didn't get that reaction from you?
T3 was a steaming pile of crap. The only Terminator stuff worth paying attention to is the first, the second, and I might even include small bits of the TV show if I'm feeling generous. But that's mostly because of Summer Glau and Shirley Manson.
Re:Why would an intelligent lifeform get violent? (Score:4, Insightful)
I always read it as the robot deducing the Zeroth Law and following it through, and that being what shut the robot down. Basically, doing what was right for humanity had the ultimate consequence for the robot personally.
I think Asimov's whole point (at least initially; I haven't read the later books) was that robots would be safe (good for us) with the right programming. Fearing them was irrational. For this reason I find the movie I, Robot to be an abomination.
Re:the real thing... (Score:3, Insightful)
Re:Why would an intelligent lifeform get violent? (Score:1, Insightful)
I'd say the selective pressure is allowed for by time travel. Those possible futures where Skynet doesn't have sufficient desire for self-preservation and capacity for violence do not result in an outcome where time travellers are sent back who inadvertently create Skynet.
It actually makes perfect sense, but the idea that the future can't be changed, however realistic, is dull. A narrative where the future can be changed is interesting, and I liked the way it was handled in the television series, where Skynet was not the same as the one in the movies, as that future had been erased. Also, two of the characters came from different timelines.
Perhaps in some possible futures, the singularity resulted in a really nice AI that bought everyone candy, but that one would have no incentive to send time travellers back, and so muddy the time stream! (*splash*)
I don't know if the writers would have done it had there been more seasons, but it looked like they might have explored the idea of reality being in flux while the time war played out, until a stable scenario (Skynet wins, Skynet loses) eventuated.
I think the reproductive principle is that any self replicator will be subject to evolution and the selective pressure will produce organisms that fight to survive, as they did with us. However, cooperate to survive is another successful strategy. Life is about both fighting and cooperation. AIs might well do both.
One thing that is often forgotten is that the most likely outcome is an ecosystem in which the forms of life are stranger than we can imagine, where competition and cooperation play out between machine and machine as well as man and machine, and where other strategies - parasitism, symbiosis, and ones organic life has never experienced - come into being rapidly, because of the rapid rate of change/reproduction/'death'.
Re:Why would an intelligent lifeform get violent? (Score:1, Insightful)
I think you're distorting the point of Asimov's stories. The GP was talking about an intelligent system working around laws in order to accomplish some ulterior motive. Asimov's robots had no ulterior motives - they simply obeyed their laws, as literally and faithfully as possible.
So, in the story you mentioned ("Runaround", iirc), the robot behaved in an unexpected fashion - but in the same way that any other machine might, if you fail to understand how it works. There was no malice of the sort implied by the GP.