The Sci-Fi Myth of Killer Machines
malachiorion writes: "Remember when, about a month ago, Stephen Hawking warned that artificial intelligence could destroy all humans? It wasn't because of some stunning breakthrough in AI or robotics research. It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil, and even the most brilliant minds can't talk about modern robotics without drawing from SF creation myths. This article on the biggest sci-fi-inspired myths of robotics focuses on R.U.R., Skynet, and the ongoing impact of allowing make-believe villains to pollute our discussion of actual automated systems."
It's not really a myth anymore (Score:5, Insightful)
Read Asimov (Score:3, Insightful)
Really, the man who invented the term "robotics" did not fall into the trap.
BTW, the movie I, Robot in no way qualifies as a work of Asimov. It in no way reflects his books.
Way too long to read. (Score:5, Insightful)
Genocide is rational (Score:1, Insightful)
If humans and a sentient AI were competing for the same resources, or if humans were subjugating the AI, it would be rational to exterminate the humans. Without a God to value humans, they are, at best, only as good as the use the AI derives from them. This is actually true for human-to-human relations, too. Humans are evolutionary dirt. Just because we say we're worth more doesn't mean it's true. Nothing in a purely materialistic world has value.
Given that, and given that the AI will recognize the truth of the earlier statement, there is no bad. There is no wrong. Killing humans isn't a moral decision; it's a utilitarian calculus. Assuming the computers can do lambda calculus, they can do utilitarian calculus.
ugh (Score:5, Insightful)
Why does Slashdot keep linking to this PopSci website? These are basically blog posts that make very little sense. I've yet to read anything on there that's more than this dude ranting about some scientific topic he's not qualified to comment on.
There are robots RIGHT NOW killing people. They're drones. Yes, they're under human control. But so will future robots be. Robots aren't going to decide to kill humanity; humanity is going to use robots to kill humanity. Eventually we'll give up direct control and they'll target tanks on their own. Then small arms. Then people talking about jihad. Then criminals? The death penalty shouldn't be decided by algorithm.
This guy argues that Stephen Hawking is basically just writing an op-ed because there was a movie about killer robots. Why should we listen to him? We're listening to him because he's STEPHEN HAWKING. He's one of the smartest people who's ever lived. He made his point after the movie because, being smart, he understood the popular movie would have people's attention focused on the issue. Hawking is qualified, smart, and has my respect. He also has a point. PopSci? What a joke.
Re:It's not really a myth anymore (Score:2, Insightful)
""drones"", controlled almost exclusively by humans, probably not the best example of killer AI
Re:It's not really a myth anymore (Score:3, Insightful)
I'm with you 100%. I've just got one thing to add -- what a lot of people portray as "evil" is really just the absence of a moral code -- more accurately called "amoral". An AI system that has no moral code and no ethical code, and purely responds to a limited set of recognized external inputs, could conceivably kill off humanity -- not through any malicious intent, or even an unemotional decision that humanity is a blight and must be eradicated, but, as we become more dependent on AI machinery, purely through oversight. All that has to happen is for AI to be integrated into some global system of management in such a way that, when it doesn't understand its input, it sets off the wrong chain of events -- one a human would never initiate, but whose consequences the AI isn't smart enough to understand. For example, if an immunology lab were controlled by an AI, and there were a leak of some deadly virus, the AI could end up venting the air to protect the beings alive inside the facility (unlikely, but it's an example -- apply it elsewhere). End result: humanity dies of an airborne pathogen, except for those quarantined inside the facility, who starve instead.
I think this is the premise behind much SciFi entertainment too (not all, but some of the better stuff): the core of the issue isn't the inherent malignancy of AI, but the inherent fallibility of humanity in designing AI, combined with an always-deficient information set available to the AI and humanity's willingness to put faith in that which isn't fully understood.
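The failure mode described above -- a rule-based controller acting on a limited input set, with no model of consequences outside that set -- can be sketched in a few lines. Everything here is hypothetical: the sensor names and the rule are invented purely for illustration.

```python
# Toy sketch (hypothetical) of an amoral rule-based containment controller.
# It "protects occupants" by venting contaminated air, with no input
# representing the world outside the facility, so the catastrophic side
# effect of venting is simply invisible to it.

def containment_controller(sensors):
    """Return a list of actions given a limited sensor dict (made-up names)."""
    actions = []
    if sensors.get("pathogen_detected") and sensors.get("occupants_inside"):
        # Rule: keep the air inside breathable for the occupants.
        # Nothing in the input set models what exterior venting does,
        # so the rule fires exactly as designed.
        actions.append("vent_air_to_exterior")
        actions.append("seal_facility")
    return actions

print(containment_controller(
    {"pathogen_detected": True, "occupants_inside": True}
))
```

The point of the sketch is that the harm comes from the inputs the designer never gave the system, not from malice: the controller is behaving correctly with respect to everything it can see.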