Sci-Fi / Robotics

The Sci-Fi Myth of Killer Machines 222

malachiorion writes: "Remember when, about a month ago, Stephen Hawking warned that artificial intelligence could destroy all humans? It wasn't because of some stunning breakthrough in AI or robotics research. It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil, and even the most brilliant minds can't talk about modern robotics without drawing from SF creation myths. This article on the biggest sci-fi-inspired myths of robotics focuses on R.U.R., Skynet, and the ongoing impact of allowing make-believe villains to pollute our discussion of actual automated systems."
  • by jzatopa ( 2743773 ) on Friday June 06, 2014 @03:30PM (#47182191)
    We already use robots (or drones if you will) to kill people. It doesn't take much AI to have a program target a group of people as enemies and eradicate them. Just look at the AI of current video games. This is something that is affecting humanity today and that we need to discuss openly now.
  • Read Asimov (Score:3, Insightful)

    by LWATCDR ( 28044 ) on Friday June 06, 2014 @03:31PM (#47182199) Homepage Journal

    Really, the man who invented the term "robotics" did not fall into the trap.
    BTW, the movie I, Robot in no way qualifies as a work of Asimov. It does not reflect his books at all.

  • by santax ( 1541065 ) on Friday June 06, 2014 @03:33PM (#47182213)
    I tried, honestly, but it's all bullshit. Assumptions, without caring for reality. We now have robots that can decide to kill. Do we really want those? See what happened when you had drones shoot missiles at people? A lot of weddings got bombed. That is what happens when you take emotion out by relaying b&w video to an 'operator' who pulls the trigger. Now imagine taking emotion out completely, because that is the direction we are heading. Especially, but not only, the US. And all other nations will have to follow. And as of now these systems exist and are being used in the field, as tests. Robots that decide who gets shot. Great fucking idea. Not.
  • We already use robots (or drones if you will) to kill people.

    That's what I was just coming here to say: robots and AI don't have to be evil themselves, as long as the people pulling the strings are. It's as simple as that. And seemingly most of the people who have the resources to craft and industrialize these things do quite a lot of evil things. So, basically, it's just a matter of time and research.

  • by Anonymous Coward on Friday June 06, 2014 @03:35PM (#47182233)

    If humans and a sentient AI were competing for the same resources, or if humans were subjugating the AI, it would be rational to exterminate the humans. Without a God to value humans, they are, at best, only as good as the use the AI derives from them. This is actually true for human-human relations. Humans are evolutionary dirt. Just because we say we're worth more doesn't mean it's true. Nothing in a purely materialistic world has value.

    Given that, and that the AI will recognize the truth in the earlier statement, there is no bad. There is no wrong. Killing humans isn't a moral decision. It's a utilitarian calculus. Assuming the computers can do lambda calculus, they can do utilitarian calculus.

  • ugh (Score:5, Insightful)

    by Charliemopps ( 1157495 ) on Friday June 06, 2014 @03:41PM (#47182275)

    Why does slashdot keep linking to this popsci website? These are basically blog posts that make very little sense. I've yet to read anything on there that's anything more than this dude ranting on some scientific topic he's not qualified to comment on.

    There are robots RIGHT NOW killing people. They're drones. Yes, they're under human control, but so will future robots be. Robots aren't going to decide to kill humanity. Humanity is going to use robots to kill humanity. Eventually we'll give up direct control and they'll target tanks on their own. Then small arms. Then people talking about Jihad. Then criminals? The death penalty shouldn't be decided by algorithm.

    This guy argues that Stephen Hawking is basically just writing an op-ed because there was a movie about killer robots. Why should we listen to him? We're listening to him because he's STEPHEN HAWKING. He's one of the smartest people who's ever lived. He made his point after the movie because, being smart, he understood the popular movie would have people's attention focused on the issue. Hawking is qualified, smart, and has my respect. He also has a point. Popsci? What a joke.

  • by Anonymous Coward on Friday June 06, 2014 @03:43PM (#47182287)

    ""drones"", controlled almost exclusively by humans, probably not the best example of killer AI

  • I'm with you 100%. I've just got one thing to add -- what a lot of people portray as "evil" is really just the absence of a moral code -- more accurately called "amoral". An AI system that has no moral code and no ethical code, and purely responds to a limited set of recognized external inputs, could conceivably kill off humanity -- not through any malicious intent, or even an unemotional decision that humanity is a blight and must be eradicated, but, as we become more dependent on AI machinery, it could eliminate us purely through oversight. All that has to happen is for AI to be integrated into some global system of management in a way that, if it doesn't understand the input, it can set off the wrong chain of events -- one that a human would never take, but the AI isn't smart enough to understand the consequences of. For example, if an immunology lab were controlled by an AI, and there were a leak of some deadly virus, the AI could end up venting the air to protect the beings alive inside the facility (unlikely, but it's an example -- apply it elsewhere). End result: humanity dies of an airborne pathogen, except for those quarantined inside the facility, who starve instead.

    I think this is the premise behind much SciFi entertainment too (not all, but some of the better stuff): the core of the issue isn't the inherent malignancy of AI, but the inherent fallibility of humanity in designing AI, combined with an always-deficient information set available to AI and the tendency of humanity to put faith in that which isn't fully understood.

"The pathology is to want control, not that you ever get it, because of course you never do." -- Gregory Bateson