The Sci-Fi Myth of Killer Machines 222

malachiorion writes: "Remember when, about a month ago, Stephen Hawking warned that artificial intelligence could destroy all humans? It wasn't because of some stunning breakthrough in AI or robotics research. It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil, and even the most brilliant minds can't talk about modern robotics without drawing from SF creation myths. This article on the biggest sci-fi-inspired myths of robotics focuses on R.U.R., Skynet, and the ongoing impact of allowing make-believe villains to pollute our discussion of actual automated systems."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday June 06, 2014 @03:35PM (#47182239)

    The beginning and the end of the discussion: Colossus: The Forbin Project. (Wikipedia []) (YouTube [])

  • The problem is not who controls the strings, it is what happens when the strings are no longer needed.

    A.I. will present little danger (except A.I. the movie, which is so bad it ought to be banned as a WMD) as long as a human can pull the plug. Two decades ago, the Internet was a novelty. Now, the economic consequences would be catastrophic if the Internet suddenly went dark. Similarly, if/when A.I. actually arrives, it will be useful and helpful. It will become more and more critical, such that a decade or two after it arrives, the act of unplugging it would have catastrophic consequences. So, if Skynet goes bad, bad things will happen whether you unplug it or not.

    To me, what it all comes down to is will. Can an artificial personality actually have a will? Can it become afraid of its own demise? Even if it is theoretically possible, can our researchers and programmers achieve it? Will it be able to reach outside its own programming and decide to eliminate humans? Maybe, maybe not.

    On the other hand, once A.I. becomes common, can a rogue state task the A.I. with eliminating all humans on a certain continent? Almost certainly. What happens then is simply a battle of A.I. agents. Who can outsmart the other?

    Just my opinion, and worth every penny that you paid for it.

  • by Paul Fernhout ( 109597 ) on Friday June 06, 2014 @10:50PM (#47184889) Homepage

    ... simulated cannibalistic robot killers in the 1980s on a Symbolics running ZetaLisp. I gave a couple of conference talks about it, plus one at NC State (where I wrote the simulation) that I think may even have influenced Marshall Brain. I had created a simulation of self-replicating robots that reconstructed themselves to an ideal from spare parts in their simulated environment (something first proposed by von Neumann, though I may have been the first to make such a simulation). The idea was that a robot that was essentially half of an "ideal" robot would build its other half by adding parts to itself, then split in two by cutting some links, and then do it again.

    The very first one assembled its other half, cut the links to divide itself, and then (unexpectedly to me) proceeded to start cutting apart its own offspring for parts so it could do it again. I had to add a sense of "smell": robots would mark the parts they used with their own scent, and then not try to take parts that smelled the same. I also mention that simulation here: []
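    The failure mode and the "smell" fix can be sketched like this (a toy Python version, not the original ZetaLisp; ROBOT_SIZE, the dict-based robot representation, and the run() harness are my own illustrative assumptions, not a reconstruction of the actual simulation):

```python
ROBOT_SIZE = 4  # hypothetical part count of a "complete" robot

class Part:
    def __init__(self, scent=None):
        self.scent = scent  # None = untouched spare part

def make_robot(scent):
    return {"scent": scent, "parts": [Part(scent) for _ in range(ROBOT_SIZE)]}

def replicate(robot, spares, robots, use_scent):
    """Gather parts up to double size, then split off an offspring.

    Candidate parts are loose spares plus parts in *other* robots'
    bodies. With use_scent=True the robot refuses any part carrying
    its own scent -- which is exactly what keeps it from stripping
    its own offspring for parts.
    """
    candidates = list(spares)
    for other in robots:
        if other is not robot:
            candidates.extend(other["parts"])
    if use_scent:
        candidates = [p for p in candidates if p.scent != robot["scent"]]
    needed = 2 * ROBOT_SIZE - len(robot["parts"])
    if len(candidates) < needed:
        return None  # not enough reachable parts to finish a copy
    for p in candidates[:needed]:
        if p in spares:
            spares.remove(p)
        else:  # cannibalism: pull the part out of another robot's body
            for other in robots:
                if p in other["parts"]:
                    other["parts"].remove(p)
        p.scent = robot["scent"]  # mark the part as "mine"
        robot["parts"].append(p)
    offspring = {"scent": robot["scent"], "parts": robot["parts"][ROBOT_SIZE:]}
    robot["parts"] = robot["parts"][:ROBOT_SIZE]
    return offspring

def run(use_scent):
    """One lineage, spares for exactly one offspring, two build attempts.
    Returns how many robots end up stripped of all their parts."""
    spares = [Part() for _ in range(ROBOT_SIZE)]
    robots = [make_robot(scent=1)]
    for _ in range(2):
        child = replicate(robots[0], spares, robots, use_scent)
        if child:
            robots.append(child)
    return sum(1 for r in robots if not r["parts"])
```

    Without the scent rule, the second build attempt finds no loose spares, so the parent dismantles its first offspring to build a second one; with the rule, same-scent parts are off limits and the offspring survives.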

    Decades later, I still got a bit freaked out when our chickens would sometimes eat their own eggs...

    My point, though, is that completely unintentionally, these devices I designed to create things ended up destroying things -- even their own offspring. It was a big lesson for me, and it has informed my work and learning in various directions ever since. Things you build can act in totally unexpected ways. And since creation involves changing the universe, any change also involves, to some extent, destroying something that is already there.

    James P. Hogan's 1982 book "The Two Faces of Tomorrow", which I had read earlier, should have been a warning. In it he makes clear how any AI could gain a survival instinct and then perceive things like power fluctuations as threats -- even if there was no intent on the part of the original programmers for that to happen. []

    Langdon Winner's book "Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought", assigned as reading in college, should have been another warning. []

    It's been sad to watch the progression of real autonomous killer robots since the 1980s... Here is just one example; the exciting, upbeat music in the video reveals the political and social problem more than anything:
    "Samsung robotic sentry (South Korea, live ammo)" []

    Just because we can do something does not mean we should...

    I was impressed that this recent Indian film about an AI-powered robot took such a nuanced view of the problems. A bit violent for me, but otherwise an excellent and thought-provoking film: []
    "Enthiran is a 2010 Indian Tamil science fiction techno thriller, co-written and directed by Shankar.The film features Rajinikanth in dual roles, as a scientist and an andro humanoid robot, alongside Aishwarya Rai while Danny Denzongpa, Santhanam, Karunas, Kalabhavan Mani, Devadarshini, and Cochin Haneefa play supporting roles. The film's story revolves around the scientist's struggle to control his creation, the android robot whose software was upgraded to give it the ability to comprehend and generate human emotions. The plan backfires as the robot falls in love with the scientist's fiancee and is further manipulated to bring destruction to the world when it lands in the hands of a rival scientist."

    But yes, the Berserker series is another signpost in that direction -- perhaps countered a bit by the Bolo series by Keith Laumer? :-)
