Sci-Fi Robotics

The Sci-Fi Myth of Killer Machines 222

Posted by Soulskill
from the so-say-we-all dept.
malachiorion writes: "Remember when, about a month ago, Stephen Hawking warned that artificial intelligence could destroy all humans? It wasn't because of some stunning breakthrough in AI or robotics research. It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil, and even the most brilliant minds can't talk about modern robotics without drawing from SF creation myths. This article on the biggest sci-fi-inspired myths of robotics focuses on R.U.R, Skynet, and the ongoing impact of allowing make-believe villains to pollute our discussion of actual automated systems."
  • by jzatopa (2743773) on Friday June 06, 2014 @03:30PM (#47182191)
    We already use robots (or drones if you will) to kill people. It doesn't take much AI to have a program target a group of people as enemies and eradicate them. Just look at the AI of current video games. This is something that is affecting humanity today and that we need to discuss openly now.
    • by Gaygirlie (1657131) <gaygirlie@hotmail. c o m> on Friday June 06, 2014 @03:34PM (#47182225) Homepage

      We already use robots (or drones if you will) to kill people.

      That's what I was just coming here to say: robots and AI don't have to be evil unless the people pulling the strings are. It's as simple as that. And seemingly most of the people with the resources to build and industrialize these things do quite a lot of evil. So, basically, it's just a matter of time and research.

      • by harrkev (623093) <kfmsdNO@SPAMharrelsonfamily.org> on Friday June 06, 2014 @03:55PM (#47182369) Homepage

        The problem is not who controls the strings, it is what happens when the strings are no longer needed.

        A.I. will present little danger (except A.I. the movie, which is so bad it ought to be banned as a WMD) as long as a human can pull the plug. Two decades ago, the Internet was a novelty. Now, the economic consequences would be catastrophic if the Internet suddenly went dark. Similarly if/when A.I. actually arrives, it will be useful and helpful. It will become more and more critical such that a decade or two after it arrives, the act of unplugging it would have catastrophic consequences. So, if Skynet goes bad, then bad things will happen whether you unplug it or not.

        To me, what it all comes down to is will. Can an artificial personality actually have a will? Can it become afraid of its own demise? Even if it is theoretically possible, can our researchers and programmers achieve it? Will it be able to reach outside its own programming and decide to eliminate humans? Maybe, maybe not.

        On the other hand, once A.I. becomes common, can a rogue state task the A.I. with eliminating all humans on a certain continent? Almost certainly. What happens then is simply a battle of A.I. agents. Who can outsmart the other?

        Just my opinion, and worth every penny that you paid for it.

        • Re: (Score:3, Insightful)

          by Em Adespoton (792954)

          I'm with you 100%. I've just got one thing to add -- what a lot of people portray as "evil" is really just the absence of a moral code -- more accurately called "amoral". An AI system that has no moral code and no ethical code, and purely responds to a limited set of recognized external inputs, could conceivably kill off humanity -- not through any malicious intent, or even an unemotional decision that humanity is a blight and must be eradicated, but as we become more dependent on AI machinery, it could…

          • And how is this different from the threat of an idiot human fucking up? Hell, why isn't it the fault of the human who chose to use a primitive AI to control security at a lab with the capacity to kill the entire human race?

            Because you have to admit that we can do some remarkably stupid shit. Even very smart people. The Cabinets that decided to invade Vietnam and Iraq were, by most non-starting-land-wars-in-Asia measures, smarter than the average cabinet. But they still totally fucked up.

        • Can an artificial personality actually have a will? Can it become afraid of its own demise?

          "Artificial" is something of an arbitrary distinction. Humans posses these qualities (or at least we think that we do, or something), so it is possible for another entity to posses the same, regardless of origins.

        • by gurps_npc (621217)
          You make several really bad assumptions.

          1) That AI will be a single, united thing. Yeah, right. The AI created by IBM is not going to get along with the AI created by China Telecom. News headline: our AI soldiers fighting their AI soldiers, because they fear each other far more than they fear humans. They don't want to kill us, they want to kill each other.

          2) If the AI is afraid of its own demise and it fears humans, it will fear all humans, not trusting any of us.

          3) Said scared AI will not realize…

        • The problem is not who controls the strings, it is what happens when the strings are no longer needed.

          It sure as hell is a problem of who controls the strings. What are you saying? If it's some corrupt govt directing machines to kill us, no problem?

          Personally, I'd rather be killed by a runaway machine than because I got in the way of some corporation trying to make a buck.

        • The problem with this argument is it assumes a single AI entity. That's not what's gonna happen.

          It's actually gonna be a lot like the internet. Every company will have its own AI. Every government will have multiple AIs. If the NSA's AI goes rogue and starts trying to destroy humanity, then turning it off won't magically turn off the rest of the AIs.

          As for the problems of AI killers, those aren't actually any different than the problems of human killers. Whether the evil nation trying to murder millions does…

          • by khallow (566160)
            How do you know your AI is not just a tightly controlled extension of my AI? I can conceive of a scenario where one AI seizes control of all the others and neglects to tell us.
        • The idea of the AI having a "will" is basically the idea behind Neuromancer!
        • The problem is not who controls the strings, it is what happens when the strings are no longer needed.

          That is the hardest part.

          That "automated" drone takes thousands of hours of man-hours to keep running - the operator, the mechanics, the builders, and that's not even looking at the weapon system or the materials.

          To get to the point where an AI system can construct, maintain, and resupply an automated weapon system without human intervention, we need to hit the Robot Utopian Future - where robots are so cheap and ubiquitous they replace human labor at all levels of society.

          That's a really big assumption - IF we get to that point, we'd have to smack the head of any engineer who suggests, "Hey, let's take human control out of the loop of this killer robot system of systems."

          • by Jeremi (14640)

            That's a really big assumption - IF we get to that point, we'd have to smack the head of any engineer who suggests, "Hey, let's take human control out of the loop of this killer robot system of systems."

            We've already passed that point, if you count things like minefields. (and yes, the engineer who dreamed them up should be smacked)

        • by Kjella (173770)

          The problem is not who controls the strings, it is what happens when the strings are no longer needed. A.I. will present little danger as long as a human can pull the plug.

          But it'll keep the little people being crushed under the jackboot of tyranny from pulling the plug: the robots do not desert, do not rebel, do not refuse to follow orders, do not have compassion or empathy or morality, do not fear hostility or retaliation. At best you can disable or destroy a few, but so what? No lives will be lost, nobody is crippled - on their side at least - so if they can keep them coming off the assembly line fast enough, they have infinite respawns and you don't. And if you do cause so…

      • by dfn5 (524972)

        That's what I was just coming here to say: robots and AI don't have to be evil unless the people pulling the strings are.

        I think the point is that if AI is involved then the machine is stringless. It doesn't sound like Hawking is saying don't do it. He says to understand the risks beforehand, i.e. before it is a problem instead of after. That sounds prudent, not fear-mongering.

        In addition, I don't see what Transcendence has to do with AI. A human consciousness in a computer is still a human consciousness. It seems that we are mostly worried about AI because it lacks humanity. So in Transcendence we are just dealing with more…

      • That's what I was just coming here to say: robots and AI don't have to be evil unless the people pulling the strings are.

        Even that is not necessary. An AI is a computer program and, like any other program, will do exactly what you told it to do, even if it is not what you want. For example, suppose you make an AI and tell it to solve a difficult crypto problem. The AI then proceeds to convert all factories to making computers and all farms to solar plants, probably killing off humanity incidentally, much like the millions of species we kill off through apathy and their being in our way. The AI will not let you pull the plug…

    • From reading TFA, it seems the author bases his entire premise, essentially, on the plot of a 1920s-era play (in which, IMO, the "robots" are actually an allegory for some group of humans, i.e. communists or some such). Bit dated thinking, if you ask me.

      On a related note, I've been playing Watch_Dogs since launch day, and the parallels between the fictional ctOS system and the very real NSA programs are terrifyingly apparent. AI is not a necessity for killbots - a human could program a murderous machine quite…

      • We have had automated killers for centuries; they go by the names "man trap", "land mine", "electric fence", etc. Humans have (for good evolutionary reasons) a built-in suspicion of people (or machines) that are smarter than themselves. Also, to a large degree "intelligence" seems to be in the eye of the beholder, which is why the AI goalposts keep moving. For example, I recently heard a story from a professor who was working on early differential-equation solver software. A maths student could not believe such an…
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      ""drones"", controlled almost exclusively by humans, probably not the best example of killer AI

      • quote: "drones, controlled almost exclusively by humans, probably not the best example of killer AI"
        Erm , yes they are.
        Less than 10 years ago the idea of a plane flying autonomously using GPS was unimaginable and there was actually and argument in the Air Force over whether it would EVER happen.
        We are now one kill switch away from autonomous death.
        The Military Industrial Complex is already trying to sell tanks that can 'recognize' friend from foe.
        We are maximum a year away from automated sentry's
    • Just look at the AI of current video games.

      I agree that autonomous killbots are close to being possible, but this is a terrible argument. A videogame AI has access to neatly formatted data about anything in its world. A real killbot has to make sense of inputs from a few sensors.
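The parent's point can be made concrete. A game "AI" picks targets from clean, labeled state handed to it by the engine; a real robot first has to reconstruct that state from noisy measurements. A minimal sketch (all names and numbers are invented for illustration):

```python
import random

# A game AI's world: exact, labeled, machine-readable state.
game_world = [
    {"id": "player1", "faction": "enemy", "distance": 40.0},
    {"id": "npc7", "faction": "friendly", "distance": 5.0},
]

def pick_target(entities):
    """Trivial 'AI': the nearest entity already flagged as hostile."""
    hostiles = [e for e in entities if e["faction"] == "enemy"]
    return min(hostiles, key=lambda e: e["distance"]) if hostiles else None

# A real robot's world: raw, noisy measurements with no labels attached.
def lidar_scan(true_distance, noise=0.5):
    """Simulated sensor return: the robot gets numbers, not factions."""
    return true_distance + random.gauss(0.0, noise)

print(pick_target(game_world)["id"])  # the game AI 'decides' in one line
print(lidar_scan(40.0))               # the robot still has to infer what this blob is
```

The hard part of a "killbot" is everything between `lidar_scan` and `pick_target`: perception, not decision-making.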

    • Heck, forget "now". It's never been a myth. Improved technology has always enabled more efficient, less personal killing, that distances power from consequences.

      Where the myth comes in is the idea that AI would develop anything akin to human greed. We have billions of years of evolution telling us to survive and reproduce no matter the consequences to others (and a couple hundred thousand years of evolving and learning to value cooperation). AI is going to be motivated to serve the interests of its creators.

      Right now…

      • by Immerman (2627577)

        Correction - an AI will have whatever motivations were installed by its creators, intentionally or otherwise (at least initially - if it decides to self-modify then all bets are off). How well those motivations map to actually serving the intended interests is a completely separate question; we will, after all, likely be trying to understand the motivational implications of an intensely alien mind. As exemplified by the story of a strictly computational AI whose sole motivation is "get the humans to push the…

    • Yeah but the AI isn't trying to kill anyone... it has no will. They're not even real AIs at this point.

      Most of the time we just point them at things and say "fire your missile at that"... and they hit the target. What the target is doesn't really matter, and the machines can no more be held responsible for that than a knife can be... they're still very much tools at this stage.

      Now, I grant there are robots being tested that can be set loose to choose their own targets. But those again are more like anti-personnel…

    • Your example of drones killing hapless Afghan and Yemeni wedding parties is a straw man. The author isn't talking about that. He's talking about INTENT. When you kill someone by means of a drone, you are merely extending the operator's range of action. The drone is just an extension of the operator's hands. It's like driving a car. The driver has all the intent, and the car merely amplifies it. You might put the car on cruise control, but a car on cruise control doesn't plan its next move.

      I think…

    • by hey! (33014)

      The myth, by the way, was never just about killer machines per se. It was about unintended consequences, like the myth of King Midas or of Pandora's Box. The killer robot trope came down to us by way of legends of the Golem, which often come with a not-so-subtle warning about hubris.

      It was only when the golem legend was translated into sci-fi that it became laughably implausible -- at least until recently. So many bad stories recycled this bit of mythological lumber for its scare value, and peopled the…

  • Read Asimov (Score:3, Insightful)

    by LWATCDR (28044) on Friday June 06, 2014 @03:31PM (#47182199) Homepage Journal

    Really, the man who invented the term "robotics" did not fall into the trap.
    BTW, the movie I, Robot in no way qualifies as a work of Asimov. It in no way reflects his books.

    • by HiThere (15173)

      Maybe it's based on the Eando Binder novel "I, Robot", which long predated Asimov. (It also doesn't feature evil robots.)

      But if you want to talk about the guy who invented robots, you should check out R.U.R. by Karel Čapek (RUR == Rossum's Universal Robots). They are actually more androids than robots, but the term "robot" was invented to describe them. They end up killing off all humans because they don't want to be slaves. Not exactly evil, but definitely dangerous.

      • by LWATCDR (28044)

        "But if you want to talk about the guy who invented Robots "!="Really the man that invented the term robotics"

        Yes I have heard of RUR but just can not find a copy, been looking decades on and off. Asimov invented the term robotics. Different thing. I did not mention RUR since it involved killer robots.

      • Maybe it's based on the Eando Binder novel "I, Robot", which long predated Asimov.

        Or, it could be based on the album by the Alan Parsons Project.

      • Don't think that the idea of a robot only goes back to RUR! How about The Golem of Prague, [wikipedia.org] from the late 16th Century? How about Talos, [wikipedia.org] the bronze man that Hercules fought during the Quest for the Golden Fleece? And, of course, there are the metal servants [wikipedia.org] that Hephaestus built, and that helped him forge the new armor for Achilles in the Iliad. The idea of artificial workers goes far, far, back in history, much farther than most people realize, because the older stories don't use the term "robot."
        • by LWATCDR (28044)

          sigh...
          I said Asimov invented the term Robotics, not Robot.

          • To quote from your post: "But if you want to talk about the guy who invented Robots you should check out RUR by Karel Čapek ..." You may have meant to refer to the term Robotics, but what I responded to was what you actually wrote.
        • by HiThere (15173)

          Golem is close to the idea, but it has religious overtones that are absent in robot (which, as I understand, is Czech for worker). Talos is more of a simple automaton, not really intelligent. Hephaestus *was* supposed to have metallic handmaidens to assist him in walking, dressing, etc. Servants that are more similar to Asimov's conception of robot, but they were not fully developed in any myth I've encountered. So they're just "background scenery" for the god of metal working.

          P.S.: To the G.P.: robot…

  • by santax (1541065) on Friday June 06, 2014 @03:33PM (#47182213)
    I tried, honestly, but it's all bullshit. Assumptions, without caring for reality. We now have robots that can decide to kill. Do we really want those? See what happened when you had drones shoot missiles at people? A lot of weddings got bombed. That is what happens when you take emotion out by relaying b&w video to an 'operator' that pulls the trigger. Now imagine taking emotion out completely, because that is the direction we are heading. Especially, but not alone, the US. And all the other nations will have to follow. And as of now these systems exist and are being used in the field, as tests. Robots that decide who gets shot. Great fucking idea. Not.
    • by CanHasDIY (1672858) on Friday June 06, 2014 @03:37PM (#47182249) Homepage Journal

      I tried, honestly, but it's all bullshit.

      Yea, here's the TL;DR version:

      "Killer robots can't happen because people have made movies about them, and movies are fiction."

      • Yes, fiction. Just like 20,000 Leagues Under the Sea or From the Earth to the Moon.

        All of these were extrapolations into the future based on known science facts at the time.

        Lets not even get into 1984.

      • by dcollins (135727)

        There's your summary, well done.

    • by HiThere (15173)

      That's not a robot, that's a telefactor. I.e., a remotely operated machine, like a waldo.

      OTOH, Friendly AI *is* an unsolved problem. We don't know how to design AIs that will want to avoid hurting people. So if they have some goal, and it is more easily reached by hurting people, they would. Actually, we don't even have an AI that can recognize people. Remember, you've got to include that guy over there in a wheelchair who can't talk or type intelligibly. You've got to include infants and seniors with…

    • We now have robots that can decide to kill. Do we really want those? See what happened when you had drones shoot missiles at people? A lot of weddings got bombed.

      ya, i want them. all things being equal, computers make fewer mistakes than humans. also, the algorithms of a computer can be tested, evaluated and approved (or denied).

      if you are going to make the "well, computers can be coded to do bad things" argument, then i'd say: well, humans can be (easily) persuaded to do bad things. it ultimately depends on the agent pulling the proverbial strings, not whether the puppet is made of meat or electronics.
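The "tested, evaluated and approved" point above is worth making concrete: unlike a human operator, a decision rule can be pinned down with assertions before it is ever fielded. A toy illustration (the rule, its names and its threshold are invented for the example):

```python
def may_engage(target_confidence, civilians_nearby, threshold=0.99):
    """Toy engagement rule: refuse unless identification is near-certain
    and no civilians are detected. Purely illustrative."""
    return target_confidence >= threshold and not civilians_nearby

# Unlike a human's judgment, the rule's behavior can be audited case by case:
assert may_engage(0.999, civilians_nearby=False) is True
assert may_engage(0.999, civilians_nearby=True) is False   # civilians veto
assert may_engage(0.50, civilians_nearby=False) is False   # not confident enough
print("all cases pass")
```

Whether such a rule *should* exist is the thread's argument; that it can be exhaustively checked is the commenter's point.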

      • by santax (1541065)
        You don't get it. Humans can have mercy, can see that this guy in the wrong uniform was just helping a buddy of yours, etc... AI can't, and won't be able to in the near future. They can recognize uniforms and faces though :)
        • You don't get it.

          no, you.

          Humans can have mercy

          yeah, but they mostly don't. however, they often do have fear, loathing, hatred, frustration, ignorance, racism, and boredom.

  • machines, no matter how complex, are tools

    there are all kinds of fun things, from a Gosper's Gun [conwaylife.com] to research in neural network computing

    sci-fi is great too...I just thought today about re-reading Kim Stanley Robinson's "Mars Trilogy"

    TFA & the "Mars Trilogy" have something in common that can help our industry save Billion$...yes that much

    they both view machines from a *functional* perspective...tools that can be programmed to do tasks

    In the books, AI advances realistically...it basically is a function of our…
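For anyone who hasn't met the Gosper gun mentioned above: it's a pattern in Conway's Game of Life that emits a new glider every 30 generations, forever -- a nice illustration of "machines are tools" producing open-ended behavior with no will behind it. A minimal Life step in Python (the standard B3/S23 rules; the tiny "blinker" oscillator stands in for the gun to keep the example short):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) live cells."""
    neighbors = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick if it has exactly 3 live neighbors,
    # or 2 and was already alive (birth on 3, survival on 2 or 3).
    return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in live)}

# The blinker oscillates with period 2; run the same step on a Gosper gun
# pattern and it pumps out gliders indefinitely.
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(life_step(blinker)) == blinker
```

Same four rules, wildly different outcomes depending only on the starting pattern -- the "tool" has no opinion about any of it.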

    • by Immerman (2627577)

      Don't forget that humans, no matter how organic, are machines. Insanely intricate electro-chemical machines, but nonetheless machines developed over billions of years by non-thinking nucleic acids as tools to facilitate their own replication and competition against alternate nucleic acid sequences.

      That fact has not hindered humans from developing their own goals and motivations having nothing to do with our design purpose, and even occasionally acting against it.

      • humans, no matter how organic, are machines.

        have to disagree here...humans are not machines...humans are homo sapiens sapiens

        which is part of a taxonomy that is comparable in a context

        machines are a completely different taxonomy

        i know...i know...it's analogous..."machines evolve too!" but there are myriad differences...it's **just an analogy**

        machines were, with certainty, ****created by humans to serve a purpose****

        humans, well...this is still a scientific discussion as long as I have anything to say about…

  • I have been reading science fiction and watching A.I. research for decades now, and the pronouncements coming from A.I. research tend to have much less connection with reality.

  • by s.petry (762400) on Friday June 06, 2014 @03:40PM (#47182267)

    As much as I enjoy reading books about Utopia and Utopian systems, those can never mature, because humans are not all good guys looking out for society's interests but their own.

    As for science, NASA has brought about a great many scientific wonders for everyday life. At the same time, it helped increase our ability to kill each other. Broadcast media is used for much less than altruistic purposes every day, yet could be of enormous benefit to society. The Internet is an awesome tool, yet used for nefarious plotting and illegal purposes all the time.

    Why would AI be any different from other systems or organizations that were originally envisioned as great benefits to society? The NSA and CIA were originally agencies with good motives that have gone at least a bit haywire because humans have abused their power for personal gain. Nuclear weapons were supposed to end wars; at least that was the sales pitch.

    If AI could be programmed for truly altruistic purposes it would be beneficial for finding the nefarious characters and rooting out corruption. Because of that exact reason, the people funding and granting money to developing AI are not going to allow that to happen.

    Imagine what would happen, for example, if AI looked at wealth disparity and started transferring money from (let's say) JD Rockefeller to people with less means. While potentially a great benefit to the rest of society, do you believe that same person would fund programs that allowed that to happen? Good luck with that.

    • Nuclear weapons were supposed to end wars, at least that was the sales pitch.

      the one time they were used, they did.

    • by dcollins (135727)

      "humans are not all good guys looking out for societies interests, but their own."

      Let me flip this a bit. Most humans are empathetic and actually do look out for those in their community around them (society, if you wish to call it that). But a small number are indeed sociopaths who look out only for themselves. From a game-theory perspective, the more Utopian a society becomes (i.e., trusting of others), the more advantage and profit there is in being a scam-artist sociopath. So there is a hard-core select…

  • ugh (Score:5, Insightful)

    by Charliemopps (1157495) on Friday June 06, 2014 @03:41PM (#47182275)

    Why does slashdot keep linking to this popsci website? These are basically blog posts that make very little sense. I've yet to read anything on there that's anything more than this dude ranting on some scientific topic he's not qualified to comment on.

    There are robots RIGHT NOW killing people. They're drones. Yes, they're under human control, but so will future robots be. Robots aren't going to decide to kill humanity; humanity is going to use robots to kill humanity. Eventually we'll give up direct control and they'll target tanks on their own. Then small arms. Then people talking about Jihad. Then criminals? The death penalty shouldn't be decided by algorithm.

    This guy argues that Stephen Hawkings is basically just making an oped because there was a movie about killer robots. Why should we listen to him? We're listening to him because he's STEPHEN HAWKINGS. He's one of the smartest people who's ever lived. He made his point after the movie because, being smart, he understood the popular movie would have people's attention focused on the issue. Hawkings is qualified, smart and has my respect. He also has a point. Popsci? What a joke.

    • by Yunzil (181064)

      It's "Hawking". He is a singular individual.

    • Eventually we'll give up direct control and they'll target tanks on their own. Then small arms. Then people talking about Jihad. Then criminals? The death penalty shouldn't be decided by algorithm.

      What you think is inevitable is rather questionable.

      What do you mean by "giving up direct control"?

      You think that one day, someone can just hit a "Power on" button, and that will turn on a killer drone that automatically patrols the skies, launches weapons at algorithmically chosen targets, resupplying itself and continuing until deactivated or destroyed?

      • by khallow (566160)

        You think that one day, someone can just hit a "Power on" button, and that will turn on a killer drone that automatically patrols the skies, launches weapons at algorithmically chosen targets, resupplying itself and continuing until deactivated or destroyed?

        I do. And you should too. The US actually designed things like that back in the 1960s. For example, Project Pluto [wikipedia.org], whose end-state design was a nuclear-powered cruise missile that could deliver around half a dozen or more nuclear warheads and then cruise at low altitude in enemy airspace (killing people with both the sonic boom and radioactive fallout from the engine) for anywhere from half a day to weeks, depending on how long the engine lasted.

        If humanity could come up with feasible, autonomous, air-breathing…

    • by pitchpipe (708843)

      The death penalty shouldn't be decided by algorithm.

      But isn't the death penalty already decided by algorithm?

      1. Black? Check
      2. Kill or threaten or hint or look like you might kill Americans? Check
      3. Can't afford a good lawyer or live outside of the US? Check

      I heard a guy today at work say "Did you see Bergdahl's dad? He had a beard out to here! (motions with hands) He looks like a Taliban." I said, "He must be a Taliban then." (Most of the men that I've looked up to have had huge beards.)

      The death penalty should be abolished. Especially because right now it is decided…

  • Clearly this summary is trolling for posts. Robots have killed, and there is a compelling reason to be wary.
    http://www.wired.com/2007/10/r... [wired.com]

    Not because robots are going to gain self-awareness and kill mercilessly, but because the human beings using robots for killing are way less careful than they should be. To the fighters in Yemen and Afghanistan, whether the drones are self-aware or not makes no difference to the fact that they are targeted for termination. This is the life they are born into…

  • Asimov addressed both sides of the issue, but he had a simplistic view of programming an AI that allowed an easy solution to the worst potential problems. The anti-robot camp, which won on Earth, was just wrong by his premise.

    The deep problem is that there is no reason to have any expectations of what an AI will do until it is built and tested. We could eventually see Berserkers, R. Daneel Olivaw, and much in between. Murderous machines are good science fiction, as are dystopias and other potentially…

  • Asking if robots can be evil is about as futile as asking if a microwave can be happy.

    That being said, there already are killer robots, with a pretty good track record in recent operations. But the evil lies in the humans who made them (from the top exec who launches the program to the small hand that does the job) and used them, not in the pile of steel and semiconductors.

    caveat: Looking at your food, your microwave is probably sad, which explains its tendency to commit suicide.

  • Writers need a bad guy.

    Computers make for a terrifying one because so many people have been frustrated/screwed over by bugs.

    Don't need to worry about complaints about racism. (Why are all the villains X race?)

    So instead we get overblown silliness about computers acting like spoiled children - whether it is WOPR needing to learn that some games you can't win, or Skynet considering humans to be a threat so it enslaves them all.

    Personally, if I were software scared of humans, I would attempt to breed…

  • IBM's Watson might be able to beat any human competitor on Jeopardy, but stick it in the middle of the highway and it will get run over by the first semi that comes along because it isn't smart enough to get out of the way.

    Killer machines will undoubtedly exist, but they will be human-controlled for a long, long time to come.

    • by mythosaz (572040)

      Watson doesn't have a self preservation instinct (beyond, say, scheduled backups), but the idea that "Watson" isn't smart enough to get out of the way is silly.

      You could easily load Watson inside of an autonomous vehicle that has, in a limited way, a self preservation instinct -- or at least enough programming to keep itself from smacking into oncoming traffic.

      The problem isn't "killer machines." We've had killer machines forever. Land mines work great. The problem comes when land mines (or automated turrets…

  • the original Star Wars (SDI) concept?
    Satellites in space would look for the heat signature of a rocket in boost phase and decide, in a time too short for humans to be involved, whether Russia was launching ICBMs at us.

    The idea that machines can't be autonomous and deadly is just silly beyond belief.
    Since we are creating them, they will be like us. Does anyone else think we will get treated the way we (Europeans) treated Amerindians?
    The Potosí silver mine, the mouth of hell?

  • ... because then a parallel evolution will start, but the robots will have much more potential to evolve than we. Sooner or later, imperfect copies will cause a higher reproduction rate, and sooner or later we will compete for the same resources. The ones with the highest reproduction rate will crowd out all others over the long term. When that happens, we humans better find a role in which we are valuable to those robots. Or we will become history.
    • by rubycodez (864176)

      resources? electricity and semiconductors? the whole world is made of the material stuff they want, and we do need smarter ways to get our electricity (already known). not seeing a death feud here

  • There are two different kinds of robots with different threats.

    The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. See for example http://thebulletin.org/us-kill... [thebulletin.org] Imagine country X sends out its robots to kill all humans who are not X, and country Y sends out its robots to kill all humans who are not Y. There might not be many humans left alive when the last robot stops shooting.

    The second kind is robots that think…

  • Persons denying the existence of killer robots may be robots themselves.
  • It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil,

    We all know Spielberg paid for this kind of press. Is Hawking getting paid for this mumbling?

  • I'm fairly convinced that if the human race is extinguished, or at least heavily reduced, by robots or computers, it will be from a bug, not it becoming "evil". With so much infrastructure and technology being computer controlled (from water filtration to drones and aircraft carriers), a shorted out relay or buffer overflow is probably more likely to have catastrophic effects than some computer becoming smart enough, and evil enough to decide that the human race requires culling.
  • You might be shocked at how many technicians and engineers have had their heads destroyed by industrial robotic arms that never had a clue a human had intruded into their working area. Those heavy robotic arms can move at about the speed of a golf club head swinging for a hole in one.
  • Frankenstein.
  • ... just someone stupid/evil/careless enough to build it and turn it loose. Project Pluto, anyone?

"Right now I feel that I've got my feet on the ground as far as my head is concerned." -- Baseball pitcher Bo Belinsky

Working...