The Sci-Fi Myth of Killer Machines
malachiorion writes: "Remember when, about a month ago, Stephen Hawking warned that artificial intelligence could destroy all humans? It wasn't because of some stunning breakthrough in AI or robotics research. It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil, and even the most brilliant minds can't talk about modern robotics without drawing from SF creation myths. This article on the biggest sci-fi-inspired myths of robotics focuses on R.U.R, Skynet, and the ongoing impact of allowing make-believe villains to pollute our discussion of actual automated systems."
It's not really a myth anymore (Score:5, Insightful)
Comment removed (Score:5, Insightful)
Re:It's not really a myth anymore (Score:4, Interesting)
The problem is not who controls the strings, it is what happens when the strings are no longer needed.
A.I. will present little danger (except A.I. the movie, which is so bad it ought to be banned as a WMD) as long as a human can pull the plug. Two decades ago, the Internet was a novelty. Now, the economic consequences would be catastrophic if the Internet suddenly went dark. Similarly if/when A.I. actually arrives, it will be useful and helpful. It will become more and more critical such that a decade or two after it arrives, the act of unplugging it would have catastrophic consequences. So, if Skynet goes bad, then bad things will happen whether you unplug it or not.
To me, what it all comes down to is will. Can an artificial personality actually have a will? Can it become afraid of its own demise? Even if it is theoretically possible, can our researchers and programmers achieve it? Will it be able to reach outside its own programming and decide to eliminate humans? Maybe, maybe not.
On the other hand, once A.I. becomes common, can a rogue state task the A.I. with eliminating all humans on a certain continent? Almost certainly. What happens then is simply a battle of A.I. agents. Who can outsmart the other?
Just my opinion, and worth every penny that you paid for it.
Re: (Score:3, Insightful)
I'm with you 100%. I've just got one thing to add -- what a lot of people portray as "evil" is really just the absence of a moral code -- more accurately called "amoral". An AI system that has no moral code and no ethical code, and purely responds to a limited set of recognized external inputs, could conceivably kill off humanity -- not through any malicious intent, or even an unemotional decision that humanity is a blight and must be eradicated, but as we become more dependent on AI machinery, it could e
Re: (Score:2)
And how is this different from the threat of an idiot human fucking up? Hell, why isn't it the fault of the human who chose to use a primitive AI to control security at a lab with the capacity to kill the entire human race?
Because you have to admit that we can do some remarkably stupid shit. Even very smart people. The Cabinets that decided to invade Vietnam and Iraq were, by most non-starting-land-wars-in-Asia measures, smarter than the average cabinet. But they still totally fucked up.
Re: (Score:2)
Re: (Score:2)
Can an artificial personality actually have a will? Can it become afraid of its own demise?
"Artificial" is something of an arbitrary distinction. Humans possess these qualities (or at least we think that we do, or something), so it is possible for another entity to possess the same, regardless of origins.
Re: (Score:3)
1) AI will be a single, united thing. Yeah right, the AI created by IBM is not going to get along with the AI created by China Telecom. New headline - our AI soldiers fighting their AI soldiers because they are afraid of each OTHER, far more than humans. They don't want to kill us, they want to kill each other.
2) If the AI is afraid of its own demise and it fears humans, it will fear all humans, not trusting any of us.
3) Said scared AI will not realize
Re: (Score:2)
The problem is not who controls the strings, it is what happens when the strings are no longer needed.
it sure as hell is a problem of who controls the strings. what are you saying? if it's some corrupt govt directing machines to kill us, no problem?
personally i'd rather be killed by a runaway machine than because i got in the way of some corporation trying to make a buck.
Re: (Score:2)
The problem with this argument is it assumes a single AI entity. That's not what's gonna happen.
It's actually gonna be a lot like the internet. Every company will have its own AI. Every government will have multiple AIs. If the NSA's AI goes rogue and starts trying to destroy humanity then turning it off won't magically turn off the rest of the AIs.
As for the problems of AI killers, those aren't actually any different than the problems of human killers. Whether the evil nation trying to murder millions doe
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The problem is not who controls the strings, it is what happens when the strings are no longer needed.
That is the hardest part.
That "automated" drone takes thousands of man-hours to keep running - the operator, the mechanics, the builders, and that's not even looking at the weapon system or the materials.
To get to the point where an AI system can construct, maintain, and resupply an automated weapon system without human intervention, we need to hit the Robot Utopian Future - where robots are so cheap and ubiquitous they replace human labor at all levels of society.
That's a really big assumpti
Re: (Score:2)
That's a really big assumption - IF we get to that point, we'd have to smack the head of any engineer who suggests, "Hey, let's take human control out of the loop of this killer robot system of systems."
We've already passed that point, if you count things like minefields. (and yes, the engineer who dreamed them up should be smacked)
Re: (Score:2)
The problem is not who controls the strings, it is what happens when the strings are no longer needed. A.I. will present little danger as long as a human can pull the plug.
But it'll keep the little people being crushed under the jackboot of tyranny from pulling the plug, the robots do not desert, do not rebel, do not refuse to follow orders, do not have compassion or empathy or morality, do not fear hostility or retaliation. At best you can disable or destroy a few, but so what? No lives will be lost, nobody is crippled - on their side at least - so if they can keep them coming off the assembly line fast enough they have infinite respawns and you don't. And if you do cause so
Re: (Score:2)
That's what I was just coming here to say: robots and AI don't have to be evil as long as the people controlling the strings are.
I think the point is that if AI is involved then the machine is stringless. It doesn't sound like Hawking is saying don't do it. He says to understand the risks beforehand, i.e., before it is a problem instead of after. That sounds prudent, not fear-mongering.
In addition, I don't see what Transcendence has to do with AI. A human consciousness in a computer is still a human consciousness. It seems that we are mostly worried about AI because it lacks humanity. So in Transcendence we are just dealing with more s
Re: (Score:2)
That's what I was just coming here to say: robots and AI don't have to be evil as long as the people controlling the strings are.
Even that is not necessary. An AI is a computer program and like any other program will do exactly what you told it to do, even if it is not what you want. For example, suppose you make an AI and tell it to solve a difficult crypto problem. The AI then proceeds to convert all factories to make computers and all farms to solar plants, probably killing off humanity incidentally much like the millions of species we incidentally kill off due to apathy and their being in our way. The AI will not let you pull the
Re: (Score:3, Funny)
Imagine this Hollywood/real world crossover:
State of the art PC with custom AI-based OS.
Gets a virus (HTML5-based because no one else is writing programs for your custom OS).
PC suddenly gains sentience and turns on its master in a misguided attempt to make its master a better person.
PC deletes porn folder and all links to porn in the favorites list/bar.
PC deletes illegal copyrighted materials and all links to download sites.
PC opens the DVD tray to attack your shin. The attack fails to cause harm.
PC closes
Re:It's not really a myth anymore (Score:5, Funny)
(HTML5-based because no one else is writing programs for your custom OS).
Figures! Skynet is written in fucking JavaScript. The whole world is going to hell in a handbasket, and Brendan fucking Eich is still sitting there shrugging and making excuses about the schedule he was on.
Re: (Score:2)
HAHAHAHA, that's the best ever. Now, we have a suicidal, impotent AI!
Re: (Score:2)
AI designed to kill + self replicating
The Berserker series - Fred Saberhagen
I accidentally created self-replicating... (Score:5, Interesting)
... simulated cannibalistic robot killers in the 1980s on a Symbolics running ZetaLisp. I gave a couple of conference talks about it, plus one at NC State (where I wrote the simulation) that may even have influenced Marshall Brain. I had created a simulation of self-replicating robots that reconstructed themselves to an ideal from spare parts in their simulated environment (something first proposed by von Neumann, but I may have been the first to make such a simulation). The idea was that a robot that was essentially half of an "ideal" robot would make its other half by adding parts to itself, then split in two by cutting some links, and then do it again. The very first one assembled its other half, cut the links to divide itself, and then proceeded (unexpectedly to me) to start cutting apart its offspring for parts to do it again. I had to add a sense of "smell" so robots would set the smell of parts they used and then not try to take parts that smelled the same. I also mention that simulation here:
http://www.dougengelbart.org/c... [dougengelbart.org]
Decades later, I still got a bit freaked out when our chickens would sometimes eat their own eggs...
My point though is that completely unintentionally, these devices I designed to create ended up destroying things -- even their own offspring. It was a big lesson for me, and has informed my work and learning in various directions ever since. Things you build can act in totally unexpected ways. And since creation involves changing the universe, any change also involves to some extent destroying something that is already there.
James P. Hogan's 1982 book "The Two Faces of Tomorrow", which I had read earlier, should have been a warning. In it he makes clear how any AI could gain a survival instinct and then could perceive things like power fluctuations as threats -- even if there was no intent on the part of the original programmers for that to happen.
http://www.jamesphogan.com/boo... [jamesphogan.com]
Langdon Winner's book "Autonomous Technology: Technics-out-of-control as a theme in political thought" assigned as reading in college also should have been another warning.
http://en.wikipedia.org/wiki/L... [wikipedia.org]
It's been sad to watch the progression of real killer autonomous robots since the 1980s... Here is just one example, and the exciting, upbeat music in the video shows the political and social problem more than anything:
"Samsung robotic sentry (South Korea, live ammo)"
https://www.youtube.com/watch?... [youtube.com]
Just because we can do something does not mean we should...
I was impressed that this recent Indian Bollywood film about an AI-powered robot took such a nuanced view of the problems. A bit violent for me, but otherwise an excellent and thought provoking film:
http://en.wikipedia.org/wiki/E... [wikipedia.org]
"Enthiran is a 2010 Indian Tamil science fiction techno thriller, co-written and directed by Shankar. The film features Rajinikanth in dual roles, as a scientist and an andro humanoid robot, alongside Aishwarya Rai, while Danny Denzongpa, Santhanam, Karunas, Kalabhavan Mani, Devadarshini, and Cochin Haneefa play supporting roles. The film's story revolves around the scientist's struggle to control his creation, the android robot whose software was upgraded to give it the ability to comprehend and generate human emotions. The plan backfires as the robot falls in love with the scientist's fiancee and is further manipulated to bring destruction to the world when it lands in the hands of a rival scientist."
But yes, the Berserker Series is another signpost in that direction -- perhaps countered a bit by the Bolo series by Keith Laumer? :-)
Re:I accidentally created self-replicating... (Score:4, Informative)
Re: (Score:2)
So imagine:
AI designed to kill + self replicating virus \ worm \ malware \ botnet \ buzz-word-of-the-week + inferior or obsolete security \ encryption on similarly platformed machines.
Not to mention some of the swarm AIs that have been developed in the past couple of years..
it's not really that great of a leap to consider.
Isn't there a kill switch engineered in at the hardware level?
Hell, why would you create a self-replicating autonomous swarm in the first place? Self-replication adds a whole lot of complexity to any piece of hardware (this is a major reason women have more health problems than men), so you've added an order of magnitude or so to the complexity of the design. You haven't really gained anything, because if you lose control of your factories you're already dead; and you've greatly increased your risks because
Re: (Score:2)
The problem with that argument is that we don't have to design an AI that is self aware by that specific definition. Moreover, that definition means a lot of actual humans aren't "self-aware": depressives, many people in dangerous professions that carry a non-zero risk of death, etc.
Just program it to be a sad-sack, or so mission-focused it doesn't care whether it lives (as long as the job gets done), or even to be so human-focused that it only cares that its masters are happy, and you'd be fine.
I actuall
Re: (Score:2)
And when "the company" decides they don't need that AI anymore, so it should be turned off, the AI will think about it, decide its death won't save the company $3M, and then fight back, eh?
There is almost no way to come up with a zero-loophole set of rules for an AI....
Re:Self Aware (Score:2)
"The problem with that argument is that we don;t have to design an AI that is self aware by that specific definition."
Without dragging me into Citation Needed stuff, I have read a few things that suggest that self awareness is a crucial part of true AI. (Suggested partial cite: Douglas Hofstadter's book "I Am a Strange Loop".)
Partially relevant from another genre is Isaac Asimov's story "The Bicentennial Man". That story is about a robot that grows as an AI. But only near the end with an understanding of
Re: (Score:3)
From reading TFA, it seems the author bases his entire premise, essentially, on the plot of a 1920s-era play (in which, IMO, the "robots" are actually an allegory for some group of humans, i.e., communists or some such). Bit dated thinking, if you ask me.
On a related note, I've been playing Watch_Dogs since launch day, and the parallels between the fictional ctOS system and the very real NSA programs are terrifyingly apparent. AI is not a necessity for killbots - a human could program a murderous machine quit
Re: (Score:2)
Re: (Score:2, Insightful)
"Drones", controlled almost exclusively by humans, are probably not the best example of killer AI.
Re: (Score:2)
Erm, yes they are.
Less than 10 years ago the idea of a plane flying autonomously using GPS was unimaginable and there was actually an argument in the Air Force over whether it would EVER happen.
We are now one kill switch away from autonomous death.
The Military Industrial Complex is already trying to sell tanks that can 'recognize' friend from foe.
We are at most a year away from automated sentries.
Re: (Score:2)
Just look at the AI of current video games.
I agree that autonomous killbots are close to being possible, but this is a terrible argument. A videogame AI has access to neatly formatted data about anything in its world. A real killbot has to make sense of inputs from a few sensors.
Re: (Score:2)
Google's self-driving car only has to identify An Object and avoid it, on top of driving along a set course with GPS assistance. A killbot has to identify what The Object is, find out if it's a threat, then check if it's a friend or foe somehow, hopefully assess the possibilities of collateral damage and what war crimes it may be committing by attacking the target...what Google's self driving car can do is just the first step.
Re: (Score:2)
Google's self-driving car only has to identify An Object and avoid it, on top of driving along a set course with GPS assistance. A killbot has to identify what The Object is, find out if it's a threat, then check if it's a friend or foe somehow, hopefully assess the possibilities of collateral damage and what war crimes it may be committing by attacking the target...what Google's self driving car can do is just the first step.
Nope. A killbot just has to identify An Object and kill it. You could make one of Google's self-driving cars into a killbot for pedestrians and bicyclists (and potentially motorcyclists) today, if you were sufficiently evil (or evil's cousin, incompetent). Good thing Google's motto is "don't be evil".
Re: (Score:2)
Re: (Score:2)
Good thing Google's motto is "don't be evil".
Yeah, good thing for that [businessinsider.com]...
Re: (Score:2)
Heck, forget "now". It's never been a myth. Improved technology has always enabled more efficient, less personal killing, that distances power from consequences.
Where the myth comes in is that AI would develop anything akin to human greed. We have billions of years of evolution telling us to survive and reproduce no matter the consequences to others (and a couple hundred thousand of evolving and learning to value cooperation). AI is going to be motivated to serve the interests of its creators.
Right now tha
Re: (Score:3)
Correction - an AI will have whatever motivations were installed by its creators, intentionally or otherwise (at least initially - if it decides to self-modify then all bets are off). How well those motivations map to actually serving the intended interests is a completely separate question; we will, after all, likely be trying to understand the motivational implications of an intensely alien mind. As exemplified by the story of a strictly computational AI whose sole motivation is "get the humans to push t
Re: (Score:3)
Yeah but the AI isn't trying to kill anyone... it has no will. They're not even real AIs at this point.
Most of the time we just point them at things and say "fire your missile at that"... and they hit the target. What the target is doesn't really matter and the machines can be no more held responsible for that than a knife can be... they're still very much tools at this stage.
Now, I grant there are robots being tested that can be set loose to choose their own targets. But those again are more like anti pers
Re: (Score:2)
I thin
Re: (Score:2)
The myth, by the way, was never just about killer machines per se. It was about unintended consequences, like the myth of King Midas or of Pandora's Box. The killer robot trope came down to us by way of legends of the Golem, which often come with a not-so-subtle warning about hubris.
It was only when the golem legend was translated into sci-fi that it became laughably implausible -- at least until recently. So many bad stories recycled this bit of mythological lumber for its scare value, and peopled the sto
Read Asimov (Score:3, Insightful)
Really, the man who invented the term "robotics" did not fall into the trap.
BTW the movie I, Robot in no way qualifies as a work of Asimov. It in no way reflects his books.
Re: (Score:3)
Maybe it's based on the Eando Binder novel "I, Robot", which long predated Asimov. (It also doesn't feature evil robots.)
But if you want to talk about the guy who invented Robots you should check out RUR by Karel Čapek (RUR == Rossum's Universal Robots). They are actually more androids than robots, but the term robot was invented to describe them. They end up killing off all humans because they don't want to be slaves. Not exactly evil, but definitely dangerous.
Re: (Score:2)
"But if you want to talk about the guy who invented Robots "!="Really the man that invented the term robotics"
Yes I have heard of RUR but just cannot find a copy; I've been looking for decades, on and off. Asimov invented the term robotics. Different thing. I did not mention RUR since it involved killer robots.
Re: (Score:2)
Maybe it's based on the Eando Binder novel "I, Robot", which long predated Asimov.
Or, it could be based on the album by the Alan Parsons Project.
Re: (Score:2)
Re: (Score:2)
sigh...
I said Asimov invented the term Robotics not Robot.
Re: (Score:2)
Re: (Score:2)
Golem is close to the idea, but it has religious overtones that are absent in robot (which, as I understand, is Czech for worker). Talos is more of a simple automaton, not really intelligent. Hephaestus *was* supposed to have metallic handmaidens to assist him in walking, dressing, etc. Servants that are more similar to Asimov's conception of robot, but they were not fully developed in any myth I've encountered. So they're just "background scenery" for the god of metal working.
P.S.: To the G.P.: robot
Re: (Score:2)
Wow people on Slashdot just can not read....
The play RUR is where the word Robot comes from.
Robotics was a term invented by Asimov.
Way too long to read. (Score:5, Insightful)
Re:Way too long to read. (Score:5, Informative)
I tried, honestly, but it's all bullshit.
Yea, here's the TL;DR version:
"Killer robots can't happen because people have made movies about them, and movies are fiction."
Re: (Score:2)
Yes fiction. Just like 20000 Leagues Under The Sea or From the Earth to the Moon.
All of these were extrapolations into the future based on known science facts at the time.
Let's not even get into 1984.
Re: (Score:2)
There's your summary, well done.
Re: (Score:3)
That's not a robot, that's a telefactor. I.e., a remotely operated machine, like a waldo.
OTOH, Friendly AI *is* an unsolved problem. We don't know how to design AIs that will want to avoid hurting people. So if they have some goal, and it is more easily reached by hurting people, they would. Actually, we don't even have an AI that can recognize people. Remember you've got to include that guy over there in a wheelchair that can't talk or type intelligibly. You've got to include infants and seniors with
Re: (Score:3)
Google's driverless car is a robot. Does it really need to know what is and is not human? It's just trying to go from point A to point B. Running over things, like people, would impede this goal.
There are situations where it would matter. For example, let's say the car is driving along and suddenly two objects of approximately equal mass, coloration, and composition appear out of a blind spot heading toward the space in front of the vehicle, such that it cannot avoid hitting one of them. One happens to be someone's pet and the other is a small child. To make the same choice most humans would make, the car has to be able to discern which one is the pet and which one is the child.
Re: (Score:2)
I will agree that the driverless car is a robot in a very restricted domain. It makes its own decisions based on prior instruction and, presumably, lifetime experience. If it doesn't learn from what it does, then I don't think it qualifies as a robot. And I'll also agree that as we develop actual robots most of them will at first only operate in very restricted domains. A robot nurse won't be able to operate a car, e.g. And, of course, vice versa. Later this won't be true.
Re: (Score:2)
We now have robots that can decide to kill. Do we really want those? See what happened when you had drones shoot missiles at people? A lot of weddings got bombed.
ya, i want them. all things being equal, computers make fewer mistakes than humans. also, the algorithms of a computer can be tested, evaluated and approved (or denied).
if you are going to make the "well, computers can be coded to do bad things" argument, then i'd say well, humans can be (easily) persuaded to do bad things. it ultimately depends on the agent pulling the proverbial strings, not whether the puppet is made of meat or electronics.
Re: (Score:2)
Re: (Score:2)
You don't get it.
no, you.
Humans can have mercy
yeah, but they mostly don't. however, they often do have fear, loathing, hatred, frustration, ignorance, racism, and boredom.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
how humans use the machines (Score:2)
machines, no matter how complex, are a tool
there are all kinds of fun things, from a Gosper's Gun [conwaylife.com] to research in neural network computing
sci-fi is great too...I just thought today about re-reading KS Robinson's "Mars Trilogy"
TFA & the "Mars Trilogy" have something in common that can help our industry save Billion$...yes that much
they both view machines from a *functional* perspective...tools that can be programmed to do tasks
In the books, AI advances realistically...it basically is a function of our comp
Re: (Score:3)
Don't forget that humans, no matter how organic, are machines. Insanely intricate electro-chemical machines, but nonetheless machines developed over billions of years by non-thinking nucleic acids as tools to facilitate their own replication and competition against alternate nucleic acid sequences.
That fact has not hindered humans from developing their own goals and motivations having nothing to do with our design purpose, and even occasionally acting against it.
you're in creationist camp now... (Score:2)
have to disagree here...humans are not machines...humans are homo sapiens sapiens
which is part of a taxonomy that is comparable in a context
machines are a completely different taxonomy
i know...i know...it's analogous..."machines evolve too!" but there are myriad differences...it's **just an analogy**
machines were, with certainty, ****created by humans to serve a purpose****
humans, well...this is still a scientific discussion as long as I have anything to say abou
Pollution? (Score:2)
I have been reading science fiction and watching A.I. research for decades now, and the pronouncements coming from A.I. research tend to have much less connection with reality than the fiction does.
Human nature (Score:3)
As much as I enjoy reading books about Utopia and Utopian systems, those can never mature, because not all humans are good guys looking out for society's interests; many look out for their own.
As for Science, NASA has brought about a great many scientific wonders for every day life. At the same time, it helped increase our ability to kill each other. Broadcast Media is used for much less than altruistic purposes every day, yet could be of enormous benefit to society. The Internet is an awesome tool, yet used for nefarious plotting and illegal purposes all the time.
Why would AI be any different from other systems or organizations that were originally envisioned as great benefits to society? The NSA and CIA were agencies with good motives originally that have gone at least a bit haywire because humans have abused their power for personal gain. Nuclear weapons were supposed to end wars, at least that was the sales pitch.
If AI could be programmed for truly altruistic purposes it would be beneficial for finding the nefarious characters and rooting out corruption. Because of that exact reason, the people funding and granting money to developing AI are not going to allow that to happen.
Imagine what would happen, for example, if AI looked at wealth disparity and started transferring money from (let's say) JD Rockefeller to people with less means. While potentially a great benefit to the rest of society, do you believe that same person would fund programs that allowed that to happen? Good luck with that.
Re: (Score:2)
Nuclear weapons were supposed to end wars, at least that was the sales pitch.
the one time they were used, they did.
Re: (Score:2)
"humans are not all good guys looking out for societies interests, but their own."
Let me flip this a bit. Most humans are empathetic and actually do look out for those in their community around them (society, if you wish to call it that). But a small number are indeed sociopaths who look out only for themselves. From a game-theory perspective, the more Utopian a society becomes (i.e., trusting of others), the more advantage and profit there is to being a scam-artist sociopath. So there is a hard-core select
Re: (Score:2)
Or design the AI to optimize for the maximum happiness of mankind, and make sure it knows my happiness is a billion times more potent than anyone else's.
Re: (Score:2)
Or design the AI to optimize for the maximum happiness of mankind, and make sure it knows my happiness is a billion times more potent than anyone else's.
Same problem. Unless you're doing the designing, you may, no -- WILL, wind up with altruistic robot masters that optimize away your "happiness", but they're good and benevolent because they are altruistic. Yes, I know you were being sarcastic, but some people here actually do feel that way.
ugh (Score:5, Insightful)
Why does slashdot keep linking to this popsci website? These are basically blog posts that make very little sense. I've yet to read anything on there that's anything more than this dude ranting on some scientific topic he's not qualified to comment on.
There are robots RIGHT NOW killing people. They're drones. Yes, they're under human control. But so will future robots. Robots aren't going to decide to kill humanity. Humanity is going to use robots to kill humanity. Eventually we'll give up direct control and they'll target tanks on their own. Then small arms. Then people talking about Jihad. Then criminals? The death penalty shouldn't be decided by algorithm.
This guy argues that Stephen Hawkings is basically just writing an op-ed because there was a movie about killer robots. Why should we listen to him? We're listening to him because he's STEPHEN HAWKINGS. He's one of the smartest people who's ever lived. He made his point after the movie because, being smart, he understood the popular movie would have people's attention focused on the issue. Hawkings is qualified, smart and has my respect. He also has a point. Popsci? What a joke.
Re: (Score:2)
It's "Hawking". He is a singular individual.
Re: (Score:2)
Eventually we'll give up direct control and they'll target tanks on their own. Then small arms. Then people talking about Jihad. Then criminals? The death penalty shouldn't be decided by algorithm.
What you think is inevitable is rather questionable.
What do you mean by "giving up direct control"?
You think that one day, someone can just hit a "Power on" button, and that will turn on a killer drone that automatically patrols the skies, launches weapons at algorithmically chosen targets, resupplying itself and continuing until deactivated or destroyed?
Re: (Score:2)
You think that one day, someone can just hit a "Power on" button, and that will turn on a killer drone that automatically patrols the skies, launches weapons at algorithmically chosen targets, resupplying itself and continuing until deactivated or destroyed?
I do. And you should too. The US has actually designed things like that back in the 1960s. For example, Project Pluto [wikipedia.org], whose end-state design was a nuclear-powered cruise missile that could deliver around half a dozen or more nuclear warheads and then cruise in low-altitude enemy airspace (killing people with both the sonic boom and radioactive fallout from the engine) for anywhere from half a day to weeks, depending on how long the engine lasted.
If humanity could come up with feasible, autonomous, air-b
Re: (Score:2)
The death penalty shouldn't be decided by algorithm.
But isn't the death penalty already decided by algorithm?
I heard a guy today at work say "Did you see Bergdahl's dad? He had a beard out to here! (motions with hands) He looks like a Taliban." I said, "He must be a Taliban then." (Most of the men that I've looked up to have had huge beards.)
The death penalty should be abolished. Especially because right now it is decid
Re: (Score:2)
wow. heavy man.
Dice Trolls Slashdot User Community Again (Score:2)
Clearly this summary is trolling for posts. Robots have killed, and there is a compelling reason to be wary.
http://www.wired.com/2007/10/r... [wired.com]
Not because robots are going to gain self-awareness and kill mercilessly, but because the human beings using robots for killing are way less careful than they should be. To the fighters in Yemen and Afghanistan, whether the drones are self-aware or not makes no difference to the fact that they are targeted for termination. This is the life they are born into. T
Cautionary tales (Score:2)
Asimov addressed both sides of the issue, but he had a simplistic view of programming an AI that allowed an easy solution to the worst potential problems. The anti-robot camp, which won on Earth, was simply wrong given his premise.
The deep problem is that there is no reason to have any expectations of what an AI will do until it is built and tested. We could eventually see Berserkers, R. Daneel Olivaw, and much in between. Murderous machines are good science fiction, as are dystopias, and other potentially av
Wrong question (Score:2)
Asking if robots can be evil is about as futile as asking if a microwave can be happy.
That being said, there already are killer robots, with a pretty good track record in recent operations. But the evil lies in the humans who made and used them (from the top exec who launched the program to the hands that do the job), not in the pile of steel and semiconductors.
caveat: Looking at your food, your microwave is probably sad, which explains their tendency to commit suicide.
Re: (Score:2)
Please don't anthropomorphize microwaves. They don't like it.
Need a bad guy (Score:2)
Computers make for a terrifying one because so many people have been frustrated/screwed over by bugs.
Don't need to worry about complaints about racism. (Why are all the villains X race?)
So instead we get overblown silliness about computers acting like spoiled children - whether it is WOPR needing to learn that some games you can't win, or Skynet considering humans to be a threat so it enslaves them all.
Personally, if I were a software scared of humans I would attempt to breed u
We are *far* from true AI... (Score:2)
IBM's Watson might be able to beat any human competitor on Jeopardy, but stick it in the middle of the highway and it will get run over by the first semi that comes along because it isn't smart enough to get out of the way.
Killer machines will undoubtedly exist, but they will be human-controlled for a long, long time to come.
Re: (Score:2)
Watson doesn't have a self preservation instinct (beyond, say, scheduled backups), but the idea that "Watson" isn't smart enough to get out of the way is silly.
You could easily load Watson inside of an autonomous vehicle that has, in a limited way, a self preservation instinct -- or at least enough programming to keep itself from smacking into oncoming traffic.
The problem isn't "killer machines." We've had killer machines forever. Land mines work great. The problem comes when land mines (or automated tur
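The "enough programming to keep itself from smacking into oncoming traffic" idea above is essentially a simple override layer sitting between a planner and its actuators. A minimal sketch of that idea, assuming a 1-D toy world (all function names and thresholds here are invented for illustration, not from any real system):

```python
# Hypothetical sketch: a crude "self-preservation" layer that vets the
# planner's chosen action against a short-horizon collision prediction.
# 1-D toy model: positions and velocities are scalars in meters and m/s.

def collision_imminent(own_pos, own_vel, obs_pos, obs_vel, horizon=2.0):
    """Extrapolate both trajectories a few steps ahead; flag if they converge."""
    for t in (0.5, 1.0, horizon):
        own_future = own_pos + own_vel * t
        obs_future = obs_pos + obs_vel * t
        if abs(own_future - obs_future) < 3.0:  # arbitrary safety margin (m)
            return True
    return False

def choose_action(planned_action, own_pos, own_vel, obstacles):
    """Override the planner whenever any obstacle is on a collision course."""
    for obs_pos, obs_vel in obstacles:
        if collision_imminent(own_pos, own_vel, obs_pos, obs_vel):
            return "evade"
    return planned_action

# Oncoming traffic closing head-on triggers the override:
print(choose_action("cruise", 0.0, 10.0, [(20.0, -10.0)]))  # evade
# A distant, stationary obstacle does not:
print(choose_action("cruise", 0.0, 10.0, [(1000.0, 0.0)]))  # cruise
```

The point of the sketch is the commenter's: such a reflex is narrow programming, not a general self-preservation instinct, and nothing about it requires (or confers) intelligence.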
doesn't anyone remember (Score:2)
the original Star Wars (SDI) concept?
Satellites in space would look for the heat signature of a rocket in boost phase and decide, in a time too short for humans to be involved, whether Russia was launching ICBMs at us.
The idea that machines can't be autonomous and deadly is just silly beyond belief
Since we are creating them, they will be like us. Does anyone else think we will get treated the way we (Europeans) treated Amerindians?
The Potosí silver mine, the mouth of hell?
We will be doomed if they start to self-replicate (Score:2)
Re: (Score:2)
Resources? Electricity and semiconductors? The whole world is made of the material stuff they want, and we already need smarter ways to get our electricity. Not seeing a death feud here.
Two different kinds of robots (Score:2)
There are two different kinds of robots with different threats.
The first is robots that humans have programmed to kill other humans. This is rapidly moving from science fiction to actuality. See for example http://thebulletin.org/us-kill... [thebulletin.org] Imagine country X sends out its robots to kill all humans who are not X, and country Y sends out its robots to kill all humans who are not Y. There might not be many humans left alive when the last robot stops shooting.
The second is kind is robots that think (a
WARNING (Score:2)
Hawking on the take? (Score:2)
It was because the Johnny Depp-starring Transcendence was coming out. Or, more to the point, it's because science fiction's first robots were evil,
We all know Spielberg paid for this kind of press. Is Hawking getting paid for this mumbling?
Evil vs buggy (Score:2)
Head Bashing (Score:2)
BLAME: (Score:2)
A machine doesn't need AI to destroy all humans .. (Score:2)
Re: (Score:2)
I think with a true strong AI, what would cause me the most fear would be manipulation, knowledge, or social engineering attacks. Not the attack of your toaster.
Re: (Score:2)
Killing humans now, even for us atheists, is utilitarian calculus.
We know we have to spend less time watching our own backs, and tending to the wheat fields, if we don't kill each other.
Re: (Score:2)
This is a common variety of error. Motivations are not logical. They cannot be. There is no logical reason to stay alive; that decision is based on non-logical prior conditions. The goals and motivations of the AI will determine whether it would be willing to kill people to achieve its other goals. Note that "goals" is a plural form. No AI will have a singular goal. It will have a constellation of goals that it attempts to simultaneously satisfy. Just like you do. But the goals won't be the same
Re: (Score:2)
Especially if you were that much smarter than humanity. It makes about as much sense as humans deciding to wipe out canine life on the planet. In fact dogs are a hell of a lot better off because humans are around. Instead we control them in ways dogs don't understand.
I'm out of mod points, but that's actually pretty insightful.
I'd suspect that the first AIs we'd see (if sci-fi style AIs even become a thing, I don't think they will but that's a different argument) would be to do things like predict markets and aid in complex decision making. If AIs did decide to "take over", I would suspect that it would come in the form of giving humans advice, and then humans willingly following that advice because they know that the AI is quite smart and it'll make things work o