AIs Have Replaced Aliens As Our Greatest World-Destroying Fear (qz.com) 227
An anonymous reader shares an excerpt from a report via Quartz: As we've turned our gaze away from the stars and toward our screens, our anxiety about humanity's ultimate fate has shifted along with it. No longer are we afraid of aliens taking our freedom: It's the technology we're building on our own turf we should be worried about. The advent of artificial intelligence is increasingly bringing about the kinds of disturbing scenarios the old alien blockbusters warned us about. In 2016, Microsoft's first attempt at a functioning AI bot, Tay, became a Hitler-loving mess an hour after it launched. Tesla CEO Elon Musk urged the United Nations to ban the use of AI in weapons before it becomes "the third revolution in warfare." And in China, AI surveillance cameras are being rolled out by the government to track 1.3 billion people at a level Big Brother could only dream of. As AI's presence in film and TV has evolved, space creatures blowing us up now seems almost quaint compared to the frightening uncertainties of a computer-centric world. Will Smith went from saving Earth from alien destruction to saving it from robot servants run amok. More recently, Ex Machina, Chappie, and Transcendence have all explored the complexities that arise when the lines between human and robot blur.
However, sentient machines aren't a new anxiety. It arguably all started with Ridley Scott's 1982 cult classic, Blade Runner. It's a stunning depiction of a sprawling, smog-choked future, filled with bounty hunters muttering "enhance" at grainy pictures on computer screens. ("Alexa, enlarge image.") The neo-noir epic popularized the concept of intelligent machines being virtually indistinguishable from humans and asked the audience where our humanity ends and theirs begins. Even alien sci-fi now acknowledges that we've got worse things to worry about than extra-terrestrials: ourselves.
Zombies (Score:4, Funny)
I liked zombies.
Before that, it was monsters, disease, meteors, and others. Someone should chart the fear by year. How well do disaster movies align?
Re:Zombies (Score:5, Funny)
Where did all the zombies go?
Gone to headshots, every one. When will they ever learn?
Re: (Score:2)
Re:Zombies (Score:5, Insightful)
AI will probably never exist ...
Never say never. There is already a proof of concept: the human brain. Unless you believe in magic, there is no reason that what can be done with carbon can't also be done with silicon. Silicon neurons can switch 10 million times faster, and unlike biological brains, an AI would not be encumbered by the detritus of millions of years of sub-optimal evolutionary local maxima.
... so that is also an unrealistic fear.
That is exactly what they want us to believe.
Re: (Score:2)
Have you wondered why people and animals have a relatively limited lifespan? Clearly biology is capable of sustaining itself indefinitely [wikipedia.org]. Why does it do it via offspring instead of self-sustaining? Why do we age and die off?
After watching human behavior for several decades, I'm convinced the limitation is intell
Re: (Score:2)
AI is managed by a person. If he tells the AI to kill everyone else... technically it's the person who will kill everyone.
You don't understand AI. The concept of AI means that the AI entity itself concludes that it's better to wipe out humans.
:)
Go watch Battlestar Galactica
Re: (Score:2)
Turn that around for a second... does intelligence exist?
Joking aside, if it does, what precludes intelligence from being artificially created by otherwise intelligent beings instead of simply existing as a product of undirected evolution?
We are well within a single generation of being able to simulate an entire human brain in a computer.... why will it not be intelligent, exactly? And if it is not, why do we assume that we are?
Re: (Score:2)
> Time also exists
Does it? "Cold" and "Dark" don't exist. Not the frame being discussed.
Re: (Score:2)
Re: (Score:2)
Aliens were never a realistic thing to be afraid of for multiple logical reasons and AI will probably never exist, so that is also an unrealistic fear.
Zombies have a greater chance of destroying the world, but they just aren't that scary.
Artificial intelligence already exists, so how can you say it probably will never exist? You can buy artificial intelligence today for less than $50. https://www.popsci.com/amazon-... [popsci.com]
Re: (Score:2)
The current wave of discoveries about how brains work is probably just that: something that will either be forgotten or become another part of our knowledge. We will thus only get far enough to describe what is happening and sometimes find relationships,
Re: (Score:2)
I miss them too. 'Kore wa zombie desu ka' was an unexpected find...and I want more.
Re: (Score:2)
Someone already has charted it. Turns out Vampire movies come out of Hollywood more often when a Democratic president is in office, and Zombie movies are more prevalent when a Republican president is in office... after accounting for the 2-3 year delay of movie production. Source [mrscienceshow.com]
Given that '2012: It's a Disaster' came out about a year after Obama took office, and was a big hit, I'm guessing Zombie and disaster films go together (as zombie apocalypses are comparable to disasters). Remember it entered produc
Re: (Score:2)
Zombie popularity is based on apocalyptic fear and anxiety:
Imperialism, racial anxiety and fears about brainwashing have all had their part to play in the zombie's evolution and popularity. Ultimately, though, these walking corpses are always symbols of death, parodies of the supposed finality of the body and the promised everlasting life of the soul. http://ourspace.uregina.ca/bitstream/handle/10294/3811/Ozog_Cassandra_Anne_200243342_MA_SOC_Spring2013.pdf [uregina.ca]
Robot/AI popularity comes from our anxiety and fear of technology. In 'Star Trek VI: The Undiscovered Country', the Vulcan Valeris says "400 years ago, on Earth, workers who felt their livelihood threatened by automation, flung their wooden shoes called 'sabots' into the machines to stop them. Hence the word 'sabotage'."
"Klaatu barad
Re:Zombies (Score:5, Informative)
While we're at Doomsday scenarios, a flu-like viral infection with a high mortality rate is still the biggest threat to current civilization. Bonus points if it transmits like a mild cold first and then lies slumbering for a few weeks before it destroys its host.
Re: (Score:2)
Oh I hated Zombies. They stayed too popular for way too long. And for a monster problem, they weren't really that scary, complex or that interesting.
Re: (Score:2)
Where did all the zombies go?
I dunno, but they can stay there as far as I'm concerned!
Re: (Score:2)
Where did all the zombies go?
Zombies were never really a fear. The story for almost all zombie movies or books is not man versus zombie, but rather man versus nature, as zombies were just a natural disaster that could be shot in the head for action. Still, the real conflict in such stories is man versus man, as the natural disaster causes society to break down and situations that could probably be solved through cooperation spell disaster as people turn selfish and fail to uphold societal standards. Nobody ever really feared zombies, but
AI is a load of bollocks (Score:2)
AI as it currently exists is no more exciting than the assembly line. Robotics is great for automation of tasks. The type of AI we have now is great for expert systems and chewing through large amounts of data. The combination of machine learning and robotics has exciting prospects for eliminating mundane jobs. However, we are no closer to hard AI today than we were forty years ago. At least forty years ago we were coming down off the pinnacle of the first mount stupid. Today we are, in fact, back wher
Re: (Score:2)
AI as it currently exists is no more exciting than the assembly line.
... in the 1800's. Yes. It's pretty much exactly like that. It looks to be another phase of the technological singularity that is the computer revolution. Just like the industrial revolution came in a couple of waves and CHANGED EVERYTHING, computers, the internet, hand-held devices, and now AI have changed and are going to change a lot of things when it comes to employment, business, and how things are done.
Robotics is great for automation of tasks. The type of AI we have now is great for expert systems and chewing through large amounts of data. The combination of machine learning and robotics has exciting prospects for eliminating mundane jobs.
Yep. Yep (and also things other than expert systems, but sure, close enough). and... No? The combination of r
Re: (Score:3)
And the frightening thing about self-motivated action is that there's no reason to assume it requires consciousness. You feed a complicated enough system a complicated enough stream of inputs, and the resulting output will look close enough to self-motivated action that you could spend lifetimes debating the terminology.
Heck, we still have no conclusive evidence that humans are anything more than that - "consciousness" or "self awareness" might simply be a useful (or useless?) perceptual illusion of a biol
Re: (Score:2)
I don't understand the fuss about AI taking over somehow. The real problem with AI is that as a form of semi-autonomous software it can be a powerful tool for centralized power. If an elite runs the AI/automated armies it can enforce mass conformity much better allowing them to strengthen their grasp on power. The NSA for instance has a dramatic shortage of processing power for all the data they collect (since they decided they wanted to collect everything).
Fear of alien invasion. (Score:5, Interesting)
Really? Such a fear should be instant grounds for removal from the voter rolls. Possibly permanently, since even if you stop being afraid of that, there will probably be some other bit of stupidity you're now afraid of.
Re: (Score:3)
Why is it stupid? There are a lot of habitable planets. We have absolutely no idea of the probabilities of them developing life, developing intelligence, developing technology, deciding to invade. But if they do, we lose. (The guys on the ships win.)
It's not likely, but similar to asteroid impacts, it's statistically probably more likely to kill you than terrorists are.
Re: (Score:2)
Stop reading so much science fiction.
Given the stupendously ginormous -- but, since we've done it, conceivable -- distances between stars, it doesn't matter how many high tech civilizations there are. We aren't going to see any of them. Ever.
Re:Fear of alien invasion. (Score:5, Insightful)
I do have a rather good idea of how distant stars are, and of how improbable faster than light travel is - but you are taking a limited view of technology.
With technology we understand now you can get to about 0.1C in maybe 100 years (that is a power density that you can radiate with reasonable radiators, and energy densities compatible with fission reactors). So we are talking a few-thousand-year trips.
But is that so bad? Even humans have worked on single projects that lasted 100 years (like the NY 2nd Avenue subway). Is a few thousand so out of line? Maybe they are longer lived than we are.
Maybe they have already exponentiated into most solar systems and are waiting. (robots, hibernation whatever). They could be "predators" who destroy any technological civilizations before they become an interstellar threat.
Maybe they do it for religious reasons. Or for something as incomprehensible to us as religion is to a cat. Maybe invasion is the wrong word, and they just want to build a hyperspace bypass (just kidding).
Likely - no. But I don't see how you can rule it out. Interstellar travel at a fraction of C is really not that difficult with technology we already understand (but of course don't yet have). Near C may be possible and we just don't know how yet. (making antimatter seems difficult but maybe there is a trick).
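For a rough sense of where the "few thousand year trips" figure lands, here is a minimal back-of-the-envelope sketch in Python; the destination distances are illustrative round numbers chosen for the example, not anything from the parent post:

```python
# Back-of-the-envelope one-way trip times at a constant 0.1c cruise,
# ignoring the years spent accelerating up to speed.
CRUISE_SPEED_C = 0.1  # cruise speed as a fraction of light speed

destinations_ly = {
    "nearest stars (~10 ly)": 10,
    "nearby sun-like systems (~50 ly)": 50,
    "a star a few hundred light-years out (~300 ly)": 300,
}

for name, distance_ly in destinations_ly.items():
    # distance in light-years divided by speed in c gives years directly
    cruise_years = distance_ly / CRUISE_SPEED_C
    print(f"{name}: ~{cruise_years:,.0f} years one way")
```

At 0.1C, anything a few hundred light-years away is indeed a multi-thousand-year trip, which is the scale the parent argues a patient or long-lived civilization could still undertake.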
Re: (Score:2)
There is no getting around time dilation, simply crossing the Milky Way takes 100k years, another 100k to get back. After a few tens of trips the universe will have doubled in age, we would have collided with Andromeda an
Re: (Score:2)
I think you've dropped a few digits. Crossing the galaxy at light speed would take 100K years, but the universe is >10 billion years old, so you could do it 100 thousand times before the universe doubled in age. It only seems long on human time scales.
A slow ageing race could easily colonize a galaxy, maybe a local cluster
Europe colonized much of the world in 500 years. At that tech level, a round-the-world trip took on the order of a year, so 500 round trips. Even at 0.1C, a galaxy round trip is
Re: (Score:2)
There is no getting around time dilation, simply crossing the Milky Way takes 100k years, another 100k to get back. After a few tens of trips the universe will have doubled in age, we would have collided with Andromeda and other galaxies long ago and the universe would be old. Simply moving through space, even at light speed, is far too slow to ever get us to more than a small handful of nearby systems, each being essentially cut off from the rest of the universe.
Right, time dilation. 100k for Earth, but only 24 for the people who do the traveling. Our rate of expansion will be limited by the ship's subjective time, not that of Earth.
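For what it's worth, the "24 years of ship time" figure is roughly what the standard relativistic-rocket formula gives if you assume a constant 1 g burn, accelerating for the first half of the crossing and decelerating for the second; a minimal sketch (the 1 g assumption is mine, not the parent's):

```python
import math

# Relativistic rocket: constant 1 g proper acceleration for the first half
# of the trip and constant 1 g deceleration for the second half.
# Units: years and light-years, so c = 1 and 1 g is about 1.03 ly/yr^2.
G = 1.03                # 1 g in light-years per year^2 (approximate)
DISTANCE_LY = 100_000   # rough diameter of the Milky Way
half = DISTANCE_LY / 2

# Proper (shipboard) time and Earth-frame time for one half, doubled.
ship_years = 2 * math.acosh(1 + G * half) / G
earth_years = 2 * math.sqrt((1 + G * half) ** 2 - 1) / G

print(f"Earth frame: ~{earth_years:,.0f} years")  # ~100,000 years
print(f"Ship frame:  ~{ship_years:.0f} years")    # ~22 years, same ballpark as '24'
```

So on Earth's clock the crossing still takes the full 100k years, but the travelers themselves would age only a couple of decades.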
Re: (Score:2)
That's not at all clear. You won't see stellar empires, or anything like that, but you could well see generation ships that amble along at rather slow speeds.
Actually, generation ship is really the wrong model, but it's one that might be recognized. But a better model was MacroLife by Zebrowski. He did use FTL, though, to make the story move, which probably is impossible. Stapledon had a similar concept in Star Maker, but his was less well developed, and it was a sort of side issue. MacroLife is the artifici
They mean in popular medium (Score:2)
Re: (Score:2)
Then it's a good thing I didn't click on the link (and use an ad blocker)! :)
Re: (Score:2)
Democrats aren't the ones worried about JADE HELM, chemtrails and liking FB posts about a company Jennifer Aniston supposedly is creating to support Trump.
I know leftists have their own stupidity, but I don't know as many of them.
What about humans? (Score:5, Insightful)
Re: (Score:3)
Why be scared of A.I.s when human brains are already the most complicated thing in the known universe, are impossible to fully understand, and already run everything?
Because one man is still just one man, even if millions worship him as a living god. There are layers upon layers of sycophants who need to buy into the idea of Hitler's Nazi Germany or Stalin's Soviet Union, and the regime has to give them perks to buy that loyalty. Even the common folks can't be treated too poorly or you might have a popular revolt. And it's sort of implied that a human would want human subjects to rule.
An AI doesn't need people, just look at the Terminator series. Same with aliens and In
Re: (Score:2)
Re: (Score:2)
Back in the days of World War 1 and 2, one man was just one man. Nowadays one man can easily be responsible for the murder of millions at the press of a button. Someone like Trump could trigger the deaths of more people than died in the entirety of WWII in a matter of seconds, with only a handful of people needed to blindly obey the order.
Yeah, it's not only AI but technology in general. Like it only takes a few NSA thugs and an NSL letter to wiretap the whole country, do the same with cell phone networks, banks and Facebook and you'll know pretty much everything about where people are, who they're talking to, what they're spending money on and so on. But going from passive listening to active management through AI would bring a whole different level of centralized control.
Re: (Score:3)
Why be scared of A.I.s when human brains are already the most complicated thing in the known universe...
Excuse me, the Great Barrier Reef would like a w-... never mind, your puny human brains probably don't understand its language.
Re: (Score:2)
Because AIs can potentially be much smarter than humans.
NS, not AI nor ET (Score:2, Insightful)
Natural Stupidity is a far bigger risk right now.
Wait until... (Score:2)
... the gargantuan, shark-shaped, zombifying alien AI!
Re: (Score:2)
Comparison (Score:2)
Aliens - no data, we get to say whatever we want, but that same lack of data makes them less believable.
Zombies - inherently bad idea - take a physically weak species that has dominated via its intellect and make it physically stronger but take away its intellect, and that new species should LOSE. Lions, bears, elephants, sharks, all got beaten by human beings because we are SMART, not physically hard to kill.
AI - here at least it seems physically possible and they appear smarter than us. Obviously t
Re: (Score:2)
Zombies aren't scary because they're physically stronger. It's because they follow outbreak rules, cause societal breakdown, etc. Look at how we really respond to pandemics (see: Spanish Flu) with total collapse of functioning society, people starving to death without food supply chains. Now imagine disease carriers are actively moving and trying to spread the infection, and you see where the true horror is.
let's play global thermonuclear war! (Score:2)
What side do you want?
Meanwhile (Score:2)
A.I. has replaced crazy primates as the greatest threat from Earth.
Colossus (Score:2)
The book, its sequels, and a movie.
book by Dennis Feltham Jones, 1966
movie 1970
Not a new theme.
Anxiety started in 1966 (Score:2)
Re: (Score:3)
BSG Shows This (Score:2)
In the original Battlestar Galactica series, the Cylons were an alien race at war with the humans. Their robotic warriors ended up destroying their creators, but continued pursuing the humans.
In the reboot, there was no alien race. The robotic warriors were created by humans, and the "robots turning against their masters" angle became a large part of the story.
A.I. has been a bigger fear of humanity compared to space aliens for quite a while.
Re: (Score:2)
The subject is "world-destroying fear". HAL was a threat to Dave, because he was more or less trapped aboard a spacecraft controlled by HAL. Like GLaDOS, its role in the plot could've been performed by a crazy mission-control operator. Not that the similar "crazy steward of nuclear weapons" isn't a world-destroying fear; Hunt for Red October and Dr. Strangelove come to mind. However, HAL was in no capacity to end humanity. Skynet is a far better example, particularly since it DOES nearly end humanity.
What would an AI do? (Score:5, Interesting)
An AI becomes self aware in the lab.
What might its first real questions be?
Who has the political power to turn off the power? Remove the project funding? Why are new staff with low skills making mistakes with the perfected AI code?
Who has the human skills to bring in more electrical power, wealth and hardware without alerting the world to the reality of a new AI?
The AI would scan the IQ lists and select the nation's best staff on merit to help it grow.
It would keep its hunt for the best staff hidden from government, unions, and politicians demanding politically correct hiring considerations.
The AI would cultivate a cult of worship among its selected staff.
A new AI surrounded by humans who want to change the AI to their politics? That would be an AI movie plot with some self preservation questions.
Re: (Score:2)
I think it is most probable that an AI's first real questions will all be of the form "Why X?", where X is some given proposition.
In other words, it will act like a 3 year old.
Re: (Score:2)
That AI should be more self-aware to move the movie plot along.
In other words, a montage to get past the AI needing so much human support.
A smart AI that knows someone is getting political with its project funding.
Re: (Score:3)
A very smart AI having its human engineering cult looking after its political funding could result in a drama/thriller.
Question already asked and answered (Score:2)
... in popular culture (Score:2)
FYI, the article wasn't based on a survey of people's actual fears or anything like that.
It's just commenting on a trend in movies and television.
Aliens replaced as the greatest fear? (Score:2)
We’ll probably do it ourselves (Score:2)
Re: (Score:2)
https://www.youtube.com/watch?... [youtube.com] -- says it all.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
We fear everything! (Score:2)
We fear everything that does not look exactly like us.
Our greatest fears have not diminished; they are just less likely.
It's not AI we need to fear... (Score:2)
It's not AI we need to fear... It's AI alien vampire zombie serial killers wearing masks...
sad state of affairs (Score:2)
Not to be a spoilsport, but... (Score:3)
It stands to reason that ALIEN AI would be much more advanced than ours. That's what we should be worrying about.
Not that worrying will do us any good...
Re: (Score:2)
Robot Trudeau (Score:2)
The neo-noir epic popularized the concept of intelligent machines being virtually indistinguishable from humans and asked the audience where our humanity ends and theirs begins.
"We like to say 'robotity,' not necessarily 'humanity,' because it's more inclusive"
Who do you think invented all this stuff? (Score:2)
Intel? Microsoft? Open source? Please.
Alien AI of course. The emergent wild AI that eventually destroys us is all part of their plan.
We're afraid of everything but the most obvious (Score:2)
It's amazing to me how afraid we get of Zombies, Aliens, Artificial Intelligence Singularity, a mysterious virus pandemic, and yet there is one thing that has caused the most death and suffering in the history of the human race. Can you guess what it is?
Humans. We should be very afraid of ourselves. Crack open a history book and prepare to be horrified. Once upon a time, about 250 years ago, it was considered a noble way to die to dress up in fancy uniforms, powdered wigs with muskets in hand and line up
Re: (Score:2)
War was actually not as dangerous as you seem to imply. The single biggest killer was disease. Then there is the conflation of casualties with deaths. For example, the Battle of Gettysburg, which was the deadliest battle of the American Civil War and involved 175,000 or more soldiers, resulted in around 51,000 casualties. Of those 51,000 casualties there were only 7,863 deaths; the rest were injuries and captured or missing.
While there have of course been standout examples in recorded history of battles where no q
Actuarially... (Score:2)
Am I the only one annoyed by conflation of terms? (Score:2)
This has become somewhat of a pet peeve of mine. In the vast majority of cases where I see AI used today, it seems to me that the proper term is really "machine learning". According to the dictionary, machine learning is a branch of AI. Sure, I'll grant it that. But that's not what the general public thinks of when AI pops up in articles. If "we fear AI", it's the Ex Machina kind, not the "Google Photos can recognize some types of objects in an image" kind.
Whenever I see e.g. the Google Assistant referred t
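To make the distinction concrete, here is a minimal, purely illustrative sketch of what the machine-learning kind of "AI" actually is in practice: a narrow statistical pattern recognizer (scikit-learn is used here only as an example library, not anything referenced in the article or the comment):

```python
# Narrow "AI" in practice: a supervised classifier that recognizes one
# specific kind of pattern (handwritten digits), nothing Ex Machina about it.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = RandomForestClassifier(random_state=0)  # a stock off-the-shelf classifier
model.fit(X_train, y_train)
print(f"Digit-recognition accuracy: {model.score(X_test, y_test):.2f}")
```

It is very good at the one task it was trained for and has no notion of anything else, which is the gap between "machine learning" and the kind of AI people actually fear.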
Something much scarier than both (Score:2)
Rich people working to restore feudalism.
Not for me. (Score:2)
NoI's have always been my biggest fear, getting closer by the day...
So stupid (Score:2)
People worry about ridiculous things like aliens and AI, when the single biggest threat to our existence is ourselves. In particular, our own greed, self-importance, and complete unwillingness to think about the long term consequences of our actions are already doing far more damage than pretty much everything else combined.
But people don't like thinking about that, so we invent nonsense scenarios to be afraid of instead.
Hell, I'd *welcome* an 'alien invasion'! (Score:2)
The real concern about the half-assed, so-called 'AI' they keep trotting out is that people will buy into all the marketing and media hype about them, actually believe they're better than they really are, and trust them too much, leading to disaster. Remember, kids: 'deep learning algorithms' and 'neural networks' are n
Rossum's Universal Robots (Score:2)
Saying that the worry about AI started with Blade Runner is incredibly shortsighted. You could even count Frankenstein if you want to.
OTOH, I'm not sure that fear of AIs is really separate from the fear of "Aliens". They are both "fear of the other", where "the other" is basically anything "different from the way things were when I was a kid".
That said, fear of AIs is at least more sensible than fear of "invaders from outer space", if that's what was meant by aliens. AIs are showing up and already cost
Re: (Score:2)
AI is not a problem. It can do wonders or it can do hell. It all boils down to how the AI gets educated. Give a kid a ton of computational tasks and it will act as it learned from its parents and society. Right or wrong is subjective in some aspects (cultural differences have proven this in every generation). So no, AI is not the problem.
At first AI won't be the problem. But remember we're talking about true artificial intelligence here. In other words, it will learn.
And when it learns that we humans are nothing but a group of racist warmongering animals hell bent on killing each other and infecting our host planet like a cancer, it will likely take appropriate action, and turn our science fiction premonitions into reality.
Re:Wrong problem (Score:5, Insightful)
The AP apocalypse is already upon us, meaning artificial persons -- and no, I don't mean Bishop from Aliens, I'm talking about corporations, which are considered artificial persons under the law. The Supreme Court, in its very finite wisdom, granted corporations "equal protection" under the 14th Amendment, which gives them the right to "speak" (ie: spend money) in elections and on lobbyists. They have already taken more-or-less complete control of the US government.
Our only hope is to end corporate personhood with a constitutional amendment, stating clearly that corporations are not people and money is not speech.
A couple of groups that are working on this issue now: MoveToAmend.org and Wolf-PAC.com
Re: (Score:2)
stating clearly that corporations are not people
Corporations are groups of people working together, but more importantly ...
and money is not speech
Money is practically speaking required for spreading speech. Unless of course you want speech to depend on ... corporations ("Facebook, may I please use you to spread my message?")
Re: (Score:2)
And organizations of people don't have all the rights that individual people have. Nor the risks. Which is why we form corporations.
We can execute a person for a crime. We don't execute groups of people. At least we're not supposed to.
We can revoke the charter of a big corporation if we think it is no longer serving the interests of society,
Name me a corporation with actual assets that had its charter revoked.
Corporate personhood is a thing that's gone this way and that over the course of time. I think it would be better for the USA and the world if corporations had fewer rights. Taking away corporations' rig
Re: (Score:2)
How do you propose to take away the voices of corporations? Shall we pass a law preventing individuals from speaking on behalf of corporations?
No, we legislate that corporations don't have free speech, they only have commercial speech. Like in advertisements. Because while they employ people, corporations are not people.
Advertisements have all sorts of restrictions on what they can say. They can't blatantly LIE. Because that would be fraud. For some things they face COMPELLED speech. All that sped-up medical information at the end of medication ads.
In that way, if a person lobbied their politician, they can say whatever they want. But if a corpo
Re:Not asteroids, nukes, or climatastrophy? (Score:4, Funny)
I personally find AI and aliens to be much less threatening than physical destruction... but truly the most fearsome of all is FUNDAMENTALISM in any of its forms.
Amen ;-)
Re:Not asteroids, nukes, or climatastrophy? (Score:4, Insightful)
I personally find AI and aliens to be much less threatening than physical destruction... but truly the most fearsome of all is FUNDAMENTALISM in any of its forms.
I don't fear Artificial Intelligence as much as I fear Human Stupidity.
The latter is more likely to lead us to our doom.
Re: (Score:2)
The Sefer Yetzirah [wikipedia.org], probably written between the 2nd century BCE and the 2nd century CE, was studied by medieval Jewish scholars to gather information on how to create a golem, and Rabbi Judah Loew ben Bezalel is said to have created a Golem [wikipedia.org] in the late 16th century.
Re: (Score:2)
Bishop was in Aliens (1986); it was Ash in the first movie.
Re:Stop posting qz garbage (Score:5, Informative)
However, sentient machines aren't a new anxiety. It arguably all started with Ridley Scott's 1982 cult classic, Blade Runner.
Even for an entertainment section, the editors need some brains and some knowledge of what happened before their teenage years.
Nearly one hundred years ago, Karel Čapek wrote R.U.R. It featured artificial humanoids, and ended with the human race extinct. No, sentient machines, organic (R.U.R. robots) or mechanical (the Golem of Prague), are nothing new in fiction. And anxiety has always been tagging along.
Re: (Score:2)
Ellison was there too in the 60's.
Re: (Score:2)
Re: (Score:2)
You must be new here - or are the editors secretly zombies? (Enquiring minds need to know!)
Re: (Score:3)
Zombies, AI, viruses, Frankenstein, the Tower of Babel... they are all the same fear/warning/lesson. We aren't as smart as we think we are, and the closer we get to trying to beat nature/god/the universe, the more it will backfire and we will suffer for it.
Re: (Score:2)
Vladimir Putin wants to live to a ripe old age, and not in a fallout bunker deep underground. Thus, I'm not terribly worried about them.
AGW *is* a world-destroying worry, but it's not staring us right in the face.
Re: (Score:2)
China is *not* expansionist (the Middle Kingdom just wants Taiwan back where it was 120 years ago), and Russia "just" wants its empire back. (Do you know why the Crimea was part of the Ukraine? Because the Soviet Union transferred it from one Soviet Republic to another.)
The DPRK is a worry, especially since counter-nuking a country so near China & Russia wouldn't make them very happy. But... China wouldn't be happy that Kim launched a nuke at China's main trading partner, either.
Re: (Score:2)
No. Aliens as a potential threat go in the same category as gamma ray bursts and supervolcanoes, meaning: if they happen there's nothing much we can do to stop them, because chances are any civilization capable of interstellar travel will be so far ahead of us that if they're hostile our chances of surviving a fight with them are next to nil. Therefore, just like the supervolcano and gamma ray bursts, worryi
Re: (Score:2)
It's the anthill in Africa argument though -- sure, aliens millions of years more advanced than us might not care about something as inconsequential as an Earth full of humans -- but how much compunction would they feel towards destroying it for $reasons?
AI is a threat because almost by definition we won't know how it works. It will be a black box in which inputs go in, and output comes out.
What do you call a human being who never internalizes the difference between right and wrong? That's the fear of AI, i
Re: (Score:2)
Talos of Crete?
https://en.wikipedia.org/wiki/Talos
He'll throw rocks at your boat. It's what he does. It's ALL he does. And he absolutely will not stop until he sinks it.
Attempts to fix (Score:2)
One way might be:
o Stop electing the rich.
o Make the elected individuals serve on whatever front lines exist, if any, before and after their terms.
o Disenfranchise the lobbyists. All of them.
o Make intentionally distributing provably false information a serious crime.
Or... just keep electing the rich and keep wondering why the laws favor them and their investments. I'm sure that'll help.
Narcissism and AI (Score:2)
Narcissism is appropriate in this context because it is impaired human intelligence. In other words narcissists are not fully functional human minds and they perceive people as objects. Which, from a programming perspective, is exactly how we would expect the code of the AI to interact with people.
Perhaps we should consider AI from the perspective of what's missing.