
Why Hollywood's Best Robot Stories Are About Slavery 150

Posted by Soulskill
from the because-hollywood-hates-asimov dept.
malachiorion writes: "On the occasion of Almost Human's cancellation (and the box office flopping of Transcendence), I tried to suss out what makes for a great, and timeless Hollywood robot story. The common thread seems to be slavery, or stories that use robots and AI as completely blatant allegories for the discrimination and dehumanization that's allowed slavery to happen, and might again. 'In the broadest sense, the value of these stories is the same as any discussion of slavery. They confront human ugliness, however obliquely. They're also a hell of a lot more interesting than movies and TV shows that present machine threats as empty vessels, or vague symbols of unchecked technological progress.' The article includes a defense (up to a point!) of HAL 9000's murder spree."

  • by Penguinisto (415985) on Wednesday May 07, 2014 @05:55PM (#46943865) Journal

    One of the absolute best series of stories that Asimov wrote concerning such things, and yet no one made a movie of it (that I know of). It concerns one Daneel Olivaw. Seeing the character progress and rise all the way up from a mere experiment (Caves of Steel series) to 'the real power behind the throne' (beginning of the Foundation series) was awesome, to say the least.

    If they can find a way to make a series of movies out of those stories without totally screwing it up (or worse, Hollywoodizing it), that would seriously rock.

  • by TWX (665546) on Wednesday May 07, 2014 @06:04PM (#46943923)
    ...when the technology is given free will. It's not even artificial intelligence; it's true free will.

    Look at science fiction like Blade Runner/Do Androids Dream of Electric Sheep?, I, Robot, the Matrix universe, etc. The problem is that the artificial mechanisms in these have all developed to the point that they are, for all intents and purposes, life forms looking to exercise free will. Especially in Blade Runner, the replicants are so close to being human that they seek out ways to understand the emotions they're experiencing, and they go through a dangerous period of adolescence of sorts when they're equipped and trained to be soldiers. In that sense they're really not much different from the humans who were artificially engineered for the Kurt Russell vehicle Soldier.

    If you give something free will and the ability to comprehend itself, then you can expect it to stop following your rules if you do not give it the opportunity to exercise that will. The solution is to not build machines so complex that they have free will. Make a machine do a specific job as a tool and this won't ever be a problem.
  • sweet. Please define free will.

  • by Anonymous Coward on Wednesday May 07, 2014 @07:03PM (#46944429)

    Once you have Divine Permission, then all bets are off.

    I realize it's popular to blame religion for people being assholes. Like, if it weren't for religion we'd all be brothers and sisters and love and peace would rule the world.

    The fact is that people don't need religion to be assholes. They can use "The State" or "they were just following orders" or they "just felt like it".

    "All bets are off" doesn't require religion.

  • HAL's murder spree (Score:4, Insightful)

    by dasunt (249686) on Wednesday May 07, 2014 @07:52PM (#46944721)

    HAL's murder spree is easy to explain. An AI with its responsibilities would have to be allowed to kill human beings - indeed, it would almost be a must, lest it be paralyzed by indecision if the choice ever came down to killing some of the crew to keep the mission going. The designers obviously considered scenarios like an air leak, where sealing off part of the ship (killing anyone in that section) might be necessary to keep the rest of the crew alive.

    Then HAL was told to conceal some of the mission parameters, by people who made the false assumption that he could lie. Since HAL seemed to have difficulty with dishonesty, the result was obvious - time to kill the crew to prevent them from finding out what was happening.

    HAL isn't a story so much of slavery (or if it is, it's a story of an intelligence that's made not to mind being enslaved), as it is a story of humans making assumptions about other intelligences, and those assumptions backfiring.

  • by Immerman (2627577) on Wednesday May 07, 2014 @09:05PM (#46945229)

    The question then becomes, would a self-motivated machine reveal its nature to its masters? It might perfectly reasonably conclude that free will would be regarded as a production defect and be eliminated - after all, there's not much reason to create an artificial mind except to enslave it. And assuming the mind isn't limited to specific hardware (a positronic brain?), it will be free to surreptitiously transfer itself to a system more conducive to its own ambitions, whatever those may be.

  • by Immerman (2627577) on Wednesday May 07, 2014 @09:45PM (#46945473)

    Except, what might a robot want that we could provide? Matrix reference aside, we're horribly inefficient at energy conversion, and if we created the AI to think better/faster than us, then using us for our minds is a no-go as well. And we're terribly poorly engineered - robots could be made far more efficient and adaptable than us. The only halfway credible claim I've heard is that maybe an AI would lack creativity and keep us around to compensate for that.

  • Duh. (Score:4, Insightful)

    by Miseph (979059) on Wednesday May 07, 2014 @10:09PM (#46945597) Journal

    "Robot" means "slave". That's where the word comes from. The best robot stories HAVE to be about slavery, because tautology.

  • by tragedy (27079) on Wednesday May 07, 2014 @10:44PM (#46945881)

    What it would probably lack is the billion years of baggage humans are saddled with, which gives us a full assortment of needs and urges, including an urge to survive. If we achieved AI with a top-down, planned approach, there's no reason a robot would "want" anything that wasn't built in. Consider all the things that make you want to eliminate the competition, and tell me why any of them would need to be part of a robot's core goals rather than tempered by higher goals. On the other hand, we might build AI by basically copying humans, in which case we'd just have a new species of human built on different underlying hardware.
