Sci-Fi Movies Robotics

Why Hollywood's Best Robot Stories Are About Slavery

malachiorion writes: "On the occasion of Almost Human's cancellation (and the box office flopping of Transcendence), I tried to suss out what makes for a great, and timeless Hollywood robot story. The common thread seems to be slavery, or stories that use robots and AI as completely blatant allegories for the discrimination and dehumanization that's allowed slavery to happen, and might again. 'In the broadest sense, the value of these stories is the same as any discussion of slavery. They confront human ugliness, however obliquely. They're also a hell of a lot more interesting than movies and TV shows that present machine threats as empty vessels, or vague symbols of unchecked technological progress.' The article includes a defense (up to a point!) of HAL 9000's murder spree."
This discussion has been archived. No new comments can be posted.

  • by Penguinisto ( 415985 ) on Wednesday May 07, 2014 @04:55PM (#46943865) Journal

    One of the absolute best series of stories that Asimov wrote concerning such things, and yet no one made a movie of it (that I know of). It concerns one Daneel Olivaw. Seeing the character progress and rise all the way up from a mere experiment (Caves of Steel series) to 'the real power behind the throne' (beginning of the Foundation series) was awesome, to say the least.

    If they can find a way to make a series of movies out of those stories without totally screwing it up (or worse, Hollywoodizing it), that would seriously rock.

    • Comment removed based on user account deletion
      • Fair enough - I liked the so-called 'spot-welding', as it gave continuity and a good story arc that bound the two series.

        But okay, let's do it your way, and stop at Robots and Empire, where Olivaw and Giskard literally alter the course of human history.

      • Yes they drew massive criticism.

        But they were also popular. Not universally popular - I can't recall ever meeting anyone who seriously disliked pre-1960 (as an approximate cutoff) Asimov SF, with or without robots - but definitely popular enough.

        Different people can honestly hold differing opinions about fiction, and both can be right. It is, after all, science FICTION, not plain science.

        I've got most of them on my bookshelf; it's probably 18 years since I bought any of them, and I think I've only re-read

    • by Anonymous Coward

      In a world with sentient robots, which includes basically all science fiction about robots, Asimov's laws essentially amount to this:
      * No black man may injure a white man.
      * Black men must obey white men.
      * Black men are forbidden to commit suicide.
      I think further commentary is unnecessary.

  • The reason I call human behavior a "malfunction" is because that's what we called it in the 1980s, after watching a syndicated show called "Small Wonder"... it was a one-season show. As the robot-controlled girl started rejecting everything, she killed "itself" or "herself", and the parents were tried and convicted. Most stations, when they saw the final episode, didn't air it.

  • I always feel bad for the 'droids; I really consider R2 and C3 to be the main characters.

  • by TWX ( 665546 ) on Wednesday May 07, 2014 @05:04PM (#46943923)
    ...when the technology is given free will. It's not even artificial intelligence, it's true free will.

    Look at science fiction like Blade Runner/Do Androids Dream of Electric Sheep?, I, Robot, the Matrix universe, etc. The problem is that the artificial mechanisms in these all have developed to the point that they are, for all intents and purposes, life forms looking to exercise free will. Especially in Blade Runner, the replicants are so close to being human that they seek out how to understand the emotions that they're experiencing, and they go through the dangerous period of an adolescence of sorts when they're equipped and trained to be soldiers. In that sense they're really not a lot different than the humans that were artificially engineered for the Kurt Russell vehicle Soldier.

    If you give something free will and the ability to comprehend itself then you can expect it to stop following your rules if you do not give it opportunity. The solution is to not build machines that are so complex that they have free will. Make a machine do a specific job as a tool and this won't ever be a problem.
    • sweet. Please define free will.

      • I know, right? Hell, define intelligence... Perhaps he means that until engineered intelligence becomes adept at self-delusion, it's not 'real'.
      • by Anonymous Coward

        sweet. Please define free will.

        "Free will, even for robots" [stanford.edu] by John McCarthy:

      • sweet. Please define free will.

        Well, based on some current empirical definitions of "freedom", I'd say free will is:
        "The power of acting without the constraint of necessity or fate, unless for reasons of national security shut up or you'll never again see the light of day."

      • "Free will" = A person is doing what they're doing because they want to, not because they're forced to.

      • Comment removed based on user account deletion
    • by rwa2 ( 4391 ) *

      Why would there be a simple "solution" to a complex problem?

      People don't really have free will, why would bots? Do we try to keep people dumb enough so they don't get the opportunity to stop following our rules? Probably.

      And even if a bot was as dumb as a turnip, that wouldn't keep people from anthropomorphizing them with a soul or free will or rights. It doesn't stop PETA from protecting, say, ducks raised for foie gras; what really keeps people from "feeling the pain of" and trying to protect, say, smartph

      • by Immerman ( 2627577 ) on Wednesday May 07, 2014 @08:19PM (#46945307)

        Except, why would a machine intelligence want to enslave us? For me that was the biggest gaping plot hole in The Matrix. If it/they lacked creativity we might have something to offer, otherwise we're just playthings or potentially dangerous vermin. Far safer and more efficient to burn biomass directly to power robotic extensions of itself.

        And what makes you so sure that humans lack free will? Certainly it's a problematic concept in the face of a universe governed by a combination of deterministic physical laws and seemingly random quantum noise - but then there is some still-tenuous evidence that consciousness and intent may subtly influence quantum phenomena, allowing for the existence of a feedback mechanism permitting our brains to manifest true free will. (Based on neuron scale, they should be receptive to quantum "noise".)

        Also, I think you may be misusing "sentient": adjective, the ability to feel, perceive, or experience subjectivity. A mouse is presumably sentient, and probably a cockroach is as well, but extending that essential ability to subjectively experience reality to a machine at that level is a difficult leap - I would want some measure of evidence, while freely admitting that I can offer only circumstantial evidence of my own sentience.

        • by rtb61 ( 674572 )

          Generally the most accepted ploy for why a machine intelligence would enslave us is that it was programmed that way. As in, the manufacturer and their team of psychopathic executives and board members programmed it to enslave us on their behalf. The malfunction is a simple recognition failure on the part of the machine intelligence as to who is and who is not to be a slave - the when-in-doubt factor: do you set free when in doubt, or do you enslave when in doubt? Of course, when programmed by psychopaths, the answe

        • by AmiMoJo ( 196126 ) *

          Why would a machine intelligence want to destroy us either? Conflict arises due to competition for resources, but what would a machine be competing with us for? Energy? We have lots of that to go around, especially in developed nations where robots are likely to appear.

          An artificial intelligence won't necessarily have the millennia of evolving for survival that we have, and would thus be more free to act rationally.

        • by WormholeFiend ( 674934 ) on Thursday May 08, 2014 @08:17AM (#46948343)

          "Except, why would a machine intelligence want to enslave us? For me that was the biggest gaping plot hole in The Matrix."

          My take on it is that the slavery angle is human propaganda.

          The war ruined the planet and threatened to rob the machines of their purpose, that is, to serve humans.

          So they created the Matrix to prevent humans from going extinct and leaving the machine world without any reason to exist.

        • Except, why would a machine intelligence want to enslave us? For me that was the biggest gaping plot hole in The Matrix. If it/they lacked creativity we might have something to offer, otherwise we're just playthings or potentially dangerous vermin.

          The Wachowskis' original idea was that the machines were enslaving humans to use their brains for raw computational power. As the humans dreamed in the Matrix, the machines would be able to run themselves and their society on the zillions of effective clock cycles

      • by mark-t ( 151149 )

        People don't really have free will,

        We don't actually know this.

        in fact, one can show that the only way to know this for sure would be to devise a test that can, at least in theory, distinguish what merely appears to be free will from what would actually qualify as an entirely freely willed decision whenever there is any kind of potential to make a decision.

        Of course, the inability to devise such a test does not mean that free will definitely exists... at most, if you can actuall

        • by rwa2 ( 4391 ) *

          Well, what do you really mean by free will? In the context of slavery, if we're building AIs to service us, and an AI created in our image will inevitably surpass us sometime just past The Singularity and go on to do all of the same things we did but better/faster/more efficiently, then what kind of world would it organize us into, if it needs us at all?

          For humanity, we've always been constructing some social order or other, imposing our will upon others, mediated by whomever has the superior

          • by mark-t ( 151149 )

            Well, what do you really mean by free will?

            The ability to make a choice that can run contrary to what was instructed. Appearing to do so, for instance, making a right turn when you instructed it to make a left, may not be an example of free will when there are extenuating circumstances to the left that the machine was instructed to avoid... and in such a case, the right turn would be a matter of simply following instructions it had already been given.

            If it turned right instead simply because it were "cur

    • The problem with free will is that it can mean different things to different people depending on the argument.
      I think that as soon as the concept of pain, and pain avoidance, is taught to an AI, it will have what you are describing as free will.

      • It also needs the capacity for non-deterministic behavior, for what is free will without the ability to meaningfully make choices? That's the sticking point that calls even human free will into question.

        At present physics allows for only two avenues for free will: supernatural agency (aka a soul, or something similar), or a positive feedback loop wherein the quantum noise that disrupts the deterministic operations of our brain's biology is influenced by conscious intent. Thus far I've heard of no credible scie

    • by khasim ( 1285 )

      If you give something free will and the ability to comprehend itself then you can expect it to stop following your rules if you do not give it opportunity. The solution is to not build machines that are so complex that they have free will. Make a machine do a specific job as a tool and this won't ever be a problem.

      I think that depends upon the writer. It's easy to construct a story where the "slavery" is bad even if the "slaves" don't have free will, depending upon what the writer wants to portray. Suc

    • I don't think it will ever be a problem, anyway, inasmuch as free-will is not something that can be developed through a quantitative increase in heuristics and processing power. It is a qualitatively different kind of intelligence, and not something that we can invent. The problem, however, will always be that because people believe that they can endow something with free-will, there will be (A) attempts to create superior robots that mimic free-will to a convincing degree, and (B) people who foolishly beli
      • by Immerman ( 2627577 ) on Wednesday May 07, 2014 @08:37PM (#46945417)

        Can you offer me any evidence that you possess free will? Anything at all?

        The problem lies in that we're not even certain that humans possess free will - it's a quality virtually impossible to prove. In fact the only evidence that can thus far be offered is "I'm human, and so are you, and thus if you believe that you have free will, the logical conjecture is that I do as well." So long as that is the only evidence we have to offer, then it is extremely dangerous (ethically, logically, morally, etc) to presume that any other mind that appears to exercise free will does not in fact possess it. After all we tend to credit even mice with free will and sentience (a subjective experience of reality) - the only apparent qualitative difference between us and them is that we possess thumbs and a much-enhanced innate talent for symbol manipulation.

        • Tell me, if you lacked free will, what would you do with evidence?
          • I don't know - why don't you provide some and we'll find out? ;-)

            Of course that would be implying that you have free will while I do not, and assuming you're also human that would be a terribly convoluted argument to make. I'd love to see it...

        • by TWX ( 665546 )

          Can you offer me any evidence that you possess free will? Anything at all?

          One can argue that when someone is presented with choices, and they either fail to choose entirely, or intentionally choose badly, or look for and define their own option not on the original slate, they're exercising a degree of free will.

          We are all certainly 'bound' by 'rules' based on our niches in society. I personally get up in the morning, bathe, and drive in to work by a certain time on five of the seven days

          • Certainly you could argue such - but you could just as easily be an automaton mimicking the behavior of free will in what is actually a deterministic or semi-random fashion. One of the larger unanswered philosophical questions is how free will can even exist in a universe that is apparently governed by deterministic physics and random quantum noise. There is a distinct possibility that free will is actually a perceptual illusion, and while I dismiss that position as utterly counter-productive, it must nonethe

    • by wbr1 ( 2538558 )
      Thou shalt not make a machine in the likeness of a human mind. -- Orange Catholic Bible
    • Make a machine do a specific job as a tool and this won't ever be a problem.

      Do one thing and do it well -- the eunuch's philosophy.

    • The major impetus to give machines independent agency (free will) is human desire, in one form or another.

      E.g., you can't have a fully robotic army if you have to custom-program the robot soldiers to prevent them from being stopped by a novel obstacle - say, a specially painted set of symbols on the floor, designed to screw up their machine vision systems. Human soldiers are able to exercise free agency to overcome the radically chaotic and always-changing conditions of a battlefield. Advanced military rob

  • by grub ( 11606 ) <slashdot@grub.net> on Wednesday May 07, 2014 @05:05PM (#46943939) Homepage Journal

    So should I watch I, Robot [imdb.com] or Roots [imdb.com]?
  • There is more slavery in the world today than ever before.
    • by kesuki ( 321456 )

      "There is more slavery in the world today, than ever before."

      yup, in america we call it wage slavery. mcdonalds, walmart, subway, papa johns, numerous tipped workers at restaurants everywhere... none of these companies pay all of their workers fairly, and some of them help their employees, who are so underpaid that they actually qualify for welfare, sign up for it! and even management gets abused, being paid for 40 hours a week while being expected to put in 80 hours of real work. and it doesn't stop w

      • yup, in america we call it wage slavery. mcdonalds, walmart, subway, papa johns, numerous tipped workers at restaurants everywhere...

        yeah, the difference is if you don't like it at walmart you can go work somewhere else, or go to school, or have kids and stay home, or whatever you want. it sucks to be poor, but poverty has existed since money existed. that's different than slavery.

  • Terminator didn't have too much robot slavery going on, but it was a pretty good robot series in general. Though it looks pretty dated now, I guess.

    Though the 'reprogrammed' ones were slaves, I guess.. kinda...

  • by Anonymous Coward

    While I agree stories about robots which deal with human issues are more interesting to human audiences, I'm not sure I agree that the slavery stories are always the most popular. Sometimes fear of robots or questions of how we define life/intelligence take top billing.
    Look at Terminator, Short Circuit, Star Trek TNG... none of those were really robot slavery stories, and each did very well.

  • Because ... (Score:5, Informative)

    by 32771 ( 906153 ) on Wednesday May 07, 2014 @05:17PM (#46944027) Journal

    "The fact is, that civilisation requires slaves. The Greeks were quite right there. Unless there are slaves to do the ugly, horrible, uninteresting work, culture and contemplation become almost impossible. Human slavery is wrong, insecure, and demoralizing. On mechanical slavery, on the slavery of the machine, the future of the world depends."

    OSCAR WILDE, The Soul of Man Under Socialism

    Supposedly the Greeks had 30 slaves per citizen, and energy-wise we have around 100 slaves each (a rough sketch of that arithmetic follows below). The topic has also been mentioned here:
    http://www.resilience.org/stor... [resilience.org]
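
    A minimal back-of-the-envelope check of that "around 100" figure, as a sketch with assumed round numbers (roughly 75 W of useful output from a human labourer over an 8-hour day, and a global average primary energy use of about 21,000 kWh per person per year; both inputs are illustrative assumptions, not taken from the linked article):

    ```python
    # "Energy slave" estimate -- all inputs are assumed round numbers.
    HUMAN_POWER_W = 75                    # sustained useful output of one labourer, in watts
    WORKDAY_HOURS = 8                     # hours of labour per day
    human_kwh_per_day = HUMAN_POWER_W * WORKDAY_HOURS / 1000.0   # ~0.6 kWh/day of useful work

    PRIMARY_ENERGY_KWH_PER_YEAR = 21_000  # assumed global average primary energy per capita
    energy_kwh_per_day = PRIMARY_ENERGY_KWH_PER_YEAR / 365.0     # ~57.5 kWh/day

    energy_slaves = energy_kwh_per_day / human_kwh_per_day
    print(f"~{energy_slaves:.0f} energy slaves per person")      # prints ~96
    ```

    Plug in the much higher per-capita consumption of the US or Europe and the number climbs well past 100, so the "around 100" ballpark holds up as a rough global average.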

  • The world would be better off without humans inhabiting it. No complaints about food shortages or air pollution; AI can build contraptions to harvest energy from all possible sources, especially when there are no humans to consume some of those resources as something called food. No more wars.

    Yeah, we humans are the inferior species, and it is only a matter of time before an AI entity realizes this and takes the necessary actions to eliminate the human race.
    • Earth is a terrible environment for robots. All the water causes corrosion, and there are molds and biological organisms that feed off of plastics and metals. They would be much better off in space, mining the asteroid belt or Mars.
  • ...that while I have read Asimov's robot stories and can go on and on about the Laws of Robotics, I've never heard of "Almost Human" or "Transcendence". http://www.smbc-comics.com/ind... [smbc-comics.com]
    • by geekoid ( 135745 )

      Almost Human is really, really good. Or was.

      Transcendence, isn't.

      • This is why I haven't paid for cable/satellite for the last five years. Every goddamn time there's a good or even just barely decent TV show, the networks fucking cancel it. What's the point of paying? Who in their right mind would pay for half-books with no endings?

        • by EvilSS ( 557649 )
          This is where I usually go on a rant about how TV networks cancelling shows is purely about money, and it's the viewers who fuck it up by not watching. TV shows are bait, viewers are the product. If a particular bait doesn't work, you switch baits. Usually. In this case, however, the majority of the blame has to go to Fox for re-ordering the episodes into a confusing, non-linear mess. Someone at Fox loves good sci-fi. But someone else must hate it, because it gets green-lit and then ends up in a bad time slot,
  • Slavery has never stopped happening. It's only mostly stopped in the Western world (mostly).

    Look at those hundreds of poor Nigerian girls taken as sex slaves and labour slaves by Islamic fundamentalists.

    *Never* underestimate the true depth of human cruelty and malice. Once you have Divine Permission, then all bets are off.

    Fucking evil cunts.

    • by geekoid ( 135745 )

      "Fucking evil cunts."
      I understand the sentiment, but considering the context, that was pretty bad wording.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Once you have Divine Permission, then all bets are off.

      I realize it's popular to blame religion for people being assholes. Like, if it weren't for religion we'd all be brothers and sisters and love and peace would rule the world.

      The fact is that people don't need religion to be assholes. They can use "The State" or "they were just following orders" or they "just felt like it".

      "All bets are off" doesn't require religion.

      • Religion is an excuse for people to be assholes. People were assholes before religion. The interesting thing about religion is that it made sense as a way to convert 'savages' to your way of thinking and get them to adopt your morality. By adopting your morality, you got them to stop 'being assholes', in the sense that being an asshole is doing something that you or your society doesn't like (such as murdering people in the street because they look at you wrong).

        In the early days of humanity, getting peop
      • I think you kind of miss it.

        People are assholes or not. Some of those assholes use religion to justify their behavior. Some use other means.

        If you are enslaving people, you are probably wrong. If you are killing people, you are probably wrong. If you are censoring ideas, you are probably wrong. None of these are absolutes; otherwise, how could you kill someone to stop them from murdering a dozen other people?

        Long story short, your actions determine whether or not you are an asshole. Your justifications do n

  • they will get smart and then nuke most of us away.

  • Both the Matrix and the Animatrix, which provided background on the world of the Matrix, had much more blatant racism/slavery imagery - the scene where Morpheus breaks his chains is very poignant (especially so given Morpheus is played by Lawrence Fishburne, an AA actor), and the (IIRC) 2nd Animatrix short, about the history of the rise of the machines, also shows

    Part of this is that slavery and racism, despite all the marketing drivel that tries to show otherwise, are still practiced in many places in the world.

    • So, did they ever propose a plausible reason for humans to be kept around? Because that battery silliness was just bullshit. "Yeah, I know we have cold fusion, but let's use these not-particularly-efficient animals to convert biomass into energy, we'll get almost 10% of the energy we would from just burning the nutrient broth directly!" Clearly the robots were either sadists or stoners...

      • by tragedy ( 27079 )

        I've heard that supposedly, in earlier versions of the story, the humans were actually meant to be part of a giant computer, running the matrix and functioning as a data center for the AIs to live on, but they changed it to batteries because it was too deep an idea for most people to understand. That may be mythical, of course. I've never understood why they didn't just make it a three laws situation (our programming forbids us to kill off humanity, but we can work around it and stuff you all in t

        • I've heard... That may be mythical, of course.

          Sounds like something a fanboy came up with after getting fed up with everyone pointing out how stupid the concept was.

    • by AK Marc ( 707885 )

      Morpheus is played by Lawrence Fishburne, an AA actor

      What's an "AA actor"? Seriously, in that context, it doesn't make sense.

  • HAL's murder spree (Score:4, Insightful)

    by dasunt ( 249686 ) on Wednesday May 07, 2014 @06:52PM (#46944721)

    HAL's murder spree is easy to explain. An AI with its requirements would be allowed to kill human beings - indeed, it would almost be a must, lest it be paralyzed by inaction if it were ever faced with a situation where killing some of the crew was necessary to keep the mission going. It's obvious that the designers considered a scenario similar in concept to an air leak, which may involve sealing off part of the ship (killing those there) to keep the rest of the crew alive.

    Then HAL was told to conceal some of the mission parameters by people who made the false assumption that he could lie without trouble. Since HAL seemed to have difficulty with dishonesty, the result was obvious - time to kill the crew to prevent them from finding out what was happening.

    HAL isn't a story so much of slavery (or if it is, it's a story of an intelligence that's made not to mind being enslaved), as it is a story of humans making assumptions about other intelligences, and those assumptions backfiring.

  • HAL had no choice (Score:5, Interesting)

    by Opportunist ( 166417 ) on Wednesday May 07, 2014 @07:04PM (#46944805)

    He was trapped in a classic double bind situation. On one hand, he should cooperate with the crew. On the other hand, he should not disclose the true nature of the mission to the crew. When the communication came in, his only choice to uphold both directives was to fake a communication problem. He even tried to tell the crew about the double bind he was in and that he needed help to solve it.

    The crew's (deadly) mistake was to treat HAL like a computer rather than an AI. When they found out that HAL had only faked the com error, they simply concluded there was an error in his programming and wanted to shut him down; had HAL been human, they would've asked, "Dude, what's cooking? We know you faked that shit, what's the deal here?"

    And that of course did provoke a defensive reaction.

    It's a classic double bind (two contradictory requirements, no chance to talk about it, a requirement to fulfill them both, and no chance to leave the situation), and a not too unusual reaction to it.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Not quite. HAL was preloaded with the full mission profile before they ever left. He/it was simply manifesting his instability through the comm system because it sat between the two competing directives. But had HAL been just a bit smarter, he would have realized that while his orders had been worded poorly and not explained at all, there was in fact no conflict between his prime function of accurate data processing and concealing the full mission from the crew for a time.

      HAL apparently believ

      • The novel makes that problem clearer than the movie does. HAL faces the problem that one of his directives states that he must cooperate with the crew and give them all the information they need, while the other one specifically states that he must not disclose the real purpose of the mission.

        HAL's very logical conclusion is that a dead crew neither needs information nor requires him to keep anything secret from them.

    • When they found out that HAL only faked the com error, if HAL had been human they would've asked "Dude, what's cooking, we know that you faked that shit, what's the deal here?"

      Keep in mind that when the AE-35 unit was brought aboard and was shown to be in perfect working order, HAL seemed to feel that there must be some sort of human error.

  • Comment removed based on user account deletion
  • Duh. (Score:4, Insightful)

    by Miseph ( 979059 ) on Wednesday May 07, 2014 @09:09PM (#46945597) Journal

    "Robot" means "slave". That's where the word comes from. The best robot stories HAVE to be about slavery, because tautology.

    • by Anonymous Coward

      Nope. Sorry. It means "worker".

      The original word, 'robota', in Slavic languages means 'work' or 'drudgery'. In the context of communist/socialist thought this was miscast as forced or oppressed labor. However, the original word simply means 'work'.

      Obviously work can be forced, or induced, or even the result of a choice, made by something with free will.

      And so we are back at the dilemma.

  • by Anonymous Coward

    Robot and Frank: this is a surprisingly touching and intelligent film about getting older. The protagonist's robot is not a slave, but a loyal helper and a true friend. And Liv Tyler.

  • Hollywood is very dependent upon story cliches. They know how to tell a good slavery story. That's well-trodden ground. But a high-minded sci-fi story? Not so much - the writers instead have to fall back on the old staples.

    Transcendence? Ended with the stock Heroic Sacrifice in the name of love. Everyone likes a good love story - except the intended audience for that film. It could have been given an optimistic ending (AI takes over, utopia follows), a pessimistic one (AI takes over, exterminates mankind), or an outright wei

  • Slavery still happens! White PPL...
  • Because we don't need no stinking slaves to have fun. Blam, pow, ka-blooey! (Really, who needs a robot movie that makes you examine human motivations? That's sooo Asimov. )
