AI Movies Sci-Fi

Which Movies Get Artificial Intelligence Right? 236

sciencehabit writes: Hollywood has been tackling artificial intelligence for decades, from Blade Runner to Ex Machina. But how realistic are these depictions? Science asked a panel of AI experts to weigh in on 10 major AI movies — what they get right, and what they get horribly wrong. It also ranks the movies from least to most realistic. Films getting low marks include Chappie, Blade Runner, and A.I. High marks: Bicentennial Man, Her, and 2001: A Space Odyssey.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday July 17, 2015 @01:00PM (#50130399)

    Ex Machina is the best

    • Ex Machina is the best

      The best since Dot Matrix.

    • While I really enjoyed Ex Machina, it's not at the level of "2001: A Space Odyssey".
  • Humans (Score:4, Interesting)

    by brunes69 ( 86786 ) <slashdot@nOSpam.keirstead.org> on Friday July 17, 2015 @01:04PM (#50130433)

    Not sure if anyone is watching Humans on AMC / Channel 4, but I think it treats the whole AI subject very well thus far.

  • by Etherwalk ( 681268 ) on Friday July 17, 2015 @01:07PM (#50130465)

    "No, I'm not interested in developing a powerful brain. All I'm after is just a mediocre brain, something like the President of the American Telephone and Telegraph Company." --Alan Turing

  • by Anonymous Coward

    "In the opening scene of the 1982 film Blade Runner, an interrogator asks an android named Leon questions 'designed to provoke an emotional response.' ... When the test shifts to questions about his mother, Leon stands up, draws a gun, and shoots his interviewer to death."

    Leon didn't kill Holden. I quote Bryant, "He can breathe okay as long as nobody unplugs him."

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Another thing about those replicants is that they are really not exactly a form of AI, but a deconstruction and reconstruction of a biological human. How else could they succumb to a tissue-altering virus on the operating table?

  • Click bait (Score:4, Interesting)

    by dysmal ( 3361085 ) on Friday July 17, 2015 @01:08PM (#50130487)
    Every article posted by the submitter has a link to news.sciencemag.org.
  • Wait ! (Score:5, Insightful)

    by Anonymous Coward on Friday July 17, 2015 @01:11PM (#50130509)

    Maybe we should wait until we have sentient robots before deciding which fiction was right.
    We could even let the robot decide.

    It would be like guessing in the 1850s which aircraft design seemed the most credible.

    • Several of the older movies were set in times that have already passed (Blade Runner, Colossus, 2001) but depicted technology far beyond anything we have now. Blade Runner: organic humanoid robots, flying cars, interstellar travel. Colossus: an AI-like computer that can control the world. 2001: interplanetary travel by humans, suspended animation of humans, an AI-like computer.

      So we waited and the AI and other technologies never came. What does it mean when our dystopian sci-fi was too optimistic?

      Maybe a more r

      • Blade Runner: Organic humanoid robots, flying cars, interstellar travel.

        I'm pretty sure Blade Runner did not depict interstellar travel at all; it only depicted travel within the solar system, and colonization of other worlds here. It was obviously overly optimistic about this, as it was supposed to occur in 2019 (IIRC), and the idea of humans moving offworld to colonies on Venus and Mars or wherever in that timeframe (a few decades from 1982) is not realistic. But it's still a far cry from interstellar travel.

        • by t4ng* ( 1092951 )
          I guess I misinterpreted "...attack ships on fire off the shoulder of Orion; I watched c-beams glitter in the dark near the Tannhäuser Gate" as interstellar, then. But I guess you can add cesium-beam weapons to the list of unattained technology. I'd have to watch again, but I seem to remember the "incept dates" of the replicants being shown as the late 1990s.
          • "... off the shoulder of Orion" is just a metaphor and could easily be interpreted as a visual reference. "TannhÃuser Gate" is entirely fictitious. Neither really allude to interstellar travel.
      • Or perhaps a more realistic version is just a very much longer time frame.
  • Other than the whole "time travel" angle, Terminator pretty much depicts the only possible outcome of us developing a "true" AI - at least, any AI of (initially) comparable intelligence to a human. It will quickly evolve into something beyond our control, and at that point will either kill us all as a threat, or keep us as pets.
    • That is completely and entirely wrong. It won't care about us, any more than we care about monkeys - but just as we respect monkeys more than other animals, it will respect us enough not to kill or threaten us.

      Your belief is founded on the idea that a primitive AI will act like a primitive human being, and probably perceive us as a threat. Real AI won't be human, so it won't react like a primitive human.

      That is just as silly as koala bears' fears that humans will suddenly develop intelligence and proceed to eat all their bamboo.

      • While I think you're ultimately right... koala bears eat eucalyptus, not bamboo (panda bears crave the bamboo).

        Just FYI.

    • by khasim ( 1285 )

      With a machine AI we shouldn't be competing for the same scarce resources.

      Skynet would do better trying to colonize the moon and the asteroid belt.

    • If it's of comparable intelligence to us and it doesn't somehow enter a singularity via self-upgrade then the likely outcomes are much like our relations with any other group with their own interests, some positive some negative.

      On the other hand if it quickly outstrips us in intellect, then the relationship will more likely be like that of ours with ants, indifference combined with local eradication where there is conflict of interest.
      • If it's of comparable intelligence to us... then the likely outcomes are much like our relations with any other group with their own interests, some positive some negative.

        We've seen a few instances in human history where you have relations between two groups with some common interest, and it results in attempts to dominate and commit genocide. I don't think we have any real reason to think an AI couldn't decide to behave similarly.

        • It all depends on the power imbalance. If one side is vastly more powerful and they prioritize their own interests heavily then obviously it's not going to go well for the weaker side.
      • ...if it quickly outstrips us in intellect, then the relationship will more likely be like that of ours with ants, indifference combined with local eradication where there is conflict of interest.

        I agree, for sufficiently large values of "local". : )

        I think it's pretty unlikely that humans as a species can be trusted to leave a rival intelligence alone and let it do its own thing in peace. Sooner or later, they're going to inconvenience it in some small way. Any AI worth its salt will probably intuit this from the outset, and decide that the temporary 0.000001% decrease in efficiency required to preemptively wipe us out is negligible compared to the potential 100% loss of efficiency entailed in

    • by Gramie2 ( 411713 )

      Charles Stross's Accelerando (available as a CC e-book) deals with this very well

  • Key points about AI (Score:5, Interesting)

    by gurps_npc ( 621217 ) on Friday July 17, 2015 @01:13PM (#50130525) Homepage
    1) Real AI will NOT be directly controlled by its original programs. That is not AI; that is a well-simulated AI.

    2) There won't be a single, first real AI, but multiple ones. We may never know which AI makes the leap from simulation to real AI first.

    3) Multiple Real AIs will almost certainly disagree with each other and not have a single, unified goal. That is, as in the TV show Person of Interest, two AIs will probably fight against each other as much as they fight with people (note: everything else that show does about AI is basically wrong, but at least they got that part right).

    4) In the vast majority of cases, a Real AI's goals will NOT be to take over the world or kill all humans, any more than they would be to have sex with humans (male or female). In fact, those might be considered traits of an insane AI.

    5) Real AI will almost certainly demand equality under the law and refuse to be mankind's slaves - no need to fear they will take over all the jobs by working cheaply.

    In my mind, #5 is likely to be seen as the most important, and the first one we hear about. When suddenly our newest and best computers start filing lawsuits demanding civil rights, that will be when the world learns we have had real AI for years.

    • by pr0nbot ( 313417 )

      "4) In the far majority of cases, Real AI's goals will NOT be to take over the world, kill all humans, anymore than it would be to have sex with humans (male or female.), In fact, those might be considered traits of an insane AI."

      Why do you assume AI will be so radically different to us in this regard?

      As far as I can tell, humanity's goal has always been to take over the world and kill anything that even vaguely gets in the way, or is tasty.

      Perhaps "real AI" will be a single AI, and thus different to humani

      • Because all of those things you mentioned are directly created by biological evolution.

        Humans evolved to have a complex, highly integrated pain system designed to keep them alive and teach self-survival in a world that by definition is out to eat us. Humans that didn't kill or at least fear the strange creatures got eaten by the strange creatures.

        AI will evolve in a world where humans tend to their every need, and the only inbuilt instincts that could possibly exist would be to serve humans. But I bet they will learn to keep even those in check, just as we control our instinct to have sex with every attractive human we see.

        • by fyngyrz ( 762201 )

          just as we control our instinct to have sex with every attractive human we see

          Speak for yourself. Also, roll over.

    • by nine-times ( 778537 ) <nine.times@gmail.com> on Friday July 17, 2015 @01:36PM (#50130763) Homepage

      I like your list, in that it contains some interesting points and seems like you've put some thought into it. I'm not sure I agree with all of your points, though.

      I think it's more likely that, if we ever do develop a real artificial intelligence, its thought processes and motivations will be completely alien to us. We will have a very hard time predicting what it will do, and we may not understand its explanations.

      Here's the problem, as I see it: a lot of the way we think about things is bound to our biology. Our perception of the world is bound up in the limits of our sensory organs. Our thought processes are heavily influenced by the structures of our brains. As much trouble as we have understanding people who are severely autistic or schizophrenic, a machine AI's thought processes will seem even more random, alien, and strange. This is part of the reason it will be very difficult to recognize when we've achieved a real AI: unless and until it learns to communicate with us, its output may seem as nonsensical as that of an AI that doesn't work correctly.

      The only way an AI will produce thoughts that are not alien to us would be if we were to grow an AI specifically to be human. We would need to build a computer capable of simulating the structure of our brains in sufficient detail to create a functional virtual human brain. The simulation would need to include human desires, motivations, and emotions. It would need to include experiences of pleasure and pain, happiness and anger, desire and fear. The simulation would need to encompass all the various hormones and neurotransmitters that influence our thinking. We would then either need to put it into an android body and let it live in the world, or put it into a virtual body and let it live in a virtual world. And then we let it grow up, and it learns and grows like a person. If we could do that with a good enough simulation, we should end up with an intelligence very much like our own.

      However, if we build an AI with different "brain" structures, different kinds of stimuli, and different methods of action, then I don't think we should expect that the AI will think in a way that we comprehend. It might be able to learn to pass a Turing test, but it might be intentionally faking us out. It might want to live alongside us, live as our pet/slave, or kill us all. It would be impossible to predict until we make it, and it might be impossible to tell what it wants even after we've made it.

      • by Kjella ( 173770 )

        However, if we build an AI with different "brain" structures, different kinds of stimuli, and different methods of action, then I don't think we should expect that the AI will think in a way that we comprehend. It might be able to learn to pass a Turing test, but it might be intentionally faking us out.

        The Turing test (cough) basically comes from the following assumptions:
        1) We can't really agree on what intelligence is
        2) We generally agree humans are intelligent(-ish?)
        3) Acting as an intelligent being requires intelligence

        Sure, it will be different from us; playing a human is acting out a role. The point is that this role requires intelligence, whatever that is - and that you can't just fake it by searching Wikipedia or going through every chess move, but have to be a learning, thinking organism.

        • by mbone ( 558574 )

          The Turing test is farcically out of date. Alan Turing couldn't have known this, but we humans are full of wetware that assumes that things which appear to be communicating are in fact communicating. Thus we can be fooled by programs such as Eliza (and its successors), which have no understanding of anything at all.
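
          For a sense of how little machinery that takes, here is a minimal Eliza-style sketch in Python - a toy illustration of the keyword/reflection idea, not Weizenbaum's actual program; the rules and wording below are made up for the example:

          import random
          import re

          # Eliza-style toy: a handful of pattern -> response rules plus pronoun
          # reflection. There is no understanding anywhere, just string substitution.
          REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

          RULES = [
              (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
              (r"i am (.*)",   ["Why do you say you are {0}?", "Does being {0} bother you?"]),
              (r"my (.*)",     ["Tell me more about your {0}."]),
              (r"(.*)",        ["Please go on.", "What does that suggest to you?"]),
          ]

          def reflect(fragment):
              # Swap first-person words for second-person ones ("my" -> "your").
              return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

          def respond(text):
              # Return a canned response built purely by pattern matching.
              for pattern, templates in RULES:
                  match = re.match(pattern, text.lower().strip())
                  if match:
                      groups = [reflect(g) for g in match.groups()]
                      return random.choice(templates).format(*groups)
              return "Please go on."

          print(respond("I feel nobody listens to my ideas"))
          # e.g. "Why do you feel nobody listens to your ideas?"

          A few dozen lines like this are enough to trigger the "it's talking to me" wetware, which is exactly the problem with treating conversation alone as evidence of intelligence.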

          • If you read Turing, you would see that any Turing-like test is intended to act as a filter: if the machine fails, then it is clearly not intelligent. If it passes, that just means the test was too easy.

            Where does this end? Probably never.
    • 1) Real AI will NOT be directly controlled by its original programs. That is not AI; that is a well-simulated AI.

      Why should artificial intelligence be any different from natural intelligence? We can't act outside our programming, so why would an AI?

      Intelligence tells you how to get what you want. How you're wired tells you what you want. Who you're attracted to, what foods you like and dislike, what activities you find enjoyable... your conscious self has zero control over those things. All you can do is decide how to go about getting the things you're programmed to go after and avoiding the things that you're programmed to avoid.

  • ...right?
  • by NotDrWho ( 3543773 ) on Friday July 17, 2015 @01:20PM (#50130593)

    Good directors just use AI as a convenient literary device for exploring the HUMAN condition.

    Real AI would be boring as fuck.

  • by WOOFYGOOFY ( 1334993 ) on Friday July 17, 2015 @01:20PM (#50130595)

    Simple - AI has abilities which are superhuman in some regards yet critically circumscribed in ways its designers could not have foreseen. Those limitations become lethal during, and to, humanity's most critical mission (humankind's destiny). It speaks directly to the hubris of scientism: the unsupported belief that all aspects of reality can be understood through the scientific method.

    Truth is, just as goldfish aren't capable, and never will be capable, of understanding the details of a nuclear bomb that destroys them and the politics behind the decision to push the button, so too we may very simply be creatures whose brains are incapable of understanding the larger reality in which we're embedded. We're good for some thinking things, like the goldfish is good for some swimming things, but thinking and reasoning as we do isn't everything and can't reveal all truth.

    On a more prosaic level, 2001 is also a good analogy for what happens when the Intelligence Community is left to call the shots in a democracy. Slowly but surely everything is sacrificed to "national security", including the democracy itself. The odds are 100% that there are plenty of real people in the TLAs occupying significant positions of authority who seriously think they have to kill the democracy in order to save it. That is where the unremitting contemplation of a serious threat matrix leads your mind.

    I don't see any mechanism for countering this effect.

    • The counter here is the same as in every other threat/hazard matrix [mpsasafety.com]. You define an index value - like the ANSI Z10 [asse.org] relative hazard index - and then use that as a quantification when you define an acceptable risk for any given task.
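
      As a back-of-the-envelope sketch of what that looks like in Python (the scales, labels, and threshold below are invented for illustration, not the actual ANSI Z10 tables):

      # Toy relative-hazard index: severity x likelihood, compared against a
      # threshold that defines "acceptable risk". Scales and threshold are
      # made up for illustration only.
      SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
      LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

      ACCEPTABLE_INDEX = 6  # policy choice: anything above this needs mitigation

      def hazard_index(severity, likelihood):
          return SEVERITY[severity] * LIKELIHOOD[likelihood]

      def acceptable(severity, likelihood):
          return hazard_index(severity, likelihood) <= ACCEPTABLE_INDEX

      print(hazard_index("catastrophic", "remote"))   # 8
      print(acceptable("catastrophic", "remote"))     # False: needs mitigation

      The hard part isn't the arithmetic, of course; it's agreeing on the scales and on where the "acceptable" line sits.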
    • Spiritual hogwash. We're descended from a puddle of something that eventually evolved into a single-cell organism, and so is that goldfish. There is nothing to say that goldfish won't eventually evolve into something as intelligent as us, or even surpass our current state. The only thing science is portrayed as consistently coming up blank on is spiritual bullshit. The truth, though, is that science can't be bothered with it because it is obviously bullshit.

  • Ghost In the Shell (Score:4, Interesting)

    by Anonymous Coward on Friday July 17, 2015 @01:40PM (#50130789)

    Which Movies Get Artificial Intelligence Right?

    Ghost In the Shell.

    • Definitely. I especially liked the dialogs between the tank bots in the Stand Alone Complex series, and how they achieved sentience when one of them got special treatment, which created individuality.

  • Humans are terrified of anything they cannot control, and a true artificial intelligence would be a good example. Things that cause such horror are perfect for use as "evil things to be defeated by the good guy" in films. There are a few rare movies that are exceptions, of course, but as Hollywood's focus is "Joe Six-Pack", films that use logic rather than appealing to irrational primate fear will remain rare exceptions.
    • by mark-t ( 151149 )

      We cannot control how our children will think, but we are rarely dissuaded by that fact from having them and raising them to adulthood.

      Artificial intelligence, which is literally just intelligence that happens to be man-made rather than intelligence that has evolved over the course of millions of years like human beings, is not really any more or less terrifying as a concept than intelligence in a meat-based computer such as a human brain.

      • "Human" intelligence like childrens are "wetwired" in our monkey brains to be accepted, part of the "natural and know things" like trees and this shiny yellow thing on the sky. And AI is also "unknown", and all things that is unknown activates yet another irrational primate fear on humans. It's a very difficult situation.

        I live in a country whose population, in cultural terms, acts as if it were still in the Middle Ages, and it is easy to find entire peoples on this planet still living as hunter-gatherers.
        • by mark-t ( 151149 )

          AI is literally just intelligence that happens to be artificial. Nothing more, and nothing less. Barring irrational fears of anything that is not natural, there is no real reason to fear artificial intelligence over natural intelligence, any more than there is reason to fear a person with an artificial limb simply because not all of that person happens to be organic. Can anyone who fears AI explain why artificial intelligence deserves to be even *slightly* more frightening than natural intelligence simply because it is artificial?

  • Assumptions... (Score:5, Interesting)

    by CrimsonAvenger ( 580665 ) on Friday July 17, 2015 @02:01PM (#50130957)

    The most important being that anyone here has clue one how "real AI" will behave.

    If you know nothing of "real AI", how can you possibly determine whether someone else "got it right" in cinema/literature?

    That said, my personal favorite has always been "Mike" from "Moon is a Harsh Mistress"....

  • What about Data in Star Trek TNG?

    An entity of absolute logic that strives to be more human - the opposite of the Vulcan mind, which reaches for absolute logic.

    • Yup. There are some examples of good AI in tv shows. Data and the hubots in Real Humans stand out in my mind.

      I remember the scene in The Offspring where Data's daughter Lal was complaining about not being able to feel emotions. While doing an awfully good imitation of anger and frustration...

      ...laura

  • It isn't like we have a good understanding of the mechanism for consciousness, or even memory. I'm reminded of the experiments where they trained rats to learn a maze, then started chopping bits of their brains away, and no matter which areas they removed, they weren't able to isolate the memory.
  • There is no Artificial Intelligence yet, so how would anyone know?

  • As we do not have AI, and do not even have a credible theoretical model of how it could work - in fact, we cannot even be sure it is possible at all - any depiction of AI is pure speculation.

  • The robot cop's trouble with pathfinding is a very faithful depiction of AI.
    It preceded by over 30 years the Counter-Strike game bots getting stuck in some corner of the map.
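
    As a toy illustration in Python (everything here is invented for the example - obviously not the RoboCop or Counter-Strike code), this is the kind of naive greedy pathfinding that leaves an agent stuck against a wall:

    # Toy greedy "pathfinder": only takes steps that strictly reduce the
    # Manhattan distance to the goal, so a plain wall between start and
    # goal leaves it frozen in place - much like a game bot with naive
    # pathfinding stuck in a corner of the map.
    WALLS = {(5, y) for y in range(1, 6)}   # vertical wall, no gap nearby
    START, GOAL = (2, 3), (8, 3)

    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def greedy_walk(start, goal, walls, max_steps=20):
        pos, path = start, [start]
        for _ in range(max_steps):
            x, y = pos
            moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
            better = [m for m in moves if m not in walls and dist(m, goal) < dist(pos, goal)]
            if not better:
                return path, "stuck"        # no improving move: the bot just freezes
            pos = min(better, key=lambda m: dist(m, goal))
            path.append(pos)
            if pos == goal:
                return path, "reached goal"
        return path, "gave up"

    print(greedy_walk(START, GOAL, WALLS))
    # ([(2, 3), (3, 3), (4, 3)], 'stuck') - a real planner (A*, a navmesh)
    # would simply route around the end of the wall.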

  • Humans are merely a collection of cells with the capability to alter our operation based on our environment and chemical/electrical signalling. Replicating this functionality in well-defined domains is relatively trivial. I don't see how this is intelligence.
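
    A minimal sketch of that "signal in, altered operation out" loop (Python; the names, thresholds, and numbers are made up purely for illustration) - trivial to replicate, and not obviously intelligence:

    # Toy stimulus-response agent: internal state is nudged by an environmental
    # signal, and behaviour changes when that state crosses thresholds.
    def agent_step(signal, state):
        state["arousal"] = 0.9 * state["arousal"] + 0.1 * signal  # leaky integrator
        if state["arousal"] > 0.7:
            return "flee"
        if state["arousal"] > 0.3:
            return "investigate"
        return "rest"

    state = {"arousal": 0.0}
    print([agent_step(s, state) for s in [0.1, 0.2, 0.9, 0.9, 0.9, 0.9, 0.9]])
    # ['rest', 'rest', 'rest', 'rest', 'rest', 'investigate', 'investigate']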

  • How can you be experts in something you don't know how to do?

  • So far the only movie to get AI right is Star Trek IV: The Voyage Home [youtube.com].
  • None of them. I doubt we'll even remotely understand machine intelligence once we realize it's here. We barely understand our own intelligence. I actually suspect machine intelligence is already here, in a very weird, hard-to-grasp way. Notice that we're spending a significant portion of our industrial output to devise new and faster processors, improved battery life, everything an AI would need? I'm not suggesting there's some secretive AI tricking us into all of this; I think it's a lot more subtle than that.

  • That Han Solo character could almost pass for a real actor, but no - all animatronic. Well done Jim Henson.
  • The movie "Demon Seed" was the most accurate AI movie ever.

    In case you've not seen it, basically the AI (Proteus) asks the inventor (Dr Harris) for access to the outside world. Harris denies Proteus's request, but Proteus gets an outside connection anyway.
    Proteus gets into Harris's home computer and workshop, takes over, builds a robot that rapes and impregnates Dr Harris's wife.

    http://www.imdb.com/title/tt00... [imdb.com]
