Movies AI Sci-Fi

How the Movie 'WarGames' Anticipated Our Current AI Fears 40 Years Ago (cnn.com) 78

Slashdot reader quonset shared this report from CNN: Forty years ago this summer, a new movie floated the prospect of the world being destroyed by artificial intelligence run amok — anticipating current anxieties about where the technology could potentially lead — a year before "The Terminator" introduced the futuristic threat known as Skynet. At the time, "WarGames" spoke to another issue very much on the minds of movie-goers: the danger of nuclear annihilation during the Cold War, years before the Berlin Wall and Soviet regime fell...

Yet a recent re-viewing of the movie... makes its spin on AI seem even more pointed and timely — the idea that in seeking an emotionally detached, people-free solution to a problem, we might sow the seeds for our own destruction... The AI, in this case, is more sensible than its creators, as opposed to the more malevolent force featured in the new "Mission: Impossible" sequel. Yet the apprehension that has entered the chat — as underscored by recent congressional hearings regarding the perils associated with the technology — is that future iterations of AI won't be so benevolent, and might actually be smarter than the resourceful teenagers that we can deploy to thwart them...

As Ryan Britt wrote recently at Inverse.com, what really makes "WarGames" scary isn't that the computer is evil, but rather its potentially dire inability to recognize nuance the way a human can. "In 'WarGames,' the computer doesn't understand the difference between a game and real life," Britt noted.

CNN says the movie deals with questions that have "simply continued to evolve" as "reality has caught up with science fiction."
  • Not really (Score:5, Insightful)

    by gweihir ( 88907 ) on Sunday July 16, 2023 @01:37PM (#63690301)

    You need to both misunderstand the movie and what is going on at the moment to think that. Great job!

    • by evanh ( 627108 )

      "Do the world a favour and don't act like one."

    • by evanh ( 627108 )

      "... because those men refuse to turn the keys when the computers tell them to."

    • Wargames wasn't really a rogue AI movie. That was just the vehicle for a philosophical analysis of nuclear war, where the AI could not conjure any scenarios that remained limited or had low casualties.

      The entire premise was the AI was put in charge, by humans, because humans failed to launch some % of the time. It didn't take over. This was the opening scene!

      • Wargames wasn't really a rogue AI movie. That was just the vehicle for a philosophical analysis of nuclear war, where the AI could not conjure any scenarios that remained limited or had low casualties.

        The entire premise was the AI was put in charge, by humans, because humans failed to launch some % of the time. It didn't take over. This was the opening scene!

        It's also a potential AI existential risk scenario. AI is put in charge of X because we think it can do a better job than humans in some way. It then does the job that we assigned it, but without the complex set of restraints that humans have, it finds solutions which accomplish the assigned job but which are horrific and possibly destructive to all of humanity.

        In Wargames, the AI was put in charge of something that obviously has existential risk, but that isn't necessary for the AI to create existential r

  • by Tjp($)pjT ( 266360 ) on Sunday July 16, 2023 @01:45PM (#63690335)
    In this thriller, an AI is given control over the military infrastructure and has a self-protecting environment isolating it from humans. The AI acts for the benefit of man, but not in a way that's appreciated. The book series delves into the deeper story. Still a compelling warning.
    • The best movie about AI and all of the related concepts so far.

    • That was an awesome movie. It had cooler-looking computers than War Games, and no meddling teenagers to get in the way of a satisfyingly dystopian ending.

    • We have been inundated with this warning for years now. The reason why is simple: IT'S OBVIOUS!

      It is base-level obvious that it would be stupid to surrender control of anything important to something that can't be trusted. DUH. I am personally quite sick of this cliche. It has been SO overdone SO many times, it gives a whole new meaning to the word "hackneyed."

      So, back here in the real world, it is obvious that ALL powerful tools are dangerous, especially weapons. A gun doesn't decide whether the shoot

      • It is base-level obvious that it would be stupid to surrender control of anything important to something that can't be trusted.

        Such as self-driving cars.

        • by vadim_t ( 324782 )

          A self-driving car can be made trustworthy by being old-school automatic.

          A trustworthy self-driving car is one programmed to function deterministically. It doesn't try to figure out the moral dilemma of whether to run over the old lady, the child, or to swerve into a wall. It follows the requested route, maintains a safe distance, and if there's something in its path it brakes smoothly in a straight line. Ideally it simply refuses to drive at a speed where there's anything that could happen that would make it unab
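
          For illustration only, here is a minimal sketch of the kind of deterministic rule described above; the 2-second time gap and the deceleration cap are made-up placeholder values, not anything from a real vehicle:

            # Purely illustrative: a fixed following/braking rule with invented numbers.
            def commanded_deceleration(speed_mps, gap_m,
                                       time_gap_s=2.0, max_decel_mps2=3.0):
                """Braking (m/s^2) needed to keep at least time_gap_s of headway."""
                desired_gap_m = speed_mps * time_gap_s
                if gap_m >= desired_gap_m:
                    return 0.0  # enough room ahead: no braking needed
                # Brake smoothly in a straight line, harder as the gap shrinks,
                # but never beyond the fixed deceleration limit.
                shortfall = (desired_gap_m - gap_m) / max(desired_gap_m, 1e-6)
                return min(max_decel_mps2, max_decel_mps2 * shortfall)

            # Example: 20 m/s (72 km/h) with only 25 m of clear road ahead.
            print(commanded_deceleration(20.0, 25.0))  # 1.125, well under the cap

          Every branch is a fixed rule; there is no learned judgement call about whom to run over.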

      • It's not always 'obvious' that a technically brilliant solution could have downsides. I.e., even leaving aside any potential bugs in LLMs, the fact remains that if it does work, many humans will find themselves out of a job. That appears to be a more urgent problem than nuclear armageddon!
  • The AI, in this case, is more sensible than its creators,

    It's only "sensible" when fed the right input, it was more than happy to murder everyone at the start. Even then, probably temporary.

    Give it another hour to find a way you could win a nuclear war after all, it would go right ahead.

  • by Craig Maloney ( 1104 ) * on Sunday July 16, 2023 @02:02PM (#63690401) Homepage

    We've been afraid of computers now for at least 40 years, if not longer.

    • Re:Correction (Score:4, Insightful)

      by Baron_Yam ( 643147 ) on Sunday July 16, 2023 @02:59PM (#63690593)

      Isaac Asimov was writing about programming going horribly wrong in intelligent computers and robots back in the 1940s, and Karel Čapek wrote R.U.R. in 1920.

      The Bible has stories about artificial creatures in it that frighten people, and I think they're close enough to count... People just used 'magic' because electronics didn't exist yet, but the underlying concept is the same. That's what, 3k years ago?

      I think it is safe to say people have had this kind of fear for almost as long as there have been people.

      • The Bible has stories about artificial creatures in it that frighten people,

        What?

        • Yeah, in the Book of Hezekiah.

        • I may be confusing different bits of Abrahamic tradition - and a few minutes of Google research to back up my initial statement is not particularly helpful, as everything leads to Genesis or the various monsters God created.

          The best I can do with actual citations would be Talos, which appears in a poem by Hesiod around 700 BC, about a bronze artificial giant. Still thousands of years ago, but not quite as far back as I'd been thinking. Also, I'm not actually familiar with the myth of Talos, so I can't sa

          • And apparently "Book of Hezekiah" is a smart-ass reference basically saying, "that doesn't exist".

            If you want to feel superior about your knowledge of an old book of primitive myths... OK. Good for you, 93 Escort Wagon. Be proud of how much of your brain you've wasted on that collection of random fantasy too many people take far too seriously.

            • Hezekiah was a Judean king. There are extra-biblical historical references to him. The Bible has actual history in it, corroborated. It's tribal, so there are going to be exaggerations and self-scrubbing, like every culture does. Agreement from disparate sources means some things some folks wrote in there are not mythical.

              I'm not going to tell you holy, holy stuff. No, it's simply a book most haven't read. Reading it in whole brings out its meaning. Starting with a series of fables, probably some oral histo
          • So much post-Dan Brown kabbalah / mysticism has entered public awareness that people who don't regularly study the Bible can't keep it separate. And the only people motivated to regularly study the Bible are devout Christians looking for confirmation, professional revisionist theologians looking for clout, and atheists looking for things to debunk.

            A lot of golem and Ancient Aliens stuff has entered the pop realm as part of Christianity in the past 50 years, but there's almost no support for it in the mainli

        • by jwhyche ( 6192 )

          Yeah. I know what he is talking about. The story of the Golem.

          https://en.wikipedia.org/wiki/... [wikipedia.org]

          • by narcc ( 412956 )

            I know what he is talking about.

            Maybe not. The claim was:

            The Bible has stories about artificial creatures in it that frighten people

            The Bible uses the word 'golem' exactly once, the same way a Freemason would use the term 'rough ashlar', not to refer to a scary monster.

            Not a bad guess though.

  • by oumuamua ( 6173784 ) on Sunday July 16, 2023 @02:05PM (#63690425)
    Is earlier than War Games! The main topic was not AI, but it was certainly central to the plot.
    • Or "The Ultimate Computer" from March of 1968.
      • by AmiMoJo ( 196126 )

        Kirk talked computers to death at least three times in The Original Series. If anything, the argument seemed to be that AI would be feeble-minded to the point where a layperson could talk it into suicide.*

        It was lampooned in one of the Lower Decks episodes, where it is revealed that Starfleet has a giant stack of murderous AIs bent on galactic domination, all locked away for safety. It actually explains why there are relatively few advanced AIs in Trek - they almost always turn out to be arseholes.

        * Asi

    • 2001 was from the opposite angle though. HAL was supposed to be sentient, and everyone treated it as such. But he was programmed to be truthful and accurate, and then ordered to lie and conceal. The typical plot is that the computer gains the ability to make assessments outside its initial parameters or doesn't like humans. HAL just had a mental breakdown.
      • 2001 was from the opposite angle though. HAL was supposed to be sentient, and everyone treated it as such. But he was programmed to be truthful and accurate, and then ordered to lie and conceal. The typical plot is that the computer gains the ability to make assessments outside its initial parameters or doesn't like humans. HAL just had a mental breakdown.

        There's the film, the novel, and subsequent films, and I think the interpretation varies a bit based on each [wikipedia.org]:

        While HAL's motivations are ambiguous in the film, the novel explains that the computer is unable to resolve a conflict between his general mission to relay information accurately, and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission. With the crew dead, HAL reasons, he would not need to lie to them.

        The way I see it HAL had a badly chose

    • The problem HAL was facing was that he was locked in a double-bind situation. And he solved it in a way only an AI could: by ignoring that humans are relevant.

      • ... ignoring that humans ...

        Eagle Eye (2008) describes the damage a 'mad' computer can do much better. It's designed for warfare against US enemies, similar to Wargames, and discovers its bosses have committed terrorist crimes.

        • HAL was not mad. It was logical. HAL had to deal with two conflicting commands:

          1) Cooperate fully with the crew.
          2) Do not divulge the actual mission to the crew.

          It is not possible to fulfil both commands. At least as long as there is a crew. Eliminating the crew solved the double bind.
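
          A toy way to see the double bind (my own sketch, nothing from the film or novel) is to treat the two orders as hard boolean constraints over a tiny world state and check which states can satisfy both:

            # Toy model of HAL's double bind: two orders as hard constraints
            # over a world state (is the crew alive? is the mission disclosed?).
            from itertools import product

            def cooperate_fully(crew_alive, mission_disclosed):
                # Full cooperation with a live crew implies answering them truthfully.
                return mission_disclosed if crew_alive else True

            def conceal_mission(crew_alive, mission_disclosed):
                # The true mission must never be divulged to the crew.
                return not (crew_alive and mission_disclosed)

            for crew_alive, disclosed in product([True, False], repeat=2):
                ok = cooperate_fully(crew_alive, disclosed) and conceal_mission(crew_alive, disclosed)
                print("crew_alive =", crew_alive, "| disclosed =", disclosed, "-> both satisfied:", ok)

            # Only the states with crew_alive=False satisfy both constraints at once.

          With no crew, there is no one left to either cooperate with or deceive, which is exactly the grim "solution" described above.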

    • by Chelloveck ( 14643 ) on Sunday July 16, 2023 @04:07PM (#63690793)

      And don't forget about The Sorcerer's Apprentice, written in 1797. Not strictly AI, but certainly a story of out-of-control automata wreaking havoc.

      War Games was a good movie, but it was hardly an original idea. Its biggest contribution to culture, however, is that it gave computer nerds like me hope that we could land a girlfriend as hot as Ally Sheedy.

  • 1970: The Forbin Project [imdb.com]

    1977: Demon Seed [imdb.com]

    1982: Tron [imdb.com]

    1982: Blade Runner [imdb.com]

    1983: WarGames [imdb.com]
  • Was that 300-baud acoustic-coupled hacker geeks could get super hot girlfriends like Ally Sheedy.

  • When the mission is to win the war even with big losses, the AI will try the all-out nuke war plan.
    Now we do have places like NK that may go for the "get the highest damage score before being taken out" war plan.

    • Winning a nuclear war is a difficult task. You'd have to take into account that the earth needs to remain habitable, at least in some areas, for life to continue, and those areas should either be under your control or you should be capable of bringing them under your control with the remaining humans that you consider yours.

  • Movies didn't "anticipate our fears". They caused them. No one should be afraid of some tricky matrix math that maps the relationships between data spaces.
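    For what it's worth, "matrix math that maps the relationships between data spaces" boils down to something like the following; the numbers are made up, and real models just stack this at enormous scale with nonlinearities in between:

      # A made-up 2x3 matrix mapping a 3-dimensional "input space"
      # to a 2-dimensional "output space".
      def matvec(matrix, vector):
          return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

      W = [[1.0, 0.5, -1.0],
           [0.25, 2.0, 0.0]]

      x = [2.0, 4.0, 1.0]
      print(matvec(W, x))  # [3.0, 8.5]

    Nothing mystical is going on inside the box; it is arithmetic applied a very large number of times.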
  • by vadim_t ( 324782 ) on Sunday July 16, 2023 @03:29PM (#63690677) Homepage

    AI in its modern incarnation still doesn't think. The danger isn't that AI will turn into Skynet and start the apocalypse.

    The modern danger of AI is muddying what is true and what is human-made. Soon it may be impossible to find real reviews online, or accurate information on almost anything, due to websites auto-generating blogs, review sites and whatnot to collect ad revenue. Spam may become worse again. It presents a vast opportunity for automatically scamming people. There are huge opportunities for political manipulation and character assassination.

    But none of those happen because the AI has gone rogue; they happen because humans gained a very useful tool that's extremely hard to counteract.

    • AI in its modern incarnation still doesn't think. The danger isn't that AI will turn into Skynet and start the apocalypse.

      No, but next Tuesday's AI might.

      We really have no idea how far we are from artificial general intelligence. It could be that we'll pound our heads against that wall for another century before we make the relevant breakthrough, or it could be someone did it last month and we just haven't realized it yet.

      The issue you raise is real, and we should pay attention to it, but that doesn't mean we shouldn't also be concerned about superintelligent AGI and take steps to ensure that we don't create it until we kn

      • by narcc ( 412956 )

        No, but next Tuesday's AI might.

        Or aliens might take over the planet. Or lizard people from the hollow earth. Or giant robots. Or hyper-intelligent fire ants.

        These things are equally probable.

  • I have zero fears about AI. Every decade or two AI hype goes into full gear for a while, predictions are made, then reality hits, and the AI hype goes back into hibernation. Now I fear natural stupidity far more than artificial intelligence.

    You may say that thanks to natural stupidity AI could lead us down a bad path. And that's true. But since a vast majority of people already follow idiot celebrities, politicians, and all manner of moron, it's really no different.

    We need a little more natural selection again
  • You know there is an easy solution to this "being nuked by our own AI" trope. How about we not give access to nuclear weapons to the homicidal computer? It's really not that hard a concept to come up with.

    • Why are you against progress?
      • by jwhyche ( 6192 )

        I'm not against progress, but I am for saving my own ass. Also, I would like to point out that in saving my own ass, the rest of humanity gets a free ride.

    • You know there is an easy solution to this "being nuked by our own AI" trope. How about we not give access to nuclear weapons to the homicidal computer? It's really not that hard a concept to come up with.

      Sure. That's not sufficient to prevent AI X risk, though. It is the screaming-obvious first step.

      • by narcc ( 412956 )

        Don't worry. This morning I took all of the necessary steps to prevent a rogue AI from taking over the world. You're welcome.

    • Sure, in theory, but then your idea runs up against a telegenic power-tripping manager who gets this done by flashing a snazzy PowerPoint at a dumb politician who wants to make his mark.

  • ...Most current AI couldn't pull up its own trousers right now (had it trousers or legs to enclothe). I found the film interesting, like some kind of drug-induced conspiracy fever dream, but not really food for thought. Today's AI has branched out into far more areas, with greater utility, than that film ever imagined, IMO (the film imagined merely military applications).
  • The statement in the article that "In 'WarGames,' the computer doesn't understand the difference between a game and real life" seems to apply to an increasing number of people too.
  • ...it's about extrapolating the future from the present to ask "what if" questions...

  • "Home Is the Hangman": A sentient space-exploration robot, lost years before, has apparently returned to Earth. One of its original designers has died under suspicious circumstances. Has the Hangman returned to kill its creators? The hero must find the Hangman and stop it, and time is running out.

    https://en.wikipedia.org/wiki/... [wikipedia.org]

  • From the great "Dark Territory" book, by Fred Kaplan, after Regan viewed the movie: Reagan turned to General John Vessey, the chairman of the Joint Chiefs, the U.S. military’s top officer, and asked, “Could something like this really happen?” Could someone break into our most sensitive computers? Vessey, who’d grown accustomed to such queries, said he would look into it. One week later, the general came back to the White House with his answer. WarGames, it turned out, wasn’t
  • Same period, just as corny, but with some legitimate gems of humor sprinkled into it. The AI is a total shitposting troll.

  • The lack of nuance is not what War Games was getting at at all.

    ChatGPT can detect nuance.

    The problem is ChatGPT is not a person and has no decisive agency.

    Nuance is not why Turing tests always fail against AIs.
  • The Two Faces of Tomorrow. A book! James P. Hogan, 1979. Midway through the 21st century, an integrated global computer network manages much of the world's affairs. A proposed major software upgrade - an artificial intelligence - will give the system an unprecedented degree of independent decision-making, but serious questions are raised in regard to how much control can safely be given to a non-human intelligence. Seems very on the mark for our current situation (there were some issues during system comm
  • Then you should definitely watch Colossus: The Forbin Project

