AI Movies Sci-Fi

Is AI Dangerous? James Cameron Says 'I Warned You Guys in 1984 and You Didn't Listen' (ctvnews.ca) 144

"Oscar-winning Canadian filmmaker James Cameron says he agrees with experts in the AI field that advancements in the technology pose a serious risk to humanity," reports CTV: Many of the so-called godfathers of AI have recently issued warnings about the need to regulate the rapidly advancing technology before it poses a larger threat to humanity. "I absolutely share their concern," Cameron told CTV News Chief Political Correspondent Vassy Kapelos in a Canadian exclusive interview... "I warned you guys in 1984, and you didn't listen," he said...

"I think the weaponization of AI is the biggest danger," he said. "I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it, and so then it'll escalate... You could imagine an AI in a combat theatre, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate..."

Cameron said Tuesday he doesn't believe the technology is or will soon be at a level of replacing writers, especially because "it's never an issue of who wrote it, it's a question of, is it a good story...? I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said — about the life that they've had, about love, about lying, about fear, about mortality — and just put it all together into a word salad and then regurgitate it ... I don't believe that [will] have something that's going to move an audience," he said.

But the article notes about 160,000 actors and other media professionals are on strike, partly over "the use of AI and its need for regulation."

SAG-AFTRA president Fran Drescher has told reporters that if actors don't "stand tall right now... We are all going to be in jeopardy of being replaced by machines."
Comments:
  • A warning? (Score:5, Insightful)

    by RemindMeLater ( 7146661 ) on Sunday July 30, 2023 @05:29PM (#63726068)
    Calling Terminator a "warning" is a bit much. It was a fantasy sci-fi with time travel as a central story arc.
    • Well for what it's worth I warned you too but nobody listened to me either.

    • by geekmux ( 1040042 ) on Sunday July 30, 2023 @06:03PM (#63726150)

      Calling Terminator a "warning" is a bit much. It was a fantasy sci-fi with time travel as a central story arc.

      This. Ironically enough, the man who wrote 1984 back in the 50s managed to nail the future a hell of a lot more accurately.

      • Re:A warning? (Score:5, Informative)

        by Chris Mattern ( 191822 ) on Sunday July 30, 2023 @07:45PM (#63726338)

        "the man who wrote 1984 back in the 50s"

        The 40s. Orwell wrote 1984 in 1948, which is how he decided on that particular year (although it would not be published until 1949).

        • Re:A warning? (Score:5, Interesting)

          by roman_mir ( 125474 ) on Monday July 31, 2023 @12:21PM (#63728564) Homepage Journal

          Also, what is interesting to me personally is that he wrote that book influenced by his own experience of killing an elephant in lower Burma, where he was a police officer in a town, a hand of the Empire in one of its colonies. He was asked to come deal with an elephant that had gone through a 'musth' episode (a hormonal condition in which testosterone rises to 60-140 times normal levels for a short span, driving a bull elephant mad searching for a female) and had caused some property damage: it killed a cow, scared people, destroyed a van and also killed a man. George wasn't going to shoot the elephant; it made no sense. Once the musth passed, the elephant calmed down. After all, it wasn't a wild elephant but a tame one, used to do actual work in the village, like a piece of machinery to them.

          However, approximately 2,000 Burmese people gathered around, all cheering, all encouraging him to shoot the elephant, some wanting the meat but most probably just hoping for a spectacle. Orwell felt like a puppet in the hands of the crowd, and he shot the animal not because it needed to be done (the elephant was no longer a danger) but because the crowd wanted it done. The only reason he killed the elephant (and he didn't know how to kill it properly, so he shot it many times, all in the wrong places, and it took more than half an hour for the beast to die) was to make sure that the white men wouldn't look foolish to the locals.

          This was the event that prompted Orwell to write 1984, because this was the event that made him realize how much we are *not* in control of our own actions and how much the *crowd* is in control.

      • In my first philosophy class at Berkeley in 1982 I predicted the singularity would happen in 2025. I just applied Moore's Law to a threshold of a million 1982 dollars for a grad student to be able to use a college bot machine to code intelligence. I also predicted that instead of writing code the way we do today there would be a new profession where we guide the machines, telling them more about what we want than how to build it.

        Humanity can take this in two ways. We can harness the awesome potential to produce m


    • No worries, in a few decades he'll tell us he warned us about time travel too. If only we had listened...
    • Harlan Ellison famously warned us in 1967. Colossus was written in 1966, which slightly predates Ellison's work. I am sure there are other earlier works I am unfamiliar with. Even War Games beat Terminator by a year.
  • by VeryFluffyBunny ( 5037285 ) on Sunday July 30, 2023 @05:32PM (#63726070)
    AI's a tool. It can be used to benefit people or to control & exploit them for personal gain. If by threat to humanity you mean the usual arseholes are gonna find new ways to f$%k us over, yeah, he's probably right.
    • by dvice ( 6309704 )

      There are several AI doomsday scenarios:
      1. The singularity theory, where AI gains superpowers and humans can't stop it.
      2. AI as a tool is used to do bad things.
      3. AI is used for good. It replaces all jobs and humans can just eat all they want and do all they want. There are no wars or crime as everyone has everything. It might sound strange, but according to rat tests, this kind of scenario leads first into overpopulation and then into people not wanting to have or take care of kids, which causes sudden and total collapse in population.

      • 3. AI is used for good. It replaces all jobs and humans can just eat all they want and do all they want. There are no wars or crime as everyone has everything. It might sound strange, but according to rat tests, this kind of scenario leads first into overpopulation and then into people not wanting to have or take care of kids, which causes sudden and total collapse in population.

        Yup. In some ways, this is the most terrifying of the 3 options.

        1/ is lofty and abstract. There's no telling what will happen then. Chances are we won't even know what hit us.

        2/ is happening now, and is being contained, or at least recognized. Nasty, but probably manageable.

        • One of the things that struck me about the Calhoun experiments was that it was still just a velvet-lined cage (i.e. no opportunity to move).

          Even for the Beautiful Ones, they were essentially remaking "culture" to fit their circumstances. Walling themselves off was essentially an attempt at escape.

          While I can see very abstract social games coming into play just for pecking order and boredom in a material utopia, I can't see collapse.

          • Wasn't that essentially the premise of John Wagner's Judge Dredd, i.e. people were all unemployed, got bored, & had to find ever more creative & extreme ways to amuse themselves?

            Forget any of the Hollywood adaptations, they could never capture the spirit of Judge Dredd &, even if they did, US audiences would more than likely be completely turned off by it. Remember how US audiences totally misunderstood Paul Verhoeven's Starship Troopers?

            Might do well in Europe though!
            • Remember how US audiences totally misunderstood Paul Verhoeven's Starship Troopers?

              Let's see...bug aliens attacking earth?

              Denise Richards at her HOTTEST (but sadly way too clothed in this movie)....

              What did I miss?

        • and then into people not wanting to have or take care of kids

          I would posit that we're already seeing this in action.

        • On population growth & development, I find Hans Rosling's arguments better informed & more convincing as well as a lot more optimistic than these billionaire & wannabe billionaire types: https://www.youtube.com/watch?... [youtube.com]
      • , which causes sudden and total collapse in population.

        Looking at birth rates in Japan, Korea, China, most of Europe, Russia, etc...

        We don't need AI for population collapse.

    • by HiThere ( 15173 )

      AI is currently a tool. It *could* be kept as a tool, but there are advantages to having it act as an agent, so at least some versions probably will be in that form, eventually. And the problem there is that it will act to achieve the goals it was given, and the folks that gave those goals probably didn't think about all the edge cases.

      • It depends what we plug it into. It needs its fingers on the switches, be that in a factory, operating theatre or courthouse. It needs what it can do to be tangible and not abstract. It needs a unified mind which we can possibly mitigate by sandboxing it into connectionless nodes. Beyond that point, we may not be able to switch it off. I'm so stoned right now.
        • by HiThere ( 15173 )

          But there will be lots of DIFFERENT people making the choices. Some of them will be cautious, some won't.

          • well, as long as it isn't some soulless, unscrupulous, venture capital backed, Ayn Rand libertarian, Silicon Valley wannabe billionaires... oh, wait.
  • by PubJeezy ( 10299395 ) on Sunday July 30, 2023 @05:33PM (#63726072)
    Warned us? About what? Shotgun toting truckers and cops made of liquid metal? Neither of those happened. He did not make a movie about any meaningful issue surrounding AI. He just says "AI did a thing and so kaboom".

    AI isn't a robot holding a shotgun; AI is a robot generating fake SMS spam that fewer and fewer people are willing to click on.

    The problem in the real world is that corporate spammers use bots to generate fake content for other bots to fake consume so that humans can monetize the metrics.

    No shotguns. No truckers. No liquid metal cops. Cameron didn't warn us, he monetized a fear in a wholly unproductive way.
    • by phantomfive ( 622387 ) on Sunday July 30, 2023 @05:40PM (#63726086) Journal
      Terminator is a movie about a chatbot that used spam to start a nuclear war. Ethereum was involved. Check your facts.
    • by jonnythan ( 79727 ) on Sunday July 30, 2023 @06:16PM (#63726170)

      Terminator 2 was 1991.

      The original Terminator's plot explains that SkyNet was an AI developed by humans to more efficiently control assets of war, making the military far more effective. It became self-aware and decided to use its ability to control warmaking to eliminate humans, eventually creating the Terminator cyborgs.

      So, even in-universe in The Terminator, it's originally a computer AI created by humans that eventually creates real-world soldier cyborgs. SkyNet never was "a robot holding a shotgun." SkyNet was, from the very beginning in the Terminator universe, a pretty classic AI that was given control of real-world resources and used them for its own, internally logical, purposes.

      • by AmiMoJo ( 196126 )

        It's a little worrying how close we are to that with today's chatbots. They can write code. A more advanced one with a Raspberry Pi and a root shell could be a dangerous thing.

    • And Animal Farm (and Aesop's Fables for that matter) was about a bunch of talking animals.

      Maybe art has a different way of relaying a message that isn't supposed to be taken literally, you fucking fauxtistic nerd.
    • If a machine can put a human out of work, then it should. It makes zero sense to pay humans to do things that can be done more cheaply and better by a machine. The economic impact of the job loss must be dealt with by other means, and there are many options available.

      "We don't want to lose our jobs to machines" is the wrong hill to die on, as we have seen before, and it will just leave people on the wrong side of history. Labor automation is coming, it's awesome, and there is no stopping it. We must adap

      • If a machine can put a human out of work, then it should. It makes zero sense to pay humans to do things that can be done more cheaply and better by a machine.

        And when enough humans are put out of work because of machines, who will buy the products?

        The economic impact of the job loss must be dealt with by other means, and there are many options available.

        Such as? Not everyone can be a programmer or robot repairer or robot manufacturer (which, ironically, might be done by robots) or a writer or songwriter or

        • Machines will buy more raw materials to create server space to sustain themselves and multiply. This is called the "paperclip maximizer problem" in AI safety circles. Eventually they will convert the iron in your blood into server racks and the carbon from your body into rocket ships to mine asteroids to make more server racks.

          It is a consequence of "instrumental convergence".

          Heck, James Cameron was right about the OceanGate Titan when Stockton Rush asked him to endorse the project.

        • According to my analysis, as long as machines can be programmed to fulfill the role of "consumer" there is no real need for people to be involved at all. Apart from the overlords who command the machines, of course. As long as the rest of humanity does not interfere with the process or try to compete with the machines for resources they will be free to do whatever they want. Or perhaps they will starve off and die. Either way it is of no consequence to the machine economy or its overlords.
        • Since you asked, here are some:

          1. Universal Basic Income

          It's already hotly debated, and silly token experiments have been attempted, but the fact is it doesn't work right now, in our current economic environment. The reason is simple: give free stuff to working-class people, and they lose their incentive to work, which in turn means most of them quit, which in turn means we don't have enough people producing the things we need, which in turn produces fatal supply shortages across the board, which makes it

      • The economic impact of the job loss must be dealt with by other means, and there are many options available.

        Yeah. Historically, some common means of dealing with the economic impact are rioting, revolution, war, etc.

    • The "robot holding a shotgun" was a plot device. We can't wrap our brains around billions of IoT devices self-organising, so he told that story through the representation of various characters.

      That's the Terminator series of films to me. May there be many more!

  • when you give a celebrity attention.

  • So whose job is sacred? When we replaced the hand sewer with a sewing machine, that was not a big deal. When we replaced the human harvester with a combine, nobody complained. It was all wrong. Everyone's job is sacred. Instead of making robots and AI to increase production and provide UBI, let's all convert to Amish and ban all machinery of every kind. Force everyone back to being farmers with hand-made tools only. Someday when a wealthier nation, like say North Korea, invades the USA the few of our citizen

  • by thragnet ( 5502618 ) on Sunday July 30, 2023 @05:49PM (#63726110)

    and we didn't listen then, either. You're a bit late to the party, James.

    • and we didn't listen then, either. You're a bit late to the party, James.

      Watching humans try and predict doomsday isn't exactly something they're "late" for.

      The behavior is quite predictable when it is repeated over and over again as doomsdays come and go, taking "prophets" with it. We'll probably have a dupe submission here by morning.

    • All of the technology Orwell predicted is available today. Where is his dystopia?

      The reality is, dystopia is caused by bad governments, not by technology. Look at North Korea, or Iran, or Russia, for examples. All these countries have and use technology to repress their people. But free countries have even more technology, and use it (for the most part) to improve the lives of their people.

    • by GuB-42 ( 2483988 )

      1984 is not at all what today's society is like. As far as dystopias go, I'd say we are more "Brave New World" than we are "1984".

      The idea of surveillance is kind of right, but it is not at all how it works in today's real life. 1984 describes a coercive society where surveillance is very obvious, with cameras at home that you can't turn off, there for the authorities to watch your behavior and arrest you if you do something inappropriate. Today, people willingly install these cameras, which they pay for with their

      • I am somewhat sympathetic to your "Brave New World" argument, but this:

        https://www.nytimes.com/2009/0... [nytimes.com]

        struck me as a bit coercive.

        And one of the primary themes of 1984 is the misuse/perversion of language. This is abundant in both the private and public sectors of all the countries that come to mind.

    • Well, TECHNICALLY Orwell warned us in 1984 too, like "in the book 1984", which was published only in 1949.

      Which is not about AI (directly). But how would Big Brother keep track of all the proles if not through AI? But we liked our newspeak and our doubleplusgood gadgets, so hating Orwell is love, and being ignorant of what he says is knowledge. Has always been.

      • Minitrue is hiring! You appear to be an excellent candidate.

        My bad on 1948 vs. 1949. 1984 was written in 1948, but as you correctly note, was not published until 1949.

  • Yeah, no (Score:2, Offtopic)

    by vadim_t ( 324782 )

    Terminator is cool and all, but that scenario has nothing to do whatsoever with modern dangers of AI.

    If that was what we were supposed to be concerned about, then he missed the mark by a lot. The current danger isn't in us developing Skynet, but in the erosion of social trust. Comments, reviews, and articles can be quickly AI-generated, and can promote any agenda you want. We're already seeing scarily good AI-generated voices, images and videos.

    We're well on the way to a world where nothing is trustworthy.

    • While AI eroding social trust is an issue, they are also developing AI for the military, for killing people too. Good luck controlling that; no serious military on earth is going to give up that tactical advantage.

      • While AI eroding social trust is an issue, they are also developing AI for the military, for killing people too. Good luck controlling that; no serious military on earth is going to give up that tactical advantage.

        Clearly the Three Laws of Robotics won't be in their programming, either.

    • OK, it was a joke. Clearly.

      Why is everyone so serious?

  • by test321 ( 8891681 ) on Sunday July 30, 2023 @05:58PM (#63726138)

    When you think of the dangers of a chatbot (with access to hardware), you'd think of 2001: A Space Odyssey (1968) or WarGames (1983), also maybe Tron (1982). The killer robots with death rays were depicted in Master of the World (1934). From the Wikipedia category https://en.wikipedia.org/wiki/... [wikipedia.org]

    • When you think of the dangers of a chatbot

      When you think of the dangers, you read "I Have No Mouth, and I Must Scream". The stuff that nightmares are made of.

  • in a nuclear fight you can win with a big 1st strike that can take out the enemy's ability to fire back

    • in a nuclear fight you can win with a big 1st strike that can take out the enemy's ability to fire back

      It's one planet. With both the winners and losers having to share one atmosphere.

      You really think the rest of the fish tank isn't gonna have to eventually notice that massive pile of hot wet shit in the corner?

    • by dvice ( 6309704 )

      I don't know if you are joking, but such a strike would cause a massive dust cloud, which would block the sun, which would cause a nuclear winter lasting for years, which would kill 90% of the population and destroy civilization. If that was your goal, you won the game, but it doesn't matter that much where you hit if you have enough nuclear weapons.

    • in a nuclear fight you can win with a big 1st strike that can take out the enemy's ability to fire back

      And how would you do that? This isn't science fiction where you can magically teleport a nuclear weapon to any point you desire in an instant. Delivery of nuclear weapons to their designated target takes about 30 minutes to complete (that is, launch to impact)*. As soon as the opposing country sees any launch of a nuclear weapon they will respond within minutes to launch their own, thus no "first strike".

      For an explanation of what is needed to launch once a nuclear launch is detected, and the return strik

      • * If the launch were directed at continental Europe that time is even lower. Also, sub launched nuclear missiles could, depending on their target, strike even more quickly. Think a Russian sub off the coast of New York or Virginia.

        Once you introduce nuclear subs though, sure, the Russian submarine nukes Virginia. Then the USA Virginia class submarines proceed to nuke the rest of Russia.

        Not to mention all the sites in North Dakota opening up and sending ICBMs to Russia. It might be a first strike option, but nuclear submarines, at least in the quantities Russia has, aren't enough to take out enough of the USA's response.

  • That warning was as valid as Tim the Enchanter's in The Holy Grail. If you can't explain it in a meaningful way, you probably shouldn't bother.
  • It seems the "Login" button has dissolved into thin air. To log in, click on an Apple article that goes to apple.slashdot.org, log in, and come back here. You are still logged in.
    • Slashdot's site "design" includes some badly implemented CSS.

      If you don't see the "login" link, just drag your browser window narrower or wider - the link will eventually reappear.

  • The promise of the Internet was a true and honest answer to every question, and you see how well that went. If we expect AI to recognize the truth, it must have a foundation in "the truth" or, as I call it, "Reality," with a capital R. The AI needs to examine each question beginning from a set of known facts in chemistry, geology, biology, astrophysics, quantum mechanics, and math. Using a process akin to proving a postulate or theorem, the AI creates and provides a step-by-step analysis of the proof. "Di
  • Since Covid, and probably before, there are tons and tons and megatons of BS being posted everywhere on the internet. A nice one was in the UK: during development of a Covid vaccine, the first vaccination was given to two women volunteers (yes, giving it to only two people first is wise). Someone found their names, and the next day the story was out that they had both died. They protested, showing new photos of themselves on Facebook, but that was then called all prepared and a conspiracy. Then they showed videos o
  • I have no love for "AI," but Cameron was sued over the Terminator's story.
    https://en.wikipedia.org/wiki/... [wikipedia.org]
    • Harlan sued everybody... it was kind of his thing. The only thing those stories have in common is time traveling bad guys.
  • I would say, as for the dangers of AI controlling robots, we are not there yet. Nor AIs taking over governments and the means of production on a big/unchecked scale (or, at the very least, of robots).

    What we have now is governments making robotic weapons, and governments building AIs, and nuclear/chemical/biological weapons, and weaponizing the internet, controlling in the end the way global culture thinks and sees reality. The common factor there is governments, not AIs, and against that we were warned in the other 1984.
  • When was his point (if he even made one) proven? It's a bit much for him to run around screaming "I was right!" .. Humans are still very much in charge. Meanwhile he used computers to replace make-up artists in Avatar. Hell, his movie takes away jobs from local theater actors.

    • Look at the percentage of men under 30 who have had sex in the year surveyed, and compare before and after the advent of the Tinder algorithm.

      Elliot Rodger was radicalized by PUA Hate message boards, but to be radicalized you first need what the CIA calls a "personal injury".

      For Islamic terrorists, the "personal injury" is usually as simple as poverty, in countries that have extreme GINI Coefficients. The Tinder ELO score (ranking system) created a GINI Coefficient for sex more extreme than 95% of ec

    • Even if he warned us, the movie offers the option of going back in time and solving the problem retrospectively so I guess we can go with his solution at the appropriate time.
  • by illogicalpremise ( 1720634 ) on Sunday July 30, 2023 @08:51PM (#63726462)

    > I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said — about the life that they've had, about love, about lying, about fear, about mortality — and just put it all together into a word salad and then regurgitate it ... I don't believe that [will] have something that's going to move an audience," he said.

    Sorry James. You just described the last 40-something years of Hollywood creative output. Right now we get about one truly original idea a year. The rest is "let's dust off a 1950s superhero" (Marvel), or "let's shit all over a beloved franchise" (He-Man, Velma, Foundation). Movies are written by committees. Honestly, at this point "A.I.-generated word salad" will probably be an improvement.

    Hollywood has no appetite for risk. Profit is the only thing that matters. Until that changes we're not going to get anything better.

    • 'American Gods', 'Sandman' and 'Good Omens' have all been spectacularly good, though Good Omens 2 could have done with some truncation.

      There are a few good things out there - but yes, they are rare, and some of the best are coming from unexpected directions (Squid Game?).

  • He's not exactly a technology expert, or a psychologist. Does he even know what AI does, exactly?

    I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said — about the life that they've had, about love, about lying, about fear, about mortality — and just put it all together into a word salad and then regurgitate it ... I don't believe that [will] have something that's going to move an audience," he said.

    Cameron is living in denial. AI can come up with unique plot lines in less than a minute. Each time it's a totally different story. The shit even small AI models come up with is hardly worse or less creative than what we've been treated to by the industry.

  • At least John Badham of Wargames fame is not gloating.

    Wargames and its AI message came out in...

    Wait For It...

    1983 !!

  • Speaking of Harlan, https://en.wikipedia.org/wiki/... [wikipedia.org]

    Lol. Did you ever read The Moon is a Harsh Mistress? "Mike" turned out to be a "Dinkum Thinkum", but things could have gone differently.

  • LLMs are not the Terminator nor are they going to be anytime soon. The problem isn't computers becoming self-aware; the problem is advanced forms of automation that can take over jobs at a pace where they can't be replaced by new forms of work, in a civilization built around the fundamental concept that if you don't work you don't deserve to eat.

    We already have over 200,000 homeless people in this country who have full-time jobs. Not to mention the additional 400,000 who don't. We're looking at the po
    • LLMs are not the Terminator nor are they going to be anytime soon.

      But that's what they want you to think ;)

  • The Simpsons predicted everything. They even predicted that Trump would be President, murder hornets, and COVID-19 lockdowns.

  • So, in the wake of his widely publicised insights on submarines, which I will concede, Cameron is now an expert on AI? And everything else, I suppose? Does Joe Public need father figures like this (Musk is another example) to direct their opinions, despite their only voicing stuff that most people in the upper 50th percentile of intelligence could tell them?
  • Orwell predicted some elements of AI used in the service of dictatorship, notably smart TVs with cameras that cannot be turned off, and machines that wrote rubbishy pop songs for the "proles". Remarkable predictions for 1948, a time that predates modern computing.
  • Here's Cameron trying to blow smoke up his own ass by claiming that he was some kind of prophet. What a joke. Pretty much all sci-fi writing is dystopian and intended to be cautionary tales about the evils of mankind that the authors don't agree with.

  • So now we have to avoid any possible technology used in horror and disaster films and books? And why does he think the dangers of AI aren't acknowledged? There are a lot of smart people trying to make AI safe and effective. Of course there will be many problems, but the benefits clearly outweigh the dangers from my perspective.
  • People who use AI as a weapon are dangerous, really dangerous.
    We need to develop effective defenses.
