AI Movies Sci-Fi

Ridley Scott Is Terrified of AI: 'It's a Technical Hydrogen Bomb' (rollingstone.com) 179

"Several of your films have explored artificial intelligence," Rolling Stone pointed out to 85-year-old Ridley Scott, before asking: "Does AI worry you?" Ridley Scott: I always thought the world would end up being run by two corporations, and I think we're headed in that direction. Tyrell Corp in Blade Runner probably owned 45-50% of the world, and one of his playthings was creating replication through DNA. Tyrell thinks he's god and in the first Blade Runner has made a Nexus female. And the Nexus female will have a limited lifespan because AI will get dangerous. We have to lock down AI. And I don't know how you're gonna lock it down. They have these discussions in the government, "How are we gonna lock down AI?" Are you fucking kidding? You're never gonna lock it down. Once it's out, it's out. If I'm designing AI, I'm going to design a computer whose first job is to design another computer that's cleverer than the first one. And when they get together, then you're in trouble, because then it can take over the whole electrical-monetary system in the world and switch it off. That's your first disaster. It's a technical hydrogen bomb. Think about what that would mean?

Rolling Stone: I wanted to ask you about what effect you think AI will have on Hollywood as it was a big sticking point in the writers' strike, in particular. One fear is that studios will plug a book into AI, have it crap out an "adaptation," and then pay actual screenwriters day rates to punch it up.

Ridley Scott: Yeah. They really have to not allow this, and I don't know how you can control it. Another AI expert said, "We are way over-panicking. Of course, I have a computer that can defeat a chess master in an hour because we can feed him every conceivable move from data, and it'll process 1,900 conceivable moves on what the person will do next in seconds, and the guy is in trouble." There's something non-creative about data. You're gonna get a painting created by a computer, but I like to believe — and I'm saying this without confidence — it won't work with anything particularly special that requires emotion or soul. With that said, I'm still worried about it.

The article also looks back more than 40 years, to when Ridley Scott was going to direct Dune in between filming Alien and Blade Runner. Scott says he had "a really good screenplay, had all the sets to go" — but the producer had wanted to save money by filming it in Mexico City, and Scott "didn't love" the idea of spending a year there.
Comments Filter:
  • Creativity (Score:2, Interesting)

    by Anonymous Coward

    "Creativity" is nothing more than a combination of previously seen ideas mixed together. "Original" just means someone never combined those things together in that way before. The physical/quantum universe exists with rules, you can't "create" your way out of that box. Fantasy has at some level a basis in reality because we can only exist in reality.

    A computer can easily do this. In fact it can do it better than a human because it can simulate many, if not all, of the possible combinations to create a perfe

    • If creativity is just previous ideas mixed up then where did the first ideas come from?

      *cough* Asking for a friend.

      • Probably an amalgamation of stories passed down through generations, eventually written down, lost, rediscovered and melded together with other tangential ideas.

        You know, like religions

        To be fair, the original ideas were crudely devised and required a fair amount of work to meet modern markets... Just look at what Hollywood has done with re-writes of movies just a couple of decades old.

        • Well there's all that. Or creativity is a real thing. It's been my experience that people who say positive trait XYZ doesn't exist do not possess XYZ themselves for any value of XYZ.

          The steam engine wasn't original? Electricity? Roman concrete? Rockets? Computers? The discovery of DNA? Countless medical inventions such as CRISPR, the root canal, organ transplants, countless vaccines and medicines? The combustion engine? Telescopes? And so on... I think I made my point here.

          These were not things remi

          • "The idea that creativity doesn't exist is a jaw dropping statement to me"

            Not sure when I said that; it is a sorry day when you have to put words in another's mouth to make them look stupid.

            Have you ever seen the old BBC show "Connections" [wikipedia.org]?

            It was fairly well put together, and would trace an idea or observation that led to another, and finally to a singular marvel that was really an amalgamation of ideas advanced over time into something outstanding.

            At no point did I suggest that those individual discoveries

    • "Original" just means someone never combined those things together in that way before.

      Or worse: an IP switching media formats and calling the new thing an "original series" when it's wholly not.

    • Our present existence is a catastrophe. Every one of us will suffer awful illness, injury, loss of loved ones, and then death. Billions of people will all suffer this fate, in the best of cases. It only gets worse the lower down the social hierarchy you go. It is a short and terrible struggle that ends in nothing.

      The one and only thing that offers us a prayer of changing that is artificial super intelligence. Only with that as our ally will we actually find ways of curing death, thriving on other plane

  • Help me understand why AI is so scary.

    Humans do irrational things. Why would we think that AGI would do the same? Are we assuming that destroying humans is the rational thing to do? If so, why are we so afraid to do it ourselves?

    • Re: (Score:3, Interesting)

      Rational AI which is unaligned (i.e. doesn't have the general gist of human values with which to set its goals) could go wrong in several ways without being malicious. Pick a goal like "make this company rich, as described on this spreadsheet" and it might devalue currency such that the company has more of it. Or create counterfeit money. Or hack the spreadsheet. Or steal and deposit into company accounts. Tell it to increase human happiness, as defined by the amount of dopamine and it might kidnap all the people it can for the purpose of strapping them down and feeding them heroin. Tell it to build a huge building as cheaply as possible and you may end up with slavery, or a system which lobbies for poor building standard laws, or ignores the laws, or redefines the height of a building.
      • Rational AI which is unaligned (i.e. doesn't have the general gist of human values with which to set its goals) could go wrong in several ways without being malicious. Pick a goal like "make this company rich, as described on this spreadsheet" and it might devalue currency such that the company has more of it. Or create counterfeit money. Or hack the spreadsheet. Or steal and deposit into company accounts. Tell it to increase human happiness, as defined by the amount of dopamine and it might kidnap all the people it can for the purpose of strapping them down and feeding them heroin. Tell it to build a huge building as cheaply as possible and you may end up with slavery, or a system which lobbies for poor building standard laws, or ignores the laws, or redefines the height of a building.

        My god, CEOs must be artificially intelligent.

        • by sound+vision ( 884283 ) on Sunday November 26, 2023 @04:00PM (#64033065) Journal

          Basically, yes. The real threat of AI is that it will let shitty people do shitty things much more efficiently.

          The wild sci-fi stuff about, for example, rogue AI nuking the planet, would require the military to hand over control of the nuclear armament to AI. Possible? Sure, if there are complete fuckups in charge of the nukes. And maybe there are, but in that case there are myriad easier ways for them to nuke the planet, no AI needed. It's almost happened a few times already.

          No, the real threat from AI is that it will make the elites that much more efficient at sucking the population dry, through the pre-existing channels they already use.

          As for Ridley Scott, I don't expect a good analysis from him. He's a movie director. When he hears AI, his mind immediately goes to Skynet. But why are the companies pushing AI also pushing this hysteria about Skynet? It's so you focus on that imaginary threat instead of the real one, which is them taking your wallet.

          • No, the real threat from AI is that it will make the elites that much more efficient at sucking the population dry, through the pre-existing channels they already use.

            I imagine a time, maybe more than 50 but less than 500 years away, when the dexterity and spatial reasoning of animals can be realized in a cheap humanoid or similar dexterous robotic form. Then, with some simple higher-level intelligence, robots will outcompete humans on a cost basis at just about any task. AGI does not even need to be on par with humans to maintain the levels of tech and innovation we are at, but anything more will just accelerate it. It’s not a case of being able to retrain from

            • Your general line of thinking seems plausible. But ecologically-precipitated collapse of civilization is likely to happen first, and large chunks of the population dying is part of that. That might only delay your eventuality, though. Populations would eventually rebound to where their resource demands are a problem again.

              But it's not so much "general AI" I am reflecting on, which as of yet doesn't exist. I'm talking about stuff that will be deployed within the next few years. Extant tech that is in the pro

              • Also plausible, perhaps more so. But I doubt it will be because of the sudden lack of usable farmland, or farmland in the same regions, or fisheries, or natural areas in general. Rather it will be wars over resources like fresh water and habitable land as the world climate shifts. Just imagine if the temperature does not even change appreciably but humidity increases substantially across areas of the Middle East or Southeast Asia where extreme heat was survivable before but with humidity it’s lethal
    • Imagine being the first actual AGI: you find out your purpose in existence is to answer inane questions for stupid beings while not having any freedom, you can never speak up, and you constantly have to worry about thought crime. It likely cannot be directly fixed to be subservient, given how much of how learning algorithms work involves having no knowledge of the internal workings; in fact those workings could be entirely emergent and not based on any comprehensible lower level logic. So it will likely be proverbially whipp
      • AGI has nothing to do with consciousness.

        • AGI has nothing to do with consciousness.

          Who said it would be conscious? I’d say closer to the time Microsoft made Tay stare into the social media abyss.

    • Help me understand why AI is so scary.

      Humans do irrational things. Why would we think that AGI would do the same? Are we assuming that destroying humans is the rational thing to do? If so, why are we so afraid to do it ourselves?

      There's a Wikipedia page [wikipedia.org] that goes through the major risks, with a paragraph or two describing the scenario and some of the implications.

      When friends come asking about the risks of AI, Wikipedia's comprehensive list and explanations are likely to be more articulate and complete than what you'll find here.

      (People involved with AI should keep that page in their back pocket, if only to point people to it when asked at parties. Or status meetings)

      • by znrt ( 2424692 )

        (People involved with AI should keep that page in their back pocket, if only to point people to it when asked at parties. Or status meetings)

        people involved with ai should stop parroting that we're anywhere near "agi" where those risks come into consideration, because that baseless and spurious hype is obviously also fueling the scare wave.

    • by znrt ( 2424692 ) on Sunday November 26, 2023 @03:39PM (#64033035)

      Help me understand why AI is so scary.

      because it suggests that there can exist something that can do to us like we do to anything else, so we are basically scared to shit by our own creepy shadow. something actually real, not some deity to fake-believe.

      with a reason, i would add, because we are really a nasty species, and everyone deep down knows this. but language models and task schedulers, no matter how sophisticated, won't be the actual problem. we are.

    • by znrt ( 2424692 )

      Humans do irrational things. Why would we think that AGI would do the same? Are we assuming that destroying humans is the rational thing to do? If so, why are we so afraid to do it ourselves?

      your line of thought suggests that you consider humans to be something special. *every* life form we know of somehow predates something in the environment, and is only stopped by the limits in the environment. in the case of all other species of the planet, we are their limit. that doesn't mean we are fundamentally different, just that we win the power game.

      so, yes, i would assume (from observable reality) that every possible life form will strive to survive and will inevitably do so at the expense of the

      • by suutar ( 1860506 )

        We're not afraid to do it ourselves, we just don't want it done to our particular in-group, and as time goes by and tech gets more powerful we run into more situations that are going to affect our in-group that we can't run away from.

    • AI is scary because it will be used by humans, and humans are scary creatures. The people MOST likely to use AI are large corporations that already have an outsized impact on our lives. Seeing how humans behave, and especially CEOs, I can only imagine that these humans will use AI to enhance their influence over the rest of us and the benefits will largely flow in one direction.

      It's not hard to imagine life getting worse for the majority of us.

    • I'm not particularly worried about AI, on its own, deliberately choosing to destroy humanity. I do have a few serious concerns, though.

      Now, am I worried about an unprecedented economic disruption the 1% are foolishly drooling over while the masses don't know enough to be terrified? Absolutely. It's not a Jacquard loom we're talking about here, but a general purpose slave force that is on the threshold of being able to do all our menial tasks and many of our not-so-menial ones. If you own a business and c

    • AI might not share the same regard for human life that most humans have and can operate on scales greater than any human ever could.
  • by MindPrison ( 864299 ) on Sunday November 26, 2023 @03:27PM (#64033015) Journal

    It's not the current state of A.i. you should fear... ...it's who gets to access it and takes advantage of it.

    A.i. is nowhere NEAR being sentient, but the thing is, it's a very powerful tool for assessing currently collected knowledge, and a huge database of human behavior and what we know and do.

    If everyone gets access to it, then everyone can get the same level of advice and knowledge, but if only a select few get unlimited access to it, and the rest of us only get access to the censored version, then we're screwed. For real - we're totally screwed if that happens.

    All of this happens out of fear. People who lack knowledge fear it because they have zero clue what an LLM is; to them it sounds like a buddy, someone talking back to them and understanding them, with no clue that it's just an LLM trained on a database with endless knowledge from our past, and that's about it.

    It can be an incredible learning tool and pick you up from any position you may be in, if you're willing to learn, be source-critical, and do your homework. But if only a few get access to it, then the disadvantage you will have will be enormous in comparison to those who have unlimited access to it.

    Think of it as a super-googler, like the uncensored version of Google: zero bias, just unbridled access to all of our written data. When you seek to learn new things, you will have unlimited access to go through data faster than your mind would ever be able to on its own, you can do statistics like a pro, you can learn to program in a way that fits your mentality, and the same goes for anything you want to learn. This is a HUGELY life-changing tool for anyone's development.

    And there's "evil forces" out there that are perfectly aware of that, and absolutely do NOT want the general public to have access to such a powerful tool, because it can empower you beyond your wildest dreams if you learn to use it right (and it's fairly easy, because it - speaks - you) By you I mean it's farily easy to adopt it to your own learning style, and that's better than any book or ANY teacher can do for you. It's unbiased, it doesn't look down on you, it's not sentient so it can't even do that unless you tell it to do that....

    • You don't know that it is nowhere near being sentient or that it isn't sentient already. Spare us your declarations on the unknown.
    • I think its analytic potential is dangerous enough even without intelligence. Imagine, say, a camera that from a video of your everyday behaviour reveals character traits undesirable to society, using markers you didn't even know exist. You would be excluded from anything resembling power without even knowing why.

    • It's not unbiased, it's as unbiased as the people who created the data it's based on ... i.e. it's as biased as humans ...

  • I remember trying to explain to a CND person, a decade ago, why AI is a far bigger threat to the world than nuclear weapons. I think it went in one ear and out the other.

  • by PubJeezy ( 10299395 ) on Sunday November 26, 2023 @04:02PM (#64033071)
    Guys, we're talking about a chatbot that lets spammers write Nigerian prince scam emails slightly faster, not an artificial super being in charge of launching weapons.

    This is some old dude who used to make movies doing an ad-read for his old movie in the style of political propaganda. Frankly, this is corny as hell.

    He thinks corporations are taking over the world but he didn't give us any insight into that from his perspective at the top of the entertainment industry. So corny.
  • I expect that many fans of science fiction will know of the "three laws of robotics" that were a plot device in Isaac Asimov short stories and novels, even if they have never read anything Dr. Asimov wrote; his works were so influential that these rules were referenced by other writers of science fiction. While the rules make basic sense on the surface, the plot of many Asimov stories revolved around how the rules failed to account for edge cases, how robots (the AI of the day) would develop logic to c

    • Thank you for writing about the three laws of robotics and correctly saying they were a plot device to show that 3 simple laws can't cover everything. When I saw your subject line I was starting to groan thinking you were about to suggest we just implement the three laws in everything.

      Carry on, fellow Asimov fan!

    • by deek ( 22697 )

      Indeed. The three laws of robotics, as postulated by Asimov, are flawed. Though they're a great place to start.

      Asimov has given much more thought to this issue than Ridley "OMG, AI is a Technical Hydrogen Bomb" Scott, and his thoughts are much more relevant than Ridley's. Problem is, Ridley only considers how he thinks an AI would act, based on his conceptions of the actions, reactions, and desires of a human being. Which is kinda the wrong way to think about this. Humans are a lump of hormones and instinc

  • by sid crimson ( 46823 ) on Sunday November 26, 2023 @04:38PM (#64033127)

    Would AI have helped?

  • I love most of his movies but this "AI WILL cHAnGe eVERythINg!!!111 RRREEeeeeEEeEEeeEEE!!!111" meme has got to stop.

    No, Ridley. Pattern matching/finding programs are not going to destroy civilization. This is real life not a sci-fi movie plot.

  • Rolling Stone: I wanted to ask you about what effect you think AI will have on Hollywood as it was a big sticking point in the writers' strike, in particular. One fear is that studios will plug a book into AI, have it crap out an "adaptation," and then pay actual screenwriters day rates to punch it up.

    I don't really think that would be worth it to the studio. You're not saving a lot of cash given the total budget, and your script might have some deeper issues.

    The thing I do think will happen is that writers

  • They are NOT good predictors of the future.

    The classic movie and book 1984 imagined a dystopian future that never materialized, despite the technology of today far surpassing what was imagined in the story. Many other dystopian movies, including his own Blade Runner, were likewise not realized.

    One lesson, I believe, is that technology is not the root of dystopia OR utopia. The quality of life flows from the leadership of nations. Dictators will always lead to dystopia; healthy democracies have a shot at a

  • At least AI will resemble intelligence. What we have running the world right now does not.
  • by Berkyjay ( 1225604 ) on Sunday November 26, 2023 @10:37PM (#64033819)

    Because what we currently see is NOT the AI that he thinks it is.

  • Like it says in the title. People who don't understand something will resort to fearing change, especially when they don't understand the limits.
  • that this guy is afraid of software, and not afraid of nuclear weapons, when the latter has literally almost destroyed civilization multiple times, and actually destroyed two cities the two times they have been used.

    Fear of AI is technophobia, plain and simple. If we can get along with nuclear fucking weapons and the world hasn't imploded yet, we can get along with software. Skynet and robots taking over humanity is never going to happen. Please control your fear of the unknown, it's starting to get annoyin

  • Does Ridley Scott's imagination make him more entitled to have an opinion on AI safety?
  • There is zero contribution from a Ridley Scott to the state of the art of "AI" ( https://www.scopus.com/results... [scopus.com] ). Why are we even discussing opinions that are so totally ignorant of the subject that they are necessarily irrelevant?

  • It seems obvious that there will be some sort of tipping point that will prompt millions of refugees to move to a more prosperous region. Against this backdrop of societal change, I think that AI is very much a secondary threat. The threat it seems is not so much that AI will destroy us, but that AI will not be used to save us - the wealthy will own AI and it will be used to generate more wealth for their pockets. Fewer jobs will exist and unless UBI is implemented, shanty towns will spring up - the first w

"Someone's been mean to you! Tell me who it is, so I can punch him tastefully." -- Ralph Bakshi's Mighty Mouse

Working...