AI Television Sci-Fi

Black Mirror Creator Says He Used ChatGPT To Write An Episode. It Was Terrible.

Charlie Brooker, the showrunner of "Black Mirror," revealed in an interview that he used OpenAI's ChatGPT to write an episode for the show's sixth season but deemed the results "shit." Gizmodo reports: "I've toyed around with ChatGPT a bit. The first thing I did was type 'generate Black Mirror episode' and it comes up with something that, at first glance, reads plausibly, but on second glance, is shit," the dystopian sci-fi auteur told Empire. "Because all it's done is look up all the synopses of Black Mirror episodes, and sort of mush them together. Then if you dig a bit more deeply you go, 'Oh, there's not actually any real original thought here.' It's [1970s impressionist] Mike Yarwood -- there's a topical reference."

While his experiments with generating an episode of Black Mirror with AI might have been deemed a failure, Brooker told the outlet that it did point out some of his writing cliches. "I was aware that I had written lots of episodes where someone goes 'Oh, I was inside a computer the whole time!'," he said. "So I thought, 'I'm just going to chuck out any sense of what I think a Black Mirror episode is.' There's no point in having an anthology show if you can't break your own rules. Just a sort of nice, cold glass of water in the face."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • …the last couple seasons of the show felt like they were written by an AI.

  • Look, ChatGPT does word prediction. ChatGPT predicts the *most likely* next word. Always going with the most likely result limits creativity. I'd argue that creativity is about finding alternatives to the most likely. So, if you want to try to get ChatGPT to be creative then you need to ask it to produce multiple options for each response. Then you probably need to add something with your human creativity, then you ask it to combine its responses with yours. At some point you've done enough of the creative
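    The parent's point about "most likely next word" can be sketched with a toy temperature sampler. All the numbers here are hypothetical, and real LLMs sample over subword tokens rather than whole words; this just illustrates why always taking the top choice flattens out creativity:

```python
import math
import random

def sample_next_word(probs, temperature=1.0):
    """Pick the next word from a (hypothetical) probability table.

    temperature=0 -> greedy: always the most likely word.
    Higher temperatures flatten the distribution, so less
    likely (arguably more 'creative') words get picked more often.
    """
    words = list(probs)
    if temperature == 0:
        return max(words, key=probs.get)
    # Rescale log-probabilities by temperature, then renormalize.
    logits = [math.log(probs[w]) / temperature for w in words]
    peak = max(logits)
    weights = [math.exp(l - peak) for l in logits]
    total = sum(weights)
    return random.choices(words, [w / total for w in weights])[0]

# Hypothetical next-word distribution after "The ant destroyed the ..."
probs = {"colony": 0.6, "picnic": 0.3, "world": 0.1}

print(sample_next_word(probs, temperature=0))    # greedy: always "colony"
print(sample_next_word(probs, temperature=2.0))  # occasionally "world"
```

    Asking for multiple options, as the parent suggests, amounts to sampling several times at a nonzero temperature instead of taking the single greedy answer.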

  • LLM takes existing material, evaluates relationship between all words, all sentences, all paragraphs and so on. And then generates responses based on those relationships.

    Ergo, it cannot create something that wasn't already described in those relationships.

    • Can it draw a connection between two concepts that no hu-man ever thought of before?

      • by vadim_t ( 324782 )

        It's a statistical word generator. It doesn't think. But that doesn't mean it can't make random cool crap by pure accident.

        So to try something novel, I asked it to come up with a story in which an ant destroys the world. I googled, and there seems to be nothing of the sort out there; the results are mainly about destroying ant colonies. So I think the plot is reasonably novel, in that this particular idea might not exist (though random doomsday scenarios are certainly common).

        It spit out a plot for "

          • Sounds like a grey goo-style apocalypse tbh. Not necessarily saying that's bad, but kind of predictable.

          • by vadim_t ( 324782 )

            I mean yeah, it doesn't do magic. But it spit that out in about 10 seconds. That's faster than I can type, let alone plot. It's also interesting that I don't think there's a whole lot of doomsday ant scenarios out there, but this thing can just keep pumping them out anyway.

            Though the vast majority of plots are going to be unoriginal anyway. The trick is in having a good execution.

        • So to give a try at something novel, I asked it to come up with a story in which an ant destroys the world

          You haven't seen "Them! [wikipedia.org]" ?

          I seem to remember an ink-on-dead-tree story from the "Golden Age" where an irradiated termite becomes an even more voracious eater and literally brought civilisation down around people's ears. That would have been in the same time period as "Them!". And "Day of The Triffids". Disaster by animals was a 1950s theme.

          Wyndham also did "The Kraken Wakes [wikipedia.org]" in the same time period.

      • "Can it draw connections that humans haven't?"

        Yes and no. The model actually needs to be trained. You could theoretically 'teach' ChatGPT by giving it examples of what types of connections to look for and giving it text to analyze, but it is better to do some fine-tuning and build a model specifically for your purpose and the types of connections it should be looking for.

        It really depends on how novel of a connection you're hoping to find. Is it a connection that exists elsewhere already, and easy

      • Only based on statistical probability, as I understand it, which is about as uncreative as it gets because the most likely response is defined by training data, thus written in stone. You control the output with your prompt, but there's no creativity there. I see this flaw with all the AIs with which I interact. If you want creativity, use your own brain.
        • by Luckyo ( 1726890 )

          Actually, it's the input where the creativity lies. There are already people working on what is tentatively called "superinputs" or "superqueries", where instead of asking an LLM a simple question, you craft a complex, highly exclusive essay on what you want it to output, directing it towards the desired goal while eliminating potential points of error.

          I suspect this will be the new "big thing to learn".

      • It's probably not impossible to do that.

        But from the descriptions I've heard, that is not how ChatGPT works.

        • by Luckyo ( 1726890 )

          ChatGPT, as it has been explained, is about statistical connections between letters and words. As I understand it, it doesn't go to the sentence and paragraph level yet. That is in the future.

          I could be wrong and they're already there.

    • LLM takes existing material, evaluates relationship between all words, all sentences, all paragraphs and so on. And then generates responses based on those relationships.

      Ergo, it cannot create something that wasn't already described in those relationships.

      Except non-technical people have no concept of that at all. If you listen to the news stories about AI, you'll see there's a huge disconnect between what it actually is versus what the layperson thinks it is. To be fair, the latter is probably largely the fault of the AI evangelists who are simply interested in building enough hype so they can cash in big before people become aware of just how limited it is.

      • It is funny, isn't it, that the AI evangelists blew up right as the crypto scam blew its wad and entered a slump.
        • by caseih ( 160668 )

          The AI evangelists going before congress begging to be regulated is nothing more than asking the government to keep out their competitors and let them have a monopoly.

      • by Luckyo ( 1726890 )

        Has it ever occurred to you that many if not most of the current news stories... are written by LLMs?

        I've watched more than one podcast about news writers, ranging from random podcasts to this weird show on youtube where guy takes in people who are deep in debt, breaks down their life and their debt and makes them a plan on how to get out of debt.

        There have been news writers in all of those, and they're all saying the same thing. "I'm using AI to write stories now, and I'm outputting way more stories now, s

  • anthology series. How about the Heavy Metal magazine?

  • by Somervillain ( 4719341 ) on Friday June 09, 2023 @05:45PM (#63589866)
    Looking at the code generated, it's plausible at first glance as well and highly wrong at second. It's the worst of all worlds...looks right enough to fool you...yet fundamentally wrong and possibly dangerous. It's a pattern matcher that has no clue what it's doing. Its output is like those cheap knock-off toys at the dollar store. Resembles the real thing, but is far from it. It's a bunch of hype with no substance and I can't wait for this fad to pass.
    • Are you idealizing, just a teensy weensy bit, "the real thing"?

    • by hey! ( 33014 )

      I don't know. The thing isn't human; it doesn't understand or care about what it's doing, so it will respond to your prompt even if your prompt is ambiguous or just plain wrong. So even if it were *infallible*, you couldn't trust its output uncritically.

      I've only played around with its code generating capability, but a couple of things impressed me. One was its knowledge of obscure languages, libraries and frameworks and their conventions. Another is part of it generating plausibly correct looking

  • ... in writing and other arts anyway. It competes around the 60-90% level

    Brooker isn't just surreal, he's a genius surrealist.

    • And yet, ChatGPT managed to produce precisely what Brooker actually desired - a shitty episode to make him feel good about himself in comparison.
  • sounds like a Black Mirror episode.
    • Sounds more like IT Crowd to me.

      Assuming the Black Mirror episode was proposed by Douglas Renholm and the storyline involved a lot of boobies.

  • by esperto ( 3521901 ) on Friday June 09, 2023 @05:52PM (#63589898)

    In the last South Park season, the episode on ChatGPT was partly written with ChatGPT, and I have to say it was pretty awesome. But the key was likely that Trey didn't just type "write me a South Park episode", read it, and find the result a piece of shit; he used it as a tool and knew where to use what ChatGPT gave them and where not to.

    • There are 325 episodes of South Park to draw from, so if you feed it a couple pieces of current events (a meme and a big news topic) it should be able to build something out of the existing characters, and maybe even get some decent puns.
  • Any tool has to be used with the awareness of its strengths and limitations.

    So for instance I wouldn't expect it to generate a full plot for an episode of anything. What I think it's good for though is for very quickly getting somewhere in the ballpark, and to quickly throw a lot of stuff at the wall. So I'd expect it to quickly produce something that follows the general form of an episode. Then you take that and tweak it until it sounds sane. Then you can use it to quickly generate scenes or dialogue. Then

    • Can I tell it to start writing sketches and critiquing them on its own time using its own criteria until it develops its own tastes and stops listening to you?

    • It would work much better the opposite way, having a real person come up with the overarching plot outline and concepts, and then using the LLM to fill in the more rote aspect of dialogue and character interactions between major plot points.

      Of course, an expertly crafted story will manage to include subtly relevant points of interest even in what appear to be rote scenes. That's what the LLM would have a tough time replicating. But today's casual "content consumers" are rarely engaged/perceptive/intelligent

      • It really feels like a lot of the issues with generative AI stem from it giving largely uncreative people the feeling of creativity, without their actually understanding the purpose behind it.

        Much of the audience as you say are not particularly perceptive or picky when it comes to content. Part of that unfortunately stems from the content-driven algorithms that largely emphasize mediocrity over quality. Garbage in, garbage out. An audience built on mindlessly consuming mindless content is going to generate equally m

        • The clogging I'm more concerned about are the outright scams and spams. Labor cost for deploying those is going to go down dramatically. Generating stuff that "sounds good" is both the primary requirement of a scam, and the primary thing LLMs accomplish. You could have a bot building rapport with a mark for free, over the course of many months. A human might only need to check the chat log a couple of times, until it's time to swoop in for the finale.

          Loads of consumer-media dreck is something we already dea

            I see it from a much longer-term perspective. If the goal is truly to automate most if not all digital white-collar work, we could very well live to see the Dead Internet Theory in action. Everything, from scams to customer support, creative content and even server administration, will be bots with a smidgen of humans to periodically deploy and check on their bot nets.

            Social media sites like Meta/Facebook seem bent on eventually deploying generative users that can convincingly pose as human users. No ne

  • I haven't read the article, but maybe the author was using the wrong approach? I thought you had to tell the AI to write an episode with an adversary who is capable of defeating Data. Then you get something new and different.

  • I saw similar experiments by some YouTubers. After seeing the results, they don't think this is a threat to them.

  • They're all terrible.
  • Putting Black Mirror in such a short prompt is what led to the mash-up of Black Mirror plots as an output.

    A more clever prompt might be: write me a story about a near-future period when a current technology has advanced further. The technology seems helpful at first but has become a negative for people in its advanced form.

    As a second prompt, ask ChatGPT to rewrite the resulting story in the style of a Black Mirror episode.
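    That two-step approach can be sketched as plain prompt templates. The wiring to an actual chat API is deliberately left out, and `technology` is just a hypothetical example input:

```python
def build_prompts(technology):
    """Build the two prompts suggested above: first a generic
    near-future cautionary story, then a restyle pass.
    How you send them to a model is up to you."""
    step1 = (
        "Write me a story about a near-future period when "
        f"{technology} has advanced further. The technology seems "
        "helpful at first, but in its advanced form it has become "
        "a negative for people."
    )
    step2 = ("Now rewrite the resulting story in the style of a "
             "Black Mirror episode.")
    return step1, step2

first, second = build_prompts("smart-home assistants")
print(first)
print(second)
```

    The point of splitting it in two is that the first prompt never mentions the show, so the model can't just mash up existing episode synopses.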

  • by caseih ( 160668 ) on Friday June 09, 2023 @08:16PM (#63590174)

    Whether it's generating text or images, this has been my experience as well. At first glance it looks not too bad, even plausible, and for simple things quite good. But on closer inspection of more complicated endeavors, it's garbage. Especially the image creators. The model has no concept of the underlying principles behind something, so it can only mash stuff together that sounds or looks vaguely similar in a statistical fashion, with no regard to physical principles or any sort of rules. Every Bing image I've created has something majorly wrong with it as far as plausibility goes.

    Generating computer programs is somewhat better (more patterns, I guess), but again the LLM has no concept of the logic behind constructions, and no intelligence. A useful tool for curating some examples to learn from, though. Maybe just as good as your outsourced coder at boilerplate code.

  • Comment removed based on user account deletion
    • I'd be impressed if it were 3-4 levels deep... a Black Mirror episode about an AI being asked to write an episode of a popular TV show which is about an AI being asked to write an episode of an unpopular TV show about an AI being asked to write an episode of a semi-popular TV show... and each AI is represented by a turtle standing on its output.

  • A writer who has not studied the show personally will give equally shit results with the same instructions.

    I tell my shovel to dig all the time and it just sits there.

    If you don't spend time learning how to use a tool, you might as well not use it at all. Don't blame the tool for your inadequacy though.

    That said with OpenAI's aggressive restrictions and biases it would actually be nearly impossible to write a good episode.

  • it comes up with something that, at first glance, reads plausibly, but on second glance, is shit,

    But we're talking about an entertainment programme (I think - I recognise Brooker's name from some other bits of entertainment "news" which were broadcast within my zone of attention), so the overwhelming majority of the audience aren't going to give it a "second glance". Certainly not enough attention to spot the plot holes, logical fallacies, non sequiturs, etc. It's not like it's a documentary.

  • >> 'Oh, there's not actually any real original thought here.'

    ChatGPT is like a "wisdom of crowds" version of things. It can rearrange concepts into unique systems based on how the different concepts relate, but it's not going to come up with a completely new concept. It is probably going to do a better job of providing filler than something like a studio executive, but that's mostly because it has no sense of avoidance of risk.

  • LLMs only do slightly randomized next-word prediction in extending a line of text. They are limited to the training data set used. This isn't a flaw; it is what they are made to do. The problem arises when people do not understand the limitations this necessarily brings.

    This situation with Black Mirror highlights the lack of creativity that comes with that. It cannot originate new plot ideas.

    This is the other side of the coin to that other, more recently famous phenomenon of "hallucinations", which are wh

  • ChatGPT and its rivals are the latest incarnations of 50- to 60-year-old programs like ELIZA (1964-66) and PARRY (1972). Then in the 1990s came Andrew C. Bulhak's Postmodernism Generator, built on the Dada Engine, a system for generating random text from recursive grammars, and Jason Hutchens' MegaHAL, which used (in part) hidden Markov models to produce often lucid text, and, of course, others.

    ChatGPT has the advantage of a huge corpus of text in its training data, tools for decoding natural language & (probably
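    For comparison, the core of a MegaHAL-style generator can be sketched as a toy word-level Markov chain. The real MegaHAL also scored candidate replies against keywords; this is just the chain part, with a made-up corpus:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each n-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12, seed=0):
    """Random-walk the chain from a random starting prefix."""
    random.seed(seed)
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        options = chain.get(tuple(out[-len(state):]))
        if not options:
            break  # dead end: this prefix only appeared at the corpus end
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("the ant destroyed the world and the ant destroyed "
          "the colony and the world watched the ant")
print(generate(build_chain(corpus)))
```

    Output from a chain like this is locally fluent but has no plan beyond the last couple of words, which is exactly the "lucid but empty" quality the parent describes.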
