
How Stanford Engineers Created a Fictitious Compression For HBO

Tekla Perry (3034735) writes: Professor Tsachy Weissman and Ph.D. student Vinith Misra came up with (almost) believable compression algorithms for HBO's Silicon Valley. Some constraints: they had to seem plausible, look good when illustrated on a whiteboard, and work with the punchline, "middle out." Next season the engineers may encourage the producers to tackle the challenge of local decodability.
  • by hax4bux ( 209237 ) on Saturday July 26, 2014 @03:04AM (#47537323)

    Now they can admit it.

  • Meh (Score:4, Insightful)

    by ShakaUVM ( 157947 ) on Saturday July 26, 2014 @03:08AM (#47537333) Homepage Journal

    Anyone who knows anything about compression knows that universal lossless compression (compression that always shrinks its input) is impossible, because if such an algorithm existed, you could run it repeatedly on a data source until you were down to a single bit. And uncompressing a single bit that could be literally anything is problematic.

    I sort of wish they'd picked some other sort of woo.
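
    A quick way to see the "can't keep shrinking forever" point empirically: a minimal sketch using Python's zlib, with a block of repetitive text as a stand-in for compressible data. The first pass shrinks it drastically; every pass after that makes it grow.

      import zlib

      # Repeatedly "compress" the same data and watch the size bottom out,
      # then start growing again once the input is already high-entropy.
      data = b"All work and no play makes Jack a dull boy. " * 1000
      for i in range(6):
          data = zlib.compress(data, 9)
          print("after pass", i + 1, ":", len(data), "bytes")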

    • by Carewolf ( 581105 ) on Saturday July 26, 2014 @03:35AM (#47537381) Homepage

      I don't think they mean universal that way. I believe they mean universal lossless compression like gzip, bzip2 or 7zip: it will work on almost any data, but not on all kinds of data. The idea here is that the show has a new way to do this that is supposed to be even better. The method they use reminds me of FLAC, though.

      • by Anonymous Coward

        Some 20 years ago, when there were several choices of compression software, I remember running some tests and finding that a utility called "Winace" was the only compression utility that produced a smaller compressed file than the original when compressing .avi, .mov, and .jpg files.

        All the rest produced larger "compressed" files than the original.
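
        The "larger than the original" result is what DEFLATE-style tools typically do on already-compressed input. A minimal sketch, with random bytes standing in for the contents of a .jpg or .avi:

          import os
          import zlib

          text = b"the quick brown fox jumps over the lazy dog " * 500
          noise = os.urandom(len(text))   # stand-in for already-compressed file contents

          # Plain text shrinks a lot; the random blob grows by the small zlib overhead.
          for name, blob in (("text", text), ("random", noise)):
              print(name, ":", len(blob), "->", len(zlib.compress(blob, 9)), "bytes")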

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        I haven't seen the show, but I have experience in dinking around with lossless compression, and suffice it to say, the problem would be solved if time travel existed, because then we could compress data that doesn't yet exist.

        Basically, to do lossless compression, you have to process the data sequentially. You can't compress the chunk of data the stream will reach 10 seconds from now on another core, because the precursor symbols do not yet exist. Now there is one way around this, but it's even more horribly inefficient, and that is by

      • The method they use reminds me of FLAC, though.

        FLAC is actually in the first episode for a few seconds; it was the baseline they were comparing against.

      • Exactly. The first step in ANY compression algorithm is:

        Know Thy Data

        Your mention of FLAC is a perfect example.
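
        A minimal sketch of the "know thy data" point, in the FLAC spirit: model a smooth signal with a trivial predictor (plain delta coding here, far cruder than FLAC's linear prediction) and compress the residuals instead of the raw samples. zlib stands in for the entropy coder; on a pure tone the residuals typically compress noticeably better.

          import math
          import struct
          import zlib

          rate = 44100
          # One second of a 440 Hz tone as 16-bit samples.
          samples = [int(20000 * math.sin(2 * math.pi * 440 * t / rate)) for t in range(rate)]
          # Trivial predictor: each sample predicted by the previous one.
          deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

          raw = struct.pack(f"{len(samples)}h", *samples)   # raw 16-bit samples
          res = struct.pack(f"{len(deltas)}h", *deltas)     # residuals: much smaller values

          print("raw samples :", len(zlib.compress(raw, 9)), "bytes")
          print("residuals   :", len(zlib.compress(res, 9)), "bytes")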

    • Re:Meh (Score:5, Funny)

      by hankwang ( 413283 ) on Saturday July 26, 2014 @03:56AM (#47537411) Homepage

      "you could run it repeatedly on a data source until you were down to a single bit."

      That's why you need two distinct compression algorithms. Sometimes one will work better, sometimes the other. While repeatedly compressing, don't forget to write down the sequence in which you need to apply the decompression. I believe this can compress arbitrary data down to zero bits, if you are patient enough.

      • Re:Meh (Score:4, Funny)

        by serviscope_minor ( 664417 ) on Saturday July 26, 2014 @07:29AM (#47537809) Journal

        While repeatedly compressing, don't forget to write down in which sequence you need to apply the decompression.

        Pretty much. I've found that I can do this. Essentially, for N bits of input, I've got a large family (2^N) of compression algorithms. I pick the best one and write down its number. The resulting data is 0 bits long, but there's a little metadata to store.
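
        The arithmetic behind the joke, as a tiny sketch: naming one member of a family of 2^N "algorithms" takes exactly the N bits you claimed to have compressed away.

          N = 64                                        # bits in the original message
          family_size = 2 ** N                          # one "algorithm" per possible message
          index_bits = (family_size - 1).bit_length()   # metadata needed to name your pick
          print(index_bits == N)                        # True: the metadata *is* the message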

    • Meh. You only need the basic rules of physics to compute the universe from scratch, including all possible movies.

    • by WoOS ( 28173 )

      > if such an algorithm existed, you could run it repeatedly on a data source until you were down to a single bit.

      Ah, but you are not describing universal lossless compression; you are describing universal lossless compression with a guaranteed compression ratio of better than 1:1.
      That indeed isn't possible, but I can't see it claimed in TFA.

    • Re:Meh (Score:4, Interesting)

      by TeknoHog ( 164938 ) on Saturday July 26, 2014 @07:49AM (#47537859) Homepage Journal
      Or if you're into math, you invoke the pigeonhole principle [wikipedia.org].
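
      A minimal sketch of that counting argument: there are more n-bit inputs than there are outputs shorter than n bits, so no lossless compressor can shrink every input.

        n = 16
        inputs = 2 ** n                          # number of distinct n-bit inputs
        shorter = sum(2 ** k for k in range(n))  # distinct outputs shorter than n bits
        print(inputs, ">", shorter)              # 65536 > 65535: two inputs must collide
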
      • Re:Meh (Score:5, Insightful)

        by pla ( 258480 ) on Saturday July 26, 2014 @08:23AM (#47537917) Journal
        Or if you're into math, you invoke the pigeonhole principle

        Though technically true, in fairness we need to differentiate between meaningful data and noise. Yes, a universal compressor doesn't care. Human users of compression algorithms, for the most part, do care.

        So the limit of useful compression (Shannon aside) comes down to how well we can model the data. As a simple example, I can give you two 64-bit floats as parameters to a quadratic iterator, and you can fill your latest 6TB HDD with conventionally "incompressible" data as the output. If, however, you know the right model, you can recreate that data with a mere 16 bytes of input. Now extend that to more complex functions - our entire understanding of "random" means nothing more than "more complex than we know how to model". As another example, take the delay between decays in a sample of radioactive material - we currently consider that "random", but someday we may discover that god doesn't play dice with the universe, and an entirely deterministic process underlies every blip on the ol' Geiger counter.


        So while I agree with you technically, for the purposes of a TV show? Lighten up. :)
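
        A minimal sketch of the quadratic-iterator example (a logistic map in its chaotic regime; the parameter values are arbitrary, and bit-exact regeneration assumes identical floating-point behavior on both ends). The point is the 16-byte model versus roughly 100 kB of output that zlib can barely dent:

          import struct
          import zlib

          r, x = 3.99, 0.5123456789              # the entire "source": two 64-bit floats
          print("model :", len(struct.pack("dd", r, x)), "bytes")

          out = bytearray()
          for _ in range(100_000):
              x = r * x * (1.0 - x)              # quadratic iterator, chaotic regime
              out.append(int(x * 256) % 256)     # one hard-to-compress byte per step

          print("output:", len(out), "bytes")
          print("zlib'd:", len(zlib.compress(bytes(out), 9)), "bytes")
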
        • So the limit of useful compression (Shannon aside) comes down to how well we can model the data. [...] Our entire understanding of "random" means nothing more than "more complex than we know how to model".

          IOW, Kolmogorov complexity [wikipedia.org]. For example, tracker and MIDI files are a great way to "compress" music, as they contain the actual notation/composition rather than the resulting sound. Of course, that doesn't account for all the redundancy in instruments/samples.

          So while I agree with you technically, for the purposes of a TV show? Lighten up. :)

          IMHO, half the fun of such TV shows is exactly in discussions like this -- what it got right, where it went wrong, how could we use the ideas in some real-world innovation. I find that deeper understanding only makes me enjoy things more, not less, an
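
          A toy version of the tracker/MIDI point (the note list and rendering parameters are invented for illustration): the "score" is a handful of bytes, while the audio it describes is not.

            import math
            import struct

            score = [(440, 0.5), (494, 0.5), (523, 1.0)]   # (frequency in Hz, duration in s)
            rate = 44100

            # Render the score as 16-bit mono PCM, one sine tone per note.
            pcm = b"".join(
                struct.pack("h", int(20000 * math.sin(2 * math.pi * f * n / rate)))
                for f, d in score
                for n in range(int(rate * d))
            )

            print("score   :", len(repr(score)), "bytes written out as text")
            print("rendered:", len(pcm), "bytes of 16-bit mono PCM")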

        • by Nemyst ( 1383049 )
          If we "someday" discover that radioactive decay is not inherently random and unpredictable to an atomic level, it'd mean we suddenly contradict a hundred years of scientific research, models and theories. While not impossible, your post implies that there's a model and we just don't know it; the truth is that it's extremely unlikely to be the case.
        • by AmiMoJo ( 196126 ) *

          16 bytes plus the model.

        • Your definition of "random" and your understanding of quantum mechanics aren't quite right, although the rest of your post was quite interesting. A thought experiment known as EPR (Einstein-Podolsky-Rosen), followed up by the Stern-Gerlach apparatus and Bell's theorem, actually proved that certain attributes of particles in no way persist between measurements, and that they "choose" outcomes based on a mechanism that is not only unknown, but proven by experiment to be non-describable by deterministic theorie
    • Anyone who knows anything about compression knows that universal lossless compression (compression that always shrinks its input) is impossible, because if such an algorithm existed, you could run it repeatedly on a data source until you were down to a single bit. And uncompressing a single bit that could be literally anything is problematic.

      Actually, anyone who knows anything about compression knows that universal lossless compression is always possible to do. Sometimes you get good compression, sometimes you get shitty compression, and sometimes you get 0% compression. Note that 0% compression is still compression — it's just a degenerate case.

      You are right, of course, that you can't repeatedly compress something down to a single bit — there is always a point of diminishing returns. But just because you can't compress something do
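
      The pigeonhole argument above does mean some inputs must grow a little, but the growth can be capped at a single flag byte, which is roughly what gzip's "stored" blocks do. A minimal sketch (pack/unpack are illustrative names, not a real API):

        import os
        import zlib

        def pack(data):
            compressed = zlib.compress(data, 9)
            if len(compressed) < len(data):
                return b"\x01" + compressed    # flag 1: deflated payload
            return b"\x00" + data              # flag 0: stored verbatim

        def unpack(blob):
            return zlib.decompress(blob[1:]) if blob[:1] == b"\x01" else blob[1:]

        # Worst case: one byte of overhead; best case: real compression.
        for blob in (b"a" * 10_000, os.urandom(10_000)):
            assert unpack(pack(blob)) == blob
            print(len(blob), "->", len(pack(blob)), "bytes")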

    • http://en.wikipedia.org/wiki/S... [wikipedia.org] "In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the limits to possible data compression, and the operational meaning of the Shannon entropy."
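
      For reference, the bound the theorem gives (a standard textbook statement, paraphrased rather than quoted from TFA): the source entropy is a hard floor on the average bits per symbol of any uniquely decodable lossless code, and an optimal code gets within one bit of it.

        H(X) = -\sum_i p(x_i) \log_2 p(x_i), \qquad  H(X) \le \bar{L}_{\mathrm{opt}} < H(X) + 1
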
    • 1

      Apparently the /. crapfilter doesn't like my compressed comment.

  • Ask CSI. "GUI interface using Visual Basic to track the killer" --> http://www.youtube.com/watch?v... [youtube.com] Seriously, the average Joe who watches TV doesn't care about what algorithm is used, let alone know what an algorithm is.
    • I disagree. We should not be dumbing down the GUIs just because some Average Joe does not get extra enjoyment from seeing a realistic one. I greatly appreciated the whiteboards showing some believable discrete cosine mathematics in Silicon Valley.
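
      For the curious, a minimal sketch of the kind of math those whiteboards show: an (unnormalized) DCT-II of a smooth 8-pixel block concentrates almost all of the signal in the DC and lowest few coefficients, which is exactly the property transform coders exploit. The dct2 helper is just for illustration.

        import math

        def dct2(block):
            """Unnormalized DCT-II of a 1-D block."""
            n = len(block)
            return [
                sum(x * math.cos(math.pi * (i + 0.5) * k / n) for i, x in enumerate(block))
                for k in range(n)
            ]

        block = [100, 102, 104, 107, 110, 112, 113, 115]   # a smooth run of pixel values
        print([round(c, 1) for c in dct2(block)])          # big DC term, fast-decaying rest
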
  • Now let's just hope that no aliens listening in on our broadcasts and no far-future humans actually believe this and try to recreate the "groundbreaking compression algorithm" the whiz humans of the digital age came up with.

  • He came up with the idea of using lossy compression techniques to compress the original file, then calculating the difference between the approximate reconstruction and the original file and compressing that data; by combining the two pieces, you have a lossless compressor.
    This type of approach can work, Misra said, but, in most cases, would not be more efficient than a standard lossless compression algorithm, because coding the error usually isn’t any easier than coding the data.

    Well, this is almost how MPEG movie compression works - and it really does work! MPEG describes each picture partly in terms of the previous one using motion vectors. These vectors describe how the next picture will look based on movements of smallish macroblocks of the previous picture. Now, if that were the only element of the algorithm, movies would look kind of strange (like paper-doll characters being moved about)! So the secret to making it work is to send extra information allowing the client to calc
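
    A minimal sketch of the lossy-prediction-plus-residual idea from TFA (and, loosely, of what motion-compensated codecs do): a coarse approximation plus an exactly coded residual reconstructs the original bit for bit. A toy 1-D signal stands in for macroblocks and zlib for the entropy coder; as the article notes, the split is not automatically smaller than compressing the raw data directly.

      import struct
      import zlib

      signal = [1000 + 50 * ((i * 37) % 11) - 3 * i for i in range(4096)]

      approx = [(s // 64) * 64 for s in signal]            # "lossy" stage: coarse quantization
      residual = [s - a for s, a in zip(signal, approx)]   # small correction terms

      enc_a = zlib.compress(struct.pack("4096i", *approx), 9)
      enc_r = zlib.compress(struct.pack("4096i", *residual), 9)

      # Decoder side: approximation + residual == original, exactly.
      dec_a = struct.unpack("4096i", zlib.decompress(enc_a))
      dec_r = struct.unpack("4096i", zlib.decompress(enc_r))
      assert [a + r for a, r in zip(dec_a, dec_r)] == signal

      print("approx + residual:", len(enc_a) + len(enc_r), "bytes")
      print("raw, zlib only   :", len(zlib.compress(struct.pack("4096i", *signal), 9)), "bytes")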
