
How Stanford Engineers Created a Fictitious Compression For HBO

Tekla Perry (3034735) writes: Professor Tsachy Weissman and Ph.D. student Vinith Misra came up with (almost) believable compression algorithms for HBO's Silicon Valley. There were some constraints -- they had to seem plausible, look good when illustrated on a whiteboard, and work with the punchline, "middle out." Next season the engineers may encourage the producers to tackle the challenge of local decodability.


  • by Horshu ( 2754893 ) on Saturday July 26, 2014 @06:03AM (#47537519)
    I wasn't even aware that programmers in Cali could legally call themselves "engineers". I worked for a company out of college HQed in California, and I was told coming in that we used the term "Programmer/Analyst" because California required "engineers" to have a true engineering degree (with the requisite certifications and so on).
  • Re:Meh (Score:4, Interesting)

    by TeknoHog ( 164938 ) on Saturday July 26, 2014 @08:49AM (#47537859) Homepage Journal
    Or if you're into math, you invoke the pigeonhole principle.
  • Re:Meh (Score:2, Interesting)

    by Anonymous Coward on Saturday July 26, 2014 @10:54AM (#47538327)

    I haven't seen the show, but I have experience in dinking around with lossless compression, and suffice it to say, the problem would be solved if time travel existed, because then we could compress data that doesn't yet exist.

    Basically, to do lossless compression you have to compress the data linearly. You can't compress the chunk of data the stream will reach in 10 seconds on another core, because the precursor symbols do not yet exist. Now, there is one way around this, but it's even more horribly inefficient, and that is compressing from each end (or, in the case of HBO's codec, the middle), so instead of a single "dictionary" for the compressor to operate from, it uses two. At the end you could then throw away the duplicate dictionary entries in a second pass. That's why it's inefficient: in order to split compression across cores, you have to do some inefficient compression by duplicating effort.

    If I have a 16-core processor and I want to compress an image using all 16 cores, I'd do it by putting each scanline of the image on a separate core, so effectively every 16 scanlines you wrap around to the same core. At the end of the compression, a second pass is run over the dictionaries to remove duplicates. In video, that duplication won't really exist, because video doesn't have a lot of exact repetition unless it's animation (e.g., anime, video games). This is why lossy compression is always used for video/still captures from CMOS/CCD cameras: that data contains inherent noise from the capture process, which can safely be lost.

    That second pass is still going to be stuck to one core.
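    A toy sketch of the trade-off described above (my own illustration, not the show's algorithm or any real codec): splitting a stream into chunks lets each chunk be compressed independently on its own core, but each chunk effectively rebuilds its own dictionary, so redundancy shared across chunks compresses worse than a single sequential pass.

    ```python
    import zlib

    # Highly repetitive input, so cross-chunk redundancy is large.
    data = b"the quick brown fox jumps over the lazy dog. " * 2000

    # One sequential pass: a single stream, one shared "dictionary".
    whole = zlib.compress(data, 9)

    # Parallelizable alternative: independent 4 KiB chunks, one per core.
    # Each chunk carries its own duplicated dictionary state.
    chunks = [data[i:i + 4096] for i in range(0, len(data), 4096)]
    split = [zlib.compress(c, 9) for c in chunks]

    print(len(whole))                   # single-stream compressed size
    print(sum(len(c) for c in split))   # sum of per-chunk sizes (larger)
    ```

    The per-chunk total comes out noticeably larger than the single stream, which is the duplicated-effort cost of making the work splittable.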

    The ideal way to solve lossless compression problems is not by trying to make the algorithm more efficient, but by intentionally trading off efficiency for speed. So, to go back to the previous example: instead of having 1 progressive stream, you instead have 16 progressive streams divided horizontally. This works fine for compression, but decompression has a synchronization problem. You may have seen this when watching H.264 video where some parts of the I-frames aren't rendered, leaving "colorful tiles" in the missing spaces if your CPU is too slow. This is because the 16 parts of the frame won't all decompress at the same speed, since they have different complexity. So the end result is that you have to buffer enough for two I-frames in order to still be able to seek the video. At UHD resolution, this means 33,554,432 bytes per frame, so for a 120fps video you need 4GB of RAM just to buffer 1 second. Our current technology can't even read data off an SSD that fast: the fastest you can get is 1,500MB/sec, and even then it costs you $4,000. Hence why we use lossy compression, so the disk can keep up.
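    A quick back-of-the-envelope check of those figures, assuming uncompressed UHD (3840x2160) frames at 4 bytes per pixel (the 33,554,432-byte figure above rounds a frame up to 32 MiB):

    ```python
    # Raw UHD frame size, assuming 4 bytes per pixel (e.g., RGBA or 10-bit
    # packed with overhead); 33,554,432 bytes = 32 MiB is this rounded up.
    width, height, bytes_per_pixel = 3840, 2160, 4
    frame_bytes = width * height * bytes_per_pixel   # 33,177,600 bytes

    # Raw throughput needed to hold/stream one second at 120fps.
    fps = 120
    bytes_per_second = frame_bytes * fps             # 3,981,312,000 bytes

    print(frame_bytes)        # 33177600, just under 32 MiB
    print(bytes_per_second)   # ~3.98 GB per second of raw frames
    ```

    So the ~4GB/s figure holds for raw frames, which is the point: no 2014-era SSD sustains that, hence lossy compression.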
