Meta's AI-Powered Audio Codec Promises 10x Compression Over MP3 (arstechnica.com)

Last week, Meta announced an AI-powered audio compression method called "EnCodec" that can reportedly compress audio to files 10 times smaller than 64kbps MP3 with no loss in quality. Meta says this technique could dramatically improve the sound quality of speech on low-bandwidth connections, such as phone calls in areas with spotty service. The technique also works for music. Ars Technica reports: Meta debuted the technology on October 25 in a paper titled "High Fidelity Neural Audio Compression," authored by Meta AI researchers Alexandre Defossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. Meta also summarized the research on its blog devoted to EnCodec.

Meta describes its method as a three-part system trained to compress audio to a desired target size. First, the encoder transforms uncompressed data into a lower frame rate "latent space" representation. The "quantizer" then compresses the representation to the target size while keeping track of the most important information that will later be used to rebuild the original signal. (This compressed signal is what gets sent through a network or saved to disk.) Finally, the decoder turns the compressed data back into audio in real time using a neural network on a single CPU.
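As a rough illustration of that three-stage shape, here is a toy PyTorch sketch; the module names, sizes, and the crude scalar quantizer are placeholders, not Meta's actual architecture:

```python
import torch
import torch.nn as nn

class ToyNeuralCodec(nn.Module):
    """Illustrative encoder -> quantizer -> decoder pipeline (not EnCodec itself)."""
    def __init__(self, channels=1, dim=32, stride=320, levels=256):
        super().__init__()
        self.levels = levels
        # encoder: fold the waveform into a lower-frame-rate "latent space" sequence
        self.encoder = nn.Conv1d(channels, dim, kernel_size=stride, stride=stride)
        # decoder: expand the dequantized latents back into a waveform
        self.decoder = nn.ConvTranspose1d(dim, channels, kernel_size=stride, stride=stride)

    def quantize(self, z):
        # crude scalar quantizer standing in for the paper's residual vector quantizer;
        # only these integer codes would be sent over a network or saved to disk
        return torch.round(torch.tanh(z) * self.levels)

    def forward(self, wav):
        z = self.encoder(wav)                     # [batch, dim, time / stride]
        codes = self.quantize(z)                  # the compressed representation
        return self.decoder(codes / self.levels)  # reconstruction from the codes

wav = torch.randn(1, 1, 32000)                    # one second of fake 32 kHz mono audio
reconstruction = ToyNeuralCodec()(wav)
```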

Meta's use of discriminators proves key to creating a method for compressing the audio as much as possible without losing key elements of a signal that make it distinctive and recognizable: "The key to lossy compression is to identify changes that will not be perceivable by humans, as perfect reconstruction is impossible at low bit rates. To do so, we use discriminators to improve the perceptual quality of the generated samples. This creates a cat-and-mouse game where the discriminator's job is to differentiate between real samples and reconstructed samples. The compression model attempts to generate samples to fool the discriminators by pushing the reconstructed samples to be more perceptually similar to the original samples."
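A minimal sketch of that adversarial setup, using a generic hinge-style GAN objective as a stand-in for the paper's full loss (which combines several discriminators with reconstruction terms):

```python
import torch.nn.functional as F

def discriminator_loss(d_real, d_fake):
    # the discriminator learns to score real audio above +1 and reconstructions below -1
    return (F.relu(1.0 - d_real) + F.relu(1.0 + d_fake)).mean()

def codec_loss(d_fake, original, reconstructed, w_adv=1.0, w_rec=1.0):
    # the codec is rewarded for fooling the discriminator (raising d_fake),
    # while an L1 term keeps the reconstruction close to the original signal
    return w_adv * (-d_fake.mean()) + w_rec * F.l1_loss(reconstructed, original)
```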

  • by sodul ( 833177 ) on Tuesday November 01, 2022 @09:08PM (#63017537) Homepage

    Seems that the marketing department is confusing things again. If I want to record birds, or rats singing, that format would drop the ultrasonic part of the information. So yes, it works for day-to-day use with a claim that the missing audio is not perceivable by most humans, but it is far from lossless.

    • The Marketing Dept. probably thinks what everybody really wants is more Facebook in their lives.
    • Surely it's no loss in quality relative to MP3, not a lossless codec.

      • by DrXym ( 126579 )
        Even claiming that would be subjective. Most codecs do A-B-X style testing to figure out how they compare to one another at various compression settings. I could easily see how some AI-based compression might work great with the training samples and sound like shit on something else. And quality is just one consideration; CPU/memory consumption, seeking, live streaming/latency, patents, stereo, etc. are others.

        e.g. Not much good to have an AI codec if hypothetically it transpired you needed to load u

        • I'd be curious if the training data was mostly Western music with equal temperament tuning and if that affects what happens when you give it Pythagorean tuning or microtonal sounds.

    • by piojo ( 995934 ) on Tuesday November 01, 2022 @09:28PM (#63017593)

      If you read to the end of the summary, you will see:

      "The key to lossy compression is to identify changes that will not be perceivable by humans, as perfect reconstruction is impossible at low bit rates. To do so, we use discriminators to improve the perceptual quality of the generated samples. This creates a cat-and-mouse game where the discriminator's job is to differentiate between real samples and reconstructed samples. The compression model attempts to generate samples to fool the discriminators by pushing the reconstructed samples to be more perceptually similar to the original samples."

      It seems like they understand compression and loss quite well.

      • by AmiMoJo ( 196126 ) on Wednesday November 02, 2022 @05:07AM (#63018171) Homepage Journal

        The issue with this method is the discriminator. It has to model human hearing and perception of sound in order to measure how good a lossy codec is. Any error in that modelling gets translated to the compression algorithm.

        My first thought was that it might be more interesting to do what FLAC does. A lossy compression of the audio, followed by a lossless compression of the difference between the lossy version and the original. That way you get some of the benefit of lossy compression, but the end result is still lossless. AI could optimise the lossy compression part.
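
        A toy sketch of that hybrid idea on 16-bit PCM, with coarse quantization standing in for the lossy stage and zlib for the lossless residual coder; the round trip is bit-exact:

```python
import numpy as np
import zlib

STEP = 64  # coarseness of the toy lossy stage

def lossy_core(x):
    # toy "lossy codec": keep only a coarsely quantized version of the samples
    return (x // STEP).astype(np.int16)

def lossy_restore(core):
    return (core.astype(np.int32) * STEP).astype(np.int16)

def hybrid_encode(x):
    core = lossy_core(x)
    residual = (x.astype(np.int32) - lossy_restore(core).astype(np.int32)).astype(np.int16)
    # the residual is small-valued, so on real audio it packs far better than the raw signal
    return core, zlib.compress(residual.tobytes())

def hybrid_decode(core, packed):
    residual = np.frombuffer(zlib.decompress(packed), dtype=np.int16)
    return (lossy_restore(core).astype(np.int32) + residual).astype(np.int16)

# round trip is exact, i.e. lossless overall
x = np.random.randint(-32768, 32768, size=48000).astype(np.int16)
core, packed = hybrid_encode(x)
assert np.array_equal(hybrid_decode(core, packed), x)
```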

        • The issue with this method is the discriminator. It has to model human hearing and perception of sound in order to measure how good a lossy codec is. Any error in that modelling gets translated to the compression algorithm.

          I assume they leverage previous work on lossy codecs. Even if much of it is human testing, they might be able to use those saved results to create a model.

          My first thought was that it might be more interesting to do what FLAC does. A lossy compression of the audio, followed by a lossl

    • by RandomUsername99 ( 574692 ) on Tuesday November 01, 2022 @09:50PM (#63017639)
      Jeez... how did an entire team of engineers creating a highly compressed audio format for a social media website consider support for ultrasonic rat song an obscure edge case rather than a showstopper? ::shakes head:: People just aren't interested in making quality software anymore.
    • If it's not in FLAC, I'm not coming back.

    • No, they are not. Just the editor mistakenly dropped 'virtually' from in front of 'lossless'. ;-)

    • by ArmoredDragon ( 3450605 ) on Wednesday November 02, 2022 @02:28AM (#63017979)

      Under that reasoning, there is no lossless codec. Sampling above 44.1kHz is pointless for nearly anything other than scientific applications. Per Nyquist, that means you don't get any sound above 22kHz, which is already above the best human hearing (anything higher just folds back down; quick check below). Only idiots sample higher than that for anything a person is intended to hear.

      But say you want to record unicorns singing for whatever reason, even a 192kHz sampling rate is nowhere near the theoretical limit of the highest possible audio frequency that can physically exist.

      In other words, lossless doesn't mean what you think it means.
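
      Quick numerical check of that folding claim (plain numpy, illustrative only):

```python
import numpy as np

fs = 44_100            # CD sample rate; Nyquist limit is fs / 2 = 22_050 Hz
f_in = 30_000          # ultrasonic tone, above the Nyquist limit
n = np.arange(4096)
x = np.sin(2 * np.pi * f_in * n / fs)

spectrum = np.abs(np.fft.rfft(x))
f_seen = np.fft.rfftfreq(len(x), d=1 / fs)[spectrum.argmax()]
print(f"{f_in} Hz tone sampled at {fs} Hz shows up at ~{f_seen:.0f} Hz")
# -> roughly 14100 Hz: content above fs/2 folds back (aliases) instead of being preserved
```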

      • by Saffaya ( 702234 )

        "Only idiots sample higher than that for anything a person is intended to hear."

        Nice, insulting other people because you think you know better.
        Except you don't.

        1) Higher than 44.1 kHz sampling is used for mastering purposes, before downsampling back to 44.1 kHz
        2) 44 kHz would be enough to perfectly sample frequencies up to 22 kHz, IF and ONLY IF you had infinite precision on the amplitude.
        Which you clearly don't.
        Thus, a higher sampling rate does bring benefits, as you are trying to approximate real values wit

        • by noodler ( 724788 )

          2) 44 kHz would be enough to perfectly sample frequencies up to 22 kHz, IF and ONLY IF you had infinite precision on the amplitude.
          Which you clearly don't.

          With 24 bits you have more amplitude resolution than you can practically realize in the analog electronics that precede or follow it.
          24 bits represents a dynamic range of over 144dB (quick arithmetic below). What analog equipment has a signal-to-noise ratio close to that? Most top out around 120dB.
          Thus there are many 24-bit DACs that won't ever be able to display their excellent amplitude resolution because of the analog electronics that transport it to your speakers.
          24 bits gets you way deep into analog background noise. How yo
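
          The arithmetic behind that figure, at roughly 6.02 dB of ideal dynamic range per bit:

```python
from math import log10

for bits in (16, 24):
    print(f"{bits}-bit: {20 * log10(2 ** bits):.1f} dB")
# 16-bit: 96.3 dB, 24-bit: 144.5 dB -- well past the ~120 dB of good analog chains
```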

          • by mystran ( 545374 )
            Oversampling DACs are a thing, but usually you'd be oversampling with something like 1-4 bits in order to produce a signal that's equivalent to the 24-bit signal that you're advertising, rather than trying to improve beyond that (since that's already mostly beyond the analog limits).

            Oversampling in audio processing is also very much a thing to mitigate aliasing from non-linear processing (ie. any sort of distortion). This way hopefully the most significant aliasing will fall into the excess bandwidth so it
            • by noodler ( 724788 )

              Oversampling DACs are a thing, ...

              Since most ADCs and DACs are delta-sigma, they are also oversampling. Like I said, practically all converters work in the megahertz range internally.

              but usually you'd be oversampling with something like 1-4 bits in order to produce a signal that's equivalent to the 24-bit signal that you're advertising,

              Sure, that's how delta-sigma converters work. Super high sampling rates and super low bit depths. All in all capturing about 24 bits worth of information when converted back to normal sampling rates.

              Oversampling in audio processing is also very much a thing to mitigate aliasing from non-linear processing (ie. any sort of distortion).

              Sure. But you only need to do this if the plugin in question doesn't already have internal oversampling.
              And as you say, it's also kind of strange to do

              • by mystran ( 545374 )
                Perhaps I should clarify as I wasn't really trying to argue as much as just add more thoughts into the discussion. As a plugin developer I generally think that having every plugin oversample internally is overall the better engineering trade-off (and sometimes you want different amounts of oversampling for different stages in a single plugin), but the real point I was trying to bring up is that when processing audio, the bandwidth beyond what's audible can often become meaningful due to distortion (includin
                • by noodler ( 724788 )

                  Good points well made.

                  What's really sad about this 'hi-res' craze (well, it's been going on for quite some time with SACD etc) is that the recordings are terrible overall.
                  At one point a couple of years ago I decided to check them out from a technical perspective.
                  I downloaded a bunch of 'samplers'. Generally encoded at 96kHz or higher and 24 bits.

                  I think I examined 8 or 10 or so and none of them actually had musical content that came close to using the available space.
                  None of them had a noise floor below abo

        • It seems someone is confusing sampling with quantization?

        • Nice, insulting other people because you think you know better.

          Because I do.

      • theoretical limit of the highest possible audio frequency that can physically exist.

        Some posts on the internet claim that the highest sound frequency is in the gigahertz range.
        Do you know if that is true? I would think that the highest possible sound frequency would depend on the speed of sound and the density of the medium?

    • by AmiMoJo ( 196126 )

      Most lossless formats won't preserve your ultrasound either.

      Audio recordings are typically 44.1kHz or 48kHz sample rate. Due to Nyquist that means the highest frequencies they can reproduce are 22.05kHz and 24kHz respectively.

      And even if you sample at 96kHz, most sound cards cannot exceed 48kHz so will just remove frequencies higher than 24kHz anyway, or even worse convert them into distortion. Most sound recording hardware has a built in low pass filter to remove frequencies above 24kHz too, as does a lot

      • by noodler ( 724788 )

        Audio recordings are typically 44.1kHz or 48kHz sample rate.

        This would all have been true 10 to 20 years ago.
        These days most studios record at least at 96kHz.
        The benefit is better reproduction of the hearable range due to more relaxed filter constraints.
        Once recorded you can use it to master lower bandwidth versions.

        most sound cards cannot exceed 48kHz so will just remove frequencies higher than 24kHz anyway, or even worse convert them into distortion.

        That's nonsense if you're talking about audio interfaces. Most stuff is 24/96 or higher these days, even the cheap stuff. Only the very lowest and cheapest category of audio interfaces (Behringer etc.) is limited to 48kHz. Remember that 24/96 converters were alrea

        • by AmiMoJo ( 196126 )

          I was under the impression that even when recording at 96kHz, there is a low pass filter at around 24 or 28kHz anyway. Sampling at that rate is done to improve the quality of digital equalization and mixing later.

          As for audio interfaces, I'm talking about the DAC. While it might take 96kHz in, I bet that the output has a low pass filter that removes everything above about 24kHz. Perhaps in some very high end equipment there are adjustable low pass filters, perhaps done digitally too, but most gear will just

          • by noodler ( 724788 )

            I was under the impression that even when recording at 96kHz, there is a low pass filter at around 24 or 28kHz anyway.

            Not usually.

            Sampling at that rate is done to improve the quality of digital equalization and mixing later.

            This can be one of the reasons, though mixing itself (adding channels) is not a problem.
            But a lot of processing algorithms do benefit from a higher sample rate, especially non-linear algorithms like distortion. Usually, though, this has nothing to do with the source material: the plugins could simply upsample the signal internally, process, and then downsample to the original sample rate (sketched below).
            Still, many plugins don't do this or don't do it correctly, and a higher sample rate can make them sound better becau
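
            A sketch of that upsample/process/downsample idea, with scipy's polyphase resampler around a tanh waveshaper standing in for a real distortion stage:

```python
import numpy as np
from scipy.signal import resample_poly

def distort_oversampled(x, factor=4):
    """Run a non-linear waveshaper at a higher internal rate so the harmonics
    it creates land in the excess bandwidth instead of aliasing back down."""
    up = resample_poly(x, factor, 1)         # upsample (includes anti-imaging filtering)
    shaped = np.tanh(3.0 * up)               # non-linear stage generates new harmonics
    return resample_poly(shaped, 1, factor)  # low-pass and decimate back to the original rate
```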

    • by noodler ( 724788 )

      If I want to record birds, or rats singing, that format would drop the ultrasonic part of the information.

      There seems to be nothing specific about the frequency range that can be encoded by this format.

      it is far from lossless.

      They didn't say it's lossless. They say there is no loss relative to a 64kbps MP3, but MP3 is already lossy. So you get the same loss as a 64kbps MP3, except the file is 10x smaller.

    • So yes, it works for day-to-day use with a claim that the missing audio is not perceivable by most humans, but it is far from lossless.

      You're being obtuse. "Lossless" has nothing to do with frequency range. You don't call the WAV format "lossy" simply because the sample rate is 44.1kHz and it doesn't contain irrelevant ultrasonic information.

      In any case, you're the only one here making a claim that something is lossless. That term has a specific meaning, and this codec is not lossless, nor was it claimed to be. The claim was a comparison with MP3, a format which already applies low-pass filtering as part of compression (unless someone very

  • The authors need to understand lossless vs. lossy. I also wonder if this will be free and open source?

    • The article never claims the codec is lossless.

      The key to lossy compression is to identify changes that will not be perceivable by humans, as perfect reconstruction is impossible at low bit rates

    • I think this is the critical part that the article ignores. If it is a licensed codec from Meta, you can be pretty sure that no sane company will touch it.
  • But how much computing power is required to compress and uncompress the audio file??
    • With them saying some sort of machine learning is involved, I'd guess at least 10,000x as many resources burned.
    • Probably a good amount for decompression. But it comes from so little data, 300k for a song, which raises the question: will I be able to plug in my talentless crap, adequately downsampled, and have something that sounds talented come out?

  • ...but Thomas Middleditch instead plays a drugged out billionaire obsessed with getting humans to prefer living within his virtual reality

  • Finally, the decoder turns the compressed data back into audio in real time using a neural network on a single CPU.

    Sorry, a single CPU is not a neural network. A simulated one, perhaps.

    • Luckily everyone reading that sentence would understand exactly what it means
      • An important characteristic of a neural network is massive parallelism, where you have thousands of limited processors working at the same time. Sure, you can simulate this on a single chip, but you lose the massively parallel architecture. You end up spending a whole lot of computing power on the simulation of the neural net. You'd be better off just ditching the neural net simulation and optimizing your prediction algorithm to run like normal software on a normal processor.

        • by Anonymous Coward

          Parallel execution, and the massiveness thereof, are not defining characteristics of neural networks. There isn't some processor number cutoff where a NN becomes "real" instead of "simulated." I take it you think neural nets were only simulated decades ago and became real since GPUs and TPUs were developed? Or is that not enough? Is a neural net just a simulation until each artificial neuron has its own dedicated processor? (Thus making all NNs with more neurons than processors mere simulations. Absur

  • It sounds like the compressed stream cannot simply be decoded by implementing a published algorithm. Rather, the proprietary app is the only entity that can restore the audio stream to something we perceive as close to the original form. So what happens if you lose the license to decode the stream? Or your hardware is not supported by the vendor? Your data is useless to you. Even if Meta provides an 'export' option, that's not good enough. Because any compressed data you own, that you did not have the
    • by jonwil ( 467024 )

      The paper contains a link to a GitHub repository containing code that (according to the documentation at least) can both encode and decode. So at the very least there is an implementation out there (even if it's not actually open source, since it's licensed CC-BY-NC).
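
      For reference, driving that repository looks roughly like this; the names below (EncodecModel.encodec_model_24khz, set_target_bandwidth, convert_audio) come from its README at the time and may have changed since:

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)                  # target bitrate in kbps

wav, sr = torchaudio.load("speech.wav")          # any local audio file
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    frames = model.encode(wav)                   # compressed representation, per chunk
    restored = model.decode(frames)              # back to a waveform tensor
```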

  • This codec can achieve such amazing compression "because AI."

    The part about achieving the target bandwidth, I believe. The part about not losing quality in the process, not so much.

    • This reminds me of Vernor Vinge's A Fire Upon the Deep, where the protagonist is trying to have a conversation with someone in a nice, crisp and realistic video stream and realizes she's just talking to the local AI, which is spouting bullshit because the actual data rate coming through is less than 10 bits per second.

  • by Shag ( 3737 ) on Tuesday November 01, 2022 @09:59PM (#63017661) Journal

    ...if you had some lame audio player with no wireless and less space than a Nomad.

  • First, the encoder transforms uncompressed data into a lower frame rate "latent space" representation.

    Meta's researchers claim they are the first group to apply the technology to 48 kHz stereo audio

    These seem contradictory to me.

  • If an Encodec file is played in the metaverse and there's no one around to hear it, does it sound lossless?
    Since in the metaverse there is no air to breathe or vibrate can sounds be played back in the metaverse?

  • Don't actually make people have to use proprietary formats to post on Twitter, as I "suggested" on the other news

  • by xwin ( 848234 ) on Tuesday November 01, 2022 @10:14PM (#63017697)
    This new codec is not going to be adopted. There were codecs that produced audio smaller than MP3 before. Most of them failed to be adopted because current codecs are good enough. Just look at this long list https://en.wikipedia.org/wiki/... [wikipedia.org]. MP3 became popular at the time of dial-up modem speeds and even at that time it was not dethroned. Today the size of an audio stream is tiny in comparison with the video, so the audio codec does not matter. It may matter to Facebook if they store huge amounts of audio on their servers. An average MP3 file has about 1 minute of audio per 1MB, so a 1TB drive can store 1,000,000 minutes or roughly 2 years of audio. Today people use AAC, which has better compression and is supported by pretty much any device.
    If Facebook cared about saving bandwidth they would dissolve their company; all Facebook does is waste the bandwidth and the time of the people who use it.
    • Yes, but 6 kbps to make something that approaches 64 kbps MP3 in quality is very impressive. Its utility will be in places where bandwidth will always be at a premium, such as low-earth-orbit satellite networks. No one will care about its storage savings, however, for obvious reasons.

      • by raynet ( 51803 )

        Yes, but they would need to get it near 128kbps MP3 to be useful. 64kbps MP3 has lots of audible artefacts.

        • Agree on the audio artifacts, you can definitely hear them in music playback. Speech playback has much lower tolerances.

        • A lot of people can easily hear audio artifacts from 128kbps MP3, and even Apple increased the bitrate of AAC files from the iTunes Store to 256kbps, which probably requires at least 384kbps MP3 to match it.

          Are most people using dollar store speakers and headphones to listen to music, or what?

    • I don't think this is for you to listen to music; this is for Facebook to save money on bandwidth for their video and audio services. The hard part would be getting Google and Mozilla and Microsoft to support it.
    • This new codec is not going to be adopted. There were codecs that produced audio smaller than MP3 before. Most of them failed to be adopted because current codecs are good enough. Just look at this long list https://en.wikipedia.org/wiki/... [wikipedia.org]. MP3 became popular at the time of dial-up modem speeds and even at that time it was not dethroned. Today the size of an audio stream is tiny in comparison with the video, so the audio codec does not matter. It may matter to Facebook if they store huge amounts of audio on their servers. An average MP3 file has about 1 minute of audio per 1MB, so a 1TB drive can store 1,000,000 minutes or roughly 2 years of audio. Today people use AAC, which has better compression and is supported by pretty much any device.

      If Facebook cared about saving bandwidth they would dissolve their company; all Facebook does is waste the bandwidth and the time of the people who use it.

      What's the bandwidth bill for Spotify? They control the server and the client; if there's a smaller file format with equivalent quality, I'm not sure why they wouldn't use it.

      • The Opus codec is mature and available today. It is transparent for most music by 128kbps, with great results even at 64kbps, but all the music stores still use inferior MP3, Vorbis, and AAC at higher bitrates.

        They don't really seem to care.

      • by xwin ( 848234 )

        What's the bandwidth bill for Spotify? They control the server and the client; if there's a smaller file format with equivalent quality, I'm not sure why they wouldn't use it.

        You are forgetting that there are two parties involved in this - Spotify and their listener. It is the listener that is the problem. A codec needs to be widely adopted for Spotify to move to it, or for Facebook for that matter. There is a huge variety of clients that need to support this codec. Some of them do not have the horsepower to run this decoder. This is an old debate which has been re-hashed many times, yet MP3 still exists and is supported. For example https://www.amazon.com/music/p... [amazon.com] . This guy born the same

        • With all the better codecs around, music is still sold in MP3.

          With how cheap storage space has become, there's no good reason for paid music downloads to be in any sort of lossy format, period.

        • There is a huge variety of clients that need to support this codec.

          Significant savings may be realized through streaming in the new format to clients controlled by Spotify.

  • > The key to lossy compression is to identify changes that will not be perceivable by humans, as perfect reconstruction is impossible at low bit rates.

    I shot the Sheriff but I did not shoot the Deputy, because otherwise you would notice the police were missing.

  • So that guy owned a horse. But he was frugal and figured that if he could get the horse to eat less, he could save a bunch. He almost succeeded. He got the horse to survive on little for a long time. Enlightened by the progress, he then pushed bravely further yet and resolved to not feed the horse at all. Whereupon the horse quit and died.

    So did the audio compressed with this method. The zero-length compression works phenomenally well on ... silence.

  • Anecdotally, the early dynamic range compression used in telephony worked great for Western languages but impacted intelligibility for tonal ones. I can't find any references to this, anyone?

    • My recollection is that western POTS would carry about 8kHz and you need more like 11kHz for the full range of human speech, so people speaking tonal languages had to shout to be understood.

  • A genetic algorithm could perhaps be used to evolve a highly optimized codec. Maybe combine that with neural net techniques.

  • I got a compression algorithm that whips the pants off that. Every tune compresses down to a string that's 128 characters or less. It might take a while to download though.

  • With companies like Facebook, Google, Amazon, Akamai, CloudFlare that live on exploiting other people's data, you always have to wonder why they invest in anything. Because no company invests in cool tech just for the beauty of it.

    So naturally, the first question that comes to my mind is: WTF is Facebook up to with this one? Why do they invest in a codec that can compress audio for use on ultra-low bandwidth links?

    The only two explanations I can think of are:
    - Always-on audio surveillance
    - Always-on music copyright infringement enforcement

    • Look, I'm not buying into this ecosystem, but how can you be confused about the benefit of a more efficient audio codec? Facebook serves a lot of videos with audio, and there's obvious utility to streaming audio in VR.

    • The only two explanations I can think of are:
      - Always-on audio surveillance
      - Always-on music copyright infringement enforcement

      You are asking why a company that not only moves media around its platform but also provides voice and video communication services would invest in improving the efficiency and reducing the bandwidth cost of those services, and the only two explanations you can think of are something completely unrelated to codecs?

      Seriously on a scale of "Sep/11 was an inside job" to "We are ruled by alien lizards", just how far down the rabbit hole of insanity have you fallen?

  • I'm pretty sure that if we remove everything that isn't perceived and doesn't lead to the user noticing a loss in quality or content, we can save a couple dozen billion.

    Plus, we'd get rid of that nuisance on top.

  • Not too bad actually (Score:4, Informative)

    by VanessaE ( 970834 ) on Wednesday November 02, 2022 @05:43AM (#63018207)

    Ignoring all the marketing BS, I followed the links to the actual announcement ( https://ai.facebook.com/blog/a... [facebook.com] ) and listened to the samples they provided.

    I have to say for 6 kbps, the last segment of the sample (made with their "Encodec") is pretty good quality.

    • You're right - it's very good for 6kbps. However, their claim that it is transcoded "without a loss of quality" is vastly overstated. At best, it sounds comparable to a 24kbps MP3 file at 48kHz, meaning that they have achieved about 1/4 the file size of an MP3 of like quality. That is still significant, given that AAC or even HE-AAC struggles to reach 1/2 the file size when compared to MP3. Even so, it is hyperbole and hubris to claim over twice the perceived results. Also, there is no subjective evidence (more
  • Meta provides communication services. Even if this codec doesn't get used for music, that doesn't mean it is useless. If it can be made more efficient than Opus, both in terms of bandwidth and computation, then hundreds of millions of people around the world may use it without ever knowing about it.

    No really, without looking it up: what codec is used by WhatsApp, Messenger, or standard 3GPP voice calls? There are many codecs you will use without ever having a clue what they are.

  • I wonder if this is needed. I mean, there are better lossy image compression algorithms than JPEG, but none have gained any traction. The reason is that JPEG is good enough, and storage space and internet speeds have improved so much that you don't win much by switching to a better algorithm.

    I think we are starting to see the same with audio. MP3 at 320kbit/s is as good as uncompressed audio for all but a very few exceptional listeners. And we have better algorithms without patent problems in the form of Opus or

    • With modern encoders, listening tests indicate MP3 is essentially transparent at 192kbps. As for whether it's needed: it's a massive straw man to compare against MP3 these days, since MP3 has long been superseded. Opus beats or matches other codecs pretty much across the board, except for very low bitrates that it doesn't support, and there Codec2 beats all comers.

  • After so many years of enjoying the convenience of streaming music/audio, I was amazed at how used to the quality I've become. From music being streamed from my phone to listening to SiriusXM in the car to letting Alexa entertain me at home, etc. This past year I went old school and bought a record player, and nabbed some classic vinyl I'd owned back in the day. Listening to the depth and richness of it compared to digitized streaming media was like night and day. Obviously the bandwidth and convenience are

  • wealth? Because that wasn't lossless!
  • They expect to license this to *everyone*, providing an additional revenue stream.

    The problem will be getting anyone to pay them for it, instead of generating their own AI codec, if necessary.

    Meta can't patent the process of generating an AI codec. Even if they end up quite similar, any clean room implementation such as training from scratch would eliminate any basis for a lawsuit.

  • Every time some fruitloop comes up with a lossy compressor that is more lossy (and thus gets better compression), that fruitloop claims that the improvement occurs without "loss" of "quality" of the audio or video or whatever the fruitloop is compressing. Usually the claim of "equal quality" is due to the defective fruitloop itself being either blind or deaf or both.

    Every single "new fangled lossy compressor" that has come out over the past 40 years that has claimed "more efficient compression" has also, wit

  • Why the specious comparison?

    I mean, WMA audio and Ogg Vorbis are marginally better than MP3. Even the creator of MP3, the Fraunhofer Institute, has been working on later codecs (MPEG-H and xHE-AAC for example https://www.iis.fraunhofer.de/... [fraunhofer.de] ).

    As far as I've seen... AAC is today's standard lossy method, and FLAC the lossless one (with AAC having flavors/versions for higher or lower bitrates, and voice-specific vs. not). The HE above stands for High Efficiency, meaning more work but smaller files or better result

  • I have plenty of storage for my music. When I'm talking to someone over a cellular network, I don't really care about hifi quality.
  • But that's not how it will be used. Telecoms companies will use it to compress voice traffic down to the same crappy sound quality as they do now - carefully engineered to be one iota better than "costs us more in complaint handling than it saves us in bandwidth".
  • How much processing power does it take to encode and decode? It seems like it should take a lot of CPU cycles on the encoding side, and probably a lot on the decoding side as well. Assuming that's the case, it doesn't seem like this will be generally useful. It would be good for niche applications where you have CPU power to spare but storage or bandwidth are severely limited.
  • So, that means the silence you hear in the Metaverse will be that much silenter.

    Awesome-sauce!
