Music, Wireless Networking, Technology

Qualcomm Debuts Lossless Bluetooth Audio Streaming With aptX Lossless (cnet.com)

Qualcomm says it's figured out a way to deliver lossless audio over Bluetooth, yielding quality that should be indistinguishable from uncompressed sources. And it's calling it aptX Lossless, the next generation of Qualcomm's proprietary audio format. From a report: Taking a "systems level approach" was the key, the company says, as it's "optimized a number of core wireless connectivity and audio technologies, including aptX Adaptive, which work together to auto detect and scale-up and are designed to deliver CD lossless audio when a user is listening to a lossless music file and the RF conditions are suitable." So, yes, there are a few caveats, and you'll need new hardware to get the full aptX Lossless experience -- that goes for the device you're streaming from (a phone, for instance), as well as your listening device, typically a pair of headphones. Qualcomm says devices that support aptX Lossless are expected to be available in early 2022. Its key specs are:
• Supports 44.1kHz, 16-bit CD lossless audio quality
• Designed to scale up to CD lossless audio based on Bluetooth link quality
• User can select between CD lossless 44.1kHz audio and 24-bit 96kHz lossy
• Auto-detects to enable CD lossless audio when the source is lossless audio
• Mathematically bit-for-bit exact
• Bit rate: ~1Mbps


Comments Filter:
  • >"yielding quality that should be indistinguishable from uncompressed sources"

    As if anyone could distinguish between high-bitrate/quality lossy and lossless in the first place.

    • I'm not sure if the reeee I'm hearing is the audiophiles getting annoyed or the interference on my non-gold-plated connectors
    • As if anyone could distinguish between high-bitrate/quality lossy and lossless in the first place.

      They might be able to if two different lossy compression schemes have been used sequentially, which is what happens when you listen to an MP3 on your Bluetooth headphones. In theory they all preserve pretty much the same data, in practice maybe not.

    • High bitrate? No. Anything above 44.1kHz is superfluous.
      High bit-depth? Yes. AES published papers showed that on real trials humans were able to distinguish 120dB or effectively 20bits of dynamic range.
      Lossy vs Lossless? Yes. Trained listeners have proven to reliably identify every current lossy codec on the market in a controlled blind test. But we're not talking about every lossy codec. We're talking about Bluetooth codecs, and the two best ones on the market (aptX HD and LDAC) are absolute trash as far as

      • by noodler ( 724788 )

        High bit-depth? Yes. AES published papers showed that on real trials humans were able to distinguish 120dB or effectively 20bits of dynamic range.

        That is the total range, so after the ears get a chance to re-adjust to the present dynamics. In practice these kinds of situations are rare. You don't normally listen to a jack hammer and then to a needle falling on the floor. And you'd need a couple of minutes readjusting to be able to hear the needle after the jack hammer in real life.
        In music the dynamic range is many orders of magnitude lower than the range you state. Most classical recordings, for instance, are typically about 50dB of dynamic range wi

        • You don't normally listen to a jack hammer and then to a needle falling on the floor.

          Actually that's precisely what you listen to with some music. Dynamics. Jackhammer has none. A single drumbeat has a huge amount. And no, you don't need time to adjust for dynamics. You only need time to adjust for sustained higher volume. You can hear a pin drop after someone smashes that drum, you can't if someone's been using a jackhammer for several minutes, and that's your hearing recovering from minor damage.

          But you are absolutely correct in the practical application. My comment is entirely based on t

          • by noodler ( 724788 )

            You can hear a pin drop after someone smashes that drum,

            https://en.wikipedia.org/wiki/... [wikipedia.org]

            Actually that's precisely what you listen to with some music.

            Sure, I'm just saying that when there are loud sounds in the music, like some loud drums, you don't actually experience the lower dynamic levels anymore. They are masked, both physiologically and neurally, and your perception is adjusted to accommodate the loud sounds.
            It's like the dynamic range of your eyes which is adjusted by your pupil.
            Here's a video of someone doing a test in an anechoic chamber.
            https://www.youtube.com/watch?... [youtube.com]
            Notice how it takes time to adjust to the

            • And i think the claim of human hearing being 120dB includes this process.

              Again the source for the claim is the AES. Unfortunately I don't have my membership anymore, but they showed it experimentally. That paper made the case for exceeding 20bits, and I highly doubt that the experts in the field missed something like that.

              Anyway since I'm not a member anymore and can't go find the paper I can't provide a concrete cite so I'm going to leave that topic there :-)

              Actually, even 14 bits were considered good enough when cd players were just released. You'd get better performance than compact cassette so that was considered a win.

              Good enough for whom? Standards should not be based on mediocrity, they should be based on a best case scenario. Let cons

              • by noodler ( 724788 )

                Again the source for the claim is the AES.

                Like I said, I'm not a member so I can't check it. :/

                Good enough for whom?

                For the general home market.

                Standards should not be based on mediocrity, they should be based on a best case scenario.

                In an ideal situation perhaps. But quite often standards must deal with some limitation or other.

                14bits provides only 70dB of dynamic range. That noise floor is audible in a quiet living room, no need for even an anechoic chamber.

                Some time ago I did some testing on so-called 'hi-res' audio (24 bits at some high sample rate).
                What I found out (talking about bit depth) is that classical recordings were particularly horrible with their noise floors. For 24 bit files, out of the 5 tested recordings none had a noise floor lower than -90dBfs. In fact, only one was c

    • Lossless does not mean "should be indistinguishable from uncompressed sources". It means "Decompresses to exactly the uncompressed source". FLAC, take a bow.

      This is like "55 inch class" TV which is actually 54.3 inches.

      • by noodler ( 724788 )

        Could be, could also not be.
        The stated bitrate makes sense in that it's still a good portion of an uncompressed stereo signal at 44.1kHz/16bits. If they use non-lossy compression then the stated 1mbit/s sounds about right with even some spare space left over for whatever.

      • by rnturn ( 11092 )

        ``It means "Decompresses to exactly the uncompressed source". FLAC, take a bow.''

        Qualcomm's problem is that they wouldn't be able to license FLAC to every company making audio equipment and accessories. They had to come up with something to make all those manufacturers drool over the future sales.

  • Definitely choose 24-bit 96kHz lossy over 16 bit 44.1kHz lossless unless you are a faux-diophile.

    • by Entrope ( 68843 ) on Friday September 03, 2021 @09:52PM (#61761529) Homepage

      16-bit samples give you about 96 dB dynamic range, enough to go from 0 dB (the quietest sound typically heard) to a jackhammer 50 feet away (95 dB, loud enough to cause hearing damage). And 44.1 kHz gives a Nyquist frequency of 22 kHz, above what most humans can hear.

      Or was that some kind of dog whistle?

      • Wait what? How do you, without compromise, derive the decibels of dynamic range from the sampling bit rate? With 16 bits, you get 64,000 different divisions you can use any way you like. Now that may be enough to go from 0db to 96db .. but then, that can be done with 1 bit too. Decibels are a logarithmic scale, so 96 Db is 4 BILLION times louder than 1db .. let's assume that's the smallest variation in volume possible, even though it isn't. So how do you get 4 billion variations of sound amplitude using 16

        • by Entrope ( 68843 )

          Each bit gives you roughly 6 dB of dynamic range. Power is the square of amplitude, so 2**16 amplitude levels give you 2**32 power levels. This isn't rocket science [wikipedia.org].

        • How do you, without compromise, derive the decibels of dynamic range from the sampling bit rate?

          You mean Bits Per Sample? You calling it "sampling bit rate" might be a big part of your not understanding shit.

          Because the sampled values aren't arbitrary.

        • Wait what? How do you, without compromise, derive the decibels of dynamic range from the sampling bit rate?

          Dynamic range is purely a function of the number of bits available to encode the quantization.
          I don't know what the dB/bit would be for audio off the top of my head, but it's very definitely calculable.

        • Wait what? How do you, without compromise, derive the decibels of dynamic range from the sampling bit rate?

          Using a well-understood formula. Dynamic range is not about louder or quieter, it's about quantization error. The formula is 20 × log10(2^bits), approximately 96dB for 16-bit and 144dB for 24-bit.

          but then, that can be done with 1 bit too.

          No it can't. The dynamic range of a 1-bit signal is only 6.02dB. What you can however do is take that 1-bit signal and use it very VERY quickly. During processing you can apply noise to the signal to dither it, increasing the theoretical dynamic range. But then you're left with a noise floor at -6dB. But you can dither
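
For reference, here is a minimal Python sketch of the quantization formula quoted above (20 × log10(2^bits)); the bit depths shown are just illustrative values:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: ratio of full scale to one quantization step."""
    return 20 * math.log10(2 ** bits)

for bits in (1, 16, 20, 24):
    print(f"{bits} bits -> {dynamic_range_db(bits):.1f} dB")
# prints roughly 6.0, 96.3, 120.4 and 144.5 dB respectively
```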

          • by noodler ( 724788 )

            And this (very poorly explained) way of getting more dynamic range out of a single bit is how DSD works.

            DSD and practically every AD and DA converter on the market in the past 20 years or so. :)

            • Sort of. Not quite. Most ADCs yes, most high end DAC chips however are multi-bit. Mind you there are still a few nutjobs out there that insist the "best" way to make a modern DAC is still an R2R ladder. I guess those people still drive cars with carburetors too :-).

              • by noodler ( 724788 )

                Most ADCs yes, most high end DAC chips however are multi-bit.

                Yeah, you're absolutely right. That bit got lost when I originally wrote a whole thing about DSD and delta-sigma and then had to delete it because I failed to read your frikking post. -slap- :)

                About the R2R, yeah, mostly nutjobs. They have a small point tho. The main problem with R2R is precision in the R values, but modern tech has made this much better and high bit depth R2Rs are actually quite viable. And these R2R converters have a different set of pros and cons compared to delta sigma so there may be so

                • I think the industry very much has a fear of the new. Like the people saying Class-D amps are bad, despite modern ones showing a distortion and noise signature that would make a Class-A amp blow its own fuse :-)
                  Delta-Sigma was first introduced as a new technology and as always the first few iterations sucked, so people got it in their heads that it's not suitable, and for a petty few the myth persists, and a couple of companies are willing to take advantage.

                  R2R is commonly associated with a "pure" way of con

      • What is the point of Blu-ray audio then? It supports 24-bit 192kHz.

        • by Entrope ( 68843 )

          For all I know, the point is somewhere between "because we can" and "because audiophiles are suckers". There are decent reasons for using 24 bits during recording and production, so that you have headroom for mixing and balancing without sacrificing SNR, but I don't know of any reason that it helps at all for the final product.

          What is the point of Blu-ray audio then? It supports 24-bit 192kHz.

          High sample rates are only relevant in a studio setting, before the final mix is compiled, because processing can generate harmonic content that would otherwise distort and that requires higher frequencies to capture.

          There is no purpose in sampling above the Nyquist rate of the highest frequency humans can hear.

          • Not entirely true.

            The main reason for sampling at higher than 44.1kHz in the studio, and elsewhere, is that they don't need to put a 22kHz low-pass filter in front of the recording ADC. Without this filter, any sounds higher than 22kHz get aliased by the 44.1kHz sampling rate and become new frequencies below 22kHz.
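
To make the folding concrete, here is a tiny Python sketch (illustrative only) of where an unfiltered ultrasonic tone ends up after sampling:

```python
def aliased_frequency(f_hz: float, fs_hz: float) -> float:
    """Apparent frequency of a pure tone at f_hz after sampling at fs_hz with no anti-alias filter."""
    return abs(f_hz - fs_hz * round(f_hz / fs_hz))

print(aliased_frequency(30_000, 44_100))  # 30 kHz ultrasonics fold down to 14100.0 Hz
print(aliased_frequency(50_000, 44_100))  # 50 kHz folds down to 5900.0 Hz
```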
            • You don't need to do that in the whole pipeline though. You can sample at a few MHz on the front end then instantly filter it digitally down to 44kHz. That's pretty much how sigma-delta ADCs work: they sample much much faster than the output frequency since they digitally filter on chip.
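
A rough sketch of that oversample-then-decimate idea, assuming NumPy and SciPy are available; the 64x factor and the single resample_poly call are just stand-ins for what a real delta-sigma front end does on chip:

```python
import numpy as np
from scipy.signal import resample_poly  # assumed available

fs_fast = 44_100 * 64                 # oversampled "front end" rate, roughly 2.82 MHz
t = np.arange(fs_fast) / fs_fast      # one second of samples
audible = np.sin(2 * np.pi * 1_000 * t)            # 1 kHz tone we want to keep
ultrasonic = 0.5 * np.sin(2 * np.pi * 30_000 * t)  # 30 kHz content that must not alias

# Digital low-pass filter and decimate by 64 in one polyphase step;
# no steep analog brick-wall filter is needed in front of the converter.
out_44k1 = resample_poly(audible + ultrasonic, up=1, down=64)
print(len(out_44k1))  # ~44100 samples at 44.1 kHz, with the 30 kHz tone filtered out
```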

        • Partially historical reasons, partially reason. The jackhammer example from the GP, while correct, concerns itself with a constant volume and not dynamics. The tested limits of human hearing actually lie around 120dB and dynamics present in loud music can actually achieve this.

          The historical reason however is that Nyquist, while great in theory, proved difficult to achieve in the 80s. Dealing with the problems of conversion meant some very nasty analogue trickery was required to get digital audio to sound goo

          • by noodler ( 724788 )

            The tested limits of human hearing actually lie around 120dB and dynamics present in loud music can actually achieve this.

            Uhu.
            Have you actually tested this yourself?
            Make a sound file with noise at -90dBfs.
            Play back your favorite music at a good volume. Then put on the -90dB file at the same volume and see if you can hear the noise.
            I'm pretty sure you'll be surprised by how little dynamic range we perceive in reality.
            The 120dB figure is when you take into account some time for your ears to readjust and then you'd need an absolutely silent sonic background to listen in and then you'd have to focus really really hard and maybe yo
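
If anyone wants to try that test at home, here is a minimal sketch, assuming NumPy and the soundfile package are installed, and treating -90 dBFS as an RMS level relative to float full scale:

```python
import numpy as np
import soundfile as sf  # assumed available (pip install soundfile)

fs = 44_100
seconds = 10
target_dbfs = -90.0                      # RMS level relative to float full scale (1.0)

rms = 10 ** (target_dbfs / 20)           # about 3.16e-5
noise = np.random.normal(0.0, rms, fs * seconds).astype(np.float32)

# A 32-bit float WAV keeps the tiny signal above the file format's own quantization floor.
sf.write("noise_minus90dbfs.wav", noise, fs, subtype="FLOAT")
```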

            • Yes and yes I can. Your test is faulty due to a lot of assumptions. What background noise am I allowed in the room for my test? What volume should 0dBFS be at my listening location?

              The 120dB was experimentally proven and published in a paper by the AES. I made no claim that you in your living room under your listening conditions would be affected by this. I said the "tested limits of human hearing", i.e. in an ideal setting.

              Are advancements in audio likely to make even the slightest difference to your joggi

      • by AmiMoJo ( 196126 ) on Saturday September 04, 2021 @05:27AM (#61762199) Homepage Journal

        Most CDs are so poorly mastered anyway that even with current aptX lossless codecs you are unlikely to be able to tell the difference.

        Try the test on this website, see if you can reliably tell the difference between 128kbps MP3, 320kbps MP3 and lossless audio.

        https://www.npr.org/sections/t... [npr.org]

        • Lossless vs lossy has nothing to do with poor dynamic range. Removing the lower range of audible dynamics is only one of several things that lossy codecs do. Most people can't tell the difference but even with a shitty modern album the difference is there when you know what to listen to.

          All of that is further complicated by the fact that wireless codecs are not in any way optimal from an audio quality point of view. They prioritise low latency and low computational complexity over audio quality. A 320kbps M

  • 16 * 441000 = 705,600 ... that means if you have 1 Mbps ... you ought to easily get lossless 20 bit sampling at 44khz .. and still have plenty of room for error correction coding.

    • My calculator comes out to 7,056,000.

      • by Entrope ( 68843 )

        There was an extra 0 after the 44100.

        But most people have two ears, so they tend to want two channels of audio, which doubles the required bit rate.

        • Both channels are strongly correlated though. There is generally a lot you can do to squeeze here

          • by Luckyo ( 1726890 )

            Not if you have proper directional audio. A correctly implemented slight delay in one ear relative to the other, even on similar parts, is critical for that.

    • by bn-7bc ( 909819 )
      Hmm, one of us must have some errors then. If I'm correct you have (for a stereo signal with no error correction etc.) 2 × (16 × 44100) = 1,411,200, or about 1.41Mbps. So now the question becomes how much the lossless encoder can squeeze this, and what is our target bitrate for the output stream (and of course how much overhead do we need for error correction/detection)?
      • Your equation is correct for stereo... his is correct for mono

        The problem with talking about what can be achieved compressing losslessly is that IT IS ALWAYS DEPENDENT ON THE SOURCE DATA.

        The fact will be that some audio, maybe even most of the audio you ever want to listen to, can be compressed by at least X%, but not ALL audio losslessly.

        In fact any lossless compression scheme you use is guaranteed to expand at least one signal at least one bit in length. That bit is the bit that says "not compressed becaus
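
The source-dependence point can be seen without any audio tooling at all; here is a toy sketch using Python's zlib as a stand-in for a lossless audio codec:

```python
import math
import os
import zlib

# Noise-like data: essentially incompressible; zlib's framing even makes it slightly larger.
noise = os.urandom(1_000_000)

# Tone-like data: a repeating sampled sine wave, highly redundant and very compressible.
tone = bytes(int(128 + 100 * math.sin(2 * math.pi * i / 100)) % 256 for i in range(1_000_000))

for name, data in (("noise-like", noise), ("tone-like", tone)):
    packed = zlib.compress(data, 9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```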
    • by noodler ( 724788 )

      You are forgetting that it's stereo, so 2× the 0.7Mbit/s, which makes 1.41 or so Mbit/s :)
      The 1Mbit/s is perfectly in line with lossless compression of generic audio data.
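
For reference, the arithmetic in this subthread spelled out in a tiny Python sketch (the ~1 Mbps target is the figure from the summary):

```python
bits_per_sample = 16
sample_rate = 44_100
channels = 2

pcm_bps = bits_per_sample * sample_rate * channels   # 1,411,200 bit/s of uncompressed stereo CD audio
target_bps = 1_000_000                               # the ~1 Mbps figure in the summary

print(pcm_bps)                          # 1411200
print(round(target_bps / pcm_bps, 2))   # 0.71 -> roughly a 29% lossless reduction is enough
```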

  • Is it lossless?

    I can't tell from the summary.

    • It's not, it's lossy, and it's illegal to label lossy data compression "lossless"; that's deceptive advertising.
      • by noodler ( 724788 )

        Interesting. Do you have a reference?

        • At best, it may be lossless "sometimes", but 1mbps is not enough for that, so it will be a similar situation as with LDAC, which has a maximum bit-rate of 990kbps and is also claimed to be "lossless" but is in fact lossy. Files in various lossless formats like FLAC, Apple Lossless, TAK, etc. can often have long sections with an average bitrate that's much higher than 1mbps, like 1.35. There are entire files / streams that have an average bitrate higher than 1mbps, or files with average bitrate of 270kbps (ye
          • ehm by "but 1mbps is not enough for that" I meant that it's not enough for it to be lossless more than "sometimes".
  • Or am I going to have to watch videos that look like a poor lip sync?

  • ... you'll need not only a new phone, but new wireless headphones that have a new DAC in them capable of decoding the proprietary format. Yet wired ones work with all these fancy high bit rates "out of the box"
    • by bn-7bc ( 909819 )
      Not necessarily a new phone, if they let you sacrifice the extra battery used by CPU encode/decode as opposed to doing it in an ASIC. But yeah, new BT headphones will probably be required
      • by tlhIngan ( 30335 )

        Not necessarily a new phone, if they let you sacrifice the extra battery used by CPU encode/decode as opposed to doing it in an ASIC. But yeah, new BT headphones will probably be required

        As far as I know, all Bluetooth audio codecs are done on the main CPU, there's no ASIC offload. Qualcomm will provide you the necessary libraries and you have to sign a licensing agreement. Licensing fees are cheaper if you have a CSR (Cambridge Silicon Radio) Bluetooth chip (owned by Qualcomm, of course).

        But it's always been

        • by bn-7bc ( 909819 )
          OK, thanks for that bit of detail. I was probably thinking more about video, which is a whole other ballgame and not relevant here. Major brain fart.
    • by Luckyo ( 1726890 )

      True, but that's because wired ones don't need to do any meaningful computations. You just send the analog signal with as many channels as you need.

      With wireless, you need to encode the sound, send it over the ether, receive it, decode it and play it back. All while having enough of a buffer not to have break ups in sound if there's a transient break in the signal.

      That compute part is actually nasty, because you can't throw much power or cost at it. This needs to be integrated in phones and headphones that

    • Ya, but wired shit sucks. It's the future man. I love everything being wireless now.
      Ultimately, my headphones already support SBC (duh), aptX-HD/LL, AAC, and LDAC (lossless)
      On the source side, it's a bit more complicated.
      All things support SBC, but it's a huge pile of steaming shit.
      Apple devices only support AAC.
      Android tablets support LDAC and aptX.
      My windows machine supports aptX.

      So anyway, chances are, whatever device you're using already supports some kind of good codec.
    • >".. you'll need not only a new phone, but new wireless headphones that have a new DAC in them capable of decoding the proprietary format."

      Yep. My 4 year old phone doesn't even support ANY AptX codec. And you can bet no older equipment can or will be upgraded. And my 1 year old highest-end Sony wireless headphones also won't support this new lossless AptX.

      Everything supports SBC (since it is old and required) and it does suck. But lossless AptX vs. any lossy AptX- I suspect nobody will know any difference i

      • Everything supports SBC (since it is old and required) and it does suck. But lossless AptX vs. any lossy AptX- I suspect nobody will know any difference in quality.

        LineageOS 16 had? a feature to make SBC sound better than AptX by tweaking codec parameters and pushing more data.

        https://lineageos.org/engineer... [lineageos.org]

        • >"LineageOS 16 had a feature to make SBC sound better than AptX by tweaking codec parameters and pushing more data."

          Wow, that is interesting. I didn't know that was possible while staying compatible with existing devices (headphones, etc.).

          The site they reference for real-time comparison is really neat:
          https://btcodecs.valdikss.org.... [valdikss.org.ru]

    • Yet wired ones work with all these fancy high bit rates "out of the box"

      I tried your solution, but when I got up from my computer my headphones inexplicably were yanked off my head. I find your solution unworkable and look forward to someone developing an alternative.

      • I tried your solution, but every half hour or so the connection between my headphones and my phone would be inexplicably lost, and it'd take 10 minutes to get it back. Also my phone would insist on playing incoming ringtones via the headphones at +30 dB compared to my music, blowing out my eardrums. And after a few hours my headphones would stop working and need to be recharged.

        I'd rather deal with wires than with wireless earbuds.

        • I was being facetious, but really if you have dropout issues you should get that fixed. The only time my headset ever drops out is if I walk to the other side of the apartment and even then it takes only seconds to come back. Maybe your bluetooth stack in your OS is screwed? Maybe your headphones are made of chinesium? Either way your experience is not reflective of most of what has been available on the market for years already.

          Also why would your ringer play louder than your music? The Bluetooth remote pr

  • So instead of hearing static when the signal degrades, you'll hear the beautiful sounds of high quality, lossless static.
    • So instead of hearing static when the signal degrades, you'll hear the beautiful sounds of high quality, lossless static.

      That is not how digital coding works.

  • I'm curious how much, if any, this lossless codec improves on FLAC, which naturally we wouldn't want to standardize into the Bluetooth spec because it's well used, open source, and patent free.

    • by bn-7bc ( 909819 )
      Not sure, but maybe FLAC (even implemented in hw) draws more power than this new codec. I have no idea, but it's certainly possible.
    • by Luckyo ( 1726890 )

      Flac isn't used on Bluetooth because of computational requirements. One of the biggest problems with Bluetooth is the extremely low-power and low-cost decoders that are required as a baseline.

      Flac is great, but you do need fairly decent computational power to encode it on the fly. Remember: you need both encode and decode components to be effectively real time AND low power + low cost.

      • If the material is stored on the phone as FLAC, just need to decode at the receiver.

        No reason why audio streams couldn't be FLAC encoded at the source too. I expect that FLAC requires a bigger buffer to handle latency, but minor detail nowadays.

        • by Luckyo ( 1726890 )

          And buffer is the single biggest problem with modern wireless. How do you play a game where audio requires a significant buffer? How do you talk on the phone when audio requires significant buffering? These are notably the problems that Qualcomm started solving with their earlier AptX codecs, driving the SBC's typical latency of around 300-400ms down to 150-ish. Second gen AptX LL drives it down to 50-70ms, which is actually good enough for a lot of games that don't require real time sound for survival (i.e

          • Reality is, Flac is utterly unfit for purpose because encoding it on the fly with low latency on devices with very limited computational power and low power usage is just not a thing. It's not what it was ever optimized for. As you correctly mention, it would need significant buffering, which definitionally makes it unsuitable for real time audio.

            I wouldn't say "utterly unfit". 10ms latency for CD audio is 441 samples. You could choose a small blocksize and fixed predictors with FLAC and get very fast, low
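
For a sense of scale, the buffering delay implied by a FLAC block is simply blocksize divided by sample rate; a trivial sketch that ignores encode time and radio overhead:

```python
sample_rate = 44_100

for blocksize in (441, 1152, 4096):   # 441 is the 10 ms example above; 1152 and 4096 are common FLAC block sizes
    print(f"{blocksize} samples -> {1000 * blocksize / sample_rate:.1f} ms per block")
# 441 samples -> 10.0 ms per block
# 1152 samples -> 26.1 ms per block
# 4096 samples -> 92.9 ms per block
```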

            • by Luckyo ( 1726890 )

              Bluetooth as a system and codecs it uses in particular are aimed at tackling a specific fundamental problem. That being computational problem/power problem of digital properly handshaked bi-directional audio. It's exceedingly taxing on both compute and power to encode sound on the fly, send it over wireless, decode in target device and play it back all in real time, all while without needing major buffering for interruptions AND not requiring so much power and expensive decoders that as many devices as reas

              • by Luckyo ( 1726890 )

                Typo: For Flac "as many times" should say "many times". I.e. encode once, playback many times.

              • Flac is designed to be encoded on a fairly powerful system once, and then decoded as many times on a much weaker system.

                Again, you're partially right... I can confirm as the inventor that my intention was to push as much of the complexity of higher compression onto the encoder. But it was carefully designed to not preclude the applications you are describing. Dig into the format some more; the simpler modes are very low complexity/latency and give most of the bang for the buck on compression.

                • by Luckyo ( 1726890 )

                  Right. So what are going to be the RAM and Tx/Rx requirements on those supposedly "low compression" modes?

                  Again. The codecs designed for "encode once, decode many" are categorically unsuitable for bluetooth. What bluetooth needs is "encode once, decode once, with minimal power, compute, memory, and Tx/Rx requirements".

                  This is simply NOT what "encode once, decode many" codecs are designed to do. In any way, shape or form. You can contort one in extreme ways, such as one you suggest to meet SOME of the blueto

    • I'm more curious how it improves upon LDAC, which already exists, my headphones already support, and seems to have nearly identical features (lossless @ 44.1kHz/16-bit, lossy at higher)
    • I'm curious how much, if any, this lossless codec improves on FLAC, which naturally we wouldn't want to standardize into the Bluetooth spec because it's well used, open source, and patent free.

      In a very key way: It dynamically scales down to lossy encoding. The key problem with Bluetooth is bandwidth. Only in the most ideal situations does it have the bandwidth to deliver lossless audio, so unless you want to suffer dropouts every time you turn your head you need a codec that can work at very low bitrates, or can dynamically drop from lossless to lossy without glitching.

      Also FLAC for all its brilliance is not a low latency codec and also quite resource intensive to encode. It's completely unsui
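
To illustrate what "dynamically drop from lossless to lossy" means in the abstract, here is a purely hypothetical Python sketch of a mode-selection policy; it is not Qualcomm's actual algorithm, and the thresholds are made up:

```python
def pick_mode(estimated_link_kbps: float) -> str:
    """Toy policy: use lossless only when the radio link comfortably exceeds the lossless bitrate."""
    LOSSLESS_KBPS = 1_000      # ~1 Mbps, per the summary
    HEADROOM = 1.2             # made-up safety margin against link fades

    if estimated_link_kbps >= LOSSLESS_KBPS * HEADROOM:
        return "lossless 44.1 kHz / 16-bit"
    if estimated_link_kbps >= 400:
        return "high-bitrate lossy"
    return "low-bitrate lossy"

for kbps in (1500, 900, 300):
    print(kbps, "->", pick_mode(kbps))
```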

  • https://en.wikipedia.org/wiki/... [wikipedia.org] The world does not need another proprietary audio format where someone is hoping to clip the ticket. Commercializing a standard and getting royalty income forever is always some greedy company's wet dream. Qualcomm - I expect them to toss in some nasty DRM, so I would stay away for that reason alone. A couple of grams of headphone wire is not causing anyone neck problems relative to adding a 50 gram battery into the headphones.
    • Just implement AAC and be done with it

      I'd implement Opus and be done with it. AAC isn't designed for realtime use and has a largish codec delay. Opus gets better compression and has lower delay.

      • Just implement AAC and be done with it

        I'd implement Opus and be done with it. AAC isn't designed for realtime use and has a largish codec delay. Opus gets better compression and has lower delay.

        Don't be silly, how are they supposed to squeeze licensing fees out of bluetooth manufacturers with a fully-open, high-quality codec? (I really don't get why people don't just start implementing Opus over Bluetooth anyway, other than being heavily invested in "squeezing money out of people for proprietary codec licenses").

    • Qualcomm already ships the most widely used non-SBC codec for bluetooth- aptX.
      Windows uses it, and so does Android.
      Apple is the odd man out using AAC.
      This appears to be Qualcomm's competitor to Sony's lossless codec LDAC (also supported by Android)
      It has roughly double the bandwidth that aptX-HD has.

      Honestly, on my Sony MX1000s, I can't hear the difference between aptX-HD and LDAC on FLAC recordings.
    • The world does not need another proprietary audio format

      Clearly it does since currently none of the CODECs on the market have achieved audible transparency and no one offering something open is willing to step up to the table.

      Qualcomm - I expect them to toss in some nasty DRM

      Why would you expect that? Qualcomm's current offerings are the single most widely used on the market and have been for years, and there's no DRM. Do you have some evidence you can point to that this will change *this* time? I mean, this will most probably be Qualcomm's 5th codec to be widely supported and widely used on the market.

      A couple of grams of headphone wire is not causing anyone neck problems relative to adding a 50 gram battery into the headphones.

      You want neck problem

  • by bb_matt ( 5705262 ) on Saturday September 04, 2021 @03:48AM (#61762089)

    Given that so much music is mixed to sound good on consumer level gear, it's always questionable as to how often the ability to play back at 44.1kHz, 16-bit CD lossless, is going to make a blind bit of difference.

    There are just so many factors at play. The way a recording has been mastered, the quality of the equipment a listener has, the listener's own hearing.

    Add to this the fact that this is a new proprietary format, requiring a new phone and headphones, and what exact use case does this serve?

    If you are somewhat of an audiophile, you will be using wired speakers - high quality all around.
    Pretty much reaching amateur studio recording equipment and beyond for some.

    When studio engineers mix, they use exceptionally high quality studio monitors. Headphones are used for rough work.

    If you get yourself a set of studio monitors and listen to, for example, spotify, it is quite noticeable that so many recordings sound very different to the way they would on consumer level equipment. Studio monitors give the purest representation of sound, but consumer level speakers will do all sorts of stuff with the audio signal, such as boosting bass levels etc.
    But if you get a high quality lossless recording and play that through a set of studio monitors, if the engineers that worked on that recording were good, the difference in quality is remarkable - and it really has little to do with the bit-rate; 24-bit isn't going to make any difference at all to the end user's listening experience.

    So, the engineer has the job of ensuring it sounds good on as many devices as possible - and not all engineers are created equal.

    That gets back to the actual point of a proprietary format requiring new hardware - in that there is no point.

    So many people consume music at a quality barely higher than an old transistor radio and enjoy it just fine.

    • Given that so much music is mixed to sound good on consumer level gear, it's always questionable as to how often the ability to play back at 44.1kHz, 16-bit CD lossless, is going to make a blind bit of difference.

      Depends on the level of loss, of course ;)

      As far as bluetooth codecs are concerned, as an example:
      SBC will make it sound like a steaming pile of shit.
      AAC sounds great. aptX-HD and LDAC sound really fucking great.
      LDAC is lossless, but aptX-HD is not.
      I can't tell the difference between them.
      I can tell the difference between AAC and aptX-HD.
      It's difficult to tell the difference between SBC and a busy city street.

      • SBC will make it sound like a steaming pile of shit.

        Oh yeah - and people will gladly listen to that level of quality.
        Heck, before personal stereos, we'd listen on cheap mono-speaker transistor radios.
        The standard issue car stereo in affordable cars was a steaming pile of junk way back in the day.

        I've come to realise, that the vast majority simply don't care.
        For example, I'm probably (along with my partner), the only person on our street who listens to music all the time.
        I really care - it makes a difference to me. I live for music.

        Sure, this is anecdotal, bu

        • Airpod Pros are... "better" than the non-Pros. Still not fantastic. But very audibly better.
          Unfortunately, jogging with my 1000xm3's kind of sucks ass.

          Ultimately, even on the ultra shitty non-Pro airpods (I keep them for when I forget to charge my Pros) you can very easily tell the difference between SBC and AAC (Just pair them with anything... not Apple)

          SBC really is... really bad.

          So ultimately, I can't agree with you on the point "If you really care about music... Don't use headphones."
          I care abou
    • Given that so much music is mixed to sound good on consumer level gear, it's always questionable as to how often the ability to play back at 44.1kHz, 16-bit CD lossless, is going to make a blind bit of difference.

      Two things: 1) Technology should not cater to the lowest common denominator. What "so much" music is mixed for is irrelevant. There's music that doesn't cater to Beats-wearing students sitting on a noisy train.
      2) Your assessment is false. Compression and the resulting artefacts do not care how music is mixed, just like a heavily compressed JPEG doesn't care whether your picture is a landscape from a DSLR or an anime drawing. The quality degradation is noticeable even on moderate gear if you know what to l

  • ...the new DRM overlords and digital chains. Wish Apple would put the 3.5mm courage jack would so that when the rest of the Android world sees that Samsung copied them, they would know it was OK. Color me skeptical, but I wonder if Qualcomm has more DRM capability now.
    • (edit) ...the new DRM overlords and digital chains. Wish Apple would put the 3.5mm courage jack back so that when the rest of the Android world sees that Samsung copied them, they would know it was OK. Color me skeptical, but I wonder if Qualcomm has more DRM capability now.
