Can You Really Hear the Difference Between Lossless, Lossy Audio? 749

Posted by Soulskill
from the not-on-these-terrible-speakers dept.
CWmike writes "Lossless audio formats that retain the sound quality of original recordings while also offering some compression for data storage are being championed by musicians like Neil Young and Dave Grohl, who say compressed formats like the MP3s being sold on iTunes rob listeners of the artist's intent. By Young's estimation, CDs can only offer about 15% of the data that was in a master sound track, and when you compress that CD into a lossy MP3 or AAC file format, you lose even more of the depth and quality of a recording. Audiophiles, who have long remained loyal to vinyl albums, are also adopting the lossless formats, some of the most popular of which are FLAC and AIFF, and in some cases can build up terabyte-sized album collections as the formats are still about five times the size of compressed audio files. Even so, digital music sites like HDtracks claim about three hundred thousand people visit each month to purchase hi-def music. And for music purists, some of whom are convinced there's a significant difference in sound quality, listening to lossy file formats in place of lossless is like settling for a Volkswagen instead of a Ferrari."
  • by Anonymous Coward on Friday March 22, 2013 @01:13PM (#43248237)

    Usually if the bitrate is above 256 kb/s, I don't notice any difference.
    Of course it still affects some songs (especially the percussion parts).

    • by jlfose (1063282) on Friday March 22, 2013 @01:50PM (#43248819)
      It could depend on the gear the playback occurs on and the quality of the listener's ears. Watching Stan Lee's new show about "superhumans", it becomes clear that some people have, by training or genetics, better reflexes than the bulk of humanity. On my home gear I can't tell the difference above 160 kbps, but I'm more than willing to believe that some people can, either because they have much better gear to listen on, and/or they have superior hearing.
      • Re: (Score:3, Insightful)

        by Anonymous Coward

        That's a very apt description. Genetic factors, age, absence of damage, training to understand the differences/subtleties of overtones, and of course the equipment to play back sounds truly. I found the Wired article about Peter Lyngdorf and Steinway building speakers good enough to detect the difference between American- and German-manufactured pianos a fascinating read. http://www.wired.com/reviews/2012/10/steinway-lyngdorf-model-ls-concert/

      • by Joce640k (829181) on Friday March 22, 2013 @02:38PM (#43249503) Homepage

        You can actually practice listening to music; it's something you learn.

        Sometimes the difference between two sets of speakers can be as little as one clarinet in the middle of an orchestral piece. On one set it sounds good, on another it doesn't (or it's hardly there at all).

        It's not something you can pick out just by putting on a rap CD for ten seconds and turning the bass up to maximum in a store (which is how most "HiFi" systems are chosen these days and why the manufacturers produce so much garbage).

    • by Anonymous Coward on Friday March 22, 2013 @01:54PM (#43248853)

      I'd say it depends on what you're listening to.

      Most people, including most slashdot armchair pundits, who listen to Lady Gaga or some similar shit will never notice the difference. However, if you listen to things like Tchaikovsky's "1812 Overture", you will notice just how crappy lossy codecs really are. Especially towards the end.

    • by Bengie (1121981) on Friday March 22, 2013 @02:39PM (#43249517)
      For me, MP3 knocks out a lot of highs no matter the bitrate. Listening to most Jazz really brings out the flaws of MP3.
      • by Panaflex (13191) <convivialdingo@ y a h oo.com> on Friday March 22, 2013 @03:41PM (#43250409)

        10 years ago, MP3 encoders couldn't encode decent cymbals and saxophones below 384kbps... it was just a stream of high pitched garbage.

        These days they're both really good encoders. I still prefer AAC over MP3 just because the high freq nuances are better captured, but at AAC@256 and MP3@320, the differences are practically imperceptible to my ears.

        The only time I'd look at lossless music is for orchestral pieces. Compressed pieces still sound flattened and lack wideness, because live recordings have a lot more overtones, harmonics and variety of tones. Microphones, recording and engineering have adjusted in the past 5 years to compensate, though, so recent pieces are not too bad.

        Like anything, it's best to just try a few different methods and see what sounds best to you.

  • There is a long discussion among very qualified individuals on this subject. You can read it here [slashdot.org]
  • by Stentapp (19941) on Friday March 22, 2013 @01:14PM (#43248249) Journal

    I am quite sure I prefer a lossy compressed version of a 24-bit, 96 kHz track to a lossless compressed version of a 16-bit, 44.1 kHz track.

    • by Hatta (162192) on Friday March 22, 2013 @01:19PM (#43248351) Journal

      44.1 kHz 16-bit audio is completely transparent to the human ear. No one has ever been able to detect when a 16-bit DAC/ADC pair has been placed in a 24/96 audio path.

      Your preference for 24/96 audio as a listener is entirely due to the placebo effect. There are good reasons to master audio in high res, but for listening, 16-bit 44.1 kHz audio is as good as anything.

      • by QRDeNameland (873957) on Friday March 22, 2013 @01:43PM (#43248701)

        Your preference for 24/96 audio as a listener is entirely due to the placebo effect.

        Well, in all fairness, listeners may actually hear perceptible differences between 24/96 and 16/44.1 audio sources due to different mastering, but of course that says nothing about whether they can actually tell the difference between the two bitrates when everything else is equal.

        This article [xiph.org] is a pretty good explanation of why 16/44.1 is as good as anyone needs for playback.

        • by hairyfeet (841228) <bassbeast1968@@@gmail...com> on Friday March 22, 2013 @02:02PM (#43248985) Journal

          Well, to be really REALLY fair, I have noticed it also matters whether the original music was recorded in analog or digital. I've taken some tracks we cut in a classic studio with the analog 8-track, and it's really fricking hard to get those to sound really... "right", for want of a better term (it's really hard to describe), when compared to digital.

          The closest I can get to describing it is this, and sorry if you aren't a musician, but they'll know of which I speak... you know how you have that great old tube amp for the guitar and it has that nice warm fat feel to it? Notice how the same amp, when modeled digitally, doesn't quite have the warmth? It's kinda... artificial sounding? That was the trouble we had: the tapes sounded nice and warm, but trying to get that to carry over to digital was fucking hard. Frankly, it was easier to just cut the tracks again in a digital studio than it was to get the analog tapes to really convert well.

          Sorry if I'm not describing it correctly, but music is one of those things where my terminology often fails me. It's so hard to describe feelings and emotions, and music for me is an emotional expression, so I end up having to describe how I felt as I listened or played, and my vocabulary fails me. The analog was a little fuzzy, but it was warm and lived-in feeling, while trying to convert that to digital, something was lost in translation; no other way I know how to say it. The same tracks recorded natively in digital sounded great, analog sounded great, but putting the two together was just something we never could get to work.

          • by nabsltd (1313397) on Friday March 22, 2013 @02:42PM (#43249551)

            The closest I can get to describing it is this, and sorry if you aren't a musician, but they'll know of which I speak... you know how you have that great old tube amp for the guitar and it has that nice warm fat feel to it? Notice how the same amp, when modeled digitally, doesn't quite have the warmth?

            The reason for this is that it's hard to capture distortion accurately.

            That "warm sound" is a result of the inacurracies of the tube amp. You may like it better (and that's just fine), but it is does not accurately reproduce the original signal. For me, it's really no different than the current "loudness war" where re-mastered releases are much louder. Many of today's listeners like that sound beter, but it isn't accurate.

      • by femtobyte (710429) on Friday March 22, 2013 @01:43PM (#43248713)

        You sure can hear the difference if you stick a 44.1kHz DAQ in a 96kHz signal chain before filtering out ultrasonic high frequency components (if there are enough to make a difference). The advantage of 96kHz recording isn't that it can capture any more human-audible frequencies than 44kHz can, but that you have a lot more leeway to prevent aliasing of signals above the Nyquist limit down into the audible range (a 25kHz tone sampled at 44kHz results in a spurious, highly audible (25-44/2)=3kHz aliasing signal).

        It's pretty much impossible to build analog frequency filters with a sharp cutoff (e.g. everything at 20 kHz and below gets through, everything above 22 kHz is attenuated by 60 dB), so recording at 44.1 kHz sampling requires either being absolutely certain the original sound source has minimal high-frequency harmonics, or heavy analog filtering that cuts well into the audible high-frequency range. With 96 kHz sampling, it's much easier to build an analog filter that gradually rolls off high frequencies between 20 kHz and 40 kHz (...producing a >40 kHz sound is tricky in the first place), preventing aliasing without the filter cutting into the audible range. Once digitized, it's trivial to make a *digital* filter with a perfect frequency cutoff to downsample the 96 kHz to aliasing-free 44.1 kHz.
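That oversample-then-filter-then-decimate recipe can be sketched in plain Python. This is a toy windowed-sinc FIR with illustrative parameters (tap count, window, test tones are all made up for the demo), nothing like a production resampler:

```python
import math

FS_IN, FS_OUT = 96_000, 48_000
CUTOFF = 20_000          # passband edge, Hz
TAPS = 201               # FIR length (odd, so the filter has an exact centre)

# Windowed-sinc lowpass FIR (Hamming window) -- the "trivial digital filter"
fc = CUTOFF / FS_IN      # normalised cutoff (cycles per input sample)
M = TAPS - 1
h = []
for n in range(TAPS):
    k = n - M / 2
    ideal = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
    window = 0.54 - 0.46 * math.cos(2 * math.pi * n / M)
    h.append(ideal * window)
s = sum(h)
h = [v / s for v in h]   # normalise to unity DC gain

def downsample(x):
    """Lowpass-filter x (sampled at 96 kHz), then keep every 2nd sample."""
    y = []
    for i in range(0, len(x), FS_IN // FS_OUT):
        acc = 0.0
        for j, hj in enumerate(h):
            if 0 <= i - j < len(x):
                acc += hj * x[i - j]
        y.append(acc)
    return y

# 1 kHz tone (audible) plus a 30 kHz tone that would otherwise
# alias down to an audible 18 kHz at the 48 kHz output rate
x = [math.sin(2 * math.pi * 1_000 * n / FS_IN) +
     math.sin(2 * math.pi * 30_000 * n / FS_IN) for n in range(2_000)]
y = downsample(x)
```

After the filter, `y` is (up to the FIR's group delay of 100 input samples) just the 1 kHz tone: the ultrasonic component is removed before decimation instead of folding into the audible band.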

        • Re: (Score:3, Informative)

          by ozydingo (922211)

          Two nits to pick:
          1) You can get arbitrarily close, but you can't get a "perfect" frequency cutoff.
          2) A 25 kHz tone sampled at 44 kHz gives you a 19 kHz tone. Remember the [-pi:0] (or [pi:2*pi]) frequency range comes first. A 41 kHz tone would get you a 3 kHz tone after sampling.

          Otherwise all true, which is why most recording devices do exactly that, sample at a high rate and digitally filter before downsampling to 44.1. But none of that has much to do with whether or not, once you've gotten past the aliasing
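The folding arithmetic in nit 2 is easy to check with a few lines of Python: a sampled pure tone lands at its distance from the nearest integer multiple of the sampling rate (a quick sketch with the thread's example numbers):

```python
def alias(f, fs):
    """Frequency observed after sampling a pure tone of frequency f at rate fs (Hz)."""
    r = f % fs
    return min(r, fs - r)

fs = 44_000  # the 44 kHz sampling rate used in the example above

print(alias(25_000, fs))  # 19000 -- a 25 kHz tone folds to 19 kHz, not 3 kHz
print(alias(41_000, fs))  # 3000  -- it's the 41 kHz tone that lands at 3 kHz
print(alias(19_000, fs))  # 19000 -- below Nyquist, unchanged
```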

          • by femtobyte (710429) on Friday March 22, 2013 @02:23PM (#43249319)

            1) Digitally, yes you can. Take the DFT of the data; zero out all components above your frequency cutoff; reconstruct the signal as the sum of below-cutoff frequencies. Voila, a perfect sharp cutoff. The only subtlety is that you can only choose an exact cutoff corresponding to some integral number of cycles in your sampling window, so you can't cut off at exactly sqrt(e*pi) kHz --- but you do have plenty of wave numbers from which to select a perfect cutoff (increasing with the size of your DFT window).

            2) Untrue: a 44kHz *sampling rate* has a 44/2=22kHz Nyquist cutoff. Frequencies f>22kHz Nyquist limit "wrap around" to f-22kHz difference frequencies.

            But yes, I agree, on the playback side there's no audible difference between a (sufficiently well made) 44.1kHz and 96kHz DAC.
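The DFT brick-wall from point 1 can be sketched directly. This is a naive O(N²) transform on a toy 64-sample window (purely illustrative; a real implementation would use an FFT):

```python
import cmath, math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def brickwall(x, keep_bins):
    """Zero every DFT bin above keep_bins (and its mirror), then reconstruct."""
    X = dft(x)
    N = len(X)
    for k in range(N):
        # bin k and bin N-k represent the same frequency; keep only low ones
        if min(k, N - k) > keep_bins:
            X[k] = 0
    return idft(X)

N = 64
# signal: a bin-3 component (kept) plus a bin-20 component (removed)
x = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 20 * n / N)
     for n in range(N)]
y = brickwall(x, keep_bins=5)
want = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
print(max(abs(a - b) for a, b in zip(y, want)))  # ~0: only float rounding remains
```

The cutoff really is exact here, with the caveat from the comment above: it can only sit between whole DFT bins, i.e. at multiples of (sample rate / window length).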

            • by arth1 (260657) on Friday March 22, 2013 @02:40PM (#43249533) Homepage Journal

              But yes, I agree, on the playback side there's no audible difference between a (sufficiently well made) 44.1kHz and 96kHz DAC.

              No, but what makes a big difference is when you have a 48 kHz sound card that resamples everything to 48 kHz for an internal DSP stage that cannot be bypassed, and then back again. Yes, Soundblaster Audigy, I'm looking at you.
              44.1 -> 48 kHz gives a lot more audible artifacts precisely because they're so close. Think of it as audible moire.

              Also, for newer computer audio cards, if you have a choice, use 88.2 kHz for the internal rate instead of 96 kHz. The reason is that most high quality sound is in 44.1 which converts perfectly to 88.2. For 48 kHz, it's less of a problem in the first place, and likely also worse quality sound to start with.
              Of course, unless the rest of the audio path is good, it doesn't matter much, but if you like to listen to FLACs with high end headphones, it sure won't hurt to use 88.2 instead of 96 kHz.
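The ratio argument above is easy to see with exact fractions: 44.1 → 88.2 kHz is a clean 2:1, while 44.1 → 48 or 96 kHz forces an awkward 147-based rational resampling ratio (a small sketch; `ratio` is a hypothetical helper name):

```python
from fractions import Fraction

def ratio(src, dst):
    """Up/down factors (L, M) a rational resampler needs to go src -> dst Hz."""
    r = Fraction(dst, src)
    return r.numerator, r.denominator

print(ratio(44_100, 88_200))  # (2, 1)     -- exact doubling, trivial
print(ratio(44_100, 96_000))  # (320, 147) -- awkward fractional ratio
print(ratio(44_100, 48_000))  # (160, 147)
print(ratio(48_000, 96_000))  # (2, 1)
```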

      • by hairyfeet (841228) <bassbeast1968@@@gmail...com> on Friday March 22, 2013 @01:46PM (#43248755) Journal
        You are 100% correct. I have sat in a $100k studio with $5k reference monitors and heard my tracks played back at both 192k and at 44.1k, and honestly? Couldn't tell the difference, I really couldn't. And while my midrange hearing may not be the greatest, I'm picky as hell when it comes to low end, and that is usually the first thing that goes when you compress. But standard 44.1k? Couldn't tell the difference, and if there was gonna be a difference I would have heard it on that system; it was top notch. I'm sure many here can bring citations showing double-blind tests which I have no doubt show it's all placebo, because if I can't hear it in a nice studio with the actual live instrument right beside it, I doubt seriously anybody is gonna hear a difference with home gear, even high-end home gear.
        • by hedwards (940851) on Friday March 22, 2013 @02:02PM (#43248983)

          The point of the equipment is that you have quality in reserve as you go through the process of mastering the tracks. The more quality you have in reserve the more you're able to do before you start having to deal with artifacts and other nastiness. As with all such things, you have to think about the order in which you do things and the order in which you throw out data to get the best results.

          The point of buying lossless music isn't so much that it's better for listening to, it's that you can compress it however you like later on without having to worry as much about the sound quality you get. Since you have more data to work with, you can get a better quality at a lower bitrate than if you were starting with an already compressed track.

      • by chipschap (1444407) on Friday March 22, 2013 @01:47PM (#43248769)

        44.1 kHz 16-bit audio is completely transparent to the human ear. No one has ever been able to detect when a 16-bit DAC/ADC pair has been placed in a 24/96 audio path.

        Your preference for 24/96 audio as a listener is entirely due to the placebo effect. There are good reasons to master audio in high res, but for listening, 16-bit 44.1 kHz audio is as good as anything.

        As a former audio professional (specializing in location recording of choirs and orchestras) I must agree. But even my aging ears can hear the difference between 44.1 (or 48) kHz 16-bit uncompressed and a typical MP3. Side note: 24-bit has a few audible advantages for music with extremely wide dynamic range (from ppp to fff, say) where 16-bit will struggle a little at the very soft end.

        • by Anonymous Coward on Friday March 22, 2013 @02:11PM (#43249121)

          According to Wikipedia, the audible range of human hearing spans around 130 dB. 16 bits can at best offer a dynamic range of 96 dB, whereas 24 bits offer 144 dB.

          So it should be pretty obvious that you can't fit the entire audible range into 16 bits. This might not be relevant to modern-day music, but if you want to record what the ear is actually capable of hearing (not including sound levels above the pain threshold) you will need those 24 bits.
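Those 96 dB and 144 dB figures come straight from the standard formula for an ideal n-bit quantiser, 20·log10(2^n) ≈ 6.02 dB per bit:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic-range ceiling of an ideal n-bit quantiser, in dB."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3  -- the "96 dB" above
print(round(dynamic_range_db(24), 1))  # 144.5 -- the "144 dB" above
```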

        • by jonsmirl (114798) on Friday March 22, 2013 @02:30PM (#43249411) Homepage

          When the music gets soft in 16-bit, you have a lot of zeros in front of the number, so you effectively have only a three- or four-bit signal being fed into the DAC. This is fixed-point math, not floating. With 24-bit you can have all of those zeros in front and still have eight or more bits to feed into the DAC. This is even more beneficial when the amp implements power supply volume control: PSVC raises the effective noise floor the DAC has to deal with.

        • by Entropius (188861) on Friday March 22, 2013 @02:38PM (#43249505)

          OT, as a choral performer:

          Classical music has a stupidly wide dynamic range, more than any other genre I know of, and (in particular) soprano sections have a nasty talent for pegging meters that were supposed to be set with plenty of headroom.

          • by djdanlib (732853) on Friday March 22, 2013 @03:44PM (#43250447) Homepage

            As a live sound engineer dealing with vocalists who do that regularly (sing at normal program levels and then BELT A PHRASE OUT)... let me say... ARGH.

            I put a steep compressor on someone who's prone to doing that, and let me tell you, it makes my life much easier. I can't fix the clipping, but I can make sure they don't cause the audience to cover their ears.

      • I know that in imaging, having more fidelity than the human eye can see is important in intermediate products, as visual manipulation of low-fidelity content can produce visible artifacts. Is that the case for audio as well? If someone is going to resample audio for a remix, is there a risk of the decreased fidelity ultimately manifesting in the final product?

        • by ozydingo (922211)

          Yes, for both bit depth and sampling frequency. Here are two possible reasons why:

          1. Bit depth. Remix wants to amplify a sound in the original mix. At 16 bit depth, you have 2^16 possible values to cover everything from silent to max loudness. If you take a soft sound that uses only some of those values and amplify it, the result suffers from possibly noticeable quantization artifacts. This is like magnifying a small picture to produce a pixelated one.
          2. Sample frequency. Remix wants to frequency-shift /
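Point 1 (quantization noise magnified when a soft passage is amplified) can be demonstrated numerically. A toy model, not a real remix workflow; the signal level, gain, and helper names are all made up for the sketch:

```python
import math

def quantize(x, bits):
    """Round x (in [-1, 1]) to the nearest n-bit level."""
    scale = 2 ** (bits - 1) - 1
    return round(x * scale) / scale

# A very soft sine: peaks at 0.001 of full scale (about -60 dBFS)
soft = [0.001 * math.sin(2 * math.pi * n / 100) for n in range(1000)]

def boost_error(bits, gain=1000):
    """RMS error after quantizing the soft signal, then amplifying by `gain`."""
    boosted = [quantize(s, bits) * gain for s in soft]
    ideal = [s * gain for s in soft]
    return math.sqrt(sum((b - i) ** 2 for b, i in zip(boosted, ideal)) / len(soft))

print(boost_error(16))  # noticeable quantization error after the boost
print(boost_error(24))  # roughly 256x smaller: the extra 8 bits absorb the gain
```

The 16-bit version of the soft passage only spans a few dozen quantization steps, so amplifying it amplifies the "staircase" too, exactly like magnifying a small picture into a pixelated one.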

      • by dgatwood (11270) on Friday March 22, 2013 @01:49PM (#43248807) Journal

        Speaking as someone who frequently does recording, your comment suggests that no one has done that test with classical music in a properly controlled listening environment using quality gear while giving the test subject the ability to control the volume arbitrarily. When you crank up the volume, the noise floor difference in soft passages alone should make the difference between 16-bit and 24-bit signal paths a dead giveaway, even for someone with moderate to severe hearing loss. It isn't even subtle. Of course, if the person doesn't turn it down for the loud passages, he/she will likely suffer hearing damage, but perhaps that's why he/she has moderate to severe hearing loss in the first place. :-D

        The 44.1 vs. 96 kHz difference is more subtle, requiring someone with top-notch hearing (very rare), headphones that can accurately reproduce frequencies above 20 kHz, and 96 kHz DAC hardware that does not have a bandpass filter starting at 16 kHz. If you fail to verify even one of those requirements, you would expect no one to be able to hear the difference, because there won't be any difference.

        • by Overzeetop (214511) on Friday March 22, 2013 @03:02PM (#43249793) Journal

          Actually, you've proven the GP's point. You can't tell the difference if you are listening to the program. Turning a program up in the "soft sections" is exactly what you should never, ever do when listening to a program. You may as well put on the IR headset with compression that came with your TV so you can watch late night TV without disturbing your wife.

          Mastering is an entirely different ball of wax and, yes, you want all the headroom you can get. It's no different than photographers using RAW formats instead of JPEGs (even lossless JPEGs) out of the camera. You want all the bits you can get. But after you're done mastering, dropping to 16 bits isn't going to affect the outcome. That's the whole point of mastering - if we didn't want it to be that soft, we would have engineered it to be louder.

      • by Jane Q. Public (1010737) on Friday March 22, 2013 @02:07PM (#43249037)

        "There are good reasons to master audio in high res, but for listening 16 bit 44.1khz audio is as good as anything."

        The reason for having "extra" fidelity in master recordings is the same as for having high-resolution photos in "raw" format: there is lots more wiggle room for editing while still maintaining good enough fidelity that the end user can't tell the difference.

        For example: take a large (say 16M pixel) 8 x 10 photo, and reduce it to 4 x 5 at 600 dpi. Then take the same photo, edit it (for example, change some colors, remove a cloud from the sky, etc.) and reduce that to the same size and resolution. Even though the resulting photos are higher resolution (at arm's length) than the eye can perceive, they look different.

    • by fa2k (881632) <pmbjornstad@@@gmail...com> on Friday March 22, 2013 @01:28PM (#43248483)

      Depends on how good the sound engineers are. A lot can be gained by higher resolution and sample rate in the mastering stage, but by using a good low pass filter and dithering (and dithering is not really necessary, http://developers.slashdot.org/story/13/02/27/1547244/xiph-episode-2-digital-show-tell [slashdot.org] ) basically all audible information is captured in 44.1kHz / 16. Your speakers probably don't go much above 20 kHz anyway, so anything beyond 44.1kHz will only cause distortion (aliasing), see post by MetalliQaZ "Debunked" below.

    • by fatphil (181876) on Friday March 22, 2013 @01:42PM (#43248691) Homepage
      > I am quite sure ...

      In other words, you've never done an ABX test and are just spouting ill-informed supposition. The ABX test is the gold standard; get back to us once you can distinguish those sources that way with 95% confidence.
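The 95% criterion here is just a one-sided binomial test: count how surprising your number of correct ABX answers would be under pure guessing. A sketch (`abx_p_value` is a hypothetical helper; `math.comb` needs Python 3.8+):

```python
from math import comb

def abx_p_value(correct, trials):
    """Chance of getting >= `correct` right out of `trials` by coin-flip guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials clears the 5% bar; 11 out of 16 does not
print(abx_p_value(12, 16))  # ~0.038 -- significant at 95% confidence
print(abx_p_value(11, 16))  # ~0.105 -- could easily be luck
```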
  • One word: YES. (Score:5, Insightful)

    by Anonymous Coward on Friday March 22, 2013 @01:15PM (#43248259)

    Caveat: You have to have decent headphones (not Apple earbud BS), and/or good speakers, but that's about it. The difference is negligible once you hit ~320Kbps MP3, in my opinion, but anything under 256Kbps, regardless of lossy format, you can *clearly* hear cymbal hits turning to an underwater splooshy mess.

    • Re:One word: YES. (Score:4, Informative)

      by arth1 (260657) on Friday March 22, 2013 @02:12PM (#43249139) Homepage Journal

      Caveat: You have to have decent headphones (not Apple earbud BS), and/or good speakers, but that's about it. The difference is negligible once you hit ~320Kbps MP3, in my opinion, but anything under 256Kbps, regardless of lossy format, you can *clearly* hear cymbal hits turning to an underwater splooshy mess.

      Hi-hats are even worse than cymbals. Even at 256 kbps, hi-hats tend to sound like they're being hit with a bag of broken glass, and they're the easiest way to identify lossy compression I can think of. Except, perhaps, some of Mike Oldfield's earlier works.

  • by jgtg32a (1173373) on Friday March 22, 2013 @01:15PM (#43248263)
    I can't tell which one is better though.
  • by BenSchuarmer (922752) on Friday March 22, 2013 @01:16PM (#43248283)
    ... and scratchy/poppy vinyl records. MP3s on my cheap ear buds are good enough most of the time.
    • I grew up with the same thing (AM radio, no less) and I've lost most of my highs in both ears and a lot of everything in my right ear at this point, so mono works fine for me...in fact, listening to some OLD recordings from the sixties and seventies when they really thought that separating the voices into different tracks was cool makes listening on headphones nearly impossible...I get left track only. Although a great take on the backup singers sometimes, depending on the mix. Frankly, if you stand behind
    • by Zemran (3101) on Friday March 22, 2013 @01:49PM (#43248799) Homepage Journal

      I listen in the truck with a blown exhaust and whilst getting high on the fumes, lossy or lossless? I have trouble noticing if the car radio is even turned on.

  • No (Score:5, Insightful)

    by Hatta (162192) on Friday March 22, 2013 @01:17PM (#43248299) Journal

    No you can't. Not with any reasonably modern encoder and bitrates above 256. Anyone who tells you otherwise is experiencing the placebo effect. BTW, you can't tell the difference between 16-bit/44.1 kHz audio and 24/96 audio either. And vinyl might sound "better" than digital to you, but digital is objectively more accurate.

    Audiophilia is saturated with woo. This is the same market that brought us $500 ethernet cables [cnet.com].

    • by v1 (525388) on Friday March 22, 2013 @01:22PM (#43248401) Homepage Journal

      No you can't. Not with any reasonably modern encoder and bitrates above 256.

      And there's the rub of course. That general of a question can't be answered yes/no. It depends on a variety of factors, most notably the content, the codec, the bitrate, and the playback.

      I don't even know why this article submission got accepted. It's like asking "can you win a race against a Toyota?" Where do you even start with that?

    • Re:No (Score:5, Insightful)

      by Spy Handler (822350) on Friday March 22, 2013 @01:26PM (#43248449) Homepage Journal

      Doesn't matter, the audiophile market is not rational (kind of like the wine market). After a certain quality threshold, say 256kbps mp3 or $100 bottle of wine, nobody can tell the difference in a blind test. Yet suckers keep paying money for $500 speaker cables and $1000 bottles of wine. Just stoking ego at that point.

      • Re: (Score:3, Funny)

        by osu-neko (2604)

        Doesn't matter, the audiophile market is not rational (kind of like the wine market).

        Show me a rational market, and I'll have to inquire as to the nature and evolutionary history of the species of aliens participating in it.

    • Re:No (Score:5, Insightful)

      by rudy_wayne (414635) on Friday March 22, 2013 @01:29PM (#43248503)

      In medical tests, people are given a placebo and yet claim to feel better or feel the same effects as people who are given the real medication. These must be the same people who rail against mp3s.

      Just because Neil Young and Dave Grohl are famous musicians, it doesn't mean that they actually know what they are talking about. 40 years of exposure to loud music has probably damaged their hearing enough that they really don't know what they are hearing.

      Saying that A sounds better than B is completely subjective and affected by many things. Not just how the music was encoded, but the quality of the DAC used for playback and the quality of the speakers/headphones used.

      • Re:No (Score:4, Insightful)

        by fatphil (181876) on Friday March 22, 2013 @01:54PM (#43248855) Homepage
        And if you put them up for a test, and told them which source was which in advance, I'm sure they'd be able to tell you the flaws in the one you said was the mp3 (or whatever). Even if you deliberately swapped the cables over.
      • by zzsmirkzz (974536)

        In medical tests, people are given a placebo and yet claim to feel better or feel the same effects as people who are given the real medication.

        People don't just claim to feel better, they do feel better. There is no incentive for them to lie; in fact, there is a disincentive to do so. The cause of the "placebo" effect is in the mind of the patient: the patient believes they should be getting better, and then they do. Power of thought, belief and, if defined correctly, faith. Really, it is the power of consciousness, which no one fully understands.

        This can be applied to apparent differences in audio formats. The observer believ

    • by Waccoon (1186667)

      For chiptunes, I can hear a difference between 256 and 320, but just barely.

      The biggest factor is how the high frequencies are filtered out before the audio is compressed, because the filtering appears to be the same regardless of the final bitrate. Even ultra-high bitrate audio will sound awful if the stock frequency cutoff is used, and I have to fiddle with the settings in LAME to make my songs sound good, even at 320.

  • by Clueless Moron (548336) on Friday March 22, 2013 @01:17PM (#43248305)

    I'm listening to a performance, not some audio benchmark. If a bit of loss bothers you, it must be some pretty damned uninspiring music you're listening to.

    And if you're listening on some random mp3 player with bud headphones while walking around doing stuff, compression loss is the least of your worries.

  • by wiredog (43288) on Friday March 22, 2013 @01:18PM (#43248313) Journal

    as fast as a Ferrari.

    Since I do most of my listening in a car, and am almost 48, I can't hear the difference between an MP3 and a vinyl album, or a CD, most of the time. Well, except for the lack of skipping. Ever try to listen to an LP in a moving car? But I digress. Sure, people who are younger and $pend lot$ of dollar$ on the Finest Audiophile equipment around can tell. Me in my Chevy? Not so much.

  • by scorp1us (235526) on Friday March 22, 2013 @01:18PM (#43248317) Journal

    We recently discovered [arstechnica.com] that human hearing beats the linear response assumptions used in lossy codecs. So yes, their criticisms are scientifically founded.

    • by Hatta (162192) on Friday March 22, 2013 @01:21PM (#43248383) Journal

      Unless you have people that can ABX the difference, no their criticisms are not scientifically founded. An actual blind test beats any theoretical reasoning any day.

      • by Trepidity (597)

        In particular, nobody claims that lossy codecs use a perfectly accurate model of human hearing; they don't need to. The goal is to have a psychoacoustic model that captures enough of the general mechanics of hearing, to enable a bunch of constants to be tuned empirically. If the model doesn't come anywhere near to capturing anything important, that would be a problem, because you'd never be able to tune the constants. But once it captures the general outlines, much of the real work on lossy encoders over th

      • by scorp1us (235526)

        Didja read the article? Some people can tell the difference down to one oscillation per second. That's not theoretical.

    • by ImprovOmega (744717) on Friday March 22, 2013 @01:44PM (#43248723)
      Subject:

      44.1khz ought to be enough for anyone...

      Body:

      human hearing beats the linear response assumptions used in lossy codecs. So yes, their criticisms are scientifically founded.

      These have nothing to do with each other.

  • Debunked (Score:5, Informative)

    by MetalliQaZ (539913) on Friday March 22, 2013 @01:18PM (#43248319)

    The concept of improving consumer listening experience using studio quality recording has been thoroughly debunked, right here on Slashdot...
    Why Distributing Music As 24-bit/192kHz Downloads Is Pointless [slashdot.org]

  • It doesn't matter (Score:5, Insightful)

    by Anonymous Coward on Friday March 22, 2013 @01:18PM (#43248323)

    The reason people use lossless compression for audio (i.e. FLAC or SHN) is not because they can tell the difference. Maybe you think you can, maybe you think you can't, but that's irrelevant anyway. The reason people choose lossless is that lossless is the only suitable solution for archiving. If you want to preserve your CD audio exactly as it appears on the CD, the only possible solution is lossless compression. If you choose lossy, you aren't making an archive of the original, but rather an approximation of the original.

    That's all there is to it.

  • Any good studies? (Score:5, Interesting)

    by Experiment 626 (698257) on Friday March 22, 2013 @01:19PM (#43248341)
    Anyone know of any good double-blind studies comparing people's ability to tell FLAC from 320kbps MP3? Googling just turns up people debating in forums whether you would be able to tell the difference rather than any serious academic research.
    • Re:Any good studies? (Score:5, Interesting)

      by FrankSchwab (675585) on Friday March 22, 2013 @01:53PM (#43248843) Journal

      I don't know if it's a good study, but I did exactly this test. Ten or fifteen years ago.

      I took four musical selections (from the latest Rolling Stones album at the time, a solo piano performance, a classical orchestra, a female vocal), and encoded them at 128, 192, and 256 Kbps with the Fraunhofer codec of the day (remember that?). I re-expanded them to 44.1 KHz CD tracks, and put them on a burned audio CD (remember those?). Each selection on the CD had five versions - the first was always the original bit-for-bit copy from the source CD, then followed (in random order) the 128, 192, 256, and the original again.

      I made ten copies, and handed them out to the audiophiles in the office to play on their home stereos, and gave them a test sheet - I asked them to identify for each selection which version was 128, 192, 256, or the original. Nobody came close to having a "golden ear" that could reliably tell the 128Kbps versions from the others, much less the higher bitrates. Overall, there was a slight ability to detect the 128 kbps versions - it got selected as the lowest quality one more times than random chance would suggest, but even that was still well below 50% (I don't remember the exact numbers any more).

      And this was with ancient MP3 encoders.

      Frankly, if you think you've got the golden ear, first of all I pity you - I'm sorry that you have to put up with all the crap you're going to hear. Second of all, I really recommend running the same test - prepare the tracks, have a friend randomly order them (but keep track), and then see if you can identify them. Don't simply say "Of course I can" - Actually do it and prove it.
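      If you want to script the shuffle-and-answer-key part of a test like that, here's a rough sketch in Python (not Frank's actual setup; the version labels are just examples):

```python
import random

def make_blind_order(versions, seed=None):
    """Shuffle the encoded versions of one selection, keeping an answer key.

    `versions` is a list of labels like ["128k", "192k", "256k", "original"].
    Returns (playback_order, answer_key), where answer_key maps the blind
    track position (1-based) to the real label.
    """
    rng = random.Random(seed)
    order = versions[:]          # copy so the caller's list is untouched
    rng.shuffle(order)
    answer_key = {pos + 1: label for pos, label in enumerate(order)}
    return order, answer_key

# One selection, four versions: the listener just sees "track 1..4".
order, key = make_blind_order(["128k", "192k", "256k", "original"], seed=42)
print(order)   # the shuffled playback order to burn/copy
print(key)     # keep this sheet hidden until the test is done
```

      Have a friend run it (or use a seed they pick) so you never see the key before scoring yourself.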

      And, if I can be an old man with a bit of advice for a minute: if you can't tell the difference, don't go out of your way to train yourself to tell the difference. It'll just be an annoyance to you for the rest of your life. Kinda like the person who taught me about the reel-change indicators on film at the movie theatres - I see it, and my whole body tenses up waiting for the change. I wish I had never known about it. I really appreciate the change to digital projection so I don't have to deal with those anymore. /frank

  • Oblig (Score:4, Interesting)

    by jxander (2605655) on Friday March 22, 2013 @01:20PM (#43248363)
  • by Anonymous Coward on Friday March 22, 2013 @01:20PM (#43248367)

    The difference is the ability to transcode to different bitrates and formats without losing anything from the original source.

  • by jtownatpunk.net (245670) on Friday March 22, 2013 @01:26PM (#43248445)

    If you've got decent equipment and a quiet environment. With cheapo earbuds, I don't notice the difference. With my good headphones, the difference is obvious. When I'm driving down the highway, I can't tell. In my living room, I can tell.

    With storage so cheap and bandwidth so plentiful, there's really no reason not to use lossless audio. My $40 Clip+ with a $25 microSD card can hold 40 gigs of content and can play FLAC. There's no reason to use a lossy format.

  • by fahrbot-bot (874524) on Friday March 22, 2013 @01:26PM (#43248455)

    By Young's estimation, CDs can only offer about 15% of the data that was in a master sound track...

    ... Neil Young is neither a mathematician nor an audio engineer.

    [ -- insert appropriate Neil Young lyric for satirical effect here -- ]

  • Nope, normally. (Score:4, Insightful)

    by BLToday (1777712) on Friday March 22, 2013 @01:26PM (#43248457)

    Nope. Not if the quality is high enough; I can't tell the difference 99% of the time. There are some musical instruments (harpsichord) and singers (Tori Amos) where compression is very obvious. The lossy version becomes almost unlistenable once you've heard the lossless version.

    On "normal" speakers I can rarely tell the difference, but on reference monitors the difference is noticeable on many tracks. Not terribly distracting, but still noticeable.

  • by mpol (719243) on Friday March 22, 2013 @01:30PM (#43248523) Homepage

    There have been many posts on Slashdot about this topic over the last 14 years. What I recall of them is that people have been tested with blind and double-blind tests. About ten years ago you could hear a difference between lossless audio and low-bitrate mp3's. The latter has less high and low end, and mostly a certain "hiss" sound through it. The preference then was for the lossless audio.
    What struck me in later tests was that people seemed to favour mp3's over lossless audio. I reckon it has to do with getting used to the hiss in mp3's, and therefore preferring it. A big factor in music taste is how much you are used to hearing similar music and sounds, and the hiss has become a familiar sound.

    To be fair, I do think that mp3's in a high bitrate like 320 kbit are almost as good as lossless audio. Even though I prefer the lossless audio, just to be sure.

  • by steveha (103154) on Friday March 22, 2013 @01:42PM (#43248687) Homepage

    I would pay more for audio tracks that are mastered properly.

    Far too much of the music released these days is mastered to sound "loud". A sound-level compressor removes the dynamic range, and then the music is gained up about as high as possible, or sometimes higher than that (gained so high there is hard-clipping).

    In the best case, the dynamic range is gone and the music loses some of the drama and impact it should have had. In the worst case, the sine waves are hard-clipped into square waves, which sounds terrible. Hard-clipping adds unpleasant harmonics and distortion and you definitely can hear this.
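    You can actually see those added harmonics with a few lines of code (a toy illustration with made-up numbers, not anything from a real master): hard-clip a sine at half its amplitude and check its third harmonic.

```python
import cmath, math

def bin_magnitude(samples, k):
    """Magnitude of DFT bin k -- the strength of harmonic k over the window."""
    n = len(samples)
    return abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n)))

n = 1000
clean = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]      # 5 cycles
clipped = [max(-0.5, min(0.5, s)) for s in clean]                  # hard clip

# The clean sine has energy only at its fundamental (bin 5); the clipped
# one grows odd harmonics (bins 15, 25, ...) -- the harshness you hear.
print(bin_magnitude(clean, 15))    # essentially zero
print(bin_magnitude(clipped, 15))  # clearly non-zero
```

    The flat tops of the clipped wave are what inject those odd harmonics that were never in the performance.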

    I promise you that a properly mastered track at 16-bit/44.1 kHz will sound dramatically better than a poorly mastered one at 24-bit/96 kHz. Mastering trumps format.

    So if they are going to the trouble to make 24-bit/96 kHz tracks, I'm hoping that they will let the mastering engineers do their jobs properly! If they do, I would pay the extra money and bandwidth to buy the music in the higher-quality format.

    The music industry is convinced that most of their customers are idiots, unconcerned about sound quality, who can be distracted by shiny things or loud noises; so they try to make every album as loud as possible. But maybe, just maybe, they will be willing to try something different with the high-quality downloads.

    http://en.wikipedia.org/wiki/Loudness_war [wikipedia.org]

  • by AxemRed (755470) on Friday March 22, 2013 @01:58PM (#43248923)
    I don't think that lossy audio compression is inherently hurting recorded music. Lossy is fine as long as good encoders and sufficient bitrates are used. At a certain point, no one can tell which is which (lossy or lossless) in a blind test.

    I mostly listen to MP3 encoded rock music. The loss of quality is very noticeable to me at 128kbps. The loss of quality is much harder to discern at 192, especially if a quality encoder is used. I use LAME -V 2 when I rip CDs and usually end up with average bitrates from ~190-215, and I can't tell the difference between those MP3s and the original CD.

    IMO there are bigger problems facing recorded music anyway. See: http://en.wikipedia.org/wiki/Loudness_war [wikipedia.org]
  • by neoshroom (324937) on Friday March 22, 2013 @01:59PM (#43248947)
    I've been into compressed lossless audio from the start. First, AIFF is definitely not one of the most popular lossless audio formats for distributing music because the popular formats are compressed lossless audio and AIFF is uncompressed. The top formats are FLAC, APE and ALAC. FLAC is the most popular because it is open-source and versatile. APE was highly popular in the late 90's and early 00's and still is with some because it has better compression than any of the other formats. However, as time went on hard drive space became more plentiful and mobile devices started popping up. APE achieves its superior compression via calculations that are more intensive than FLAC uses and thus more taxing on mobile devices. It is also less cross-platform-compatible. ALAC is Apple's Lossless Audio Codec and is a latecomer onto the scene. It has good iTunes support and slightly better compression than FLAC, but that's about it.

    Also, it is definitely possible to tell lossless audio from lossy audio, even at higher bitrates. Around 2002 I had a friend who completely mocked my lossless ways, even though I'm not one of those gold-cable audiophile people -- just a normal guy who likes his music. I just had a decent pair of Klipsch speakers with a subwoofer. My friend was so certain that this was all in my head and I was so certain that it was not that we devised a simple test. He would show me two identical-looking files in iTunes, just showing the titles. One was a high-bitrate AAC and the other a FLAC file. I could click on them to play them as much as I wanted. I was then to decide which was lossless and which was lossy. We did this with 10 files. It was basically double-blind, as he didn't know which was which either until he took the computer back to check my answers. All in all the test took just 5 or 10 minutes.

    I got 9 of 10 right. It is hard to describe sounds, but the lossless music is "deeper," especially bass, guitar vibrations and high notes. This makes it obvious for many songs.

    However, I expect not everyone has hearing like this. I suspect this because one day I heard this annoying buzzing sound and asked my girlfriend about it. She couldn't hear anything. So, I searched all over for what was causing it. It turned out it was a television that was on, but that was on a non-channel so it was completely black on the screen. However, the CRT television emitted a sound from being on in a silent room that I found annoying and my girlfriend couldn't even hear. My sister could also hear it when I tested her later. I also sometimes find the sounds fluorescent lights make annoying too.

    Anyway, lossless is great and, yes, you can hear the difference if you have hearing which can hear the difference. It's sort of tautological, but it's the truth.
  • by JBMcB (73720) on Friday March 22, 2013 @02:01PM (#43248963)

    The opening of Royal Oil by the Mighty Mighty Bosstones. It starts out with a quiet snare roll that gets progressively louder, joined by a simple bass line. I've yet to hear a lossy codec at any bitrate that doesn't turn it into watery gibberish.

    Disk space is cheap. Rip to FLAC or ALAC. For portables, 256kbps AAC seems to do the least amount of damage.

  • by tannhaus (152710) on Friday March 22, 2013 @02:11PM (#43249119) Homepage Journal

    Thank God my hearing isn't worth a crap and I don't have yet another thing to geek over.

    As long as Frank Sinatra doesn't sound like Donald Duck, I'm cool with it.

  • by SD-Arcadia (1146999) on Friday March 22, 2013 @02:16PM (#43249217) Homepage
    "By Young's estimation, CDs can only offer about 15% of the data that was in a master sound track"
    And nothing of value was lost in the remaining 85% of the *data* that is inaudible to the human ear.

    "Young, in fact, created his own digital-to-analog conversion (DAC) service called Pono. Young has tweeted that the Pono cloud-based music service, along with Pono portable digital-to-analog players, will be available by summer."
    There's your cash-in scheme lurking behind all the BS.

    "Young's service would increase the quality, or sampling rate, of the music from 44,100 times per second in a CD (44.1KHz) to 192,000 times per second (192KHz), and will boost the bit depth from 16-bit to 24-bit."
    I would like to repeatedly hit you over the head with http://people.xiph.org/~xiphmont/demo/neil-young.html [xiph.org]

    "The sample rate of a digital file refers to the number of "snapshots" of audio that are offered up every second. Think of it like a high-definition movie, where the more frames per second you have, the higher the quality."
    NO, do not think of it like that unless you're a charlatan. Refer to rebuttal on xiph.org.
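    The sampling theorem isn't "common sense," it's math you can check yourself: any tone below half the sample rate is captured exactly, not approximated like movie frames. A toy demonstration in pure Python (frequencies picked arbitrarily so the tone lands on a DFT bin): sample a 5 kHz sine at 44.1 kHz and the DFT finds exactly one component, at exactly 5 kHz.

```python
import cmath, math

def dft_peak_hz(samples, fs):
    """Return the frequency (Hz) of the strongest DFT bin below Nyquist."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(n // 2):          # only bins below the Nyquist frequency
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * fs / n

fs, n, tone = 44100, 441, 5000       # 5 kHz tone, well inside human hearing
samples = [math.sin(2 * math.pi * tone * t / fs) for t in range(n)]
print(dft_peak_hz(samples, fs))      # -> 5000.0
```

    Sampling the same tone at 192 kHz buys you nothing here; the 44.1 kHz version already contains the whole signal.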

    "Millions of people in the world are audiophiles."
    No doubt, Millions of people in the world are fools and they have money that could be yours.

    "It's just common sense that the higher the resolution -- the more data that's in an audio file -- the better the sound quality, Chesky said."
    Too bad this thing called SCIENCE has been trumping "common sense" for millennia now.

    "The site also recommends high-resolution player software such as JRiver, Pure Music, or Decibel Audio Player. The software, which basically turns your desktop or laptop into a music server or a digital-to-analog converter,"
    HILARIOUS. I won't even begin to..

    "The most popular music server among audiophiles, according to Bliss, is an Apple Mac Mini."
    This is beautiful. I am not surprised in the least to see this audiophile-appleophile overlap.

"Trust me. I know what I'm doing." -- Sledge Hammer
