Can You Really Hear the Difference Between Lossless and Lossy Audio?
CWmike writes "Lossless audio formats that retain the sound quality of original recordings while also offering some compression for data storage are being championed by musicians like Neil Young and Dave Grohl, who say compressed formats like the MP3s sold on iTunes rob listeners of the artist's intent. By Young's estimation, CDs can only offer about 15% of the data that was in a master sound track, and when you compress that CD into a lossy MP3 or AAC file format, you lose even more of the depth and quality of a recording. Audiophiles, who have long remained loyal to vinyl albums, are also adopting lossless formats, some of the most popular of which are FLAC and AIFF; since these formats are still about five times the size of compressed audio files, collections can grow to terabyte size. Even so, digital music sites like HDtracks claim about 300,000 people visit each month to purchase hi-def music. And for music purists, some of whom are convinced there's a significant difference in sound quality, listening to lossy file formats in place of lossless is like settling for a Volkswagen instead of a Ferrari."
Depends on the bitrate (Score:5, Informative)
Usually if the bitrate is above 256 kb/s, I don't notice any difference.
Of course it still affects some songs (especially the percussion parts).
Re:Depends on the bitrate (Score:5, Interesting)
Re: (Score:3, Insightful)
That's a very apt description. Genetic factors, age, absence of damage, training to understand the difference/subtleties of overtones, and of course the equipment to play back sounds truly. I found the Wired article about Peter Lyngdorf and Steinway building speakers good enough to detect the difference between American- and German-manufactured pianos a fascinating read. http://www.wired.com/reviews/2012/10/steinway-lyngdorf-model-ls-concert/
Re:Depends on the bitrate (Score:5, Informative)
I mean, if you're only listening to ear buds (even $$$ ones are limited in bass response, etc), or in a car (one of the worst listening environments conceived)....then sure it won't make a difference, and portability makes a lot of sense too.
However, in a nice listening environment, with good equipment...it is worth the effort IMHO.
For instance, I have a pair of Klipschorns [klipsch.com] ...paired with a couple of the much older models of the Decware SET amps [decware.com], running mono to each channel..plus an older 15" 800W Klipsch sub, etc......
Even with my older ears, I can hear differences in recordings and formats. Not as well as I used to be able to, but I figure, WHY would I want anything less than the best I can get for the given time/situation? When listening at home, I rip my music to flac, and have it play on my living room stereo.
And hey....kinda fun to watch the Flintstones in concert volume on tv too from time to time, or hell, once hooked the MAME machine to it....Robotron 2084 is fun with the room shaking around you.
God, my neighbors used to hate me when I lived in a place where I had to share walls...
Re:Depends on the bitrate (Score:5, Informative)
In my humble opinion, this old hoary debate will always remain a debate for several reasons. As you rightly mentioned, the reproduction environment in most cases is woeful at best. Most speakers are not even full-range to begin with, their cabinets resonate, their drivers cannot often keep up in complex multi-layered music, their passive crossovers do a half-assed job in distributing the sound to the various drivers, and so on. Then, the amps are weak so they start bottoming out and clipping when the speaker impedance and phase dip sharply in certain frequency bands. Then the electronics, especially the capacitors and power supply, cannot keep up. Then the cables are not fat enough or are not shielded enough, so they load up the power amp even more. Then the pre-amp adds its own coloration to the already feeble signal coming from the source. Then the DAC does its own thing and further colors or degrades the source signal even more. Then the source adds its own share of noise and jitter to the audio signal, which screws up not just the signal quality (bad enough) but even the timing of the music.
On top of it, the room comes into play. The room adds its own coloration and effect that is often a far bigger factor than the audio system itself - boosting certain frequencies while muddying and deadening others, and even adding echoes, reflections, etc.
Then there is the human being at the end of the chain. I personally can't even hear above 16 kHz, and I have average ears. I suspect many people are like me too, at either end of our audible spectrum. On top of it, we humans hear music very differently - while our audio range may be fairly similar (20 Hz to 20 kHz by popular definition), our sensitivity to *variations* in tone and timing varies drastically - many have off-the-charts sensitivity to even slightly off-key music (I do) or slightly off-beat music (I do not at all).
All in all, a decent headphone setup is far, far more revealing than a decent speaker setup. For a thousand dollars, you can probably assemble a decent headphone rig, but a speaker system at that price will sound atrocious, unless you are willing to spend a whole lot more effort and research on second-hand discrete gear OR are willing to do serious DIY.
Anyway - I also wanted to say one thing - the thing that gets neglected the most in all this is actually the quality of the source recording - or what people call "mastering".
Most people who say something like "SACDs sound far better than redbook CD" or "vinyl sounds far better than CD" are most likely saying this because a whole lot more care went into recording the SACD or vinyl compared to the cheaper mass market CD or mp3.
If I look back at all the albums I have purchased or listened to (in whatever format), the one thing that stands out to me personally is that I have found less than 10% of them to be "recorded with care". And I'm not even being picky! Across the board, I can say that recording quality sucks when it comes to rock (which is what I listen to most often) - and I mean all kinds of rock.
If Neil Young's initiative (and even his Pono device) and Dave Grohl's initiatives are successful in improving the audio quality of music in general, I strongly suspect it will be because recording will be done with greater care, not because they decided to use a fancier digital format or a higher number of bits and samples to store their music. While everything becomes a factor by the time the music reaches your ears (heck, by the time it is processed by your brain, you even have to factor in psychoacoustics and gear bias and the "burn-in" syndrome) - the recording quality in general needs to improve (except for the jazz and classical pieces that audiophiles love to love, and are hence recorded with care), and this improvement will arguably make the biggest difference in audio quality.
Re:Depends on the bitrate (Score:4, Insightful)
Ignore the DAC, the amp, the source, and everything else... except the speaker drivers themselves. Even the best in the world are wildly non-linear.
And then there's the air between your ears and the speakers - another non-linearity.
Best source? 0.0001% THD. Best amp? 0.0001% THD. Speakers? 1% THD - haha, good luck.
Re:Depends on the bitrate (Score:5, Informative)
You can actually practice listening to music, it's something you learn.
Sometimes the difference between two sets of speakers can be as little as one clarinet in the middle of an orchestral piece. On one set it sounds good, on another it doesn't (or it's hardly there at all).
It's not something you can pick out just by putting on a rap CD for ten seconds and turning the bass up to maximum in a store (which is how most "HiFi" systems are chosen these days and why the manufacturers produce so much garbage).
Re:Depends on the bitrate (Score:4, Insightful)
I'd say it depends on what you're listening to.
Most people, including most slashdot armchair pundits, who listen to Lady Gaga or some similar shit will never notice the difference. However, if you listen to things like Tchaikovsky's "1812 Overture", you will notice just how crappy lossy codecs really are. Especially towards the end.
Re:Depends on the bitrate (Score:4, Insightful)
I'd say it depends on what you're listening to.
The people who care about the difference aren't even listening to the music. Totally different goals.
Normal people use their stereo to listen to music.
Audiophiles use music to listen to their stereo.
Re:Depends on the bitrate (Score:4, Insightful)
It's not mutually exclusive. Some of us manage to listen to more than one type of music..._including_ classical.
Re:Depends on the bitrate (Score:5, Funny)
Re:Depends on the bitrate (Score:5, Interesting)
Re:Depends on the bitrate (Score:5, Informative)
10 years ago, MP3 encoders couldn't encode decent cymbals and saxophones below 384kbps... it was just a stream of high pitched garbage.
These days they're both really good encoders. I still prefer AAC over MP3 just because the high freq nuances are better captured, but at AAC@256 and MP3@320, the differences are practically imperceptible to my ears.
The only time I'd look at lossless music is for orchestral pieces. Compressed pieces still sound flattened and don't have the wideness, because there are a lot more overtones, harmonics and variety of tones in live recordings. Microphones, recordings and engineering have adjusted in the past 5 years to compensate - so recent pieces are not too bad, however.
Like anything, it's best to just try a few different methods and see what sounds best to you.
Re:Depends on the bitrate (Score:4, Interesting)
A lengthy, thorough, and well-explained discussion (Score:5, Funny)
Re:A lengthy, thorough, and well-explained discuss (Score:5, Funny)
You jerk! I clicked on that link!
Re:A lengthy, thorough, and well-explained discuss (Score:5, Funny)
Depends on the source (Score:5, Insightful)
I am quite sure I prefer a lossy compressed version of a 24-bit, 96 kHz track to a lossless compressed version of a 16-bit, 44.1 kHz track.
Re:Depends on the source (Score:5, Insightful)
44.1 kHz 16-bit audio is completely transparent to the human ear. No one has ever been able to detect when a 16-bit ADC/DAC pair has been placed in a 24/96 audio path.
Your preference for 24/96 audio as a listener is entirely due to the placebo effect. There are good reasons to master audio in high res, but for listening, 16-bit 44.1 kHz audio is as good as anything.
Re:Depends on the source (Score:5, Informative)
Your preference for 24/96 audio as a listener is entirely due to the placebo effect.
Well, in all fairness, listeners may actually hear perceptible differences between 24/96 and 16/44.1 audio sources due to different mastering, but of course that says nothing about whether they can actually tell the difference between the two bitrates when everything else is equal.
This article [xiph.org] is a pretty good explanation of why 16/44.1 is as good as anyone needs for playback.
Comment removed (Score:5, Interesting)
Re:Depends on the source (Score:5, Insightful)
The closest I can get to describing it is this, and sorry if you aren't a musician, but they'll know of which I speak... you know how you have that great old tube amp for the guitar and it has that nice warm fat feel to it? Notice how the same amp, when modeled digitally, doesn't quite have the warmth?
The reason for this is that it's hard to capture distortion accurately.
That "warm sound" is a result of the inaccuracies of the tube amp. You may like it better (and that's just fine), but it does not accurately reproduce the original signal. For me, it's really no different than the current "loudness war" where re-mastered releases are much louder. Many of today's listeners like that sound better, but it isn't accurate.
Re:Depends on the source (Score:5, Informative)
No, not at all like 640K.
Re:Depends on the source (Score:5, Insightful)
kinda like 640K?
Unless you want to argue that human hearing is improving similarly to Moore's law, then no.
Re:Depends on the source (Score:5, Funny)
Yes! Someday, instead of having real dog whistles, we'll just play back mp3's of dog whistles for our dogs, and those will only work if recorded in 24bit/96kHz!
Also Monster Cable.
Re:Depends on the source (Score:5, Informative)
You sure can hear the difference if you stick a 44.1kHz DAQ in a 96kHz signal chain before filtering out ultrasonic high frequency components (if there are enough to make a difference). The advantage of 96kHz recording isn't that it can capture any more human-audible frequencies than 44kHz can, but that you have a lot more leeway to prevent aliasing of signals above the Nyquist limit down into the audible range (a 25kHz tone sampled at 44kHz results in a spurious, highly audible (25-44/2)=3kHz aliasing signal).
It's pretty much impossible to build analog frequency filters with a sharp cutoff (e.g. everything 20kHz and below gets through, everything above 22kHz is attenuated -60dB), so recording at 44.1kHz sampling requires either being absolutely certain the original sound source has minimal high-frequency harmonics, or heavy analog filtering that cuts well into the audible high frequency range. With 96kHz sampling, it's much easier to build an analog filter that gradually rolls off high frequencies between 20kHz and 40kHz (...producing a >40kHz sound is tricky in the first place), preventing aliasing without the filter cutting into the audible range. Once digitized, it's trivial to make a *digital* filter with a perfect frequency cutoff to downsample the 96kHz to aliasing-free 44.1kHz.
Re: (Score:3, Informative)
Two nits to pick:
1) You can get arbitrarily close but you can't get "perfect" frequency cutoff.
2) A 25 kHz tone sampled at 44 kHz gives you a 19 kHz tone. Remember the [-pi:0] (or [pi:2*pi]) frequency range comes first. A 41 kHz tone would get you a 3 kHz tone after sampling.
Otherwise all true, which is why most recording devices do exactly that, sample at a high rate and digitally filter before downsampling to 44.1. But none of that has much to do with whether or not, once you've gotten past the aliasing
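The corrected alias frequency above (a 25 kHz tone sampled at 44 kHz folds down to 19 kHz, not 3 kHz) is easy to check numerically. The following is just an illustrative sketch, not from the thread; the single-frequency DFT probe is a made-up helper:

```python
import math, cmath

FS = 44_000      # sampling rate (Hz); Nyquist limit is FS/2 = 22 kHz
F_TONE = 25_000  # tone above the Nyquist limit, so it must alias
N = 4_400        # 0.1 s of samples

# Sample the 25 kHz tone at 44 kHz.
x = [math.sin(2 * math.pi * F_TONE * n / FS) for n in range(N)]

def magnitude_at(signal, freq, fs):
    """Normalized DFT magnitude of `signal` probed at one frequency."""
    acc = sum(s * cmath.exp(-2j * math.pi * freq * n / fs)
              for n, s in enumerate(signal))
    return abs(acc) / len(signal)

# The alias lands at FS - F_TONE = 19 kHz, not F_TONE - FS/2 = 3 kHz.
print(magnitude_at(x, 19_000, FS))  # ~0.5 (a full-strength sinusoid)
print(magnitude_at(x, 3_000, FS))   # ~0.0 (nothing there)
```

Swapping F_TONE to 41 kHz moves the full-strength peak to 3 kHz, matching the second half of the correction.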
Re:Depends on the source (Score:5, Informative)
1) Digitally, yes you can. Take the DFT of the data; zero out all components above your frequency cutoff; reconstruct the signal as the sum of below-cutoff frequencies. Voila, a perfect sharp cutoff. The only subtlety is that you can only choose an exact cutoff corresponding to some integral number of cycles in your sampling window, so you can't cutoff at exactly sqrt(e*pi)kHz --- but you do have plenty of wave numbers from which to select a perfect cutoff (increasing with the size of your DFT window).
2) Untrue: a 44kHz *sampling rate* has a 44/2=22kHz Nyquist cutoff. Frequencies f>22kHz Nyquist limit "wrap around" to f-22kHz difference frequencies.
But yes, I agree, on the playback side there's no audible difference between a (sufficiently well made) 44.1kHz and 96kHz DAC.
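Point 1 above (a perfectly sharp digital cutoff via the DFT) can be sketched in a few lines. This is an illustrative example only; the signal and the cutoff wave number are arbitrary choices, not from the thread:

```python
import math, cmath

N = 64  # samples in the analysis window

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# Signal: a "low" 3-cycle tone plus a "high" 20-cycle tone in the window.
x = [math.sin(2 * math.pi * 3 * n / N) + math.sin(2 * math.pi * 20 * n / N)
     for n in range(N)]

# Brickwall at wave number 10: zero every bin above the cutoff,
# remembering the mirrored negative-frequency bins at N-k.
CUTOFF = 10
X = [c if (k <= CUTOFF or k >= N - CUTOFF) else 0 for k, c in enumerate(dft(x))]
y = idft(X)

# y is exactly the 3-cycle component; the 20-cycle tone is gone entirely.
low = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
print(max(abs(a - b) for a, b in zip(y, low)))  # ~1e-14, i.e. zero
```

Note the caveat from the same post still applies: the cutoff is only "perfect" at wave numbers that correspond to an integral number of cycles in the window.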
Re:Depends on the source (Score:4, Informative)
But yes, I agree, on the playback side there's no audible difference between a (sufficiently well made) 44.1kHz and 96kHz DAC.
No, but what makes a big difference is when you have a 48 kHz sound card that resamples everything to 48 kHz for an internal DSP stage that cannot be bypassed, and then back again. Yes, Soundblaster Audigy, I'm looking at you.
44.1 -> 48 kHz gives a lot more audible artifacts precisely because they're so close. Think of it as audible moire.
Also, for newer computer audio cards, if you have a choice, use 88.2 kHz for the internal rate instead of 96 kHz. The reason is that most high quality sound is in 44.1 which converts perfectly to 88.2. For 48 kHz, it's less of a problem in the first place, and likely also worse quality sound to start with.
Of course, unless the rest of the audio path is good, it doesn't matter much, but if you like to listen to FLACs with high end headphones, it sure won't hurt to use 88.2 instead of 96 kHz.
Re:Depends on the source (Score:4, Informative)
In a finite window, *any* signal can be represented as a sum of elements with frequencies corresponding to n=0 (DC offset), 1, 2, 3, ...., infinity integral cycles in the window. A signal corresponding to a non-integral number of cycles, e.g. 100.5, is indistinguishable over the window from some (infinite) combination of integral cycle waves. If you measured in a window twice as long, the 100.5-cycle signal would now be a unique, identifiable 201-cycle component. So, in an important sense, in a finite window the "intermediate" frequencies "don't exist" --- they can't do anything different from the (infinite series) of integral frequencies. Thus, you can create a cutoff that is as "perfect" as is meaningful in a finite window.
Re:Depends on the source (Score:5, Interesting)
The trick you're playing on yourself here is:
x = [1 0 0 0 0 0 0 0]; % x is only defined on 8 samples over the interval. There are an infinite number of continuous signals that could be sampled this way.
Following your procedure through to y:
octave:5> y = ifft(Y);
octave:6> y
y =
0.87500 0.12500 -0.12500 0.12500 -0.12500 0.12500 -0.12500 0.12500
so y is also defined at 8 sample points; as for x, there are an infinite number of curves that could fit these. One of these curves is the sum of frequencies indicated by Y. But what does fft(y,256); mean? From the Matlab documentation,
"Y = fft(X,n) returns the n-point DFT. fft(X) is equivalent to fft(X, n) where n is the size of X in the first nonsingleton dimension. If the length of X is less than n, X is padded with trailing zeros to length n."
So, now you have y defined in a larger window (y = 0.87500 0.12500 -0.12500 0.12500 -0.12500 0.12500 -0.12500 0.12500 0 0 0 0 0 .... 0). See my response above to another poster's question: when you enlarge the sampling window, you "create" a lot of possible "intermediate" frequencies that "don't exist" (i.e. are indistinguishable from sums of integral frequencies in the shorter window). By padding y with zeros to a larger window, you're looking at a *different* signal from the un-padded y alone; consequently, you need the "extra frequencies" that you ascribe to the "non-sharp-cutoff" to correctly describe the different "y+0,0,0,0,...,0" signal (which is distinct from y). But that doesn't mean the cutoff isn't perfect as defined on the original signal x->y. In fact, if you periodically *repeat* y (y->y,y,y...,y instead of y->y,0,0,0...) you'll see the "sharp cutoff" still applies since the periodic signal is still the sum of the original frequencies in y.
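The distinction drawn here - periodic repetition preserves the sharp cutoff, while zero-padding is a different signal whose "intermediate" frequency bins light up - can be reproduced numerically. A sketch under the same setup as the quoted Octave session (8-sample impulse, Nyquist bin zeroed); the DFT helpers are made up for self-containment:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * m / n) for k in range(n)) / n).real
            for m in range(n)]

# Reproduce the quoted session: 8-sample impulse, zero the Nyquist bin (k=4).
x = [1.0] + [0.0] * 7
X = dft(x)
X[4] = 0
y = idft(X)  # 0.875, 0.125, -0.125, 0.125, -0.125, 0.125, -0.125, 0.125

# Periodic repetition keeps the cutoff sharp: in the 32-point DFT of y
# repeated 4x, only bins at multiples of 4 (the original wave numbers) light up.
leak_rep = max(abs(c) for k, c in enumerate(dft(y * 4)) if k % 4 != 0)

# Zero-padding is a *different* signal: energy spills into the "intermediate"
# bins that did not exist in the original 8-sample window.
leak_pad = max(abs(c) for k, c in enumerate(dft(y + [0.0] * 24)) if k % 4 != 0)

print(leak_rep)  # ~0
print(leak_pad)  # clearly nonzero
```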
Re: (Score:3)
Right, which shows why 96kHz digital sampling *is* critical, even if you immediately digitally downsample on-chip before passing it along to the next device in the processing chain.
Comment removed (Score:5, Interesting)
Re:Depends on the source (Score:5, Informative)
The point of the equipment is that you have quality in reserve as you go through the process of mastering the tracks. The more quality you have in reserve the more you're able to do before you start having to deal with artifacts and other nastiness. As with all such things, you have to think about the order in which you do things and the order in which you throw out data to get the best results.
The point of buying lossless music isn't so much that it's better for listening to, it's that you can compress it however you like later on without having to worry as much about the sound quality you get. Since you have more data to work with, you can get a better quality at a lower bitrate than if you were starting with an already compressed track.
Re:Depends on the source (Score:5, Informative)
44.1 kHz 16-bit audio is completely transparent to the human ear. No one has ever been able to detect when a 16-bit ADC/DAC pair has been placed in a 24/96 audio path.
Your preference for 24/96 audio as a listener is entirely due to the placebo effect. There are good reasons to master audio in high res, but for listening, 16-bit 44.1 kHz audio is as good as anything.
As a former audio professional (specialized in location recording of choirs and orchestras) I must agree. But even my aging ears can hear the difference between 44.1 (or 48) kHz 16-bit uncompressed and a typical MP3. Side note: 24-bit has a few audible advantages for music with extremely wide dynamic range (from ppp to fff, say) where 16-bit will struggle a little at the very soft end.
Re:Depends on the source (Score:4, Informative)
According to Wikipedia, the audible range of human hearing is around 130 dB. 16 bits can at best offer a dynamic range of 96 dB, whereas 24 bits offer 144 dB.
So it should be pretty obvious that you can't fit the entire audible range into 16 bits. This might not be relevant to modern-day music, but if you want to record what the ear is actually capable of hearing (not including sound levels above the pain threshold), you will need those 24 bits.
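Those dynamic-range figures follow from the usual rule of thumb that each bit of PCM resolution buys about 6.02 dB; a quick check:

```python
import math

def dynamic_range_db(bits):
    """Best-case dynamic range of N-bit PCM: 20*log10(2**N), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16)))  # 96
print(round(dynamic_range_db(24)))  # 144
```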
Re:Depends on the source (Score:4, Informative)
When the music gets soft in 16b you have a lot of zeros in front of the number. So you effectively only have a three or four bit signal being fed into the DAC. This is fixed point math, not floating. With 24b you can put all of those zeros in the front and still have eight or more bits to feed into the DAC. This is even more beneficial when the amp implements power supply volume control. PSVC raises the effective noise floor the DAC has to deal with.
Re:Depends on the source (Score:5, Interesting)
OT, as a choral performer:
Classical music has a stupid wide dynamic range, more than any other genre I know of, and (in particular) soprano sections have a nasty talent for pegging meters that were supposed to be set with plenty of headroom.
Re:Depends on the source (Score:5, Informative)
As a live sound engineer dealing with vocalists who do that regularly (sing at normal program levels and then BELT A PHRASE OUT)... let me say... ARGH.
I put a steep compressor on someone who's prone to doing that, and let me tell you, it makes my life much easier. I can't fix the clipping, but I can make sure they don't cause the audience to cover their ears.
One question... (Score:3)
I know in imaging that having more fidelity than the human eye can see is important in intermediate products, as visual manipulation on low-fidelity content can produce visible artifacts. Is that the case for audio as well? If someone is going to resample audio for a remix, is there a risk of the decreased fidelity ultimately manifesting in the final product?
Re: (Score:3)
Yes, for both bit depth and sampling frequency. Here are two possible reasons why:
1. Bit depth. Remix wants to amplify a sound in the original mix. At 16 bit depth, you have 2^16 possible values to cover everything from silent to max loudness. If you take a soft sound that uses only some of those values and amplify it, the result suffers from possibly noticeable quantization artifacts. This is like magnifying a small picture to produce a pixelated one.
2. Sample frequency. Remix wants to frequency-shift /
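Point 1 can be demonstrated with a quick sketch. This is illustrative only - the 40 dB boost and the sample values are arbitrary assumptions, not from the post:

```python
import math

def quantize(samples, bits):
    """Round samples in [-1, 1] to the nearest of 2**bits fixed-point levels."""
    scale = 2 ** (bits - 1)
    return [round(s * scale) / scale for s in samples]

# A soft sound: a sine 40 dB below full scale (peak amplitude 0.01).
soft = [0.01 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(1000)]

# Quantize at 16 and 24 bits, then boost both by 40 dB (x100), as a remix might.
errors = {}
for bits in (16, 24):
    errors[bits] = max(abs(100 * (q - s))
                       for q, s in zip(quantize(soft, bits), soft))

# After boosting, the 16-bit quantization error is hundreds of times larger
# than the 24-bit error - the audio analogue of magnifying a small picture.
print(errors[16], errors[24])
```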
Re:Depends on the source (Score:5, Informative)
Speaking as someone who frequently does recording, your comment suggests that no one has done that test with classical music in a properly controlled listening environment using quality gear while giving the test subject the ability to control the volume arbitrarily. When you crank up the volume, the noise floor difference in soft passages alone should make the difference between 16-bit and 24-bit signal paths a dead giveaway, even for someone with moderate to severe hearing loss. It isn't even subtle. Of course, if the person doesn't turn it down for the loud passages, he/she will likely suffer hearing damage, but perhaps that's why he/she has moderate to severe hearing loss in the first place. :-D
The 44.1 vs. 96 kHz difference is more subtle, requiring someone with top-notch hearing (very rare), headphones that can accurately reproduce frequencies above 20 kHz, and 96 kHz DAC hardware that does not have a bandpass filter starting at 16 kHz. If you fail to verify even one of those requirements, you would expect no one to be able to hear the difference, because there won't be any difference.
Re:Depends on the source (Score:4, Informative)
Actually, you've proven the GP's point. You can't tell the difference if you are listening to the program. Turning a program up in the "soft sections" is exactly what you should never, ever do when listening to a program. You may as well put on the IR headset with compression that came with your TV so you can watch late night TV without disturbing your wife.
Mastering is an entirely different ball of wax and, yes, you want all the headroom you can get. It's no different than photographers using RAW formats instead of JPGs (even lossless JPGs) out of the camera. You want all the bits you can get. But after you're done mastering, dropping to 16 bits isn't going to affect the outcome. That's the whole point of mastering - if we didn't want it to be that soft, we would have engineered it to be louder.
Re:Depends on the source (Score:4, Interesting)
"There are good reasons to master audio in high res, but for listening 16 bit 44.1khz audio is as good as anything."
The reason for having "extra" fidelity in master recordings is the same reason for having high-resolution photos in "raw" format: there is lots more wiggle room for editing while still maintaining good enough fidelity that the end user can't tell the difference.
For example: take a large (say 16M pixel) 8 x 10 photo, and reduce it to 4 x 5 at 600 dpi. Then take the same photo, edit it (for example, change some colors, remove a cloud from the sky, etc.) and reduce that to the same size and resolution. Even though the resulting photos are higher resolution (at arm's length) than the eye can perceive, they look different.
Re:Depends on the source (Score:4, Informative)
Re:Depends on the source (Score:5, Interesting)
Depends on how good the sound engineers are. A lot can be gained by higher resolution and sample rate in the mastering stage, but by using a good low pass filter and dithering (and dithering is not really necessary, http://developers.slashdot.org/story/13/02/27/1547244/xiph-episode-2-digital-show-tell [slashdot.org] ) basically all audible information is captured in 44.1kHz / 16. Your speakers probably don't go much above 20 kHz anyway, so anything beyond 44.1kHz will only cause distortion (aliasing), see post by MetalliQaZ "Debunked" below.
Re:Depends on the source (Score:5, Informative)
If you have ever gone to a rock concert and been near the front, or gone to most dance clubs, you will have sustained hearing damage. If you have ever left one of these venues with ringing ears, or been around loud machinery and noticed the same, then you have sustained hearing loss. Your hearing will mostly recover after the trauma, indicated by the subsiding of the ringing in your ears.
If you want to find out how good (or bad) your hearing is, spend the money and see an audiologist. You will be surprised to find out what your hearing is really like.
Re:Depends on the source (Score:4, Insightful)
In other words, you've never done an ABX test and are just spouting ill-informed supposition. The ABX is the gold standard, get back to us once you can distinguish those sources that way with a 95% confidence level.
Re:Depends on the source (Score:4, Insightful)
You don't have to do a personal ABX test when there are many others who have done them and confirmed his statement. In fact, it's a much more powerful statement citing many others than just yourself. One is a statistic and the other is an anecdote.
And for a MUCH more exhaustive and scientific discussion than any post on this article will ever make (another post in this thread already linked it, but you must have missed it, and it's a great article): http://people.xiph.org/~xiphmont/demo/neil-young.html [xiph.org]
One word: YES. (Score:5, Insightful)
Caveat: You have to have decent headphones (not Apple earbud BS), and/or good speakers, but that's about it. The difference is negligible once you hit ~320Kbps MP3, in my opinion, but anything under 256Kbps, regardless of lossy format, you can *clearly* hear cymbal hits turning to an underwater splooshy mess.
Re:One word: YES. (Score:4, Informative)
Caveat: You have to have decent headphones (not Apple earbud BS), and/or good speakers, but that's about it. The difference is negligible once you hit ~320Kbps MP3, in my opinion, but anything under 256Kbps, regardless of lossy format, you can *clearly* hear cymbal hits turning to an underwater splooshy mess.
Hi-hats are even worse than cymbals. Even at 256 kbps, hi-hats tend to sound like they're being hit with a bag of broken glass, which is the easiest way to identify lossy compression I can think of. Except, perhaps, some of Mike Oldfield's earlier works.
I can hear a slight difference (Score:5, Insightful)
I grew up listening to music on the radio (Score:4, Insightful)
Re: (Score:3)
Re:I grew up listening to music on the radio (Score:5, Funny)
I listen in the truck with a blown exhaust, whilst getting high on the fumes; lossy or lossless? I have trouble noticing if the car radio is even turned on.
No (Score:5, Insightful)
No, you can't. Not with any reasonably modern encoder and bitrates above 256. Anyone who tells you otherwise is experiencing the placebo effect. BTW, you can't tell the difference between 16-bit/44.1kHz audio and 24/96 audio either. And vinyl might sound "better" than digital to you, but digital is objectively more accurate.
Audiophilia is saturated with woo. This is the same market that brought us $500 ethernet cables [cnet.com].
the answer is obvious, isn't it? (Score:4, Insightful)
And there's the rub of course. That general of a question can't be answered yes/no. It depends on a variety of factors, most notably the content, the codec, the bitrate, and the playback.
I don't even know why this article submission got accepted. It's like asking "can you win a race against a Toyoda?" where do you even start with that....?
Re:the answer is obvious, isn't it? (Score:5, Funny)
It's like asking "can you win a race against a Toyoda?" where do you even start with that....?
Since Akio Toyoda is 30 years older than me, I'm pretty sure I could beat him in a race.
Re:No (Score:5, Insightful)
Doesn't matter, the audiophile market is not rational (kind of like the wine market). After a certain quality threshold, say 256kbps mp3 or $100 bottle of wine, nobody can tell the difference in a blind test. Yet suckers keep paying money for $500 speaker cables and $1000 bottles of wine. Just stoking ego at that point.
Re: (Score:3, Funny)
Doesn't matter, the audiophile market is not rational (kind of like the wine market).
Show me a rational market, and I'll have to inquire as to the nature and evolutionary history of the species of aliens participating in it.
Re:No (Score:5, Insightful)
In medical tests, people are given a placebo and yet claim to feel better or feel the same effects as people who are given the real medication. These must be the same people who rail against mp3s.
Just because Neil Young and Dave Grohl are famous musicians, it doesn't mean that they actually know what they are talking about. 40 years of exposure to loud music has probably damaged their hearing enough that they really don't know what they are hearing.
Saying that A sounds better than B is completely subjective and affected by many things. Not just how the music was encoded, but the quality of the DAC used for playback and the quality of the speakers/headphones used.
Re:No (Score:4, Insightful)
Re: (Score:3)
In medical tests, people are given a placebo and yet claim to feel better or feel the same effects as people who are given the real medication.
People don't claim to feel better, they do feel better. There is no incentive for them to lie; in fact, there is a disincentive for them to do so. The cause of the placebo effect lies in the mind of the patient: the patient believes they should be getting better, and then they do. Power of thought, belief and, if defined correctly, faith. Really, it is the power of consciousness, which no one fully understands.
This can be applied to apparent differences in audio formats. The observer believ
Re: (Score:3)
For chiptunes, I can hear a difference between 256 and 320, but just barely.
The biggest factor is how the high frequencies are filtered out before the audio is compressed, because the filtering appears to be the same regardless of the final bitrate. Even ultra-high bitrate audio will sound awful if the stock frequency cutoff is used, and I have to fiddle with the settings in LAME to make my songs sound good, even at 320.
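The cutoff behavior the parent describes can be illustrated with an idealized brick-wall lowpass in numpy. This is only a sketch: real encoder filters are sloped and configurable, and the 17 kHz cutoff here is an arbitrary stand-in for an encoder default, not LAME's actual value.

```python
import numpy as np

fs = 44100                       # CD sample rate
t = np.arange(fs) / fs           # one second of audio, so FFT bins are 1 Hz apart
# a 5 kHz tone plus a quieter 19 kHz "air" component
signal = np.sin(2 * np.pi * 5000 * t) + 0.3 * np.sin(2 * np.pi * 19000 * t)

# idealized brick-wall lowpass at a hypothetical 17 kHz cutoff
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
spectrum[freqs > 17000] = 0
filtered = np.fft.irfft(spectrum, n=len(signal))

# the 19 kHz content is gone regardless of how many bits
# the encoder later spends on the remaining band
```

The point being: once the filter has removed the top end, raising the bitrate cannot bring it back, which is why the cutoff setting matters independently of bitrate.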
I usually can, but I rarely care. (Score:5, Insightful)
I'm listening to a performance, not some audio benchmark. If a bit of loss bothers you, it must be some pretty damned uninspiring music you're listening to.
And if you're listening on some random mp3 player with bud headphones while walking around doing stuff, compression loss is the least of your worries.
In traffic, a VW will get me someplace (Score:5, Insightful)
as fast as a Ferrari.
Since I do most of my listening in a car, and am almost 48, I can't hear the difference between an mp3 and a vinyl album, or a cd, most of the time. Well, except for the lack of skipping. Ever try to listen to an LP in a moving car? But I digress. Sure, people who are younger and $pend lot$ of dollar$ on the Finest Audiophile equipment around can tell. Me in my Chevy? Not so much.
44.1khz ought to be enough for anyone... (Score:5, Informative)
We recently discovered [arstechnica.com] that human hearing beats the linear response assumptions used in lossy codecs. So yes, their criticisms are scientifically founded.
Re:44.1khz ought to be enough for anyone... (Score:5, Insightful)
Unless you have people that can ABX the difference, no their criticisms are not scientifically founded. An actual blind test beats any theoretical reasoning any day.
Re: (Score:3)
In particular, nobody claims that lossy codecs use a perfectly accurate model of human hearing; they don't need to. The goal is to have a psychoacoustic model that captures enough of the general mechanics of hearing, to enable a bunch of constants to be tuned empirically. If the model doesn't come anywhere near to capturing anything important, that would be a problem, because you'd never be able to tune the constants. But once it captures the general outlines, much of the real work on lossy encoders over th
Re: (Score:3)
Didja read the article? Some people can tell the difference down to one oscillation per second. That's not theoretical.
Re:44.1khz ought to be enough for anyone... (Score:5, Insightful)
44.1khz ought to be enough for anyone...
Body:
human hearing beats the linear response assumptions used in lossy codecs. So yes, their criticisms are scientifically founded.
These have nothing to do with each other.
Debunked (Score:5, Informative)
The concept of improving consumer listening experience using studio quality recording has been thoroughly debunked, right here on Slashdot...
Why Distributing Music As 24-bit/192kHz Downloads Is Pointless [slashdot.org]
It doesn't matter (Score:5, Insightful)
The reason people use lossless compression for audio (i.e. FLAC or SHN) is not because they can tell the difference. Maybe you think you can, maybe you think you can't, but that's irrelevant anyway. The reason people choose lossless is that lossless is the only suitable solution for archiving. If you want to preserve your CD audio exactly as it appears on the CD, the only possible solution is lossless compression. If you choose lossy, you aren't making an archive of the original, but rather an approximation of the original.
That's all there is to it.
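The archive-vs-approximation distinction can be sketched with stand-ins: zlib below plays the role of FLAC and crude bit truncation plays the role of a lossy codec. Neither is a real audio codec; this only demonstrates the principle.

```python
import zlib

import numpy as np

rng = np.random.default_rng(42)
# fake 16-bit PCM samples standing in for CD audio
pcm = rng.integers(-32768, 32767, size=10000, dtype=np.int16)

# lossless: compress, decompress, and get the bits back exactly
packed = zlib.compress(pcm.tobytes())
restored = np.frombuffer(zlib.decompress(packed), dtype=np.int16)
assert np.array_equal(pcm, restored)      # a true archive

# "lossy" stand-in: discard the low 4 bits of every sample
approx = (pcm >> 4) << 4
assert not np.array_equal(pcm, approx)    # only an approximation remains
```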
Re: (Score:3)
EXACTLY!
Re:It doesn't matter (Score:4, Interesting)
Oh, for mod points.
While I can't (mostly) tell the difference between the original CD and a ~140Kbs VBR MP3, I _can_ tell the difference between a 140Kbs VBR MP3 made from the CD source, and a 140Kbs VBR MP3 made from a 256Kbs VBR MP3.
Lossless isn't for listening to, it's for archiving. And make sure you get the cuesheet, pregaps, etc. right when you're archiving too :)
Re:It doesn't matter (Score:5, Insightful)
And you never have to re-rip physical discs. 128kb/s CBR MP3 used to be the standard. Then 192 VBR. Then AAC. And so on and so forth. So by keeping a lossless archive, one will always be able to transcode to the latest-and-greatest lossy codec without a lot of hassle.
Any good studies? (Score:5, Interesting)
Re:Any good studies? (Score:5, Interesting)
I don't know if it's a good study, but I did exactly this test. Ten or fifteen years ago.
I took four musical selections (from the latest Rolling Stones album at the time, a solo piano performance, a classical orchestra, a female vocal), and encoded them at 128, 192, and 256 Kbps with the Fraunhofer codec of the day (remember that?). I re-expanded them to 44.1 KHz CD tracks, and put them on a burned audio CD (remember those?). Each selection on the CD had five versions - the first was always the original bit-for-bit copy from the source CD, then followed (in random order) the 128, 192, 256, and the original again.
I made ten copies, and handed them out to the audiophiles in the office to play on their home stereos, and gave them a test sheet - I asked them to identify for each selection which version was 128, 192, 256, or the original. Nobody came close to having a "golden ear" that could reliably tell the 128Kbps versions from the others, much less the higher bitrates. Overall, there was a slight ability to detect the 128 kbps versions - it got selected as the lowest quality one more times than random chance would suggest, but even it was still well below 50% (I don't remember the exact numbers any more).
And this was with ancient MP3 encoders.
Frankly, if you think you've got the golden ear, first of all I pity you - I'm sorry that you have to put up with all the crap you're going to hear. Second of all, I really recommend running the same test - prepare the tracks, have a friend randomly order them (but keep track), and then see if you can identify them. Don't simply say "Of course I can" - Actually do it and prove it.
And, if I can be an old man with a bit of advice for a minute: if you can't tell the difference, don't go out of your way to train yourself to tell the difference. It'll just be an annoyance to you for the rest of your life. Kinda like the person who taught me about the reel-change indicators on film at the movie theatres - I see it, and my whole body tenses up waiting for the change. I wish I had never known about it. I really appreciate the change to digital projection so I don't have to deal with those anymore. /frank
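For anyone who wants to run a test like the grandparent's, the bookkeeping is simple to script. The labels below are made up for illustration, and the actual audio preparation still happens outside this sketch:

```python
import random

VERSIONS = ["original", "128kbps", "192kbps", "256kbps"]

def make_answer_key(rng):
    """Original first as a known reference, then all four versions
    (including a second copy of the original) in random order."""
    shuffled = VERSIONS[:]
    rng.shuffle(shuffled)
    return ["original"] + shuffled

def score(guesses, key):
    """Count correct identifications of the shuffled portion."""
    return sum(g == k for g, k in zip(guesses, key[1:]))

rng = random.Random()   # have a friend seed and keep this, so you stay blind
key = make_answer_key(rng)
```

The friend holds the answer key; you only see the shuffled files, which is what makes the test blind.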
Difference is not in the listening. (Score:4, Insightful)
The difference is the ability to transcode to different bitrates and formats without losing anything from the original source.
Sure, you can tell. (Score:4, Insightful)
If you've got decent equipment and a quiet environment. With cheapo earbuds, I don't notice the difference. With my good headphones, the difference is obvious. When I'm driving down the highway, I can't tell. In my living room, I can tell.
With storage so cheap and bandwidth so plentiful, there's really no reason not to use lossless audio. My $40 Clip+ with a $25 microSD card can hold 40 gigs of content and can play FLAC. There's no reason to use a lossy format.
By my estimation... (Score:4, Funny)
By Young's estimation, CDs can only offer about 15% of the data that was in a master sound track...
[ -- insert appropriate Neil Young lyric for satirical effect here -- ]
Nope, normally. (Score:4, Insightful)
Nope. Not if the quality is high enough, I can't tell the difference 99% of the times. There are some musical instruments (harpsichord) and singers (Tori Amos) where compression is very obvious. The lossy version becomes almost unlistenable once you've heard the lossless version.
On "normal" speakers I can rarely tell the difference, but on reference monitors the difference is noticeable on many tracks. Not terribly distracting, but still noticeable.
Re:Nope, normally. (Score:4, Insightful)
When you listen to music on electrostatic speakers, you can hear things you couldn't hear before. It makes normal speakers sound muffled as if you're listening through a pillow. So the speakers can mean the difference between hearing the mp3 compression and not hearing it.
Trends (Score:3)
There have been many posts on Slashdot in the last 14 years about this topic. What I recall of them is that people have been tested with blind and double-blind tests. About ten years ago you could hear a difference between lossless audio and low-bitrate MP3s: the latter had less high and low end, and mostly a certain "hiss" running through it. The preference was for the lossless audio then.
What struck me in later tests was that people seemed to favour MP3s over lossless audio. I reckon it has to do with getting used to the hiss in MP3s, and therefore preferring it. A big factor in music taste is how much you are used to hearing similar music and sounds, and the hiss has become a familiar sound.
To be fair, I do think that mp3's in a high bitrate like 320 kbit are almost as good as lossless audio. Even though I prefer the lossless audio, just to be sure.
Will hi-def be mastered properly? (Score:5, Insightful)
I would pay more for audio tracks that are mastered properly.
Far too much of the music released these days is mastered to sound "loud". A sound-level compressor removes the dynamic range, and then the music is gained up about as high as possible, or sometimes higher than that (gained so high there is hard-clipping).
In the best case, the dynamic range is gone and the music loses some of the drama and impact it should have had. In the worst case, the sine waves are hard-clipped into square waves, which sounds terrible. Hard-clipping adds unpleasant harmonics and distortion and you definitely can hear this.
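That clipping distortion is easy to demonstrate numerically. A sketch, where the 1 kHz tone and the 1.5x over-gain are arbitrary illustrative values:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                      # one second, so FFT bins are 1 Hz apart
loud = 1.5 * np.sin(2 * np.pi * 1000 * t)   # "gained up" past full scale
clipped = np.clip(loud, -1.0, 1.0)          # hard clipping at the rails

spectrum = np.abs(np.fft.rfft(clipped))
fundamental = spectrum[1000]                # energy at 1 kHz
third = spectrum[3000]                      # odd harmonic created by clipping

# a clean sine has essentially zero energy at 3 kHz; the clipped
# version carries a substantial fraction of the fundamental there
```

Those added odd harmonics are exactly the "unpleasant harmonics and distortion" described above, and no amount of bit depth or sample rate in the delivery format can undo them.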
I promise you that a properly mastered track at 16-bit/44.1 kHz will sound dramatically better than a poorly mastered one at 24-bit/96 kHz. Mastering trumps format.
So if they are going to the trouble to make 24-bit/96 kHz tracks, I'm hoping that they will let the mastering engineers do their jobs properly! If they do, I would pay the extra money and bandwidth to buy the music in the higher-quality format.
The music industry is convinced that most of their customers are idiots, unconcerned about sound quality, who can be distracted by shiny things or loud noises; so they try to make every album as loud as possible. But maybe, just maybe, they will be willing to try something different with the high-quality downloads.
http://en.wikipedia.org/wiki/Loudness_war [wikipedia.org]
sometimes, but lossy audio isnt the worst problem (Score:3)
I mostly listen to MP3 encoded rock music. The loss of quality is very noticeable to me at 128kbps. The loss of quality is much harder to discern at 192, especially if a quality encoder is used. I use LAME -V 2 when I rip CDs and usually end up with average bitrates from ~190-215, and I can't tell the difference between those MP3s and the original CD.
IMO there are bigger problems facing recorded music anyway. See: http://en.wikipedia.org/wiki/Loudness_war [wikipedia.org]
AIFF?, Flac!, Lossless in General. & Randomnes (Score:4, Interesting)
Also, it is definitely possible to tell lossless audio from lossy audio, even at higher bitrates. Around 2002 I had a friend who completely mocked my lossless ways, even though I'm not one of those gold-cable audiophile people -- just a normal guy who likes his music. I just had a decent pair of Klipsch speakers with a subwoofer. My friend was so certain that this was all in my head, and I was so certain that it was not, that we devised a simple test. He would show me two identical-looking files in iTunes, just showing the titles. One was a high-bitrate AAC and the other a FLAC file. I could click on them to play them as much as I wanted. I was then to decide which was lossless and which was lossy. We did this with 10 pairs of files. It was basically double-blind, as he didn't know which was which either until he took the computer back to check my answers. All in all the test took just 5 or 10 minutes.
I got 9 of 10 right. It is hard to describe sounds, but the lossless music is "deeper," especially bass, guitar vibrations and high notes. This makes it obvious for many songs.
However, I expect not everyone has hearing like this. I suspect this because one day I heard this annoying buzzing sound and asked my girlfriend about it. She couldn't hear anything. So, I searched all over for what was causing it. It turned out it was a television that was on, but that was on a non-channel so it was completely black on the screen. However, the CRT television emitted a sound from being on in a silent room that I found annoying and my girlfriend couldn't even hear. My sister could also hear it when I tested her later. I also sometimes find the sounds fluorescent lights make annoying too.
Anyway, lossless is great and, yes, you can hear the difference if you have hearing which can hear the difference. It's sort of tautological, but it's the truth.
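For what it's worth, 9 of 10 in a two-choice test is hard to do by luck. The binomial arithmetic, assuming the trials were independent:

```python
from math import comb

# chance of guessing 9 or more right out of 10 coin-flip trials
n = 10
p_chance = sum(comb(n, k) for k in range(9, n + 1)) / 2 ** n
# (10 + 1) / 1024, a little over 1%
```

So if the test really was blind, that result is unlikely to be pure guessing, though a single 10-trial run is still a small sample.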
Re:AIFF?, Flac!, Lossless in General. & Random (Score:5, Interesting)
AAC (like MP3) is a frequency-domain codec, and can therefore never provide transparent audio. It has nothing to do with "deeper", but instead is an inability to represent transients -- non-tonal components like percussive sounds and other noise.
If you had performed the test with Musepack/MPC or even MPEG-1 Layer II at high bitrates, you would have failed the test.
http://en.wikipedia.org/wiki/MPEG-1#Quality [wikipedia.org]
My Torture Test (Score:3)
The opening of Royal Oil by the Mighty Mighty Bosstones. It starts out with a quiet snare roll that gets progressively louder, joined by a simple bass line. I've yet to hear a lossy codec at any bitrate that doesn't turn it into watery gibberish.
Disk space is cheap. Rip to FLAC or ALAC. For portables, 256kbps AAC seems to do the least amount of damage.
Dodged the bullet (Score:3)
Thank God my hearing isn't worth a crap and I don't have yet another thing to geek over.
As long as Frank Sinatra doesn't sound like Donald Duck, I'm cool with it.
I knew this article was gonna be BS (Score:3, Interesting)
And nothing of value was lost in the remaining 85% of the *data* that is inaudible to the human ear.
"Young, in fact, created his own digital-to-analog conversion (DAC) service called Pono. Young has tweeted that the Pono cloud-based music service, along with Pono portable digital-to-analog players, will be available by summer."
There's your cash-in scheme lurking behind all the BS.
"Young's service would increase the quality, or sampling rate, of the music from 44,100 times per second in a CD (44.1KHz) to 192,000 times per second (192KHz), and will boost the bit depth from 16-bit to 24-bit."
I would like to repeatedly hit you over the head with http://people.xiph.org/~xiphmont/demo/neil-young.html [xiph.org]
"The sample rate of a digital file refers to the number of "snapshots" of audio that are offered up every second. Think of it like a high-definition movie, where the more frames per second you have, the higher the quality."
NO, do not think of it like that unless you're a charlatan. Refer to rebuttal on xiph.org.
"Millions of people in the world are audiophiles."
No doubt, Millions of people in the world are fools and they have money that could be yours.
"It's just common sense that the higher the resolution -- the more data that's in an audio file -- the better the sound quality, Chesky said."
Too bad this thing called SCIENCE has been trumping "common sense" for millennia now.
"The site also recommends high-resolution player software such as JRiver, Pure Music, or Decibel Audio Player. The software, which basically turns your desktop or laptop into a music server or a digital-to-analog converter,"
HILARIOUS. I won't even begin to..
"The most popular music server among audiophiles, according to Bliss, is an Apple Mac Mini."
This is beautiful. I am not surprised in the least to see this audiophile-appleophile overlap.
Re:Better question (Score:5, Funny)
Or not using Monster Cable
Re:Better question (Score:5, Funny)
Look, you want your 0's and 1's to look like stupid Comic Sans 0's and 1's or like high quality, stylish Zapfino 0's and 1's?
Re:Better question (Score:5, Informative)
This is the real point: People are so used to listening to music with no dynamic range, on ear buds, in crappy acoustic environments that they wouldn't know where to start listening for a difference.
Re:Better question (Score:4, Insightful)
This is the real point: People are so used to listening to music with no dynamic range, on ear buds, in crappy acoustic environments that they wouldn't know where to start listening for a difference.
Nor can they afford any better so while they are listening to a lesser quality, they couldn't begin to purchase equipment to give them what these artists say they are missing.
Re:Better question (Score:5, Informative)
I think the real point is that there are known limits to human hearing and many audiophiles fantasize about their hearing being superhuman. It just ain't so. Dynamic range compression is one thing, but perceptual compression, sample rate, and bit depth are a different matter. No audiophile has ever heard the difference between FLAC and 320Kbps mp3 audio in an ABX test at a statistical rate that is better than guessing.
Any time this argument starts, I refer people to this well written article [xiph.org] that lays out the limits of human hearing compared to the specifications of recording formats...
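"Better than guessing" here means a one-sided binomial test. A sketch of how the usual ABX pass mark falls out; note the 0.05 significance level is statistical convention, not a law of hearing:

```python
from math import comb

def min_correct(trials, alpha=0.05):
    """Smallest score whose probability of arising by pure guessing
    (that many right or more) falls below alpha."""
    for correct in range(trials + 1):
        p_guess = sum(comb(trials, k)
                      for k in range(correct, trials + 1)) / 2 ** trials
        if p_guess < alpha:
            return correct
    return None

# for a common 16-trial ABX run this yields 12, the usual pass mark
```

This is why a listener who goes 10/16 on FLAC vs. 320Kbps mp3 hasn't demonstrated anything: the score has to clear the guessing threshold, not just exceed 50%.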
Re:Better question (Score:5, Informative)
Good point. Sadly, my $3k hearing aids don't seem to help either.
Bitrate doesn't matter much if your ears are the lossy part.