Is CD audio quality good enough for the final delivery of music?













29















According to Wikipedia, the audio contained in a CD consists of two-channel signed 16-bit Linear PCM sampled at 44,100 Hz.



Of course, both the sample rate and the bit depth could be increased to improve the quality, e.g., according to Wikipedia, Blu-ray audio uses 24-bit/96 kHz or 24-bit/192 kHz linear PCM.



But can anyone hear the improvement? I am fairly sure that I cannot. For a start, I cannot hear all the way up to 22 kHz (the Nyquist frequency of CD audio). A web search finds plenty of opinions, but many are clearly nonsense, and it is hard to determine which, if any, are the result of scientific testing, e.g. double-blind testing.



I have some Blu-rays of music (with and without video) and I find them better in some ways, but I think that factors other than the bit depth or sample rate are the explanation.



The bass is often better, which might just be because they were produced with the expectation of being played on a system with a subwoofer.



The rear channels add some atmosphere. This is subtle but can enhance the impression of really being present at a performance.



Are there any good quality studies of whether enhancing the sample rate or bit depth could be detected by humans?



Clarifications:



I am only asking about the final delivery to the consumer. The merits of higher quality in the original capture or editing are an interesting but separate question.



I am not considering cases in which further processing is expected.



I am only asking whether the CD standard is good enough, not whether it is more than good enough, e.g. whether a lower quality would also be good enough. Again, an interesting but separate question.



I am not asking about the value of additional channels. I mention Blu-ray audio because it is an example of greater bit depth and higher sample rate. However, that is complicated by the extra channels.



Finally, of course, poor recordings exist. However good your tools are, they can be badly used. But the existence of poorly made recordings does not, by itself, invalidate the standard.










recording






edited May 26 at 15:16
asked May 25 at 12:52 by badjohn (2,401 reputation, 7 silver badges, 26 bronze badges)











  • 1





    The close vote may be because some of the answers are beside the point (audio quality while recording and mixing, quality of audio compression, anecdotal evidence and personal preferences ...) and for some people this indicates that the question was unclear or too broad or asking for opinions. They should just downvote the answers they didn't like, but there seems to be a no-downvotes tradition on this site.

    – Your Uncle Bob
    May 26 at 17:33











  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 26 at 17:43






  • 1





    The one existing close vote is because the question is primarily opinion based. There's no objective measure of "good enough". @YourUncleBob I would never vote to close based on answers, that seems... inexplicable. I have also downvoted the question. I do not have a personal "no downvotes" policy. Also, to me this is off-topic because it's more about consumer audio than it is about audio production. And the number of digital audio quality conversations/arguments that exist online is manifold and IMHO, completely unhelpful. No need for yet another here.

    – Todd Wilcox
    May 28 at 14:11













  • @ToddWilcox Indeed, there is plenty, too much, on the net about the subject. Note that I requested "Are there any good quality studies". My hope was to get a good quality answer.

    – badjohn
    May 28 at 14:23











  • If there were good quality answers available, they would have been found in a cursory web search. The other side of this is, nobody is going to be delivering CD audio anymore. CDs are dead. Authoring today is for streaming services, iTunes, YouTube, and video of various formats. The producer and mastering engineers don't choose the delivery resolution and bit depth, they deliver what the services request/require. Delivering in a format different from the service's format means it will be re-encoded after delivery, and most producers want to avoid that. IMHO this question is pointless.

    – Todd Wilcox
    May 28 at 14:26














7 Answers


















31
















Tentatively: Yes. As a medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix).



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course, a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), or by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit depth would have been valuable. Likewise, a 44.1 kHz DAC with a badly designed anti-aliasing filter might sound bad, but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, which concludes that "there was a small but statistically significant ability to discriminate between standard quality audio (44.1 or 48 kHz, 16 bit) and high resolution audio (beyond standard quality)", based on a review of a number of experiments in this area. However, it also states that this ability to discriminate was far greater when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






edited May 26 at 11:32
answered May 25 at 16:20 by topo morto (34.9k reputation, 2 gold badges, 54 silver badges, 132 bronze badges)
  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 10:42



















23
















The short answer: 16-bit 44.1 kHz PCM encoding, when properly sampled
and played back, is close enough to perfect reproduction for human
hearing in virtually all situations that it's unequivocally "good
enough."



The main caveats:




  1. The material must be recorded and reproduced with properly
    engineered sampling and playback systems. While this is not
    particularly difficult or expensive for a competent engineer using
    modern technology, there are a number of mistakes the engineer
    (both equipment designer and recording engineer) could make that
    could be mitigated by a higher sampling rate and/or more bit depth.

  2. There are situations where 16-bit depth will have an audible noise
    floor. These do not occur "naturally", and the vast majority of
    listeners, and even audio engineers, would have neither the
    inclination nor the desire to spend the money to produce an
    environment where this would occur. (The noise floor is below the
    threshold of audibility in places such as a soundproofed cinema in
    a quiet neighbourhood.)

  3. This applies to the storage format only: intermediate processing
    uses appropriately higher bit depths and sampling rates as
    necessary (see the sketch just after this list). As a simplistic
    example for bit depth, when mixing one would normally mix multiple
    16-bit input signals to a 24-bit output signal and then scale that
    output signal back to 16 bits. A simplistic example for sampling
    rate is that one might sample at 8x or more of the 44.1 kHz final
    rate in order to use analogue filters that distort the signal less
    when filtering out content above the 22.05 kHz Nyquist frequency.
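
To make caveat 3 concrete, here is a minimal Python/NumPy sketch of the bit-depth example: accumulate the mix at a higher intermediate precision, then dither back down to 16 bits. (This is my own illustration of the general technique, not code from any particular DAW.)

    import numpy as np

    def mix_to_16bit(tracks, gains):
        # Accumulate in float64: the intermediate mix has far more
        # precision and headroom than any individual 16-bit input.
        acc = np.zeros(tracks[0].shape, dtype=np.float64)
        for track, gain in zip(tracks, gains):
            acc += gain * track.astype(np.float64)
        # Add ~1 LSB of TPDF dither before requantising so the rounding
        # error becomes benign noise rather than correlated distortion.
        dither = (np.random.uniform(-0.5, 0.5, acc.shape) +
                  np.random.uniform(-0.5, 0.5, acc.shape))
        out = np.round(acc + dither)
        return np.clip(out, -32768, 32767).astype(np.int16)

Real mixers tend to use 24-bit or wider fixed-point accumulators rather than float64; the float type is just the simplest way to show the idea.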


Now to the details.



A seemingly little-known fact of digital sampling of analogue signals
is that, so long as the sampled signal has no frequency components
above the Nyquist frequency of half the sampling rate, a properly
reproduced playback of that sample will be an exact copy of the
analogue input waveform. All those stair steps you see in pictures of
sampling? They're nonsense; that's a made-up waveform that cannot be
generated by a proper reproduction system, because such a signal would
have the "steps" removed by the output filter. I'm not going to go
into more detail on this here, but if you are not convinced or just
want to learn more, see Monty Montgomery's "D/A and A/D | Digital Show
and Tell" in video (also on YouTube) or text form.
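
If you'd rather convince yourself numerically than watch the video, here is a small self-contained Python sketch (my own, purely illustrative) of the reconstruction the theorem promises: Whittaker-Shannon (sinc) interpolation of a band-limited tone recovers the waveform between the samples.

    import numpy as np

    fs = 44100.0                       # CD sample rate
    n = 400                            # number of stored samples
    ts = np.arange(n) / fs             # sampling instants
    f0 = 9000.0                        # tone well below Nyquist (22050 Hz)
    x = np.sin(2 * np.pi * f0 * ts)    # the "recorded" sample values

    # Whittaker-Shannon reconstruction: evaluate the waveform *between*
    # the samples as a sum of sinc functions centred on each sample.
    t_fine = np.arange(8 * n) / (8 * fs)
    recon = np.array([np.dot(x, np.sinc(fs * t - np.arange(n)))
                      for t in t_fine])

    # Compare with the ideal continuous tone away from the window edges;
    # the only error left comes from truncating the sinc sum to n terms.
    mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)
    ideal = np.sin(2 * np.pi * f0 * t_fine)
    print(np.max(np.abs(recon[mid] - ideal[mid])))  # small; -> 0 as n grows

No stair steps anywhere: the reconstructed values between the sample instants follow the original sine, which is what a properly filtered DAC output does in analogue.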



Note that other answers here get this wrong, and it does seem to be
very difficult to believe for some people. As this post
puts it quite eloquently:




The concept of the perfect measurement or of recreating a waveform
perfectly may seem like marketing hype. However, in this case it is
not. It is in fact the fundamental tenet of the Nyquist-Shannon
Sampling Theorem on which the very existence and invention of
digital audio is based. From WIKI: “In essence the theorem shows
that an analog signal that has been sampled can be perfectly
reconstructed from the samples”. I know there will be some who will
disagree with this idea, unfortunately, disagreement is NOT an
option. This theorem hasn't been invented to explain how digital
audio works, it's the other way around. Digital Audio was invented
from the theorem, if you don't believe the theorem then you can't
believe in digital audio either!




This tells us that in theory, with what we know about human hearing
limits and the noise floors of professionally designed low-noise
listening environments (such as a recording studio or a good cinema),
the frequency response and noise floor of 44.1 kHz 16-bit digital
audio recordings will be essentially perfect. (There's a lot more
detail on this in 24/192 Music Downloads...and why they make no sense.
As an interesting aside, it also mentions that providing wider spectra
may actually make things worse: playback of ultrasonic signals of any
significant amplitude into standard analogue audio amplifiers may well
create intermodulation distortion products in the audio frequencies.)



So the question now becomes, can we do the reproduction well enough in practice?



Well, the way to do this is to test it, of course.



These sorts of tests have been rife with major problems, some as bad as comparing different recordings of the "same" material, such as an SACD remaster of an album against its original master mix from the CD. Even very skeptical experts on testing can accept ill-advised shortcuts such as not double-blinding the test. And of course the listening environment has a massive and difficult-to-correct-for influence on the audio. Even small movements of your head can result in massive spectrum changes due to comb filtering.



That said, amongst the enormous number of bad tests, a few good ones have been done, and they have invariably shown that nobody, not even professional recording engineers or people with "golden ears," can tell the difference between 44.1 kHz 16-bit and higher rate/depth source recordings.



The canonical paper on this dates from 2006 or so: Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback. The abstract:




Claims both published and anecdotal are regularly made for audibly
superior sound quality for two-channel audio encoded with longer word
lengths and/or at higher sampling rates than the 16-bit/44.1-kHz CD
standard. The authors report on a series of double-blind tests
comparing the analog output of high-resolution players playing
high-resolution recordings with the same signal passed through a
16-bit/44.1-kHz “bottleneck.” The tests were conducted for over a year
using different systems and a variety of subjects. The systems
included expensive professional monitors and one high-end system with
electrostatic loudspeakers and expensive components and cables. The
subjects included professional recording engineers, students in a
university recording program, and dedicated audiophiles. The test
results show that the CD-quality A/D/A loop was undetectable at
normal-to-loud listening levels, by any of the subjects, on any of the
playback systems. The noise of the CD-quality loop was audible only at
very elevated levels.




I'd like to point out section 4 of the paper in particular, because I think it may give some insight into how this whole "high-definition audio" mess happened:




Though our tests failed to substantiate the claimed advantages of
high-resolution encoding for two-channel audio, one trend became
obvious very quickly and held up throughout our testing: virtually
all of the SACD and DVD-A recordings sounded better than most CDs—
sometimes much better. Had we not “degraded” the sound to CD quality
and blind-tested for audible differences, we would have been tempted
to ascribe this sonic superiority to the recording processes used to
make them. Plausible reasons for the remarkable sound quality of
these recordings emerged in discussions with some of the engineers
currently working on such projects. This portion of the business is
a niche market in which the end users are preselected, both for
their aural acuity and for their willingness to buy expensive
equipment, set it up correctly, and listen carefully in a low-noise
environment. Partly because these recordings have not captured a
large portion of the consumer market for music, engineers and
producers are being given the freedom to produce recordings that
sound as good as they can make them, without having to compress or
equalize the signal to suit lesser systems and casual listening
conditions. These recordings seem to have been made with great care
and manifest affection, by engineers trying to please themselves and
their peers. They sound like it, label after label. High-resolution
audio discs do not have the overwhelming majority of the program
material crammed into the top 20 (or even 10) dB of the available
dynamic range, as so many CDs today do. Our test results indicate
that all of these recordings could be released on conventional CDs
with no audible difference. They would not, however, find such a
reliable conduit to the homes of those with the systems and
listening habits to appreciate them. The secret, for two-channel
recordings at least, seems to lie not in the high-bit recording but
in the high-bit market.




Here are my references and some more reading if you want to get more deeply into this.





  • Audibility of a CD-Standard A/D/A Loop Inserted into
    High-Resolution Audio Playback.
    The best study I know of on this, though there are probably others.

  • Paul D. Lehrman, The Emperor's New Sampling Rate,
    Mix magazine. This is what led me to the article above, and it
    serves as a higher-level summary, along with some further
    information.

  • Monty Montgomery, "D/A and A/D | Digital Show and Tell"
    video (also on
    YouTube) or
    text form. If
    you don't instinctively think "rubbish" when you see a stair-step
    waveform associated with digital sampling, you really need to see
    this. Even if you prefer reading things, the video is well worth
    watching as the demonstrations of what's going on are very clear.

  • Monty Montgomery, 24/192 Music Downloads...and why they make no
    sense. The science behind hearing and why you can't hear "better"
    than 44.1 kHz/16-bit, and some information on sampling. Includes
    16-bit WAV files with 0 dB and -105 dB tones if you want to try to
    hear the full dynamic range of 16 bits. Also a long list of what
    listening tests may be testing instead of the source recording
    frequency and depth.

  • image-line.com, Audio Myths & DAW Wars.
    A quick recapitulation of various things that usually cause changes
    in audio quality outside of source rate/depth. Oriented towards
    people who do music production.

  • Ethan Winer, High Definition Audio Comparison. Do your own
    personal test of "high-definition" vs. 44.1 kHz/16-bit!

  • Ethan Winer, Ethan's Magazine Articles and Videos. Lots of other good
    information on audio, listening tests, gear, and so on.






  • Let us continue this discussion in chat.

    – Curt J. Sampson
    May 26 at 23:52






  • 6





    Maybe this is the moment to point out that there are Sound Design and a Signal Processing sites on SE...

    – Mr Lister
    May 27 at 14:57



















13
















There are two separate issues here - resolution and frequency. And we also need to separate out recording and playback.



16-bit resolution is plenty good enough for playback. However, when recording you want to allow extra headroom, because the worst thing you can do to a sampled signal is to clip it at the limits of its range. It's normal to record at -10 dB or so to give that headroom. With 16-bit recordings we would lose substantial recording fidelity that way, but with 24-bit we're fine.
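
Some back-of-the-envelope arithmetic (mine, using the standard roughly-6-dB-per-bit rule of thumb) shows why that headroom is cheap at 24 bits and expensive at 16:

    # ~6.02 dB of signal-to-quantisation-noise ratio per bit, plus
    # 1.76 dB for a full-scale sine (the textbook rule of thumb).
    def sine_snr_db(bits):
        return 6.02 * bits + 1.76

    headroom_db = 10.0                    # tracking with peaks at -10 dBFS
    print(headroom_db / 6.02)             # ~1.7 bits of the word left unused
    print(sine_snr_db(16) - headroom_db)  # ~88 dB remaining at 16-bit
    print(sine_snr_db(24) - headroom_db)  # ~136 dB remaining at 24-bit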



For playback, it's maybe possible to hear the difference, but you'd need good ears. More significantly, you'd also need good equipment. You won't notice the difference on anything short of decent studio kit.



44.1 kHz is in theory good enough to reproduce up to 22.05 kHz. The problem, though, is aliasing. If you don't cut everything above 22.05 kHz when you record, then those inaudible higher frequencies reflect back on the opposite side of the Nyquist frequency and become audible. When 20 kHz is your threshold for hearing, that means your filter needs to pass 20 kHz but have cut hard by 22.05 kHz, which is really hard to do. We have filters now which can do it, but certainly older hardware (especially back in the early days of CDs) couldn't do it well at all. Recording at 96 kHz, though, gives you a Nyquist frequency of 48 kHz, and it's relatively easy to build a filter which passes 20 kHz and cuts hard by 48 kHz.
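
To see where an unfiltered ultrasonic tone would land, here is a tiny Python helper (illustrative only) implementing the usual folding rule:

    fs = 44100.0

    def alias(f, fs=fs):
        # Sampling cannot distinguish f from f mod fs, and the upper
        # half of that range folds back across the Nyquist frequency.
        f = f % fs
        return min(f, fs - f)

    print(alias(25000.0))  # 19100.0: an inaudible 25 kHz tone lands in-band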



Again, this is for recording. Unless your ears can hear above 22 kHz, you won't get any benefit from playback at 96 kHz.



For playback, though... all the above assumes that playback is done competently. It's not unknown for software (and hardware) to handle one sample rate better than another. I remember a few interesting articles about this in Sound On Sound back in the mid-2000s. I doubt these issues still apply today, but it's worth mentioning.






  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 7:57



















3
















Most "try it yourself" experiments on this are meaningless, because you have no way to know what your complete audio reproduction chain is doing to the digital data before you hear it. That doesn't just include the most obvious distortion source of the speakers or headphones, but also the digital-to-analog converter circuits in your CD player!



Also, there have been many psychoacoustic experiments, dating back long before the digital recording era, comparing live performers, live performers with an acoustic filter between performers and listeners, and recorded or broadcast-quality music. Many of those found that the general public preferred the limited frequency range of recorded music to the sound of live performance. One explanation is that this is simply an example of the general principle of "I never listen to X, therefore I don't like it": most of the subjects in those early tests would have heard much more music over low-quality AM radio (with a frequency cut-off at only 8 kHz!) than live performance, and they preferred what they were accustomed to hearing.



A second reason why tests like Rick Beato's are meaningless is that the "uncompressed wav file" may already have had the high-frequency content removed from the original recording. The upper frequency limit for FM radio broadcasting is 16 kHz, so for commercial recordings there is no point producing a final mix that wastes bandwidth which can't be broadcast, when that bandwidth could be used to raise the apparent "volume level" of the mix by another fraction. In Beato's test, the classical piano recording might not have been filtered in that way, but all the other recordings certainly would have been. You can't hear the presence or absence of silence!



There is a fundamental theoretical issue here which is usually ignored. Most of the "basic" theory of digital signal processing is only applicable when the digital data has infinitely fine amplitude resolution. That includes statements like "you can reproduce the audio exactly up to the Nyquist frequency of half the sampling rate" which are bandied around as if they were incontrovertibly true.



To see the problem, consider the sampling rate of 44,100 samples per second and a signal of 9,800 Hz. Each cycle of the 9.8 kHz signal takes 44100/9800 = 4.5 samples of the digital data. Therefore, the digital data does not repeat exactly at the 9.8 kHz signal frequency, but every 9 samples, i.e. at 4.9 kHz.



The original 9.8 kHz signal (periodic, but not necessarily a sine wave) has only two harmonics in the typical human audio range, i.e. 9.8 and 19.6 kHz. However, the digital audio signal has four: there are two more, at 4.9 kHz and 14.7 kHz.



Of course the amplitudes of those two additional frequencies are "small", since they are only caused by the amplitude quantization of the original analog audio signal. But human hearing does not have a flat frequency response. It has a peak in its response curve around 3 kHz to 4 kHz (which most likely evolved to optimize the ability to process human speech). A human brain's audio processing functions have evolved to detect quiet sounds at 3-4 kHz mixed with louder sounds in the rest of the frequency band, i.e. it is optimized to detect this sort of digital audio artefact!



These "ghost tones" are audible in controlled conditions and there is no way to remove them when converting the digital data back to analog. Dithering the digital signal (which is often done as the final step in processing) doesn't remove them, it simply smears them out across a range of frequencies.



Increasing the bit resolution from 16 to 24 does reduce them, by a factor of 256. Increasing the sampling rate from 44.1 kHz to, say, 96 kHz can also reduce them, since a dithering algorithm can then "dump" all the "noise" into the inaudible frequency range above 22 kHz.
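
The claim about the extra in-band components is easy to check numerically. Here is a short Python/NumPy experiment (my own construction) that quantises an undithered 9.8 kHz tone to 16 bits and inspects the spectrum of the error alone:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs                   # one second of signal
    x = 0.5 * np.sin(2 * np.pi * 9800 * t)   # the 9.8 kHz tone
    q = np.round(x * 32767) / 32767          # 16-bit quantisation, no dither
    err = q - x                              # the quantisation error alone

    # The tone repeats exactly every 9 samples (two cycles), so the error
    # is also 9-sample periodic: its spectrum can only contain multiples
    # of 44100 / 9 = 4900 Hz.
    spec_db = 20 * np.log10(2 * np.abs(np.fft.rfft(err)) / fs + 1e-20)
    freqs = np.fft.rfftfreq(fs, 1 / fs)
    for f in (4900, 14700):                  # the two "extra" in-band tones
        print(f, spec_db[np.argmin(np.abs(freqs - f))])

The two components are indeed nonzero, though more than 100 dB below full scale; whether they are audible under real listening conditions is exactly the question the paragraphs above raise.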






  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 29 at 22:27



















0
















The German "Audio" magazine published an article some time 25-30 years ago. The found a high-end CD player that for some reason allowed to turn individual bits of the 16 bit signal on or off - why you would do that is beyond me, but that is what this CD player did.



What they found: turning bit #16 off (with a top-quality amplifier and top-quality speakers) made no audible difference. Turning bit #15 off made an audible difference, but there was no agreement in blind tests as to which version was better or more accurate, just that there was a difference. Turning off bit #14 was a definite loss of quality.



This was not in any way peer reviewed, just the published opinion of reporters who made their living reviewing and comparing high-end audio equipment. So according to them, 15 and 16 bits were indistinguishable.
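
If you want to approximate that experiment digitally, here is a rough Python/NumPy sketch (my own, not the magazine's method) that zeroes the low bits of a 16-bit signal. As the comment below points out, plain truncation with no dither or noise shaping is a worst case, so treat it as a demonstration of the artefact rather than of what a properly dithered 14-bit signal can sound like:

    import numpy as np

    def drop_low_bits(x, keep_bits):
        # Zero the lowest (16 - keep_bits) bits of an int16 signal:
        # keep_bits=15 approximates "bit #16 off"; keep_bits=14 also
        # drops bit #15. No dither, no noise shaping - pure truncation.
        shift = 16 - keep_bits
        return ((x.astype(np.int32) >> shift) << shift).astype(np.int16)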






  • 3





    This isn't actually useful because they were misusing the system. Digitally recording an audio signal without using dither and noise shaping (which is effectively what they were doing) is comparable to doing analogue tape recording with an incorrect or missing bias signal: you are reducing the quality of the signal through incorrect use of your equipment. Had they properly downsampled 16-bit to 14-bit, it's unlikely anybody would have noticed the difference.

    – Curt J. Sampson
    May 28 at 1:46





















0
















No. On some cell phones, audio recorded with HD video is captured at higher quality, and there is a noticeable difference between the 16-bit default recording of the audio app and the 24-bit HD audio in the HD video recording. My family has a weird ear thing: one of us hears lower pitches, one hears higher pitches. Both my brother and I have this, and we can both hear clear data loss when comparing those two files. The closer you are to recording in the best native format for a live feed, the closer you are to perfection.



Just as 24-bit is better than 16, 32-bit is better than 24. However, frequency beyond 48 kHz is multiplied as a sampling of 44.1 or 48 kHz, so you may not hear the difference through frequency changes. Look at this through a speakers analogy on the receiving end: if one sampling is 2 speakers, then for each next sampling it might be like the recipient is inside a circle of two more speakers. At what point does it all just become noise?



32-bit 48 kHz is a great recording level for Audacity, and with a clean recording mixer, like a Cerwin-Vega with the USB interface, and just the right oxygen-free copper or silver wire leads, I really enjoy 32-bit 48 kHz recordings far more than the lower settings.






  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 31 at 8:41



















-1
















I've read that the ear cannot distinguish anything higher than 16-bit 44.1 kHz. Some studies show that as the sampling frequency increases above that level, the listener experiences a reduction in quality and loudness, or no improvement.



When you're recording, or working in a DAW, having higher bit depths and sample rates can increase the quality of your final mix (according to my research). This is probably because higher sampling rates help the DAW do the calculations necessary for mixing at higher resolution.



The post from @graham is very interesting, and also adds information on why working with higher bit depths in a DAW is ideal. I've read that they're helpful, but I don't know exactly why. However, I don't always cut my highs with a low-pass. I've noticed that it changes the way the track sounds. I definitely reduce anything above 8 kHz, but a low-pass even at 20 kHz seems to change the way the audio sounds; for me, it's not always ideal. He's definitely right that cutting frequencies in this range will give your mix more headroom. And if recording with a higher sampling rate allows for more precise and wide-ranging ability to cut frequencies, that would justify using higher sampling rates.



Blind test where they couldn't tell the difference:
https://www.tomshardware.com/reviews/high-end-pc-audio,3733-18.html



Another anecdotal example:
https://www.avsforum.com/forum/91-audio-theory-setup-chat/2909658-192khz-24bit-vs-44-1khz-16bit-audio-no-quality-difference.html



There are endless cases of people claiming they can't tell the difference. I know I can't. I actually like 16-bit 44.1 kHz more than 24-bit or 32-bit when it comes to the final rendered product.



However, it's worth mentioning that if you're uploading to a website like soundcloud.com, your track is going to get compressed into MP3 format for streaming. This goes for a lot of streaming services; they will compress your music. So basically, your track is going through another encoder after the final render. If you're uploading to a website that will compress your music, it may be beneficial to render in the highest possible quality so that the track has maximum input resolution. I've uploaded tracks in 24-bit and 16-bit and can't tell the difference. I've listened to tracks rendered in 16/24/32-bit and I can't tell the difference. Note that in FL Studio, dithering is only available in 16-bit WAV renders, and I've read that dithering makes the final sound more palatable to the ear.



The quality of your recording setup, equipment, and personal ability is far more important to how the final sound will be perceived than anything above CD quality on your rendered product.






  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 29 at 22:27













Your Answer








StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "240"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});















draft saved

draft discarded
















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmusic.stackexchange.com%2fquestions%2f85212%2fis-cd-audio-quality-good-enough-for-the-final-delivery-of-music%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























7 Answers
7






active

oldest

votes








7 Answers
7






active

oldest

votes









active

oldest

votes






active

oldest

votes









31
















Tentatively: Yes. As medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix.)



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit-depth initially would have been valuable. Likewise, a 44.1K DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio(44.1or48kHz,16bit) and high resolution audio (beyond standard quality)", based on review of a number of experiments in this area. However, it also states that this ability to discriminate is something that is far more significant when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






share|improve this answer




























  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 10:42
















31
















Tentatively: Yes. As medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix.)



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit-depth initially would have been valuable. Likewise, a 44.1K DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio(44.1or48kHz,16bit) and high resolution audio (beyond standard quality)", based on review of a number of experiments in this area. However, it also states that this ability to discriminate is something that is far more significant when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






share|improve this answer




























  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 10:42














31














31










31









Tentatively: Yes. As medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix.)



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit-depth initially would have been valuable. Likewise, a 44.1K DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio(44.1or48kHz,16bit) and high resolution audio (beyond standard quality)", based on review of a number of experiments in this area. However, it also states that this ability to discriminate is something that is far more significant when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.






share|improve this answer















Tentatively: Yes. As medium for final delivery, I've not yet seen any strong evidence that a well-designed 16-bit, 44,100 Hz system can be significantly improved on as a vehicle for listening to the final mix (and therefore, from a musician's perspective, as a vehicle for presenting the final mix.)



When I looked into this a few years back, I was disappointed to find a relative lack of seemingly well-conducted tests compared to the level of interest in the subject. I certainly didn't find anything that seemed to strongly suggest that there was any major listener benefit in 'higher-definition' audio. (I'm writing this answer partly because I'd be very grateful if anyone knows different!)



Just to provide a little further reading - here are some anecdotes of tests that deal with bit depth and sample rate.



Of course a particular 16-bit listening experience might be devalued by recording levels being too low (resulting in a perceptible noise floor), OR by over-aggressive limiting of peaks to stay within the headroom. That's an example of where recording initially to a higher bit-depth initially would have been valuable. Likewise, a 44.1K DAC with a badly-designed anti-aliasing filter might sound bad - but this doesn't seem to be inevitable with the current state of technology.



Edit: I have just found this paper, published since I last explored this, that concludes that "there was a small but statistically significant ability to discriminate between standard quality audio(44.1or48kHz,16bit) and high resolution audio (beyond standard quality)", based on review of a number of experiments in this area. However, it also states that this ability to discriminate is something that is far more significant when subjects were trained, and still concludes that "the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question". So it still seems reasonable to call CD quality 'good enough', even if 'very slightly better' may be possible.







share|improve this answer














share|improve this answer



share|improve this answer








edited May 26 at 11:32

























answered May 25 at 16:20









topo mortotopo morto

34.9k2 gold badges54 silver badges132 bronze badges




34.9k2 gold badges54 silver badges132 bronze badges
















  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 10:42



















23
















The short answer: 16-bit, 44.1 kHz PCM encoding, when properly sampled
and played back, is close enough to perfect reproduction for human
hearing in virtually all situations that it's unequivocally "good
enough."



The main caveats:




  1. The material must be recorded and reproduced with properly
    engineered sampling and playback systems. While this is not
    particularly difficult or expensive for a competent engineer using
    modern technology, there are a number of mistakes the engineer
    (both the equipment designer and the recording engineer) could
    make that a higher sampling rate and/or greater bit depth would
    mitigate.

  2. There are situations where 16-bit depth will have an audible noise
    floor. These do not occur "naturally", and the vast majority of
    listeners, and even audio engineers, would have neither the
    inclination nor the desire to spend the money to produce an
    environment where this would occur. (The noise floor is below the
    threshold of audibility even in places such as a soundproofed
    cinema in a quiet neighbourhood.)

  3. This applies to the storage format only: intermediate processing
    uses higher bit depths and sampling rates as necessary. As a
    simplistic example for bit depth, when mixing one would normally
    sum multiple 16-bit input signals into a 24-bit (or wider) bus and
    then scale that output back to 16 bits (see the sketch after this
    list). A simplistic example for sampling rate is that one might
    sample at eight or more times the final 44.1 kHz rate in order to
    use analogue filters that distort the signal less when filtering
    out content above the 22.05 kHz Nyquist frequency.
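
To make the bit-depth half of that concrete, here is a minimal sketch
in Python with NumPy (not production mixing code; the two "tracks" are
hypothetical test tones):

    import numpy as np

    def mix_to_16bit(tracks, rng=np.random.default_rng(0)):
        # Sum in a 32-bit accumulator so intermediate values cannot clip.
        bus = np.sum([t.astype(np.int32) for t in tracks], axis=0)
        # Normalise so the loudest peak just fits in 16 bits.
        scaled = bus * (32767.0 / np.max(np.abs(bus)))
        # TPDF dither of +/-1 LSB decorrelates the quantisation error
        # from the signal before the word length is reduced again.
        dither = (rng.uniform(-1, 1, bus.shape) +
                  rng.uniform(-1, 1, bus.shape)) / 2
        return np.clip(np.round(scaled + dither),
                       -32768, 32767).astype(np.int16)

    # Two hypothetical 16-bit "tracks": a 1 kHz and a 3 kHz tone.
    t = np.arange(44100) / 44100.0
    a = np.int16(0.6 * 32767 * np.sin(2 * np.pi * 1000 * t))
    b = np.int16(0.6 * 32767 * np.sin(2 * np.pi * 3000 * t))
    mixed = mix_to_16bit([a, b])  # summed without clipping, back at 16 bits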


Now to the details.



A seemingly little-known fact of the digital sampling of analogue
signals is that, so long as the sampled signal has no frequency
components above the Nyquist frequency of half the sampling rate, a
properly reproduced playback of those samples will be an exact copy of
the analogue input waveform. All those stair-steps you see in pictures
of sampling? They're nonsense: that's a made-up waveform which cannot
be generated by a proper reproduction system, because such a signal
would have the "steps" removed by the output filter. I'm not going to
go into more detail on this here, but if you are not convinced, or
just want to learn more, see Monty Montgomery's "D/A and A/D | Digital
Show and Tell" in video (also on YouTube) or text form.
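
The reconstruction guarantee is easy to check numerically. Here is a
minimal sketch (Python with NumPy; ideal Whittaker-Shannon
interpolation, so the only error comes from truncating the infinite
sum at the edges of the sample window):

    import numpy as np

    fs = 44100.0                        # sample rate
    n = np.arange(2000)                 # sample indices
    f = 9000.0                          # tone well below Nyquist (22050 Hz)
    x = np.sin(2 * np.pi * f * n / fs)  # the stored samples

    def reconstruct(samples, t):
        # Whittaker-Shannon interpolation: the waveform's value at
        # continuous time t (in sample units) from the discrete samples.
        k = np.arange(len(samples))
        return np.sum(samples * np.sinc(t - k))

    # Evaluate midway BETWEEN stored samples and compare against the
    # true analogue waveform at those instants.
    t_mid = np.arange(500, 1500) + 0.5
    approx = np.array([reconstruct(x, t) for t in t_mid])
    exact = np.sin(2 * np.pi * f * t_mid / fs)
    print(np.max(np.abs(approx - exact)))  # ~1e-3 here, and it keeps
                                           # shrinking as the window grows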



Note that other answers here get this wrong, and it does seem to be
very difficult for some people to believe. As this post puts it quite
eloquently:




The concept of the perfect measurement or of recreating a waveform
perfectly may seem like marketing hype. However, in this case it is
not. It is in fact the fundamental tenet of the Nyquist-Shannon
Sampling Theorem on which the very existence and invention of
digital audio is based. From WIKI: “In essence the theorem shows
that an analog signal that has been sampled can be perfectly
reconstructed from the samples”. I know there will be some who will
disagree with this idea, unfortunately, disagreement is NOT an
option. This theorem hasn't been invented to explain how digital
audio works, it's the other way around. Digital Audio was invented
from the theorem, if you don't believe the theorem then you can't
believe in digital audio either!




This tells us that, in theory, with what we know about human hearing
limits and the noise floors of professionally designed low-noise
listening environments (such as a recording studio or a good cinema),
the frequency response and noise floor of 44.1 kHz, 16-bit digital
audio recordings will be essentially perfect. (There's a lot more
detail on this in 24/192 Music Downloads...and why they make no
sense. As an interesting aside, it also mentions that providing a
wider spectrum may actually make things worse: playback of ultrasonic
signals of any significant amplitude into standard analogue audio
amplifiers may well create intermodulation distortion products within
the audio frequencies.)
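
To put rough numbers on the noise-floor side (back-of-envelope
arithmetic, ignoring noise-shaped dither, which buys a further audible
margin):

    import math

    for bits in (16, 24):
        # Peak-to-quantisation-floor range of an ideal linear PCM
        # channel: about 6.02 dB per bit.
        print(bits, "bits:", round(20 * math.log10(2 ** bits), 1), "dB")

    # 16 bits: 96.3 dB -- already wider than the span between a quiet
    # room's ambient noise and painfully loud playback peaks.
    # 24 bits: 144.5 dB -- more range than any analogue playback chain
    # can deliver.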



So the question now becomes, can we do the reproduction well enough in practice?



Well, the way to do this is to test it, of course.



These sorts of tests have been rife with major problems, some as bad as comparing different recordings of the "same" material, such as an SACD remaster of an album against the original master mix from the CD. Even very skeptical experts on testing can accept ill-advised shortcuts such as not double-blinding the test. And of course the listening environment has a massive and difficult-to-correct-for influence on the audio: even small movements of your head can result in massive spectrum changes due to comb filtering.



That said, amongst the enormous number of bad tests, a few good ones have been done, and they have invariably shown that nobody, not even professional recording engineers or people with "golden ears," can tell the difference between 44.1 kHz, 16-bit and higher rate/depth source recordings.



The canonical paper on this dates from 2006 or so: Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback. The abstract:




Claims both published and anecdotal are regularly made for audibly
superior sound quality for two-channel audio encoded with longer word
lengths and/or at higher sampling rates than the 16-bit/44.1-kHz CD
standard. The authors report on a series of double-blind tests
comparing the analog output of high-resolution players playing
high-resolution recordings with the same signal passed through a
16-bit/44.1-kHz “bottleneck.” The tests were conducted for over a year
using different systems and a variety of subjects. The systems
included expensive professional monitors and one high-end system with
electrostatic loudspeakers and expensive components and cables. The
subjects included professional recording engineers, students in a
university recording program, and dedicated audiophiles. The test
results show that the CD-quality A/D/A loop was undetectable at
normal-to-loud listening levels, by any of the subjects, on any of the
playback systems. The noise of the CD-quality loop was audible only at
very elevated levels.




I'd like to point out section 4 of the paper in particular, because I think it may give some insight into how this whole "high-definition audio" mess happened:




Though our tests failed to substantiate the claimed advantages of
high-resolution encoding for two-channel audio, one trend became
obvious very quickly and held up throughout our testing: virtually
all of the SACD and DVD-A recordings sounded better than most CDs—
sometimes much better. Had we not “degraded” the sound to CD quality
and blind-tested for audible differences, we would have been tempted
to ascribe this sonic superiority to the recording processes used to
make them. Plausible reasons for the remarkable sound quality of
these recordings emerged in discussions with some of the engineers
currently working on such projects. This portion of the business is
a niche market in which the end users are preselected, both for
their aural acuity and for their willingness to buy expensive
equipment, set it up correctly, and listen carefully in a low-noise
environment. Partly because these recordings have not captured a
large portion of the consumer market for music, engineers and
producers are being given the freedom to produce recordings that
sound as good as they can make them, without having to compress or
equalize the signal to suit lesser systems and casual listening
conditions. These recordings seem to have been made with great care
and manifest affection, by engineers trying to please themselves and
their peers. They sound like it, label after label. High-resolution
audio discs do not have the overwhelming majority of the program
material crammed into the top 20 (or even 10) dB of the available
dynamic range, as so many CDs today do. Our test results indicate
that all of these recordings could be released on conventional CDs
with no audible difference. They would not, however, find such a
reliable conduit to the homes of those with the systems and
listening habits to appreciate them. The secret, for two-channel
recordings at least, seems to lie not in the high-bit recording but
in the high-bit market.




Here are my references, plus some more reading if you want to go more deeply into this.





  • Audibility of a CD-Standard A/D/A Loop Inserted into
    High-Resolution Audio Playback.
    The best study I know of on this, though there are probably others.

  • Paul D. Lehrman, The Emperor's New Sampling Rate,
    Mix magazine. This is what led me to the article above, and it
    serves as a higher-level summary, along with some further
    information.

  • Monty Montgomery, "D/A and A/D | Digital Show and Tell"
    video (also on
    YouTube) or
    text form. If
    you don't instinctively think "rubbish" when you see a stair-step
    waveform associated with digital sampling, you really need to see
    this. Even if you prefer reading things, the video is well worth
    watching as the demonstrations of what's going on are very clear.

  • Monty Montgomery, 24/192 Music Downloads...and why they make no
    sense. The
    science behind hearing and why you can't hear "better" than 44.1
    kHz/16-bit, and some information on sampling. Includes 16-bit WAV
    files with 0 dB and -105 dB tones if you want to try to hear the
    full dynamic range of 16 bits. Also a long list of what listening
    tests may be testing instead of the source recording frequency and
    depth.

  • image-line.com, Audio Myths & DAW Wars.
    A quick recapitulation of various things that usually cause changes
    in audio quality outside of source rate/depth. Oriented towards
    people who do music production.

  • Ethan Winer, High Definition Audio Comparison. Do your own personal
    test of "high-definition" vs. 44.1 kHz/16-bit!

  • Ethan Winer, Ethan's Magazine Articles and Videos. Lots of other
    good information on audio, listening tests, gear, and so on.






edited May 26 at 16:09
answered May 26 at 8:53
– Curt J. Sampson
















  • Let us continue this discussion in chat.

    – Curt J. Sampson
    May 26 at 23:52






  • 6





    Maybe this is the moment to point out that there are Sound Design and a Signal Processing sites on SE...

    – Mr Lister
    May 27 at 14:57



















13
















There are two separate issues here - resolution (bit depth) and frequency (sample rate). And we also need to separate recording from playback.



16-bit resolution is plenty good enough for playback. However, when recording you want to allow extra headroom, because the worst thing you can do to a sampled signal is to clip it at the limits of its range. It's normal to record at around -10dB to leave that headroom. With 16-bit recordings we would lose substantial recording fidelity that way - but with 24-bit we're fine (see the rough arithmetic below).
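
Rough arithmetic on that trade-off (each bit of linear PCM is worth
about 6.02 dB, so headroom comes straight out of resolution):

    # Each bit of linear PCM is worth ~6.02 dB of dynamic range.
    headroom_db = 10
    bits_lost = headroom_db / 6.02          # ~1.7 bits spent on headroom
    print(16 - bits_lost)  # ~14.3 effective bits when tracking at -10 dB
    print(24 - bits_lost)  # ~22.3 effective bits -- still far beyond 16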



For playback, it may be possible to hear the difference, but you'd need good ears. More significantly, you'd also need good equipment: you won't notice the difference on anything short of decent studio kit.



44.1kHz is in theory good enough to reproduce 22kHz. The problem though is aliasing. If you don't cut everything above 22kHz when you record, those inaudible higher frequencies reflect back on the opposite side of the Nyquist frequency and become audible. When 20kHz is your threshold for hearing, that means your filter needs to pass 20kHz but be cutting hard by 22kHz, which is really hard to do. We have filters now which can do it, but older hardware (especially back in the early days of CDs) certainly couldn't do it well at all. Recording at 96kHz though gives you a Nyquist frequency of 48kHz, and it's relatively easy to build a filter which passes 20kHz and cuts hard by 48kHz (see the sketch below).
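
To get a feel for how much harder the steep filter is (a sketch using
SciPy's Kaiser-window design estimate; the tap counts are approximate,
and modern oversampling converters do this filtering digitally anyway):

    from scipy.signal import kaiserord

    def fir_taps(fs, pass_hz, stop_hz, atten_db=96):
        # Estimated FIR length for a lowpass with the given transition
        # band; width is normalised so that 1.0 is the Nyquist frequency.
        width = (stop_hz - pass_hz) / (fs / 2)
        taps, _beta = kaiserord(atten_db, width)
        return taps

    print(fir_taps(44100, 20000, 22050))  # ~133 taps: a very steep filter
    print(fir_taps(96000, 20000, 48000))  # ~23 taps: easy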



Again, this is for recording. Unless your ears can hear above 22kHz, you won't get any benefit from playback at 96kHz.



For playback though... all of the above assumes that playback is done competently. It's not unknown for software (and hardware) to handle one sample rate better than another; I remember a few interesting articles about this in Sound On Sound back in the mid-00s. I doubt these issues still apply today, but it's worth mentioning.






answered May 25 at 23:20
– Graham
















  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 28 at 7:57



















3









Most "try it yourself" experiments on this are meaningless, because you have no way to know what your complete audio reproduction chain is doing to the digital data before you hear it. That doesn't just include the most obvious distortion source of the speakers or headphones, but also the digital-to-analog converter circuits in your CD player!



Also, there have been many psychoacoustic experiments, dating back long before the digital recording era, comparing live performers, live performers with an acoustic filter between performers and listeners, and recorded or broadcast-quality music. Many of those found that the general public preferred the limited frequency range of recorded music to the sound from live performance. One explanation is that this is simply an example of the general principle of "I never listen to X, therefore I don't like it" - most of the subjects in those early test would have heard much more music over low quality AM radio (with a frequency cut-off at only 8 KHz!) than live performance, and they preferred what they were accustomed to hearing.



A second reason why a test like Rick Beato's are meaningless is that the "uncompressed wav file" may already have had the high frequency content removed from the original recording. The upper frequency limit for FM radio broadcasting is 16kHz, so for commercial recordings there is no point producing a final mix that wastes bandwidth which can't be broadcast, when that bandwidth could be used to raise the apparent "volume level" of the mix by another fraction. In Beato's test, the classical piano recording might not have been filtered in that way, but all the other recordings certainly would have been. You can't hear the presence or absence of silence!



There is a fundamental theoretical issue here which is usually ignored. Most of the "basic" theory of digital signal processing is only applicable when the digital data has infinitely fine amplitude resolution. That includes statements like "you can reproduce the audio exactly up to the Nyquist frequency of half the sampling rate" which are bandied around as if they were incontrovertibly true.



To see the problem, consider the sampling rate of 44100 per second, and a signal of 9800 KHz. Each cycle of the 9.8 KHz signal take 44100/9800 = 4.5 samples of the digital data. Therefore, the digital data does not repeat exactly with a frequency of 9.8 Hz, but every 9 samples, i.e. every 4.9 kHz.



The original 9.8KHz signal (periodic, but not necessarily a sine wave) has only two harmonics in the typical human audio range, i.e. 9.8 and 19.6 KHz. However the digital audio signal has four. There are two more at 4.9 KHz and 14.7 KHz.



Of course the amplitudes of those two additional frequencies are "small", since they are only caused by the amplitude quantization of the original analog audio signal. But human hearing does not have a flat frequency response. It has a peak in its response curve around 3 kHz to 4 kHz (which most likely evolved to optimize the ability to process human speech.) A human brain's audio processing functions have evolved to detect quiet sounds at 3-4kHz mixed with lounder sounds in the rest of the frequency band - i.e. it is optimized to detect this sort of digital audio artefact!



These "ghost tones" are audible in controlled conditions and there is no way to remove them when converting the digital data back to analog. Dithering the digital signal (which is often done as the final step in processing) doesn't remove them, it simply smears them out across a range of frequencies.



Increasing the bit resolution from 16 to 24 does reduce them by a factor of 256. Increasing the sampling rate from 44.1k/sec to say 96k/sec can also reduce them, since a dithering algorithm can now "dump" all the "noise" into the inaudible frequency range above 22 kHz.






share|improve this answer















Most "try it yourself" experiments on this are meaningless, because you have no way to know what your complete audio reproduction chain is doing to the digital data before you hear it. That doesn't just include the most obvious distortion source of the speakers or headphones, but also the digital-to-analog converter circuits in your CD player!



Also, there have been many psychoacoustic experiments, dating back long before the digital recording era, comparing live performers, live performers with an acoustic filter between performers and listeners, and recorded or broadcast-quality music. Many of those found that the general public preferred the limited frequency range of recorded music to the sound from live performance. One explanation is that this is simply an example of the general principle of "I never listen to X, therefore I don't like it" - most of the subjects in those early test would have heard much more music over low quality AM radio (with a frequency cut-off at only 8 KHz!) than live performance, and they preferred what they were accustomed to hearing.



A second reason why a test like Rick Beato's are meaningless is that the "uncompressed wav file" may already have had the high frequency content removed from the original recording. The upper frequency limit for FM radio broadcasting is 16kHz, so for commercial recordings there is no point producing a final mix that wastes bandwidth which can't be broadcast, when that bandwidth could be used to raise the apparent "volume level" of the mix by another fraction. In Beato's test, the classical piano recording might not have been filtered in that way, but all the other recordings certainly would have been. You can't hear the presence or absence of silence!



There is a fundamental theoretical issue here which is usually ignored. Most of the "basic" theory of digital signal processing is only applicable when the digital data has infinitely fine amplitude resolution. That includes statements like "you can reproduce the audio exactly up to the Nyquist frequency of half the sampling rate" which are bandied around as if they were incontrovertibly true.



To see the problem, consider the sampling rate of 44100 per second, and a signal of 9800 KHz. Each cycle of the 9.8 KHz signal take 44100/9800 = 4.5 samples of the digital data. Therefore, the digital data does not repeat exactly with a frequency of 9.8 Hz, but every 9 samples, i.e. every 4.9 kHz.



The original 9.8KHz signal (periodic, but not necessarily a sine wave) has only two harmonics in the typical human audio range, i.e. 9.8 and 19.6 KHz. However the digital audio signal has four. There are two more at 4.9 KHz and 14.7 KHz.



Of course the amplitudes of those two additional frequencies are "small", since they are only caused by the amplitude quantization of the original analog audio signal. But human hearing does not have a flat frequency response. It has a peak in its response curve around 3 kHz to 4 kHz (which most likely evolved to optimize the ability to process human speech.) A human brain's audio processing functions have evolved to detect quiet sounds at 3-4kHz mixed with lounder sounds in the rest of the frequency band - i.e. it is optimized to detect this sort of digital audio artefact!



These "ghost tones" are audible in controlled conditions and there is no way to remove them when converting the digital data back to analog. Dithering the digital signal (which is often done as the final step in processing) doesn't remove them, it simply smears them out across a range of frequencies.



Increasing the bit resolution from 16 to 24 does reduce them by a factor of 256. Increasing the sampling rate from 44.1k/sec to say 96k/sec can also reduce them, since a dithering algorithm can now "dump" all the "noise" into the inaudible frequency range above 22 kHz.







share|improve this answer














share|improve this answer



share|improve this answer








edited May 26 at 9:35

























answered May 26 at 9:26









guestguest

472 bronze badges
















  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 29 at 22:27






























0
















The German "Audio" magazine published an article some time 25-30 years ago. The found a high-end CD player that for some reason allowed to turn individual bits of the 16 bit signal on or off - why you would do that is beyond me, but that is what this CD player did.



What they found: turning bit #16 off (with a top-quality amplifier and top-quality speakers) made no audible difference. Turning bit #15 off made an audible difference, but there was no agreement in blind tests about which version was better or more accurate, just that there was a difference. Turning off bit #14 was a definite loss of quality.



This was not in any way peer reviewed, just the published opinion of reporters who made their living reviewing and comparing high-end audio equipment. So, according to them, 15-bit and 16-bit were indistinguishable.
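For reference, here is a rough sketch (an editorial addition, not from the magazine; it assumes NumPy) of what those bit switches effectively did - zeroing the low bits - compared with a properly dithered requantization to the same 14-bit grid, which is what the comment below argues should have been tested instead:

```python
# Compare plain truncation of 16-bit samples to 14 bits (roughly what the
# CD player's bit switches did) against TPDF-dithered requantization to
# the same grid. Truncation concentrates the error into distortion spurs;
# dither spreads it into benign broadband noise.
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
t = np.arange(fs) / fs
x16 = np.round(0.01 * np.sin(2 * np.pi * 1000 * t) * 32767)  # quiet 1 kHz tone

trunc = np.floor(x16 / 4) * 4        # zero the two least significant bits

tpdf = rng.uniform(-2, 2, fs) + rng.uniform(-2, 2, fs)   # +/-1 LSB, triangular
dithered = np.round((x16 + tpdf) / 4) * 4

tone_peak = np.abs(np.fft.rfft(x16)).max()
for name, y in (("truncated", trunc), ("dithered ", dithered)):
    worst = np.abs(np.fft.rfft(y - x16)).max()   # largest single error line
    print(name, round(20 * np.log10(worst / tone_peak), 1),
          "dB worst error component, relative to the tone")
```

The truncated version shows a few strong error lines (audible as distortion); the dithered version has higher total error power but no individual component anywhere near as strong.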






share|improve this answer
answered May 27 at 20:16









gnasher729gnasher729

1471 bronze badge











  • 3





    This isn't actually useful because they were misusing the system. Digitally recording an audio signal without using dither and noise shaping (which is effectively what they were doing) is comparable to doing analogue tape recording with an incorrect or missing bias signal: you are reducing the quality of the signal through incorrect use of your equipment. Had they properly downsampled 16-bit to 14-bit, it's unlikely anybody would have noticed the difference.

    – Curt J. Sampson
    May 28 at 1:46





























0
















No - on some cell phones, audio recorded with HD video is captured at higher quality, and there is a noticeable difference between the 16-bit default recording of the audio app and the 24-bit HD audio in the HD video recording. My family has a weird ear trait: one of us hears lower pitches, another hears higher pitches. Both my brother and I have this, and we can both hear clear data loss when comparing those two files. The closer your recording is to the best native format for a live feed, the closer you are to perfection.



Just as 24-bit is better than 16-bit, 32-bit is better than 24-bit. However, frequencies beyond 48 kHz are handled as multiples of 44.1 or 48 kHz sampling, so you may not hear the difference through frequency changes alone. Look at it through a speaker analogy on the receiving end: if one sampling rate is two speakers, then each higher rate is like the listener standing inside a circle of two more speakers. At what point does it all just become noise?



32-bit/48 kHz is a great recording setting for #Audacity, and with a clean recording mixer, like a #Cerwin-#Vega with the USB interface, and just the right oxygen-free copper or silver wire leads, I enjoy the 32-bit/48 kHz recordings far more than the lower settings.






share|improve this answer
edited May 28 at 12:50

























answered May 28 at 12:27









Joseph PoirierJoseph Poirier

173 bronze badges
















  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 31 at 8:41






























-1
















I've read that the ear cannot distinguish anything beyond 16-bit/44.1 kHz. Some studies show that as the sampling frequency increases above that level, the listener experiences a reduction in perceived quality and loudness, or no improvement at all.



When you're recording or working in a DAW, higher bit depths and sample rates can increase the quality of your final mix (according to my research). This is probably because the higher resolution helps the DAW do the calculations necessary for mixing more precisely.



The post from @graham is very interesting, and also adds information on why working at higher bit depths in a DAW is ideal. I've read that they're helpful, but I don't know exactly why. However, I don't always cut my highs with a low-pass filter, because I've noticed that it changes the way the track sounds. I definitely reduce anything above 8 kHz, but a low-pass even at 20 kHz seems to change the way the audio sounds, and for me that's not always ideal. He's definitely right that cutting frequencies in this range will give your mix more headroom. And if recording at a higher sampling rate allows more precise, wider-ranging frequency cuts, that would justify using higher sampling rates.



Blind test where they couldn't tell the difference:
https://www.tomshardware.com/reviews/high-end-pc-audio,3733-18.html



Another anecdotal example:
https://www.avsforum.com/forum/91-audio-theory-setup-chat/2909658-192khz-24bit-vs-44-1khz-16bit-audio-no-quality-difference.html



There are endless cases of people claiming they can't tell the difference. I know I can't. I actually prefer 16-bit/44.1 kHz over 24-bit or 32-bit when it comes to the final rendered product.



However, it's worth mentioning that if you're uploading to a website like soundcloud.com, your track is going to be compressed into MP3 format for streaming. The same goes for a lot of streaming services: they will compress your music, so your track effectively passes through another processing stage after the final render. If you're uploading to a site that will compress your music, it may be beneficial to render at the highest possible quality so that the encoder has maximum input resolution. That said, I've uploaded tracks in 24-bit and 16-bit and can't tell the difference, and I've listened to tracks rendered in 16/24/32-bit and can't tell the difference either. Note that in FL Studio, dithering is only available for 16-bit WAV renders, and I've read that dithering makes the final sound more palatable to the ear.
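As a small illustration of that dithering step, here is a sketch (an editorial addition with a made-up filename; it assumes NumPy plus Python's standard wave module) of the TPDF dither-then-round operation a DAW performs when rendering a float mix to a 16-bit WAV:

```python
# Render a floating-point mix to a dithered 16-bit mono WAV file.
# The "mix" here is a stand-in 440 Hz tone; in a real render it would
# be the DAW's internal floating-point output.
import wave
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
mix = 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # stand-in mix

# TPDF dither: +/-1 LSB of triangular noise added before rounding, so the
# quantization error is decorrelated from the music.
tpdf = rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)
pcm = np.clip(np.round(mix * 32767 + tpdf), -32768, 32767).astype('<i2')

with wave.open('render16.wav', 'wb') as f:   # hypothetical output path
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 2 bytes per sample = 16-bit PCM
    f.setframerate(fs)
    f.writeframes(pcm.tobytes())
```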



The quality of your recording setup, your equipment, and your personal ability matters far more to how the final sound is perceived than anything above CD quality in your rendered product.






share|improve this answer
edited May 26 at 15:28

























answered May 26 at 2:18









hexagodhexagod

554 bronze badges
















  • Comments are not for extended discussion; this conversation has been moved to chat.

    – Doktor Mayhem
    May 29 at 22:27





































