Abbreviation for 24 bits, 96 kilohertz. Refers to a particular resolution and sampling rate often found in very high-end audio equipment. 24/96 has the potential for absolutely stellar audio quality, compared to CDs, which "only" sample 16 bits at 44.1 kilohertz.

The benefits of 24/96 recording are actually quite subtle and apply mainly to recording artists and mixing engineers. In fact, as we shall see, for the end listener it is decidedly overkill.


The Rationale Behind 24 bits

16 bits was chosen as the resolution for the CD-DA format because it provides a 96 decibel signal-to-noise ratio, just about spanning the dynamic range of human hearing (that is, a single CD could reproduce the loudest and the quietest sound you could perceive without alteration of the volume knob). This is fine if the recording is "hot" (mixed as loud as possible without clipping), as all the bits are then used. This is called "saturating the bits", and nearly all pre-recorded CDs are mastered this way.

Unfortunately, one doesn't always have that kind of control when recording stuff live. Musicians can be a spontaneous and unpredictable bunch, and keeping the volume juuust loud enough (but not too loud) can be rather a challenge with a budding guitar hero thrashing wildly at the strings during a solo. Digital clipping is ugly, so to prevent any nasty clicks and pops in the audio stream it is a good idea to err on the side of quietness. 24 bits give 144 decibels of signal-to-noise, many times more than could ever be achieved in the real world1. This grants the recording engineer the freedom to record the sound at a lower volume, then digitally amplify it as they see fit later on without loss of dynamic range (especially since it will eventually be downsampled to 16 bits for a CD anyway).
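The figures above follow directly from the bit depth: each extra bit doubles the number of quantization levels, adding about 6 decibels of dynamic range. A quick sketch of that arithmetic (the function name is my own, for illustration):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Peak signal-to-quantization-noise ratio, in decibels, for an
    integer PCM format: each bit doubles the levels, so 20*log10(2^bits)."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~96 dB, the CD figure
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~144 dB
```

The 48 dB gained by the extra 8 bits is exactly the headroom the engineer spends by recording quietly.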


The Rationale behind 96 Kilohertz

The Nyquist Theorem states that the maximum frequency a digitally sampled waveform can reproduce is one-half the sampling rate. For CDs, this is 22.05 kilohertz, several kilohertz higher than the highest-pitched noise a human ear can theoretically perceive. So far, so good, right? Unfortunately, if you try to record anything containing frequencies that exceed this "Nyquist frequency" (such as cymbals with many harmonics), the sound will "wrap around" into the lower part of the spectrum and become audible again as lower-pitched noise. This is called "aliasing", and to prevent it, it is necessary to filter all sound through an analog "lowpass filter" to remove any frequencies too high for the digital equipment to handle.

Obviously, one would prefer that such a lowpass filter remove all unwanted frequencies while leaving the desired ones untouched; but unfortunately, analog equipment is never so perfect, and instead of a sharp cutoff it is more common to see a gradual2 reduction in volume as the frequency climbs, with some distortion. This means that to completely eliminate all frequencies too high to reproduce, one must set the cutoff point fairly low, possibly extending the distortion into the audible range. Recording at 96 kilohertz puts the Nyquist frequency at a whopping 48 kilohertz, giving a very large margin of error for the lowpass filters. Any distortion incurred will be well out of audible range, and will be eliminated completely when the recording is downsampled to CD quality (since a CD would be incapable of reproducing it).
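The "wrap around" is just mirror folding about the Nyquist frequency, and it can be sketched in a few lines (the function name and test frequencies are my own, not from any audio library):

```python
def alias_frequency(f, sample_rate):
    """Frequency at which a pure tone of frequency f actually appears
    after sampling: fold f into the range [0, sample_rate / 2]."""
    nyquist = sample_rate / 2
    f = f % sample_rate        # sampling cannot distinguish f from f mod fs
    if f > nyquist:
        f = sample_rate - f    # frequencies above Nyquist mirror back down
    return f

# A 30 kHz ultrasonic tone, inaudible in the original signal:
print(alias_frequency(30_000, 44_100))  # 14100 -- an audible alias at CD rate
print(alias_frequency(30_000, 96_000))  # 30000 -- above Nyquist nowhere, no alias
```

This is why the ultrasonic content must be filtered out before sampling: once folded down to 14.1 kHz, it is indistinguishable from a real 14.1 kHz tone.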


The Irrationale for 24/96

There is also a school of thought that insists that the human ear is more sensitive than we give it credit for, and that even sub-audible noise floors and ultrasonic frequencies do make a difference to our emotional response, despite the fact that we cannot consciously perceive them. Its proponents go on to argue that this is therefore a case for widespread adoption of 24/96 as an end-user format. I don't know if there's any weight to their claims, but the decidedly lackluster sales of the rival pioneering high-definition audio formats DVD-A and SACD indicate that consumers have decided it isn't worth it. Indeed, the industry is in the midst of learning a hard lesson: in a war between high-definition formats, nobody wins.


Caveat Emptor

You will see more and more consumer equipment nowadays jumping on the bandwagon and claiming 24/96 quality as a selling point; while it may certainly be true "by the numbers", much of it is so low-quality in all other respects that it doesn't deserve the title. Why, virtually all computer motherboards come with "24/96 5.1 Surround Sound" onboard soundcards nowadays, yet fail to achieve even cassette tape quality in terms of their actual performance! Similarly, many DVD soundtracks are encoded at 96 kilohertz, yet are compressed with a perceptual codec that automatically drops the very high frequencies 96 kilohertz sampling exists to capture! For shame.

In any case, they're missing the point. The average Joe doesn't need 24/96 equipment. He needs a well-recorded CD that sounds good no matter what he plays it on. The true purpose of 24/96 is to help recording engineers give it to him.



1No equipment truly has the signal-to-noise ratio that 24 bits would imply. At very high s/n ratios, even the movement of individual molecules in a single component becomes a factor.

2Generally, the steeper the cutoff, the more expensive the equipment.



BaronWR says re 24/96: Nice write-up: a few other points 1) depressingly, the eternal conflict to have your track appear "louder" when played over the radio means that many recorded CDs are mixed in a way that creates clipping 2) I think the argument with keeping low frequency sound is that while you can't hear the base frequency, you can hear its harmonics (although this is a moot point as most speaker systems can't reproduce that range of sound) 3) Blu-ray has now won...

I say: 1) This is true, and reprehensible. However, in all forms of recording (not just digital) it is a good idea to record at as high a volume as possible to make full use of the s/n ratio. 2) Not sure I get you... nobody ever argued against keeping the low frequencies, it's the high ones that get dropped... 3) How many Blu-Ray discs do you own? Blu-Ray's victory is looking decidedly Pyrrhic. Truth of the matter is, format wars are only meaningful when consumers are demanding the product - with VHS and Betamax, there was no prior consumer-level video recording technology. Right now though, everybody is pretty much happy with DVD.

An update, on a related note (no pun intended). I was reading the manual to sox, an audio conversion program for unix, when I read this:

Playing an audio file often involves re-sampling, and processing by analogue components that can introduce a small DC offset and/or amplification, all of which can produce distortion if the audio signal level was initially too close to the clipping point.

For these reasons, it is usual to make sure that an audio file's signal level does not exceed around 70% of the maximum (linear) range available, as this will avoid the majority of clipping problems.

So yet another reason why having overhead in dynamic range is beneficial.
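That 70% guideline costs only about 3 decibels of the available range, which the arithmetic makes plain (a sketch of my own, not from the sox manual):

```python
import math

def headroom_db(peak_fraction):
    """Decibels of headroom left when peaks reach the given
    fraction of full scale."""
    return -20 * math.log10(peak_fraction)

print(f"{headroom_db(0.7):.1f} dB")  # ~3.1 dB given up by the 70% rule
```

At 16 bits that 3 dB comes straight out of a 96 dB budget; at 24 bits, with roughly 144 dB to spend, the loss is irrelevant.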
