TurnTables Sound better than Digital !!! - Really ???

Adding to my previous post:

In class A amps, a constant current, set by the bias point, flows through the output device at all times.

The DC current that flows through the output device while the amp is idling gets modulated by the incoming signal's waveform in its entirety, turning it into an AC current that drives the voice coil, which in turn moves the cone or membrane to reproduce the original music.
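If it helps to picture it, here is a toy numerical sketch of that idea (all values are illustrative assumptions, not from any particular amp):

[CODE]
import numpy as np

# Illustrative class A stage: 2 A idle (bias) current, modulated by a 1 kHz
# signal whose peak swing stays below the bias current.
fs = 48_000                                     # simulation sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)                  # 10 ms of time
i_bias = 2.0                                    # quiescent current, A
i_signal = 1.5 * np.sin(2 * np.pi * 1_000 * t)  # audio modulation, A peak

i_device = i_bias + i_signal                    # total current: DC bias + AC signal

# Class A: the device conducts over the whole cycle, the current never hits zero.
assert i_device.min() > 0.0
print(f"device current swings {i_device.min():.2f} A to {i_device.max():.2f} A")
[/CODE]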
 
This is another interesting article. No doubt about all the compromises one had to make with vinyl, which are neatly listed in the article. I personally would never go back to it, having heard the best of digital playback. However, sadly, there are reasons that modern CD recordings are also compressed - for commercial play...
A Recording Engineer's Plea for Dynamic Range
 
Why are you using the word approximate with only digital? Both vinyl and digital are approximate. How about getting some real numbers for the distortion+noise of the analog chain (analog master -> vinyl -> phono out) vs the digital chain (analog master -> CD/SACD -> DAC out)? Making claims about a DFT/iDFT destroying your music without specifying the quantity of the change is pointless.
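For what it's worth, the DFT/iDFT round trip itself is easy to quantify. A quick sketch (my own, using NumPy's FFT as a stand-in):

[CODE]
import numpy as np

# A DFT followed by an inverse DFT returns the original samples to within
# floating-point precision, so any claim of audible "destruction" at this
# step needs a number attached to it.
rng = np.random.default_rng(0)
x = rng.standard_normal(44_100)               # one second of arbitrary samples
x_roundtrip = np.fft.ifft(np.fft.fft(x)).real
print(f"max round-trip error: {np.max(np.abs(x - x_roundtrip)):.2e}")
# on the order of 1e-13, i.e. far below anything audible
[/CODE]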



It is not just an engineering problem. It is backed by pretty solid theory.



Thanks for the info. I will take a look at libsamplerate. Still unsure about the DFT part, but I need to read up.

Dynamic Comparison of LPs vs CDs - Part 4 — Reviews and News from Audioholics

There you go - an unbiased quantitative comparison between vinyl, CD and high-res audio.
 
I am not going to contest that digital (even in theory) is an approximation, primarily because it doesn't satisfy the criteria of the sampling theorem. The sampling theorem only holds for band-limited signals. One wouldn't know where to put the anti-aliasing/imaging filters if the signal were not band-limited. A 5-minute song is time-limited => not band-limited.

It is also a fact that "the loss" in the digital medium is a one-time phenomenon. After a signal has been approximated, it never changes on the storage medium.

My conclusion is that digital suffers from a single (and predictable) flaw (loss in rounding/approximation), but analog suffers from a number of flaws.

From my reading of responses on this thread (I've just quoted two but there are others), I feel there may be confusion among us about where in the entire sampling process the "loss" in signal information actually occurs. So I will try to explain my understanding of what the sampling theorem is, in an effort to reconcile these differences, and learn something myself in the process.

The Nyquist-Shannon sampling theorem states (for a sine wave with frequency f) that if you sample this signal (i.e. pick points (samples) off this signal at periodic time instants) at a frequency greater than twice the signal frequency f, you will lose NO information about the original signal and will be able to reconstruct it faithfully again from ONLY the samples you have and knowledge about f. The details of the theorem also describe the process of reconstruction i.e., the mathematical operations needed on the samples to obtain the original sine wave in the time domain. There is NO ambiguity about this; the mathematics are well-defined and consistent.
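For reference, the reconstruction the theorem promises is the standard Whittaker-Shannon interpolation formula (with sample period T = 1/f_s, and f_s greater than twice the highest signal frequency):

$$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right)$$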

The theorem extends to any arbitrary signal that is BAND-LIMITED i.e., has information up to a maximum frequency f_m and no information beyond this frequency. This works because Fourier proved that any periodic signal can be composed by superimposing sine and cosine signals whose frequencies are integer multiples of a fundamental (called harmonics). For non-periodic signals, the Fourier Transform is the limiting case of the Fourier series; both are mathematically precise and consistently defined. Hence, for any arbitrary signal that contains infinite frequencies, one needs infinitely many sine/cosine components to create it. But for any arbitrary signal that is band-limited we need a limited/finite number of sine/cosine components to create it. And that is the "trick" behind why we can recreate a band-limited (periodic or non-periodic) analog signal only from its samples.
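In symbols, the Fourier series of a periodic signal with fundamental frequency f_0 is

$$x(t) = a_0 + \sum_{k=1}^{\infty}\left[a_k \cos(2\pi k f_0 t) + b_k \sin(2\pi k f_0 t)\right]$$

and saying the signal is band-limited to f_m simply means every term with k f_0 > f_m has a zero coefficient, leaving a finite sum.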

In essence, the knowledge that there is no information beyond a certain frequency (in the "frequency domain" perspective) helps us to "compensate for" the lack of information in the samples (in the "time domain" perspective). This statement is a very loose way of condensing the precise mathematics of the theorem but it is reasonably apt for someone who wants to intuitively understand where the infinity of one representation (the analog signal) is being accounted for/balanced in the other representation (frequency spectrum of the signal).

So, if we assume that audio signals for music are band-limited to 20 Hz - 20 kHz (by relying on tests and measurements from various studies/labs/people that humans cannot hear beyond 20 kHz, even with golden ears or implants), then the sampling theorem IS completely applicable to audio signals in the real world. With my description so far, there has been no approximation encountered in the sampling process.

So where are all the places that approximations to the theorem manifest themselves in a real-world sampling operation on actual audio signals? The first place is the operation of sampling itself. The theorem assumes that the sampler is modeled by a Dirac-Delta function (a pulse of infinitesimal duration). This function is a mathematical entity and cannot exist in the real world in its ideal form. However, we can and have come extremely close in approximating it, because today's electronics can create pulses nanoseconds to microseconds in duration, which is "good enough" for sampling operations. This approximation does not have a measurable detrimental impact on the samples themselves. In fact, such approximations (creating a signal of very small duration or magnitude with respect to the rest of the system to approximate the ideal infinitesimal signal) are done all the time in micro-electronics, including assuming the transistor base current I_b to be zero for small-signal transistor models that many engineers still use as a first phase of design.
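In symbols, ideal sampling multiplies the signal by a train of Dirac impulses spaced T seconds apart,

$$x_s(t) = x(t)\sum_{n=-\infty}^{\infty}\delta(t - nT)$$

and a real sampler replaces each impulse with a pulse of small but non-zero width, which approaches the ideal as the pulse narrows.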

The next approximation is more significant because it does have an impact on the fidelity of the information captured in the samples. And this is the fact that any digital process has finite word-length. What this means is that any data captured in any digital device (including the computer, which is based on binary, and hence digital, logic) can be represented by only a certain number of electronic states. In the computer, these are in terms of the number of bits contained in each "word" of information in memory/storage. This is where the 16-bit word of CD-based audio comes from. And any finite number of bits cannot represent the mathematics (infinity) of real numbers, which is what the sampling theorem assumes. So there is definitely loss of information in EACH sample about the precise magnitude/value of the original signal from which the sample was taken, at each time instant.
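A quick numerical sketch of that rounding loss (the parameters are my own illustrative choices): quantising a full-scale sine to 16 bits and measuring the signal-to-quantisation-noise ratio lands close to the textbook 6.02N + 1.76 dB figure, about 98 dB for N = 16.

[CODE]
import numpy as np

# Quantise a full-scale sine to 16 bits and measure the SQNR.
fs, f, n_bits = 48_000, 997.0, 16
t = np.arange(fs) / fs                       # one second of samples
x = np.sin(2 * np.pi * f * t)                # "infinite precision" signal
step = 2.0 / (2 ** n_bits)                   # quantiser step over [-1, 1)
xq = np.round(x / step) * step               # finite word-length version

noise = xq - x                               # per-sample rounding error
snr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
print(f"measured SQNR: {snr_db:.1f} dB (theory ~{6.02 * n_bits + 1.76:.1f} dB)")
[/CODE]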

Does this cause a significant degradation in the fidelity of the sampled representation? It depends on whom you speak to. Some folks think that 16 bits is enough to capture the dynamic range of today's music. Others feel 24 or even 32 bits should be the standard. However, don't be fooled into thinking that capturing the audio signal with an analog process is free from this loss of signal fidelity either. As Thad, ReignofChaos, ThatGuy, Ranjeetrain, and others have admirably pointed out the deficiencies in the analog capturing process, I will just reiterate one example: the grooves in an LP are of finite depth and width too! Which means, again, that the precise value of the magnitude of the original audio signal is being recast into the range of values that can be represented by the finite dimensions of the vinyl groove. Both processes approximate the original signal, one in terms of the bit-depth of the samples captured, the other in terms of the physical dimensions of the LP grooves or the resolution of the magnetic particle density that exists on the master analog tape. Summary message: The act of recording is itself an approximating process, whether analog or digital.

Once the signal is stored in the computer as a series of bits (now with reduced fidelity because of the limited word length per sample), this stored signal can be used to completely and faithfully reproduce the original analog signal, according to the sampling theorem (and in reality), taking into account again that the magnitude of the output signal may not be the exact value of the original signal (to repeat, because of the limit of the finite word-length representation). At this point, except for the "rounding error" (which is what I assume Ranjeetrain means in his post?) this signal contains ALL the information in the original analog signal. There is NO loss in this representation. I am repeating the same statement to drive home the point, so apologies to those who've already grasped it :|


So how do we recover the original signal from the sampled version? This is where the biggest approximation to the sampling theorem takes place. And this requires a bit of mathematics to understand completely. I will try to give an intuitive description; the mathematics are well described in any signal processing textbook (see sections on anti-aliasing and sampling reconstruction or similar... or use Google). The sampled signal, by the very nature of the sampling operation, has frequency components that contain the original signal's frequency components, plus periodically repeating harmonics i.e., higher-order frequency components that are an artifact of the sampling process. We don't want these to be reproduced in the reconstructed signal since they are added (wrong) information. What we want is only to extract frequency information up to what is contained in the original signal. And how do we know where to cut off the frequency?
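(As an aside for the mathematically inclined: those repeating components are images of the baseband spectrum. Ideal sampling at rate f_s = 1/T gives

$$X_s(f) = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(f - k f_s)$$

i.e. copies of the original spectrum X(f) centred at every multiple of the sampling frequency.)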

That is why the sampling theorem assumes BAND-LIMITED input signals! We know that the original signal did not have any frequency content after f_m (the maximum frequency contained in the signal). Hence, if we design a low-pass filter that only allows frequencies up to f_m to go through (also called an anti-aliasing filter, as mentioned by ThatGuy), and pass the analog signal reconstructed from the samples through this filter, the output signal will contain only the frequency content of the original signal. And we have recovered exactly what we sampled, because the relation between the Fourier transform of a signal and the signal itself is one-to-one. In other words, given the frequency content of a signal, only that unique signal corresponds to that frequency content. So where is the problem?

The problem is the filtering operation. A perfect low-pass filter (a so-called "brick-wall" filter) is again a mathematical idealisation. It is not possible for electrical devices to suddenly cut off frequencies after a certain maximum; this would require a discontinuous frequency response, which is unimplementable in its ideal form. As engineers, we have come very close to the ideal, just like in the Dirac-Delta function approximation, but we have to be honest and say that this is an approximation to the mathematics of the theorem. How approximate? It depends on the implementation of the filter. And that is where different designers can work their magic and experience in filter design, noise reduction et al. And as Asit had pointed out a while back, "ringing" is a very real and problematic phenomenon at this stage as well. I am not going into the details of any of these only because this post is trying to clearly define the points where approximations are present in the digital process, not the intricacies of what the various problems in approximations are.
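To make the trade-off concrete, here is a minimal sketch of the standard practical stand-in for the brick-wall filter, a windowed-sinc FIR (the cutoff, length and window are my own illustrative choices):

[CODE]
import numpy as np

# Windowed-sinc FIR low-pass: truncate the ideal (infinite) sinc impulse
# response and taper it with a window -- exactly the approximation at issue.
fs, f_cut, n_taps = 44_100, 20_000, 255
n = np.arange(n_taps) - (n_taps - 1) / 2
h = (2 * f_cut / fs) * np.sinc(2 * f_cut / fs * n)  # truncated ideal response
h *= np.blackman(n_taps)                            # window tames the ringing
h /= h.sum()                                        # unity gain at DC

# The roll-off is steep, but not the vertical wall the theorem assumes.
H = np.abs(np.fft.rfft(h, 8192))
freqs = np.fft.rfftfreq(8192, 1 / fs)
idx = np.argmin(np.abs(freqs - 22_050))
print(f"gain at 22.05 kHz: {20 * np.log10(H[idx] + 1e-12):.1f} dB")
[/CODE]

More taps buy a narrower transition band and a deeper stopband, at the cost of delay and computation, which is exactly the designer's playground mentioned above.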

Another way to think about the approximation is that the Inverse Fourier Transform of an ideal low-pass filter (frequency domain perspective) is a sinc function (time domain perspective). In the time domain, this sinc function has to be "convolved" (another mathematical operation) with the sampled signal to get back the original signal. So where is the problem? Well, the sinc function is again a mathematical idealisation: it has a main lobe and infinite side lobes that decay over time. Yes, infinite. And this is where we have traded the infinity neglected in the sampling process by reintroducing it in the reconstruction process. So, you see, the infinity of the real-number representation never goes away; we just juggle it from one stage to another. In any case, a real-world sinc signal is extremely difficult to create. I said the same thing a few sentences back when I said that an ideal low-pass filter is extremely difficult to implement. The time and frequency domain perspectives are just two ways of looking at the same thing.
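That trade-off is easy to see numerically. A sketch (my own illustrative numbers) that reconstructs off-grid values of a sampled sine with a truncated sinc kernel:

[CODE]
import numpy as np

# Whittaker-Shannon reconstruction with a truncated sinc kernel: the error
# shrinks as more of the (in principle infinite) side lobes are retained.
fs, f = 44_100, 1_000
n = np.arange(256)
x = np.sin(2 * np.pi * f * n / fs)            # the samples
t = np.arange(64) / 64 + 96                   # off-grid instants, mid-signal
for half_width in (4, 16, 64):                # side lobes kept on each side
    xr = np.zeros(len(t))
    for i, ti in enumerate(t):
        k = np.arange(int(ti) - half_width, int(ti) + half_width + 1)
        xr[i] = np.sum(x[k] * np.sinc(ti - k))
    err = np.max(np.abs(xr - np.sin(2 * np.pi * f * t / fs)))
    print(f"+/-{half_width:3d} side lobes -> max error {err:.2e}")
[/CODE]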

I hope this post makes the approximations to the sampling theorem in the real-world digitisation process a little clearer. If any of my assumptions or explanations are inadvertently in error, I will be glad to understand and correct them, and also learn something new in the process.

-Jinx
 
Jinx, a pleasure to read this post. Must read it again after some time. Keep going.

Meanwhile, the ever-vigilant Ranjeetrain has missed a crucial vulnerability of vinyl ... which is that unless you use a linear tracking arm, other than at two null points, there is a definite departure from perfect alignment of the cartridge stylus. The two null points represent perhaps 60 seconds of listening in a 20-minute side of an LP. However, the cutting stylus must have been perfectly aligned throughout.

Captain, I don't quite follow what you have written, and can sense that I have written something which you have not understood either, so, limited by my comprehension in one case and my eloquence in the other, I have concluded that at least we are perfectly placed, symmetrically, with respect to each other, unlike carts and needles. However, the broad point you are making seems to be this... class B improvements have managed to largely reduce the distortion supposedly inherent in class B. However, that puts the onus for explanations on the many audiophiles who are then wastefully chasing the gains of a pure Class A, ignoring the efficiency and output sacrifices they need to make. Time for them to pipe up... But I am grateful for the link you posted. It is very well written.
 
There you go - an unbiased quantitative comparison between vinyl, CD and high-res audio.

Interesting read of course, but I just wanted to point out two things:

1. The article was written in 2004 and digital audio processing has progressed since then, particularly in terms of jitter reduction.
2. Using heavily compressed pop CDs as an example to judge the medium as a whole may be slightly unfair. In an earlier post I did mention this was really one of the big downfalls of digital recording that has always given it a bad rap.
 
Take an analog signal and pass it through a good quality ADC and DAC. Feed both signals to a comparator circuit (opamp-based should do). If you get an output (or should I say, a significant output), you know that Nyquist is lying:lol:.
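That null test is easy to rehearse in software first. A sketch that idealises the ADC/DAC round trip as 16-bit rounding of an in-band tone (a deliberate simplification):

[CODE]
import numpy as np

# "Comparator" output = difference between the original and the round-tripped
# signal. It is non-zero, but about 92 dB below this half-scale test tone.
fs = 44_100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)       # in-band test tone
xq = np.round(x * 32767) / 32767              # idealised 16-bit ADC -> DAC
snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean((xq - x) ** 2))
print(f"comparator output is {snr_db:.1f} dB below the signal")
[/CODE]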
 
Regarding audio amplification loss, I think Captain is right. Audio amplification is nothing but transistors modulating a large power supply reservoir*, directed by the small AC signals fed to them, which are generated by cartridges/magnetic tapes/DACs. There should be no loss of the original AC millivolts representing the music during amplification. Only the inherent output-device or crossover distortion arises, which gets mixed into the signal. There may be other distortions that I don't know of. Hope I am right :eek:
* which is needed to move the speaker cone.
Regards
 
Interesting read of course, but I just wanted to point out two things:

1. The article was written in 2004 and digital audio processing has progressed since then, particularly in terms of jitter reduction.
2. Using heavily compressed pop CDs as an example to judge the medium as a whole may be slightly unfair. In an earlier post I did mention this was really one of the big downfalls of digital recording that has always given it a bad rap.

And it seems to have much more to do with the application of compression by mastering engineers. He drops one word, "singles", which shows that he is talking about something that goes way back into pre-digital days.

Jinx, you have the knack of making this stuff almost comprehensible to non-mathematicians. That is a really great, informative and balanced post, which really deserves to be called an article. Many thanks :)
doors666 said:
Take an analog signal and pass it through a good quality ADC and DAC. Feed both signals to a comparator circuit (opamp-based should do). If you get an output (or should I say, a significant output), you know that Nyquist is lying
You mean, one approximation will not exactly equal another approximation? Almost certainly you are right! One can immediately say that you are, if we are digitizing at 16/44.1, because everything above 21 kHz will be removed. Whilst you probably won't be able to hear that anyway, the result of subtracting one wave from the other will not be zero. Thus your "point" will be "proved."

On this one, I'm sidling over to the trust-your-ears camp ;). The test has been done --- and can be done any time anyone feels like it. Of course, it requires some investment in black cloth to be a proper test, but, nonetheless, I have never noticed anything inferior in digitized-LP sound except when so much noise correction has had to be applied that the signal is noticeably degraded too. But, in that case, a) I don't really call it a successful digitisation (perhaps someone skilled could do better) and b) the original LP is not really wonderful to play anyway.

It might be interesting, if such testing were to be set up (Can HiFiVision replicate the Matrix test?), to invite some children to join the listening test, simply on the basis that they might actually be able to hear that >21 kHz content.
 
Thanks Ajinkya, for the great post.

So, if we assume that audio signals for music are band-limited to 20 Hz - 20 kHz (by relying on tests and measurements from various studies/labs/people that humans cannot hear beyond 20 kHz, even with golden ears or implants), then the sampling theorem IS completely applicable to audio signals in the real world. With my description so far, there has been no approximation encountered in the sampling process.

IIRC, mathematically, any signal that is time-limited cannot be band-limited. If we are sampling a 3:05 song, then at least theoretically there must be some approximation somewhere, however minute.
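The standard textbook illustration of that fact: a signal strictly confined to a duration T has a spectrum that decays but never becomes identically zero beyond any frequency. For the rectangular pulse,

$$\operatorname{rect}\!\left(\frac{t}{T}\right)\ \longleftrightarrow\ T\,\operatorname{sinc}(fT)$$

so a truly time-limited recording always has some (vanishingly small) energy outside any finite band; in practice it is the anti-aliasing filter that makes it band-limited.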

The deviation may be very small, but it would be good if we could get some rigor in. This has been playing on my mind and I have not been able to find a clear answer.

... Inverse Fourier Transform of an ideal low-pass filter (frequency domain perspective) is a Sinc function (time domain perspective). In the time domain, this sinc function has to be "convolved" (another mathematical operation) with the sampled signal to ...

Thanks for the reminder that the sinc *is* the ideal low-pass filter's impulse response. I really need to do some reading.
 
Interesting read of course, but I just wanted to point out two things:

1. The article was written in 2004 and digital audio processing has progressed since then, particularly in terms of jitter reduction.
2. Using heavily compressed pop CDs as an example to judge the medium as a whole may be slightly unfair. In an earlier post I did mention this was really one of the big downfalls of digital recording that has always given it a bad rap.

If the original intent of the experiment was to compare the capabilities of the two mediums, then what you read was a very poorly designed experiment. The conclusions one can draw from an experiment are limited by many factors, some of which are

- accuracy of the measuring instruments
- the variability of the factors which can affect the output (in this case, say mastering of the disks)
- the design of the experiment itself
...

What the gentleman did fails on multiple counts, some of which you have rightly pointed out.

Unfortunately, in the days of the Internet, it doesn't take much for an easily refutable idea to become an enduring myth. One random person posts some data and the believers start quoting him as gospel.

So what are the options for someone who wants to *really* know? Well, we wait till some researcher with resources decides to do a real experiment. Till then, I prefer my ignorance to answers which could possibly be wrong.
 
the broad point you are making seems to be this... class B improvements have managed to largely reduce the distortion supposedly inherent in class B. However, that puts the onus for explanations on the many audiophiles who are then wastefully chasing the gains of a pure Class A, ignoring the efficiency and output sacrifices they need to make. Time for them to pipe up... But I am grateful for the link you posted. It is very well written.

I hope you meant class AB here.

I had a problem with the claim in your post (quoted below) that the "full input signal is not processed", which IMO is not correct.
most of us use Class B or Class AB Amplifiers, which are anyway lossy, as the full input signal is not processed.

Having said that, I definitely do not believe that those who are "chasing the gains of pure Class A" are running after a lost cause; in spite of the advances made in the Class AB topology, it still can't quite match the sonic superiority of well-designed Class A amps, IMO. Incidentally, I'm also on the list of those "chasing the gains of pure Class A", building the Pass DIY F5 Turbo.

But I am grateful for the link you posted. It is very well written.

C'mon GTM, you don't have to be "grateful" for just providing a link. :ohyeah:
 
Unfortunately, in the days of the Internet, it doesn't take much for an easily refutable idea to become an enduring myth. One random person posts some data and the believers start quoting him as gospel.

Quite true. I've lost count of the number of times the controversial Peter Aczel has been quoted.
 
I like the premise of this rock album. It posits that there are three sides to every story - yours, mine, and the truth.

[album cover: Extreme, "III Sides to Every Story"]
 
As long as these believers have actually listened to the setups rather than simply forwarding someone else's opinion. :eek:; just like we forward a humorous email.:p

Well... one may hear a setup that sounds inferior or superior to one's personal benchmark within one's head, but the conclusions about why it sounds superior or inferior may be mistaken, or plain wrong, or based on half-truths that cannot be proven scientifically or are beyond one's abilities to prove.

--G0bble
 
^ Quite true @ G0bble ... that is what usually happens most of the time. It's more a case of an individual clinging to a belief and an unconscious determination to believe what he likes/wants to believe.
 
As long as these believers have actually listened to the setups rather than simply forwarding someone else's opinion.
+1:thumbsup:
one may hear a setup that sounds inferior or superior to one's personal benchmark within one's head, but the conclusions about why it sounds superior or inferior may be mistaken, or plain wrong, or based on half-truths that cannot be proven scientifically or are beyond one's abilities to prove.

Though true to some extent, IMO it is far better than reading up something on the net and believing it; not only believing it himself but also trying to push it onto others.
 