TurnTables Sound better than Digital !!! - Really ???

If class AB were so imperfect, people would not pay thousands upon thousands for these amps. The fact is that even the high priests of the business offer ridiculously expensive class AB amps that perform on par with, or better than, a number of pure class A amps.

If pure class A were the holy grail it is being portrayed as here, all the expensive and super-expensive amps would simply be class A all the way. But being class A is no guarantee of being a good amp. There are as many bad class A amps as there are good class AB amps. Argue the theoretical advantage of class A as much as you want; the fact remains that a well-implemented class AB amp sounds as good as a class A amp. As a matter of fact, some of the most reputed amps these days are class AB designs.

Good comment.... +1
 
I'm confused about this. If we cannot hear them as individual frequencies, how can we hear them as harmonics, especially as those are going to be at much lower levels than the primary frequencies? Or is it that the presence of something we can't hear influences and changes something that we can hear? And if that is the case, why is that sound not already thus changed by the time it arrives on our CD recording, which has been filtered above 21 kHz?

If you have the patience to explain to a non-mathematician, please!

Sorry for the late response. I have been seriously tied up ever since I wrote that long post.

That's a really smart question, and the answer is not easy, because as far as I can figure out, not everything is known about the psychoacoustic properties of the ear and the brain. Again, this is a subject of research, although I feel not a lot has been done; but I may be wrong, because this is a domain far away from my own areas.

It has been proven beyond reasonable doubt that many instruments have harmonics far above 20 kHz, some carrying a significant fraction of the energy. For example, let us look at a paper by James Boyk of the California Institute of Technology (Caltech): There's life above 20 kilohertz! A survey of musical instrument spectra to 102.4 kHz. I am really impressed by the pains taken by him and his group of students to correctly determine the spectra of a variety of instruments, taking into account all kinds of conceivable corrections. You can find out what these guys do at this link: Caltech Music Lab Home Page

Obviously, the interesting question is: So what? Can we hear the harmonics above 20 kHz or whatever the personal upper limit?

This issue is discussed in section X (Significance of the results) of James Boyk's article. Boyk gives references to other scientific work in which people have claimed that the higher harmonics do matter. I can also give a few references where similar claims have been made. But somehow I have a feeling that not a great deal of scientific work has been done in this area.

With the above as background of what we know with some definiteness, let me elaborate on my previous post regarding this matter.

When I said in my previous post that every single musical note (for example the middle C of the piano) comprises many waves, each with a frequency that is an integer multiple of the frequency of the wave with the lowest frequency (called the fundamental frequency), I agree this is, to start with, a very confusing statement. A wave is a periodic pattern in time and space, and the periodic pattern corresponding to a sound (a single musical note in our case) is not the pattern of a sine (or cosine) wave; rather, it is distorted into a different shape by the presence of the harmonics. Theoretically, a periodic shape like that can be expressed as a classical superposition (addition) of the so-called normal modes, each normal mode being a sine wave with its own amplitude and frequency (the fundamental and the higher harmonics). Experimentally too, a spectral analysis can be done, as Boyk and his group did.
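To make the superposition idea concrete, here is a rough Python sketch (the harmonic amplitudes are invented purely for illustration): it builds a "note" from a 261.6 Hz fundamental plus a few integer-multiple harmonics, and a simple FFT then recovers exactly those component frequencies, the same kind of spectral analysis Boyk performed on real instruments.

```python
import numpy as np

fs = 192_000                          # analysis sampling rate (Hz), far above any harmonic used here
f0 = 261.6                            # fundamental of middle C (Hz)
t = np.arange(0, 200 / f0, 1 / fs)    # roughly 200 periods of the note

# invented relative amplitudes of the fundamental and the first few harmonics
amps = [1.0, 0.5, 0.25, 0.12, 0.06]

# classical superposition: add sine waves at integer multiples of f0
wave = sum(a * np.sin(2 * np.pi * (n + 1) * f0 * t) for n, a in enumerate(amps))

# spectral analysis of the resulting wave shape
spectrum = np.abs(np.fft.rfft(wave)) * 2 / len(wave)   # approximate component amplitudes
freqs = np.fft.rfftfreq(len(wave), 1 / fs)

components = freqs[spectrum > 0.03]    # pick out the significant peaks
print(np.round(components, 1))         # ~ [261.6, 523.2, 784.8, 1046.4, 1308.0] Hz
```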

The shape of this wave (actually called a wavepacket) is responsible for the quality (timbre) of the sound produced. Unless the human ear has a spectral analyser built in, there is no reason for the ear to split the received sound into its component sine waves and then act as a filter, so that components above 20 kHz do not pass through, resulting in a change of the timbre. On the other hand, there is audio equipment, like a DAC, which presumably does do this as part of recovering the sampled data, because the sampling theorem works in the Fourier (frequency) space rather than in the time domain. I have to confess I know very little about the ear and how it actually functions, but it seems reasonable to me that it is not an ADC+DAC device, and there is no reason for the ear to go into the Fourier domain.

However, there is a reason for the ear to have a cut-off on the fundamental frequency. The period in space of the wave packet is basically the wavelength (the wavelength of the wave packet is the same as the wavelength of the fundamental sine wave). It is well known that none of our body parts has infinitesimally fine spatial resolution; hence there is a smallest wavelength of the incident wave packet that the ear can be expected to resolve. The frequency is inversely proportional to the wavelength, so there is an upper limit on the fundamental frequency that is perceivable by the human ear.

Unless there is a reason for the frequency domain to enter, a sound is completely described by the wavelength of the wave packet (or, equivalently, the fundamental frequency), its intensity, and the shape of the wave packet (the so-called quality of the sound).

Regards.
 

Please correct me if I am wrong.

So what I understood from your explanation is that assuming the cut off frequency of the human ear is 20khz.. we won't be able to hear notes with a fundamental frequency of above 20khz. But for notes with harmonics extending above 20k.. cutting off signals above 20k will alter the shape of the wavepacket by removing many of the harmonics. That will alter the timbre of the note as perceived by the listener, and hence may alter the actual timbral difference between the same note played by two different instruments.
 
... assuming the cut off frequency of the human ear is 20khz.. we won't be able to hear notes with a fundamental frequency of above 20khz. But for notes with harmonics extending above 20k.. cutting off signals above 20k will alter the shape of the wavepacket by removing many of the harmonics. That will alter the timbre of the note as perceived by the listener, and hence may alter the actual timbral difference between the same note played by two different instruments.

Your understanding is absolutely correct. However, the effect may not be quite as dramatic, because the amplitudes (or the energy content) of the very high harmonics of most musical sounds (such as the human voice) are pretty low. Only for certain instruments (like the cymbal, as shown in Boyk's article) do the very high harmonics carry a significant amount (more than a percent?) of the total energy.
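To put a toy number on this, here is a sketch (with made-up harmonic amplitudes, not real instrument data) of what an ideal 20 kHz cut does to a wave packet whose 5th and 6th harmonics lie above 20 kHz: the shape does change, but for amplitudes this small only by a couple of percent in RMS terms.

```python
import numpy as np

fs = 192_000                        # wide-band sampling rate for the toy example (Hz)
f0 = 4_186.0                        # topmost piano C; its 5th harmonic is above 20 kHz
t = np.arange(0, 200 / f0, 1 / fs)

# made-up amplitudes: most of the energy below 20 kHz, a little above it
amps = [1.0, 0.30, 0.10, 0.05, 0.02, 0.01]     # harmonics 1..6
wave = sum(a * np.sin(2 * np.pi * (n + 1) * f0 * t) for n, a in enumerate(amps))

# ideal (brick-wall) 20 kHz low-pass applied in the frequency domain
spectrum = np.fft.rfft(wave)
freqs = np.fft.rfftfreq(len(wave), 1 / fs)
spectrum[freqs > 20_000] = 0
filtered = np.fft.irfft(spectrum, n=len(wave))

# relative change in the wave-packet shape once the >20 kHz harmonics are gone
change = np.sqrt(np.mean((wave - filtered) ** 2) / np.mean(wave ** 2))
print(f"relative RMS change in waveform shape: {change:.2%}")   # roughly 2% here
```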

Regards.
 
Thank you..
I guess that explains why I feel higher-resolution music has a degree of airiness absent in Redbook copies of the same music. It is most pronounced in the texture of cymbals and similar high-frequency instruments, and less so, if at all, in the lower treble and below.
 
Asit, thank you very much for another big contribution. No way can I understand it at one sitting, but after looking at James Boyk's paper, There's life above 20 kilohertz!, I'm tempted to respond: wouldn't it have been more surprising if there wasn't?

Even though he found significant amounts of energy in the very high frequencies of only a few instruments, it occurs to me that instruments were designed by people, and those people (although they may not have thought in these terms) worked on the sound quality of the audible spectrum. What they did not do was to work on the sound quality of the inaudible spectrum, for the simple fact that they could not hear it.

The past designers of musical instruments did not have spectral analysers (or even microphones!) so, in their manipulation of sonics, they may well, unknown to them, have been making adjustments to supersonics. I wonder if the manufacturers of the present-day models of their instruments do such analysis on their products?

Plainly, if the whole world's sounds stopped at 20 kHz, there would be no point in having the hearing of a dog, let alone a bat, and, thinking 'aloud' here, it then makes sense to suppose that music does not sound the same to a dog or a bat as it does to us.

So, if, accompanied by our dog, we take accurate 44.1k digital recording equipment to a live concert, does the dog say, when we listen to the recording later, "How can you listen to that incomplete rubbish recording?" We, of course, don't notice anything missing when we listen to it.

Infinitely curious!
 
Thad E Ginathom said:
Even though he found significant amounts of energy in the very high frequencies of only a few instruments, it occurs to me that instruments were designed by people, and those people (although they may not have thought in these terms) worked on the sound quality of the audible spectrum. What they did not do was to work on the sound quality of the inaudible spectrum, for the simple fact that they could not hear it.

It looks like you are talking about the pitch of the instrument, which depends on the fundamental frequency.
But what Asit da talked about is timbre, which makes the same note from different sources sound different because of the difference in harmonic content.
 
No, I'm talking about the same: pitch plus harmonics, which, as you say, is what makes sounds different.

A dog hears more harmonics. But does that make the music sound better? It might even make it sound worse!

Another interesting point from James Boyk's paper is that, whilst it may be true that we cannot hear those very high frequencies, that does not mean that we cannot detect them or be affected by them. Note the Gamelan experiment, where subjects flatly denied being able to tell the difference with added ultrasonics, but their EEG traces were different when the ultrasonics were being played.

Now, when I last tried, I absolutely could not tell the difference between digitised music at 44.1k and at 96k, but hey, I recorded at 96k just in case someone listening to my files could.

The last (for now) point of great interest (and some comfort to me) is that even people with hearing loss seem to react to those high frequencies. So yes, the next time I digitise, I'll stick to 96, which is the best my hardware can do. Just in case :)
 
Some of those points might elicit a what-the-hell-have-I-been-trying-to-tell-him. In some instances, fair enough: it is a process of discovery and learning. I still have major reservations about music for bats, but I am always prepared to find out there's more going on than I ever thought :eek:
 
Music for bats .... ROFL!
Them should be hearing 'noise' not music :p
AND bats too are nocturnal. Someone's given them privileged company ;)
 
A dog hears more harmonics. But does that make the music sound better? It might even make it sound worse!

He He! Finally a dog enters our discussion.

Things happen for a reason. When there is no reason, all outcomes are equally probable. You cannot say there is an equal possibility of the proverbial apple going vertically up rather than falling to the ground. There is a reason, and it falls to the ground.

Please look at how the musical scale according to what is known as "Just Intonation" is formed (in contrast to what is known as "Equal Temperament", the keyboard/piano scale). Just Intonation is also called the Harmonic Scale, and the harmonics of the individual notes are the basis of this scale. Different implementations of the harmonic scale (or just intonation) make the melodic scales, known as Ragas, in the Indian music system. I have discussed all this in great detail in some music thread before, and you participated in that thread too.
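For anyone curious, here is a small sketch of how the harmonic (just-intonation) scale compares with equal temperament. The ratio set below is one common textbook choice and the 240 Hz tonic is arbitrary; Indian raga practice uses several such ratio sets, so treat this purely as an illustration.

```python
import math

# One common set of just-intonation ratios for a major scale (other variants exist)
just_ratios = [("Sa", 1, 1), ("Re", 9, 8), ("Ga", 5, 4), ("Ma", 4, 3),
               ("Pa", 3, 2), ("Dha", 5, 3), ("Ni", 15, 8), ("Sa'", 2, 1)]
equal_steps = [0, 2, 4, 5, 7, 9, 11, 12]   # the same degrees in 12-tone equal temperament

tonic = 240.0   # arbitrary tonic frequency (Hz), purely for illustration

for (name, p, q), step in zip(just_ratios, equal_steps):
    just_hz = tonic * p / q                      # note built from small whole-number ratios
    equal_hz = tonic * 2 ** (step / 12)          # the keyboard/piano version of the same degree
    cents = 1200 * math.log2(just_hz / equal_hz)
    print(f"{name:4s} just = {just_hz:7.2f} Hz   equal = {equal_hz:7.2f} Hz   diff = {cents:+6.1f} cents")
```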

Anyway, harmonics are very important for all music; in fact, the more of them, the better. All stringed musical instruments are looked after periodically with great care so that they retain their harmonics. Vocalists train for tens of years to develop richness in the voice, so that when one stays on the tonic one can actually hear other notes which are harmonically or melodically related to the tonic.

Regards.
 
Bros,
The effects of harmonics are a little difficult for me to comprehend, but here are my vinyl-specific thoughts. Vinyl records are cut with a specific bandwidth in mind for the RIAA stage ahead. At the RIAA preamplification stage, attenuation starts even before 20k and becomes very drastic after it, to control surface noise, clicks and pops. Otherwise it would not serve its purpose. Though amplifiers can have enough bandwidth to accommodate frequencies higher than 20k, what happens to frequencies above 20k after the preamplifier stage is a point to ponder.
regards
[Attached image: RIAA equalisation curve]
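For reference, here is a quick sketch of the ideal RIAA playback (de-emphasis) curve computed from the standard published time constants (3180 µs, 318 µs and 75 µs). This is only the basic curve, which keeps falling smoothly above 20 kHz; any steeper ultrasonic filtering on top of it depends on the particular phono stage.

```python
import math

# Standard RIAA playback (de-emphasis) time constants, in seconds
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f, ref=1_000.0):
    """Ideal RIAA playback gain in dB, normalised to 0 dB at `ref` Hz."""
    def mag(freq):
        s = 2j * math.pi * freq
        return abs((1 + s * T2) / ((1 + s * T1) * (1 + s * T3)))
    return 20 * math.log10(mag(f) / mag(ref))

# roughly +19 dB at 20 Hz, 0 dB at 1 kHz, about -20 dB at 20 kHz, still falling above that
for f in [20, 100, 1_000, 10_000, 20_000, 50_000]:
    print(f"{f:>6} Hz : {riaa_playback_db(f):+6.1f} dB")
```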

But I like vinyl sound :)
 
2) However, in the midst of such discussions, there are technical claims made which are dubious and at times plainly wrong. For example, one such notion very often floated is that human beings do not hear above 20 kHz, so there is no need to worry about frequencies higher than 20 kHz.

However, the proof is done with the assumption that the continuous information that is being sampled at discrete points of time extends from the infinite past to the infinite future, while in the case of a music recording, it has only a finite extension in time.

This issue has been specially addressed in this previous post of mine (http://www.hifivision.com/phono-tur...-vinyl-sounds-better-digital-5.html#post37385). For the mathematically minded among you, please go through the post. It's actually very simple, and shows the consequences of having a time-limited signal in simple words.


Finally, let me say that I have recently discovered that the issue of time-limited signals and the sampling theorem is also an active area of research, and people have published recently addressing precisely this issue (references are available with me). I am very happy to note that the general conclusion drawn from these works is that an increase in the sampling frequency will improve matters, something I also concluded after a casual look at the problem long ago (as is evident in some of my posts). But this is something we already know: 24/96 music sounds better than 16/44 music (I have done this experiment myself with my Sony Pro digital recorder, which can record up to 24/96).

Regards

Hi Asit,

Thanks to you (and thatGuy in a previous post) for pointing out the important fact that band-limited and time-limited are mutually exclusive properties for a signal to have. I confess that in my post I had (implicitly) assumed the audio signal to be of infinite duration, and had concentrated on the approximations in the sampling process with respect to the frequency content only; a signal of infinite duration can, by the same implication, be made band-limited. With this assumption (infinite duration and band-limited) the samples do contain all the information that is needed to reconstruct the original signal, as I have mentioned. But you are absolutely right in pointing out that any real-world signal (not just audio) is inherently of infinite bandwidth. I shall edit my post accordingly to include this important approximation to the sampling theorem as well.

After again going through your earlier posts in the other threads, I now realise that this was the point you were trying to make there as well, while I was talking about the frequency spectrum again. So we were both talking about different aspects of the theorem and where the infinity comes in. After re-reading your post, I understand and accept your valid point about the need for infinitely many sampling points of the original signal, in order to reconstruct the analog signal faithfully from the samples. The time-shifted infinite sinc functions of the reconstruction filter are a direct consequence of this phenomenon.
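Just to make this concrete, below is a rough sketch of the Whittaker-Shannon (sinc) reconstruction applied to a deliberately time-limited signal: a 5 kHz tone switched on for only 2 ms (the tone, duration and rates are arbitrary choices for illustration). Because a time-limited signal cannot be band-limited, the reconstruction from finitely many samples is never exact, but the error shrinks as the sampling rate goes up.

```python
import numpy as np

DUR = 0.002            # a deliberately time-limited signal: 2 ms only
F_TONE = 5_000.0       # 5 kHz tone, comfortably below 20 kHz

def burst(t):
    """Analytic description of the time-limited signal (tone gated on for 0 <= t < DUR)."""
    t = np.asarray(t, dtype=float)
    return np.sin(2 * np.pi * F_TONE * t) * ((t >= 0) & (t < DUR))

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon reconstruction: a sum of time-shifted sinc functions,
    necessarily truncated because only finitely many samples exist."""
    n = np.arange(len(samples))
    return np.sum(samples[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

t_fine = np.linspace(0, DUR, 2001)        # dense grid standing in for "continuous" time
original = burst(t_fine)

for fs in (44_100, 96_000):
    t_samp = np.arange(0, DUR, 1 / fs)    # the finitely many sample instants
    rec = sinc_reconstruct(burst(t_samp), fs, t_fine)
    err = np.sqrt(np.mean((original - rec) ** 2) / np.mean(original ** 2))
    print(f"fs = {fs:6d} Hz -> relative RMS reconstruction error = {err:.2e}")
```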

However, band-limiting real-world signals and then using that as a "reasonable" approximation for the sampling theorem is regularly done in all signal-processing applications, as you are well aware. One example is image processing, where any image is of finite resolution and size. The various "kernels" (filters) that image-processing algorithms apply to the image have this approximation built into the mathematical calculations, as far as I know. Similarly, in digital telecommunications, a finite-duration voice signal is still assumed to be band-limited to 0-4 kHz by passing it through an appropriate low-pass filter before sampling, and is then reconstructed at the other end. That does not mean, of course, that the reconstruction is a perfect replica of the original, just that the finite time-duration limitation is mitigated to a large extent by choosing the maximum frequency that is required for the system at hand.
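A toy version of the telephony case might look like this (the "voice" is just three made-up sine components, and the 3.8 kHz cut-off is an arbitrary stand-in for the 0-4 kHz band limit): band-limit with a low-pass first, then sample at 8 kHz.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs_in = 48_000                         # densely sampled source standing in for the "analog" signal
t = np.arange(0, 0.1, 1 / fs_in)
voice_like = (np.sin(2 * np.pi * 300 * t)             # made-up "voice" components
              + 0.4 * np.sin(2 * np.pi * 2_500 * t)
              + 0.2 * np.sin(2 * np.pi * 6_000 * t))   # this one lies above the 4 kHz band

# band-limit to roughly 0-4 kHz with a low-pass filter, as telephony does before sampling
sos = butter(8, 3_800, btype="low", fs=fs_in, output="sos")
band_limited = sosfiltfilt(sos, voice_like)

# then take samples at 8 kHz (every 6th point of the 48 kHz grid)
telephone_samples = band_limited[::6]
print(len(t), "source points ->", len(telephone_samples), "samples at 8 kHz")
```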

All this assumes, in some way, that frequency content (spatial or temporal) above a certain frequency is not considered important enough for the overall system to work. However, one of your points makes a statement about the harmonic content above the fundamental frequency affecting the wave shape, a point I am not entirely clear about. I shall post separately in an effort to understand this better, after reading again through Thad's and your discussion in this thread.

Another interesting point that you bring up is the observation that increasing the sampling frequency in general seems to help matters in the real world. Control engineers know this from (bitter) experience with automotive and aerospace systems, where one routinely samples 10 to 30 times faster than the rate dictated by the fastest time constant of the system. Sampling at only just above the theoretical minimum rate never gives us a digital controller that comes remotely close to the one designed in continuous time. I would be very interested if you could point me to one of those papers where this phenomenon has been mentioned and studied. I feel there is a deeper link between the sampling-theorem behaviour for time-limited signals and digital-controller approximations of analog controllers.
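A minimal sketch of that experience, using a made-up first-order plant rather than a real automotive or aerospace system: a forward-Euler discretisation tracks the continuous-time step response poorly at 2 samples per time constant and far better at 10 to 30.

```python
import math

TAU = 0.010          # plant time constant: 10 ms (a made-up first-order example)
T_END = 0.050        # simulate five time constants of the step response

def worst_case_error(h):
    """Worst mismatch between a forward-Euler discretisation of dy/dt = (1 - y)/TAU
    and the exact continuous-time step response, for sample period h."""
    n = int(round(T_END / h))
    y, worst = 0.0, 0.0
    for k in range(1, n + 1):
        y = y + h * (1.0 - y) / TAU              # discrete update
        exact = 1.0 - math.exp(-k * h / TAU)     # continuous-time answer at the same instant
        worst = max(worst, abs(y - exact))
    return worst

for samples_per_tau in (2, 10, 30):
    h = TAU / samples_per_tau
    print(f"{samples_per_tau:2d} samples per time constant -> "
          f"worst error vs continuous response = {worst_case_error(h):.3f}")
```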
 
In spite of the advances made in Class AB topology, they still can't quite match the sonic superiority of well-designed Class A amps, IMO.
Since some members have brought exceptional and expensive amplifiers into the equation, I'd like to add "similarly priced amps".
Everyone is a believer in something eh ;) And everyone clings to the evidence supporting their beliefs..
It is perfectly alright to have beliefs based on one's own experiences, but questioning clearly perceivable things is what I find despicable; even more so, blindly believing the so-called experts floating around the net is ludicrous.
Class B is also called push-pull configuration.
Not very long ago, even I used to think so, but it is incorrect. Class of operation and power configuration are two separate concepts. Read the link that I've posted in this post.

Quoting from the article:
Let's start by making an important distinction between class of operation and power configuration. These are two separate concepts that describe two different aspects of an amplifier and how it works. Most people mix them together and that only adds to the confusion, even though they are related. So let's try to straighten it out by explaining each one separately. These terms are usually used when describing the power output section of an amplifier because that's where the differences occur.
If class AB were so imperfect, people would not pay thousands upon thousands for these amps. The fact is that even the high priests of the business offer ridiculously expensive class AB amps that perform on par with, or better than, a number of pure class A amps.

If pure class A were the holy grail it is being portrayed as here, all the expensive and super-expensive amps would simply be class A all the way. But being class A is no guarantee of being a good amp.
I don't think anyone is saying Class AB is bad and Class A is the only way to go. A well-designed Class AB amp would be as good as a good Class A amp, but would probably cost more on account of increased design and manufacturing costs. Jls001 has put the point across quite well.
The bulk of R&D effort is on AB and A, so AB has attained a level of performance where it is practically indistinguishable from an A.

I think it is easier to make a Class A amp sound good than a Class AB, because Class A doesn't have to deal with the issue of crossover distortion, whereas a lot of design energy and effort goes into mitigating this issue in AB designs. Some designers do it better than others.
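To illustrate crossover distortion in isolation, here is a toy model, not a real amplifier circuit: an idealised push-pull output stage with a small dead band around zero, with the resulting distortion of a 1 kHz tone measured from FFT bins.

```python
import numpy as np

fs, f0 = 96_000, 1_000.0
t = np.arange(0, 0.1, 1 / fs)          # 100 periods of a 1 kHz test tone
x = np.sin(2 * np.pi * f0 * t)

def output_stage(v, dead_band):
    """Toy push-pull output: each half conducts only beyond the dead band."""
    out = np.zeros_like(v)
    out[v > dead_band] = v[v > dead_band] - dead_band
    out[v < -dead_band] = v[v < -dead_band] + dead_band
    return out

def thd(signal):
    """Total harmonic distortion of the 1 kHz tone, from FFT bin magnitudes."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    fund = spec[np.argmin(np.abs(freqs - f0))]
    harm_bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(2, 10)]
    return np.sqrt(np.sum(spec[harm_bins] ** 2)) / fund

print(f"no dead band (class A-like):       THD = {thd(output_stage(x, 0.00)):.3%}")
print(f"small dead band (under-biased AB): THD = {thd(output_stage(x, 0.05)):.3%}")
```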
Class A brings bragging rights, class AB brings performance.
Now that's what is called a "sweeping statement"
Hmm... if Class A's superiority is not evident, then something is wrong with the system... got it. ;)
You said it ;)
 
Bros,
The effects of harmonics are a little difficult for me to comprehend, but here are my vinyl-specific thoughts. Vinyl records are cut with a specific bandwidth in mind for the RIAA stage ahead. At the RIAA preamplification stage, attenuation starts even before 20k and becomes very drastic after it, to control surface noise, clicks and pops. Otherwise it would not serve its purpose. Though amplifiers can have enough bandwidth to accommodate frequencies higher than 20k, what happens to frequencies above 20k after the preamplifier stage is a point to ponder.
regards
[Attached image: RIAA equalisation curve]

But I like vinyl sound :)

These were exactly my thoughts. Additionally, keeping in mind that the harmonics are 1) essential, 2) can be much higher in frequency, and 3) can be MUCH LOWER in amplitude, how exactly is the cutter making these impressions?

What is the dimension of these extremely closely packed undulations of very small amplitude on the groove? Can dust materially distort the groove and disturb the harmonics coming through? What about after the record has been played 20 times? Also, the room will not have a flat frequency response in terms of its impact on the sound, especially at the highest frequencies.

OT... have a look at this...L'Affaire Belt | Stereophile.com

Before you dismiss him (Peter Belt) as a fool, please read the full article, and the pedigree of the people who buy into this stuff...
 
Please correct me if I am wrong.

So what I understood from your explanation is that assuming the cut off frequency of the human ear is 20khz.. we won't be able to hear notes with a fundamental frequency of above 20khz. But for notes with harmonics extending above 20k.. cutting off signals above 20k will alter the shape of the wavepacket by removing many of the harmonics. That will alter the timbre of the note as perceived by the listener, and hence may alter the actual timbral difference between the same note played by two different instruments.

Your understanding is absolutely correct. However, the effect may not be quite as dramatic, because the amplitudes (or the energy content) of the very high harmonics of most musical sounds (such as the human voice) are pretty low. Only for certain instruments (like the cymbal, as shown in Boyk's article) do the very high harmonics carry a significant amount (more than a percent?) of the total energy.

Regards.

Actually, it's not.
Since all waveforms can essentially be formed by a superposition of sine waves (finite or infinite in number, depending on the Fourier analysis), anyone claiming that the ear cannot detect a sine wave above 20 kHz, BUT is able to hear a difference in harmonic content above 20 kHz, is contradicting himself.

There is nothing magical about harmonics. They exist peacefully, just like fundamental/root notes. It's just that we humans are extremely lazy: we never calculate and talk about all the harmonics; for us, a guitar's open A string = 110 Hz. In reality, this "A" will contain a lot of harmonics, most below 20 kHz, perhaps a few above 20 kHz.

And these harmonics are nothing but sine waves. They may cause a change in timbre, or tone, or feeling - but they are essentially sinusoidal waves (just like the root note) - and have an independent existence of their own. It is only that they are produced from the same string or air column that produced the root note.
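A two-line sketch of that point for the open A string (counting ideal integer-multiple harmonics only; real strings are slightly inharmonic and the amplitudes fall off quickly as we go up):

```python
F_A = 110.0                # open A string fundamental (Hz)
AUDIBLE_LIMIT = 20_000.0   # the usual textbook upper limit of hearing

harmonics = [n * F_A for n in range(1, 201)]          # ideal integer multiples of 110 Hz
above = [f for f in harmonics if f > AUDIBLE_LIMIT]   # the ones landing above 20 kHz

print(f"first few harmonics: {harmonics[:4]} Hz ...")
print(f"first harmonic above {AUDIBLE_LIMIT:.0f} Hz is number {int(above[0] / F_A)} at {above[0]:.0f} Hz")
```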

Once again, the shape of the wavepacket tells us nothing about how it will sound to us. (I have posted this earlier too, when someone showed a jagged sampled signal and assumed that this is what the ears will hear.)

Every recording engineer knows that speech sibilants (Figure 10), jangling key rings (Figure 15), and muted trumpets (Figures 1 to 3) can expose problems in recording equipment. If the problems come from energy below 20 kHz, then the recording engineer simply needs better equipment. But if the problems prove to come from the energy beyond 20 kHz, then what's needed is either filtering, which is difficult to carry out without sonically harmful side effects; or wider bandwidth in the entire recording chain, including the storage medium; or a combination of the two.

They have never said that filtering at 20 kHz will actually deteriorate the signal; all they have said is that it is difficult to carry out and meet the ideal conditions set.
 