A practical blind test : Just for kicks

@OP: I have read through the thread and am still lost about what the experiment aims to achieve. I get a feeling that even after following the procedure you have mentioned, it is unlikely you will be in a position to conclude anything.

Firstly, before you write the detailed procedure you might want to state a simple one line hypothesis that the experiment will prove or disprove. Examples are "Smoking causes cancer", or "Drinking causes liver damage", or "Power cords make a discernible change to the sound in a music system". Without a hypothesis to prove/disprove, it is very difficult to talk about the validity of an experiment.

Second, the experiment isn't practical. You will be lost in a permutation explosion. If a music system has five components, and you have five of each type to play with, the number of combinations will be 3125. It is hard enough to find the better system when faced with just two. With 3125 permutations and human subjects, even if you manage to complete the tests, your results will be unreliable. (I even have a problem with the word 'better' when conducting an experiment, but I'll leave that for another post.)
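The arithmetic behind that permutation explosion is easy to sketch. This is just an illustration of the counting, not part of anyone's proposed procedure; Python is used as a calculator here:

```python
# Permutation explosion: with five component categories (source, amp,
# interconnects, speaker cables, power cables) and five candidates in
# each, a distinct system is one choice per category.
from itertools import product

CATEGORIES = 5   # component types in the system
CANDIDATES = 5   # options available per type

systems = list(product(range(CANDIDATES), repeat=CATEGORIES))
print(len(systems))          # 3125, i.e. 5**5
```

Every added category or candidate multiplies the count, which is why pairwise blind tests per category are the only tractable shortcut.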

Third, from what I could gather, you seem to have something against blind testing (I could be wrong here), though it is not clear what. Rubbishing blind testing is probably a bad idea: it is an extremely useful tool available to scientists when they want to cut out human factors. What's the alternative?
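For what it's worth, the statistics behind a blind (ABX-style) listening test are simple to state. This is a generic sketch, not anything proposed in the thread: under the null hypothesis the listener cannot hear a difference and is guessing, so each trial is a fair coin flip, and a one-sided binomial test says how surprising a given score would be by chance.

```python
# One-sided binomial test for an ABX-style listening trial.
# Null hypothesis: the listener guesses, so the number of correct
# answers is Binomial(trials, 0.5).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring `correct` or better by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 correct out of 16 gives p of about 0.038 -- unlikely to be guessing.
print(round(abx_p_value(12, 16), 3))
```

By the usual convention, one would want p below 0.05 before concluding the listener heard a real difference.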

I am not denying that an experienced audiophile can assemble a very good system. I just can't see what the experiment is trying to achieve.
 
I think most people haven't really understood what square_wave is suggesting.

I think he is suggesting that a system that is built out of components that have "won" blind tests will not be as musically satisfying as a system that has been built out of components chosen by somebody who knows what he's doing.

What he's saying is that a few individual blind tests should be conducted (between really cheap kit and really expensive kit) to choose the candidates that will constitute System A. For example:

- The results of Blind Test 1 will choose what source is to be used.
- The results of Blind Test 2 will choose what amp is to be used.
- The results of Blind Test 3 will choose what interconnects are to be used.
- The results of Blind Test 4 will choose what speaker cables are to be used.
- The results of Blind Test 5 will choose what power cables are to be used.

Now, the winners of Blind Test 1 to 5 will be assembled into System A.

An experienced Audiophile (or Audiophool, depending on the perspective) will set up another system, System B, by choosing components that sound good to him.

System A and System B will use the same speakers.

Now, a final Blind Test will be conducted between System A and System B. square_wave is asking us which system is likely to win.

The objective of this exercise, as I understand it, is to try and understand whether the results of Blind Testing as an exercise are actually relevant while one is building a system, and also whether intelligent system-matching (even with components that have been dismissed by Blind Tests) can produce a system that is a lot better than one built out of components that have "won" blind tests.

Please correct me if my understanding is wrong.

BTW, I'll put my money on System B. :)
 
We can argue till the land's end without any tangible result.

Seek volunteers, seek equipment. Get on ground and get real. Do some real testing. That is the way to go.
I agree with you, Captain. It's the only sensible way to know such things. Friends should come together with unbiased views and without presumptions, and listen. Else we can go on and on.

Hee hee, but I see the debate has turned from 'do such-and-such things make a difference' to 'the testing method'. What's next? A debate on the way we debate? :lol:

Regards

P.S.: I apologize for any meaningless debate I have participated in in the past, and if I have offended anyone. Need to get back to some music (vinyls) on my vintage system. Is this what they call 'ignorance is bliss'?
 
When I read this, I get a feeling we have intellectualised this simple question from Square_wave into something far more than what he intended :ohyeah:
 
I think most people haven't really understood what square_wave is suggesting.

I think he is suggesting that a system that is built out of components that have "won" blind tests will not be as musically satisfying as a system that has been built out of components chosen by somebody who knows what he's doing.

[...]

BTW, I'll put my money on System B. :)

If this is the intention, I don't understand the objective of the test. I missed the part where the blind-test system is assembled from parts without being able to change the system once it is assembled. That is, it doesn't matter whether the winner of a blind test matches the other parts of the system or not.

If the objective is to prove that a system built out of the best in each category need not be the best, or that system synergy is more important, etc., then I think it's already over. System B wins.

Here are the reasons:

1. In a blind test, if you pair a bright CD player with a warm amp, the CD player gets selected. Same with the amp. Result: a bright CD player and a bright amp, and a very bright/harsh combo.

2. My Ref Rev E wins the amp category, but you have provided 4-ohm speakers. The amp won't even last half an hour; it can't drive 4-ohm speakers.

3. A sound card gets picked whose output voltage is less than the minimum voltage required by the preamp. Same between pre and power.

4. Input/output impedances don't match across various components.

Most of the time, when you are selecting for System B, you will unintentionally be taking care of the above criteria anyway. You just do it by ear rather than going by the numbers. There might be more to it, but I don't know about that :).
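The mismatches listed above are all checkable from published numbers. As a hedged sketch (the function name and the threshold values are my own illustration, not from this thread; the 10x input-impedance guideline is a common rule of thumb):

```python
# Rule-of-thumb electrical compatibility checks mirroring the points
# above. All thresholds are illustrative, not from the thread.

def match_problems(source_out_v, amp_sensitivity_v,
                   source_out_ohm, amp_in_ohm,
                   speaker_ohm, amp_min_load_ohm):
    """Return a list of spec-level mismatches between components."""
    problems = []
    if source_out_v < amp_sensitivity_v:
        problems.append("source output voltage below amp input sensitivity")
    if amp_in_ohm < 10 * source_out_ohm:   # common 10x bridging guideline
        problems.append("amp input impedance loads down the source")
    if speaker_ohm < amp_min_load_ohm:
        problems.append("speaker impedance below the amp's rated minimum")
    return problems

# Point 2 above: a 4-ohm speaker on an amp rated for 8-ohm loads.
print(match_problems(2.0, 1.5, 100, 10_000, 4, 8))
```

An experienced listener applies these checks implicitly by ear; the point is only that they also live in the numbers.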

No person who believes in blind testing has ever said that you don't need to match components and can throw together whatever you want to make a system.
 
System A
Assemble a system only based on blind testing.

System B
Now ask an experienced audiophile to choose whatever he wants, including all these so-called snake-oil devices of his choice.

The room and speakers remain the same.

Now listen to both :D

Who do you guys think will win ?

Synergy between components will yield far better results than choosing the best of the components and forming a system out of them.

Please keep in mind that System B will consist of state-of-the-art components (according to current industry trends), put together by an experienced audiophile who understands synergy.

System A will also have synergy, but it will be proven electrical synergy according to engineering concepts.

For system A

The cable testing will start from a coat hanger. It will not go any further, since it has been proven that a coat hanger is all that is needed.

Source testing will probably stop at a DVD player.

The amp testing will probably stop at a random $800 muscle amp which has enough muscle to power the speakers.

The setup will be on a rickety table. It will not go any further, since it has been proven that a rickety table is all that is needed.

Coming from you, I find it perplexing Square_wave.

@ capn,
These are results from various blind testing experiments.

I think most people haven't really understood what square_wave is suggesting.

I think he is suggesting that a system that is built out of components that have "won" blind tests will not be as musically satisfying as a system that has been built out of components chosen by somebody who knows what he's doing.

I think you are right Hydra. I did not understand what the objective was but you put it across nicely.

@OP: I have read through the thread and am still lost about what the experiment aims to achieve.

you might want to state a simple one line hypothesis that the experiment will prove or disprove.

I think this has become clear in the light of Hydra's explanation.
 
@OP: I have read through the thread and am still lost about what the experiment aims to achieve.

[...]

I am not denying that an experienced audiophile can assemble a very good system. I just can't see what the experiment is trying to achieve.

#thatguy,

You actually beat me to it, bro. What you wrote is what I was arriving at. None of these tests, including mine, can actually arrive at any conclusion that is relevant to someone whose goal is to assemble a fine-sounding music system that will satisfy him or her. It is just that each individual prefers one type of test over another depending on their disposition, and will try to infer his own version of the truth. This is called armchair knowledge.

Now, if you go one step further and try to implement the knowledge you infer from these tests in your own system, my experience tells me that many folks will be shocked at what they find.

Take the Matrix test as an example.

You can derive various versions of this test:

1. How it was originally done.

2. Do the test using only System B, to figure out the quality of individual components: remove the Wadia player and insert the DVD player into this system, OR remove the YBA/Classe and insert the Behringer.

What will be the result in this case?

3. Pack the Behringer + DVD player and the YBA/Classe + Wadia, carry them to five different audiophiles' homes, and try to insert each combination into their existing systems and test.

What will be the result in this case?

I can bet my last dollar that the results will vary drastically.

I can give an example from my personal experience.

I once auditioned a certain well-regarded CD player against another that is a very popular newcomer (model X). The first time I tested them, I used a very popular audiophile bookshelf speaker which is quite famous for throwing fantastic imaging. I found that model X worked very well and sounded more impressive. The well-regarded CD player showed a slight problem: the bass sounded a bit thick and unresolved, making the sound slow, very evident with the double bass. The rest of the frequencies were impressive. I was astonished.

A talk with another audiophile led to another test, this time on a speaker system that was top of the line. These speakers were very resolving across the frequency band, and quite big. Testing on this system told a completely different story; I was flabbergasted. The thickness in the bass I noted earlier blossomed into glorious detail in the lower bass region. In comparison, model X sounded rolled off in the bass region. In fact, I found model X was rolled off almost everywhere, and lacking in body across the frequency spectrum: a typical CD player designed to impress when played with similar audiophile gear (this type of pseudo-audiophile gear is very common nowadays).

Now what do you infer from these two tests? I am sure that if the test had been done using the first speakers, most people would have preferred model X.

This is a classic example of 'you cannot see an ant when an elephant is blocking the view'.

No audiophile worth his salt will say that expensive is always better. A better saying is 'horses for courses'. The original Matrix test is actually a classic example of horses for courses. It means that System A has better synergy, nothing more. The test actually validates high-end audio. High-end audio = finding the right components to assemble a fine-sounding system. So I have no arguments with the Matrix test :):)
 
Hi Square wave

I agree with you. Synergy is most important. It just does not make sense to audition a 5-lakh CD player with a 1-lakh speaker or amp (there will always be exceptions, so don't jump on this statement). The CD player is likely to throw out so much info that the amp/speaker will not be able to handle it. I find that most people rubbish expensive CD players based on this.
 
I think most people haven't really understood what square_wave is suggesting.

I think he is suggesting that a system that is built out of components that have "won" blind tests will not be as musically satisfying as a system that has been built out of components chosen by somebody who knows what he's doing.

[...]

BTW, I'll put my money on System B. :)

Hey, thanks bro. You put it across much better. This is what I meant :)
 
Now the million-dollar question is: will the numbers (assuming they are honestly provided by the manufacturer) tell you this? Let's say a component didn't go well with the others in a system. Can we look at the numbers and say: this is why it doesn't work; if I get another component with this parameter's value in such-and-such a range, all else being the same, it will give better performance and work fine?
Or is it just pure magic or coincidence, nothing to do with science?
 
Doors, I think there are three different things, and they should not be confused:

1. Manufacturer-published specifications

2. The same parameters, but as measured independently

3. The sound itself.

You can find recent posts by one of HFV's hi-end-experienced members, Dr.Bass, in which one of those numbers turned out to be very much the cause of a problem. But it takes an experienced interpreter, or an engineer, to answer those questions. Faced with Dr.Bass's problem, I would have had no clue to look at the specifications where he found the problem.

I typed a long reply to the previous posts. Some will be relieved that my UPS shut down and caused it to be lost! :lol: Briefly:

Blind testing does not mean blind buying. I can join square_wave "on his page" about that.

However, I feel that he is just not getting many of the points of blind testing at all, maybe because he is focussing on this one example. Sometimes the tests are as much about our brains as they are about the equipment. This is not a which-to-buy test: it is a test that challenges perceptions and conceptions.

Much has been said in various threads here recently about this. There are lots of interesting (and challenging) examples, and I'm too lazy to dig them up and post them again. Most of them I only heard of through HFVers contributing the links. The 'famous' ones are, perhaps...

--- the JVC speaker test, which showed that company loyalty, appearance and price influenced sighted testing, and this among professionals too.

--- the coathanger test

--- the Matrix test (not famous yet, but we're working on it ;))

I do think it is worth digging up the quote from the founder of Stereophile, from the 45th-anniversary interview:

Audio as a hobby is dying, largely by its own hand. As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me, because I am associated by so many people with the mess my disciples made of spreading my gospel. For the record: I never, ever claimed that measurements don't matter. What I said (and very often, at that) was, they don't always tell the whole story. Not quite the same thing.

Remember those loudspeaker shoot-outs we used to have during our annual writer gatherings in Santa Fe? The frequent occasions when various reviewers would repeatedly choose the same loudspeaker as their favorite (or least-favorite) model? That was all the proof needed that [blind] testing does work, aside from the fact that it's (still) the only honest kind. It also suggested that simple ear training, with DBT confirmation, could have built the kind of listening confidence among talented reviewers that might have made a world of difference in the outcome of high-end audio.

my bold.
 