Problems can occur with badly made anything!
As I said before, we, the hi-fi-buying community, with our progress through ever more expensive (and probably often genuinely better) speaker cables and interconnects, have set ourselves up good and proper. We have mentally conditioned ourselves.
In this country, at least, it would probably be a lot better for our health, safety and continued musical pleasure, if we translated that into a bit more concern for our house wiring, rather than fussing over digital cables!
For those who wish to continue this thread, here is what a physicist (not me) is saying:
cheers.
murali
How often have you heard this?
It's digital, so it's 1s and 0s. That means there can't be any errors or distortion. You either get the signal perfectly or you don't.
For someone who knows little about electronic systems, this sounds perfectly logical and true. In reality it's a rather naïve way of thinking, and it completely oversimplifies the complexity of the world (and of physics). To show you why this statement is a lie, we need to go slightly into electromagnetic physics. Don't worry, not too much; just enough so that we can see how digital transmission, and in particular a thing called channel coding (and digital coding in general), works.
The Digital Signal
The digital signal is often described as an array of 1s and 0s. This is true in the logical sense: we represent it in 1s and 0s. However, they're merely symbols. We could as easily represent them as Xs and Ys, or apples and oranges. In the real world, the digital signal is encoded and transmitted as a varying physical signal, usually a high and a low level. The signal is always encoded to minimise the probability of error for a given channel at its signal power limit. With all signals there is a chance of noise (because, as you would of course know, everything in the real world is essentially analogue, not digital).
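To make the point that the 1s and 0s are only symbols, here is a minimal Python sketch of a line code, the step that maps abstract bits onto physical levels. The bipolar mapping used (1 as +1.0, 0 as -1.0) is just one common convention, chosen here purely for illustration.

```python
# A minimal sketch of line coding: mapping abstract bit symbols onto
# physical signal levels. The bipolar NRZ-style mapping here (1 -> +1.0,
# 0 -> -1.0) is one common convention, chosen purely for illustration.

def nrz_encode(bits, samples_per_bit=8):
    """Turn a bit sequence into a sampled waveform of high/low levels."""
    waveform = []
    for bit in bits:
        level = 1.0 if bit == 1 else -1.0
        waveform.extend([level] * samples_per_bit)
    return waveform

bits = [1, 0, 1, 1, 0]
print(nrz_encode(bits)[:16])  # first two bit periods: +1.0s, then -1.0s
```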
Bandwidth
When we picture the transmission of a digital signal, we usually think of it as something like the following:
Theoretical Digital Signal
This is true in theory, and for higher-level applications (computer programs, controllers, etc.) this picture is enough. In real life, the same digital signal (especially during transmission) looks more like this:
Real Life Digital Signal
Why is this? Because unfortunately the real world isn't as simple as on and off, or high and low. Almost every communication channel (e.g. a cable, an optical system, a radio link) has a finite bandwidth. What is bandwidth? It is the maximum speed at which we can transmit data; i.e. there is a limit to how fast we can switch from 0 to 1 or from 1 to 0. When we push a signal with more bandwidth than the channel can handle, it will not come out the same shape. How much the shape changes depends on the nature of the system and the signal, but take it from me: when you put in a perfectly square digital signal like that of the first diagram, you are most likely to get a real-life signal like the second.
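To see that rounding-off for yourself, here is a rough Python simulation. A real cable is not a one-pole RC filter, but a simple first-order low-pass is enough to show the effect; the smoothing factor alpha is an arbitrary demo value.

```python
# A rough simulation of a band-limited channel: a first-order low-pass
# filter applied to an ideal square digital signal. Real cables are more
# complex than this, but the rounding of the edges is the same in spirit.

def lowpass(signal, alpha=0.3):
    """One-pole IIR low-pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    out, y = [], 0.0
    for x in signal:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

square = ([0.0] * 8 + [1.0] * 8) * 4   # ideal square wave, 8 samples/bit
received = lowpass(square)

# The edges are no longer instantaneous: the signal ramps between levels.
for x, y in zip(square[4:14], received[4:14]):
    print(f"sent {x:.1f}  received {y:.3f}")
```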
The Bandwidth vs Cost Tradeoff
Engineering is often more about compromise than anything else. While we could build every network, cable and radio transmission mast with a huge amount of power to increase its bandwidth, that is not efficient. Generally there is an acceptable number of errors we can tolerate for a particular signal. For example, most people would be satisfied with a TV signal that works 97% of the time for 97% of the population. Achieving the extra 3% of coverage may, by its nature, cost double the investment in resources, so it is generally not worth spending so much money to get perfection.
A digital signal, therefore, is usually optimised in terms of the resources used. A good designer will attempt to use the least resources that still achieve a satisfactory outcome. In terms of bandwidth, that means using the minimum bandwidth that still lets the signal be recognised to a satisfactory level.
The Eye Diagram
Here's a concept that may interest those with a deeper understanding of probability and physics. When we pass a digital signal into a bandlimited channel, the signal is distorted. The receiver system will attempt to guess what the transmitted signal was. Let's use a numerical scale where -1 is '0' and 1 is '1'. How would the system guess? Simple: if the received signal is more than 0, we assume the transmitted signal was '1'. Otherwise, if it's any negative number, we assume the original transmitted level was -1, and the transmitted signal was '0'.
So in this instance, as long as no bit is distorted to the point where it crosses 0, we will still get an error-free transmission.
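As a hedged sketch of that decision rule, the Python below transmits -1/+1 levels, adds Gaussian noise standing in for channel distortion, and slices on the sign of what arrives; the noise level is an arbitrary illustration value.

```python
import random

# A sketch of the receiver's decision rule: transmit -1 for '0' and +1
# for '1', add noise, and guess each bit from the sign of the received
# level. The noise level (sigma) is an arbitrary illustration value.

random.seed(1)

def decide(level):
    return 1 if level > 0 else 0

bits = [random.randint(0, 1) for _ in range(100_000)]
errors = 0
for b in bits:
    sent = 1.0 if b == 1 else -1.0
    received = sent + random.gauss(0.0, 0.35)   # noisy channel
    if decide(received) != b:
        errors += 1

# With this sigma the signal rarely crosses zero, so errors are rare
# but not zero: exactly the point made above.
print(f"bit errors: {errors} out of {len(bits)}")
```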
The eye diagram is a plot in which many digital bit signals are overlaid on one another. We can then see how the signal level varies from bit to bit, and where. The 'eye' is the gap between the 1 and the 0, and that gap has to stay open for the receiver to be able to tell whether a signal is a 1 or a 0.
Eye Diagram
Here we have an eye diagram. We can see the most probable bit trajectories as the darkest areas. The eye is the pair of gaps formed between the two bit signals shown. As the signal becomes more distorted, the spread of the traces increases (the probability of error increases) and the eye starts to close. The point where the eye is completely closed is the point where the digital signal has been distorted into pure noise (no useful information can be extracted).
Most systems are designed with eye diagrams similar to the one above. There's little chance of error. However, errors still occur. We can still see a few bits getting rather close to the centre of the eye. These are the bits which are likely to become error bits.
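To make the eye idea concrete, here is a rough numerical sketch in Python. It passes random bits through a toy band-limited, noisy channel, samples each bit mid-period as a receiver would, and reports the worst-case gap between the received 1s and 0s; every channel parameter in it is an arbitrary demo value, not a model of any real cable.

```python
import random

# A rough numerical sketch of the "eye": pass random bits through a
# noisy, band-limited toy channel, sample each bit at mid-period, and
# measure the gap between the lowest received '1' and the highest
# received '0'. All channel parameters are arbitrary demo values.

random.seed(2)
SAMPLES_PER_BIT = 8

def channel(levels, alpha=0.6, sigma=0.1):
    out, y = [], 0.0
    for x in levels:
        y = alpha * x + (1.0 - alpha) * y         # band-limiting
        out.append(y + random.gauss(0.0, sigma))  # additive noise
    return out

bits = [random.randint(0, 1) for _ in range(2000)]
tx = []
for b in bits:
    tx.extend([1.0 if b else -1.0] * SAMPLES_PER_BIT)
rx = channel(tx)

# Sample each bit in the middle of its period, as a receiver would.
mid = SAMPLES_PER_BIT // 2
ones  = [rx[i * SAMPLES_PER_BIT + mid] for i, b in enumerate(bits) if b == 1]
zeros = [rx[i * SAMPLES_PER_BIT + mid] for i, b in enumerate(bits) if b == 0]

opening = min(ones) - max(zeros)
print(f"worst '1' sample: {min(ones):+.3f}")
print(f"worst '0' sample: {max(zeros):+.3f}")
print(f"eye opening:      {opening:+.3f}  (0 or less would mean bit errors)")
```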
What Does All This Mean?
This means that any commercial product (i.e. a product designed to make money) will be designed to optimise for cost as well as performance. If we talk about, say, an HDMI cable, we would expect any cable longer than a very short length to have a more than negligible probability of error. Generally speaking, with most digital signals there's always a chance of error. When an error occurs, it doesn't mean that you won't get a signal at all. It just means that there's an error. These could be single-bit errors which may be visible and uncorrected, or may be corrected depending on the mechanism. Either way, errors exist, even though you may not know about them. Digital is far from perfect.
Going back to the HDMI cable, let's assume the cable is now quite long. This reduces the bandwidth; performance suffers because more errors can now occur. A badly made HDMI cable therefore shows much more significant errors. (People say that digital signals are perfect, yet it's no secret that long HDMI cables can have problems. Why doesn't anyone question this?)
Channel Coding
Here's more proof that digital is not perfect. Channel coding is the study of the logic behind digital transmission: how do we best encode digital signals to ensure that the transmission has the best probability of success, and uses the least resources?
Channel coding exists everywhere. Let's talk about one common place you might see it in action: your CDs. Most people think of CDs as the aforementioned perfect and distortion-free medium. But think about it: tiny specks of dust, scratches and imperfections exist on every CD. The bits recorded on a CD are also tiny, and a laser has no chance of telling a speck of dust from the data underneath. This means that no CD will ever play without distortion at the bit level. I bet most people don't know this fact.
However, there's a good reason why you may not have known about this. The guys who made the CD, Blu-ray, HDMI, etc. were pretty smart. They used a number of methods of digital transmission collectively called channel coding. On a CD, this consists of interleaving bits of information, optimising the type of transmitted information, and using check bits to ensure that when errors do occur, there is a high chance that they are corrected and/or a close guess is used.
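The actual scheme on a CD is CIRC (Cross-Interleaved Reed-Solomon Coding), which is far too involved to show here. As a toy illustration of how a few check bits let a receiver correct a single-bit error, here is the classic Hamming(7,4) code sketched in Python; the function names are mine, and nothing here comes from a real CD player.

```python
# A sketch of the idea behind check bits, using the classic Hamming(7,4)
# code: 4 data bits plus 3 parity bits, able to correct any single-bit
# error. (A real CD uses the far more elaborate CIRC Reed-Solomon
# scheme; this is only a toy illustration of the same principle.)

def hamming_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming_decode(c):
    """Correct a single-bit error (if any) and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # position of the flipped bit, or 0
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming_encode(data)
codeword[5] ^= 1                          # the "speck of dust": flip one bit
print(hamming_decode(codeword) == data)   # True: the error was corrected
```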
How does this work? There are a whole heap of methods in use and I won't even scratch the surface, but beyond the check bits sketched above, let me demonstrate one more idea very basically. Let's take, for example, a number we need to transmit over a channel; say 17. In binary, 17 is 10001. Now assume a one-bit error occurs. Because there are 5 digits, there are 5 possible errors: 10000, 10011, 10101, 11001, 00001. Notice what these numbers equal: 16, 19, 21, 25, 1. A one-bit error could result in a slight distortion (16 instead of 17) or a huge distortion (1 instead of 17). In fact, when transmitting numbers this way, the possible magnitude of distortion grows with the number of digits transmitted. If we transmit a 256-bit number, just one incorrect bit could produce a massive error.
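You can check that arithmetic in a couple of lines of Python; nothing here is specific to any real format, it simply enumerates the single-bit corruptions of 17.

```python
# Enumerate every single-bit corruption of the 5-bit number 17 (10001)
# to show how wildly the error magnitude varies with the bit position.

value, width = 17, 5
for position in range(width):
    corrupted = value ^ (1 << position)   # flip one bit
    print(f"flip bit {position}: {corrupted:0{width}b} = {corrupted} "
          f"(error of {corrupted - value:+d})")
```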
The engineers behind this technology realised this and decided it was better to give numerically close values bit sequences that are close together (read about Hamming distance). They formulated sets of bit sequences to represent the transmitted data in which the sequences differed in the same way that the data differed. For example, let's arbitrarily represent 16 as 1011, 17 as 1001 and 18 as 0001. Notice that the difference between 17 and either 16 or 18 is one bit, and the difference between 16 and 18 is two bits: the codes differ by the same amounts as the numbers themselves.
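The mapping described is in the spirit of a Gray code, in which numerically adjacent values always differ in exactly one bit. Here is the standard construction as a sketch; the particular 5-bit codes it prints differ from the arbitrary 4-bit ones above, but the property is the same.

```python
# The mapping described above is in the spirit of a Gray code: adjacent
# values differ in exactly one bit position. (These codewords differ
# from the arbitrary ones in the text; the property is what matters.)

def gray_encode(n):
    return n ^ (n >> 1)

for n in range(16, 20):
    print(f"{n}: {gray_encode(n):05b}")

# Any two consecutive lines differ in exactly one bit, so codes for
# neighbouring values stay as close together as the values themselves.
```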
What Does This Mean?
This means that when a speck of dust corrupts a bit on a CD, the player may read 25356 instead of 25355. During a track you will probably not be able to hear this, but the distortion does occur, much as analogue distortion occurs. Digital simply means there is less chance of it happening.
Digital is undoubtedly much better than analogue in many ways, and without it we would not live in the world we do today. However, the technology behind digital is much more analogue than you think. Digital is far from perfect; don't assume that just because something is digital it will be distortion free.