Solid Snake-Oil Storage: This SSD Is Aimed at Audiophiles

Your point itself explains why technology and equipment matter. GPS satellites are equipped with atomic clocks precisely because aftermarket chips cannot achieve that level of accuracy. For time to be precisely synchronized among devices, almost all internet-connected or cellular devices are synchronised to global time servers (which are in turn synced to reference clocks) that keep time far more precisely than aftermarket chips can. Everyday devices tend to drift because their clocks are not precise, and the drift is easy to observe; no fancy equipment is needed. So good equipment is necessary wherever precision is needed.
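You can see this drift for yourself with a minimal sketch (assuming the third-party ntplib package and the public pool.ntp.org servers) that asks a reference time server how far your local clock is off:

```python
# Query an NTP server and report the local clock's offset from reference time.
# Requires: pip install ntplib
import time
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

print(f"local clock is off by {response.offset * 1000:+.1f} ms")
print("server (reference) time:", time.ctime(response.tx_time))
```

Run it on a machine that has had NTP sync disabled for a few days and the offset grows visibly, which is exactly the drift described above.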
we are digressing rather spectacularly here - but hey, why not :)

The key point here is this:

- Your low-grade device running on super noisy power can still calculate your precise position on a body as large as the Earth

- And it does this simply by factoring in the ultra-minuscule differences in the time it takes for a few signals to reach you from their respective satellites

- In other words, it is multilaterating on the basis of the timestamp differences of very precise signals reaching you at the speed of light, from objects that are barely a few thousand km apart (a toy sketch of the arithmetic follows below)
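To put numbers on that, here is a toy multilateration sketch (entirely my own illustration: made-up satellite positions, no real GPS geometry, ionosphere corrections, or almanac data). It solves for position plus receiver clock bias from arrival times, then perturbs the timing by 100 ns to show how timing error maps to position error:

```python
# Toy multilateration: solve for receiver position and clock bias from the
# arrival times of signals sent by satellites with perfectly synced clocks.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def multilaterate(sat_pos, arrival_times, iters=20):
    """Gauss-Newton fit for (x, y, z, receiver_clock_bias_seconds)."""
    est = np.zeros(4)
    for _ in range(iters):
        diff = est[:3] - sat_pos                      # (n_sats, 3)
        ranges = np.linalg.norm(diff, axis=1)
        predicted = ranges / C + est[3]               # predicted arrival times
        jacobian = np.hstack([diff / (ranges[:, None] * C),
                              np.ones((len(sat_pos), 1))])
        est += np.linalg.lstsq(jacobian, arrival_times - predicted, rcond=None)[0]
    return est

# Four made-up "satellites" ~20,000 km up; receiver near the origin with a
# 1 ms clock error (a terrible clock by GPS standards).
sats = np.array([[2e7, 0, 2e7], [-2e7, 1e7, 2e7], [1e7, -2e7, 2e7], [0, 2e7, 2.2e7]])
truth = np.array([100.0, 200.0, 0.0])
times = np.linalg.norm(sats - truth, axis=1) / C + 1e-3

rng = np.random.default_rng(0)
solution = multilaterate(sats, times + rng.normal(0, 1e-7, 4))  # add 100 ns jitter
print(solution[:3], "metres vs truth", truth)  # off by tens of metres
```

Note that the receiver's own cheap clock is solved for as a fourth unknown, which is exactly why a phone with a lousy oscillator can still locate itself: it borrows the satellites' atomic time.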

So if one posits that the precision of these digital signals is easily altered by electrical nasties (I love this phrase), either in terms of accuracy or time drift,
then pretty much nothing based on digital signalling would work today, let alone something operating at the level of extreme precision this example requires.

After all, the clean, precise, unaltered signal from the transport (the satellite) goes for processing into the nasty, cheap, noisy phone down on planet Earth for the final output,
and thus should be subject to time-domain drift, jitter, or corrupting noise from the electrical nasties while passing through the output chain of a Micromax device charging off the alternator of a Bajaj auto tempo, shouldn't it? :)
 
I used to test GPS and DVB-S solutions (at -152 dB and -120 dB signal strength respectively).
Frankly, I am stumped by the body of knowledge in audio.

PS:
Everything in audio reproduction is measurable with a justifiable degree of accuracy.
Digital stuff more so. Until the sound hits your ears, that is. That is where the grey area lies.

Cheers,
Raghu
 

You are missing the point again. My low-grade noisy device relies on data from the GPS signal from the satellite to calculate the time taken. Even after using so much sophisticated technology, the differences between clocks produce errors. The noisy electronics inside cell phones are also a source of errors. These factors need to be taken into account before calculating the location. Oh, did I talk about noise again? Sorry, I did not; both of these facts are taken from this article. Remember, my noisy device was invented just to do this calculation, so it is doing what it should.

 

Not sure who is missing the point.
The precise atomic clocks sending the data are analogous to your perfect source. And they need to be precise at a level that is orders of magnitude beyond any audio system ever created.

However, the local noisy device is still in charge of capturing, processing, and interpreting said data.
Data that is way more precise than the cleanest music file can ever aspire to be.

And then performing its calculations on the ultra-minuscule deltas in the arrival times of RF travelling at the speed of light, no less.
To determine your exact position on the surface of the Earth down to a metre of wiggle room.

Now, the argument on this thread was that electrical nasties in an audio processing device can cause time-domain and other drifts in audio signals. (Correct me if I am going off on a complete tangent.)

So if that were indeed the case, surely the same would show up in this scenario too?
 
A digital gear manufacturer once told me that he too was stumped to hear differences between a normal laptop and a modded PC (built specifically for audio) over USB into the same DAC. Similar observations with Ethernet over cables vs Wi-Fi. Improvements made to the USB input module implementation reduced the differences to minuscule levels. The overall conclusion was that it is NOT the data integrity that mattered; it was the nasties injected into the analogue areas of the DAC that made the differences. I think the discussion on this thread is all about data integrity, which is completely missing the point IMHO.
 
I will be relying on my ear as always, not on "measurement, science or physics".
As you should.
My money will always go to whatever sounds good to me,
As it should.
people shouting snake oil make no difference to me
As it shouldn't.
Those who will buy will buy regardless and those who won't, will not buy regardless.
As they should/shouldn't.
I see no point to such threads.
And yet, here you are, in a "pointless" thread! :p
I think the discussion on this thread is all about data integrity, which is completely missing the point IMHO.
Was! Past tense! The goalpost shifted a long time ago! ;)
 
Now, the way SSDs and HDDs read data is completely different.
The HDD is like a combination of a turntable and a CD (discs and platters with a magnetic head), or kinda 'digital data read in an analogue way' ;).

The SSD is all flash memory.

So does that make a difference then? :cool:
 
Working 8 hours a day in front of a computer, writing algorithms to improve the slightest of time/space complexities, and having faced misinterpreted bits in my career, it is evident to me that there is nothing in the digital domain that will affect sound, let alone improve it. If "bits are bits" weren't true, all the satellites would have crashed and none of the precision timing devices would have worked. If there were even minute errors in reading and timing the bits from the error-prone volts with which they were encoded, the whole of the internet would cease to exist. Far more complex problems have been solved, and elegant solutions deployed, that would fail miserably if bit manipulation at the lowest levels failed just once in a million attempts. Checksums and auto-correction algorithms between components rule out any chance of misreading that could ever happen. On top of this, a mistimed or misinterpreted bit will transform the sound into an unrecognizable screech or thud rather than subtly reducing fidelity. Any remote chance that a fast or real-time activity can spoil fidelity can be compensated for by placing a local buffer on the receiver side. Netflix servers thousands of kilometres apart do exactly this to deliver pristine 4K content. Audio transmission sizes and the distance between two components are comparatively negligible at best.
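To illustrate the checksum point, a minimal sketch (my own, using Python's standard zlib; the payload is just a stand-in for a chunk of audio data): flip a single bit in transit and the CRC mismatch exposes it, so the link layer retransmits instead of ever playing the wrong bit.

```python
# A flipped bit never survives unnoticed past a CRC check.
import zlib

payload = bytes(range(256)) * 16            # stand-in for a block of audio data
good_crc = zlib.crc32(payload)

corrupted = bytearray(payload)
corrupted[1000] ^= 0x01                     # a single bit flipped in transit

assert zlib.crc32(bytes(corrupted)) != good_crc  # receiver detects the damage
print("corruption detected; block would be retransmitted")
```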

Yet all digital transports sound different, every power cable swap in digital components sounds different, every digital cable swap sounds different, and all hell breaks loose once you sit down to experiment with these digital components. Even if a digital component maker intentionally tried to fiddle in the digital domain, he should NOT be able to do so owing to the above reasons, let alone in a positive direction.

The only possible explanation is some unknown science or known placebo. You can't create digital devices with unknown science, which leaves only placebo, and that clearly is not the case once you sit down to experiment. The results are consistent enough that components and cables get assigned characters which don't change when you move them to a different system. You can discuss it a million times trying to assign a reason to it, and you fail a million times. People will go back with the same opinion they started with until they see it first-hand in their own systems. It is never ending.
 
Please do wake me up when this company releases a much more refined SSD specifically tailored to store "Jagjit Singh" audio files only!
A purist, no sub-genre.
:confused:
 
Many people seem to have the perception that noise doesn't matter in digital transmission because "bits are bits" despite the noise. Please make a DIY unshielded USB cable without any twisting and connect it to a DAC or a storage device. Most likely the device won't even be recognized, because noise has crept in from somewhere and affected something. Noise is a crucial factor in any transmission. I don't know why this point becomes so hard to understand. Preventing noise definitely improves any transmission, whether analog or digital.
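For a feel of what the twisting buys you, here is a toy model (entirely my own illustration, with made-up voltage levels): interference that couples equally onto both wires of a twisted differential pair cancels at the receiver, while an untwisted cable picks up different noise on each conductor, and that part survives the subtraction.

```python
# Toy model of a USB-style differential pair under heavy interference.
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 10_000)
signal = bits * 2.0 - 1.0                    # idealized +/-1 V line levels

# Twisted pair: the same noise couples onto both wires (common mode).
common = rng.normal(0, 2.0, bits.size)
rx_twisted = ((signal + common) - (-signal + common)) / 2
errors_twisted = np.sum((rx_twisted > 0).astype(int) != bits)

# Untwisted wires: each conductor picks up its own noise.
n1, n2 = rng.normal(0, 2.0, bits.size), rng.normal(0, 2.0, bits.size)
rx_untwisted = ((signal + n1) - (-signal + n2)) / 2
errors_untwisted = np.sum((rx_untwisted > 0).astype(int) != bits)

print(errors_twisted, "vs", errors_untwisted, "bit errors out of 10,000")
```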
 
In digital transmission, you either lock to the device signal or you do not. There are no intermediates like in analog. Also, any misinterpretation of bits results in screech noise (e.g. badly scratched CDs), not a loss of fidelity as in analog. That is the contention.

In analog, the signal reader overshoots or undershoots to varying degrees when reading the signal; the error is relative to the audio content being played and hence results in a loss of fidelity. In digital, you can read a 0 as a 0 or a 1, nothing else, and a misread produces a signal that is not relative to the recorded sound but a completely different sound, depending on where the error happens. None of this results in fidelity loss.
 
In analog, it cannot happen that the transmitted signal slows down. Electricity travels at the speed of light. The worst that will happen is the receipt of a signal that differs from the original. However, there is nothing you can do to recover the original; if the original has lost details, you can never correct it.

In digital, you don't simply transfer the digital signal to the speakers. It needs to be converted to analog, and here lies the problem. It is impossible for the 0s and 1s to arrive at precisely the right time, because they are continuously being transformed by a processor and timed by a clock whose precision depends on a crystal and is randomly affected by noise. Both the imperfection of the clock and transmission delays from various causes result in a loss of fidelity. The promise of digital is that one doesn't have to recover the original (which analog cannot guarantee). The hope for digital is that it will get better and better with time, with new clocks and new chips, and one day totally eliminate jitter.
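To put a scale on that clock imperfection, here is a toy simulation (my own illustration with made-up figures, not a model of any real DAC): sample a 1 kHz sine with a perfect clock and with a clock whose ticks wander by 1 ns RMS, and measure the error the wander introduces:

```python
# How far below the signal does the error from 1 ns of sampling jitter sit?
import numpy as np

fs = 48_000
f = 1_000.0
n = np.arange(fs)                                    # one second of samples

ideal_t = n / fs
rng = np.random.default_rng(1)
jittered_t = ideal_t + rng.normal(0, 1e-9, n.size)   # 1 ns RMS clock wander

ideal = np.sin(2 * np.pi * f * ideal_t)
jittered = np.sin(2 * np.pi * f * jittered_t)

err = jittered - ideal
snr_db = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
print(f"jitter error floor: ~{snr_db:.0f} dB below the signal")  # roughly 104 dB
```

At 1 kHz, a nanosecond of wander leaves the error floor around 104 dB down, which is the kind of number behind the "inaudible in well-designed gear" claims quoted later in this thread; sloppier clocks or higher signal frequencies pull that floor up.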

Digital can guarantee no loss of the original data, but cannot guarantee the timing of its delivery to the ears. Analog can guarantee precise timing of signal delivery to the ears, but cannot guarantee the originality of the signal.

NOTE: However, the creation of the original digital copy (analog-to-digital conversion) suffers from the same clock and timing issues that plague digital-to-analog conversion.
 
Various digital transports sound different. Even different CD players using their respective digital outputs will sound different. If that were not true, then all CD players, laptops, and digital transports should sound the same, but unfortunately that is not the case. Digital playback does get affected. More than the 0s and 1s themselves, it is the steady and precise flow of those 0s and 1s during playback that matters, and that flow can be affected by noise and jitter. 0s and 1s are not transmitted as 0s and 1s; they are transmitted as analog voltages and then interpreted as 0s and 1s.
 

Agree with the first part, that every single digital component sounds different. Even two batches of components from the same manufacturer sound different.

But trying to explain it is futile, as I also noted. Jitter can be a non-issue if the signal is reconstructed using a small buffer on the receiving side. Misinterpreting 0s as 1s or vice versa once in a while will not result in a loss of fidelity but in an altogether different sound, depending on where the bit is misinterpreted. E.g., if the last bit of your bank balance is misinterpreted, you get a rupee more or less; but if the first bit is misinterpreted, you get a negative amount not even close to what you have in the bank. Audio data is no different. Fidelity doesn't get reduced magically if a few bits are flipped. So analog voltages too don't contribute to fidelity loss.
 
Buffering doesn't solve the problem of inaccuracy in the timing of the "ticks" of the clock that transfers the samples of digital data into the D/A converter chip.

I am often asked how digital streaming audio differs from data transmissions to disks or printers. It is, after all, only data being transferred from one point to another. Actually, there is more to it than this. Digital audio streaming is a "real-time" process, meaning that the actual timing of the transfer of each data bit from the source to the D/A converter is important and must be as precise as possible. Data transfers to a disk or a printer are not real-time, because there is no urgency for the data to arrive at the printer or disk to prevent errors from happening. The data arrives whenever it does, and then the device does its job with the data, either writing it to the disk or storing it in a print buffer. If the data does not arrive in time to be written on a particular sector of a disk, the hardware just waits for the disk to rotate again. Streaming audio data, on the other hand, must arrive at precise time intervals so that the D/A device can create an accurate representation of the original recording. The clock that moves the data into the D/A cannot be missing any "ticks", and each tick must be precisely placed in time. The audio data transfer must include both:

1) accurate data and

2) accurate timing, whereas non-real-time transfers only require accurate data.
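That contrast fits in a toy timeline (my own illustration, arbitrary time units): both consumers receive the same chunks with one arriving late, but only the real-time consumer, which must hand a chunk onward at every fixed tick, misses a deadline.

```python
# Same arrival pattern, two consumers: a file write just finishes later,
# while real-time playback (one chunk due per tick) underruns.
arrivals = [0.0, 1.0, 2.0, 5.0, 6.0]    # chunk i arrives at arrivals[i]; chunk 3 is late

file_done_at = max(arrivals)            # file transfer: completion merely slides
misses = sum(1 for i, t in enumerate(arrivals) if t > i + 1.0)  # chunk i is due at t = i + 1

print(f"file transfer done at t={file_done_at}; playback deadline misses: {misses}")
```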


Another article, from NwAvGuy on jitter, where he says that jitter is inaudible on well-designed DACs nowadays:
My personal view is jitter is a lot like THD. Once it's below a certain level it's safe to assume it's inaudible. The catch is it's harder to quantify jitter--especially in the analog domain such as the output of a DAC. So it's open to debate as to what's "low enough" and what measurement or criteria should be used.

And I believe, like so many other things in the audiophile world, much of what we read about jitter is pure marketing hype, audiophile myth, psychological bias (i.e. imagined differences), or genuine audible differences related to something else. I don't believe jitter is an audible problem in reasonably well designed products.
 
The goalpost may have shifted in the discussion. But I am wondering what the point of the overall discussion is.

A. There "can " or "cannot" be data loss from different types of equipment that reside pre dac in a data file playback chain.
B. There "can " or "cannot " be any audible difference between different types of equipment that reside pre dac in a data file playback chain.

Anyone who can hear these differences is least bothered by "A". They will continue to listen to and buy whatever they like.
Many a time, the differences heard may have nothing to do with the conclusion in "A". It could just be the manifold vagaries of digital gear electronics, with noise or other nasties getting passed on to the analogue circuits down the chain. Good design can reduce these issues.

I am not here for a battle, just pointing something out. I also cannot validate or explain this kind of phenomenon in electronic-design jargon to anyone. It is just something that has come up in conversations with friendly digital gear manufacturers, a couple of beers down. I can also hear these differences, beer or no beer.

I will leave you gentlemen to your rigorous banter.
 
All I can say Venkat is you have not heard a properly set up vinyl system. The bass is exemplary of vinyl. It’s better than any digital player I have heard. I have heard/owned the best of digital for over 25 years. I have heard enough stuff at studios. I have had sound engineers from the industry visiting my house. Nobody had a problem with bass in my vinyl set up.

And for your information Lata’s voice is not sibilant at all. I have heard some of her masters. Her voice is very sweet. It’s just that digital is unable to handle her high pitched voice.
Indeed. Bass is not a problem. Lower frequencies were reduced using the RIAA equalization curve so that the grooves became smaller, allowing more songs to fit on the vinyl. During playback the reverse is done. The RIAA curve is actually brilliant.

Records and record-player needles are sensitive, so much so that even the slightest amount of dust or hair accumulated on either will cause high-frequency hiss and the occasional pop. Boosting high-frequency volumes would also seem to raise the level of these hisses and crackles relative to the music. So why do it?

Because when you later invert the RIAA equalization curve in the electronics of the turntable, you end up reducing the volume of these noises, providing an even clearer listening experience. Let me make more sense of this for you.

No matter what audio is being played, the hiss and clicks from the record surface sit at the same volume. By boosting the high frequencies of the music on the record itself, the music is lifted well above these noises. Then during playback, when the high frequencies are cut back down to their correct volumes, the surface noises (which enter at the stylus and were never boosted along with the music) are cut down with them, ending up quieter than they would have been without the equalization curve. The net effect is a better signal-to-noise ratio.
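For the curious, the playback half of that curve follows directly from its three published time constants (3180 µs, 318 µs, 75 µs). A small sketch, normalized to 0 dB at 1 kHz, reproduces the familiar numbers: roughly +19 dB of bass restoration at 20 Hz and -19 dB of treble cut at 20 kHz:

```python
# RIAA playback de-emphasis computed from its standard time constants.
import numpy as np

T1, T2, T3 = 3180e-6, 318e-6, 75e-6   # seconds

def riaa_playback_db(f, ref_hz=1_000.0):
    def h(freq):
        s = 2j * np.pi * freq
        return (1 + s * T2) / ((1 + s * T1) * (1 + s * T3))
    return 20 * np.log10(np.abs(h(f) / h(ref_hz)))

for f in [20, 100, 1_000, 10_000, 20_000]:
    print(f"{f:>6} Hz: {riaa_playback_db(f):+6.1f} dB")
# ~+19.3 dB at 20 Hz, 0 dB at 1 kHz, ~-19.6 dB at 20 kHz
```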
 