Silly but fundamental question - kindly illuminate

I would assume the app can give either instantaneous or consolidated values. In a mobile app, even the definition of 'instantaneous' would be some sort of consolidation, e.g. over 1 s, else we may not be able to make sense of it.
If it gives instantaneous values, then as per our earlier conclusion (based on @sandeepsasi’s illustrations), it should only be a single frequency at any point in time. If we continue to follow that conclusion, then it must be giving consolidated values.
Apparently sound sticks in our ear for 0.1 sec.
Yeah, it’s that sticking part… I’d imagine that since the signals keep coming continuously, within that 0.1 s period that the sound sticks, it keeps accumulating in the brain. So essentially we have the cumulative signals (composed of the infinite singular frequencies produced by the cone) from the past 0.1 s overlaid on each other. This (constantly changing) overlay/interference is probably what helps the brain perceive the music? Again, temporality seems to play a role.

Would it also mean that another animal, whose sticky period is different (higher or lower) than ours (0.1 s), would be listening to different music even if it comes from the same speaker, for this reason? 😃
 
The thread is an example of the (apparently) most innocuous, naive question leading to complicated answers, the apotheosis of which has often led to pathbreaking scientific enquiries.
I was stumped the other day by my six-year-old asking why the sky is always blue.
Thanks OP.
 
I was stumped the other day by my six-year-old asking why the sky is always blue.
Perhaps you could introduce his/her curious mind to this site then. I loved how they build up to the answer, in the process also answering hundreds of consequent questions that will erupt in an inquisitive mind if provided just a single sentence answer.

 
If it gives instantaneous values, then as per our earlier conclusion (based on @sandeepsasi’s illustrations), it should only be a single frequency at any point in time. If we continue to follow that conclusion, then it must be giving consolidated values.

True that! Ergo it has to be consolidated. Or perhaps our eyes cannot recognise faster events due to their own visual sticky factor 😁, or the app itself has a lag which cannot be represented in such timelines.
Yeah, it’s that sticking part… I’d imagine that since the signals keep coming continuously, within that 0.1 s period that the sound sticks, it keeps accumulating in the brain. So essentially we have the cumulative signals (composed of the infinite singular frequencies produced by the cone) from the past 0.1 s overlaid on each other. This (constantly changing) overlay/interference is probably what helps the brain perceive the music? Again, temporality seems to play a role.
Since I am sadly beyond my knowledge limits, I will defer!
Would it also mean that another animal, whose sticky period is different (higher or lower) than ours (0.1 s), would be listening to different music even if it comes from the same speaker, for this reason? 😃
I don’t know about the 0.1 sec, but the sensitivity to frequencies, and which ones get filtered out, does differ, hence the tonality will differ (it differs to a smaller extent from person to person too).
E.g. tiger roars go down to 12 Hz, birds like pigeons can hear down to 0.05 Hz, and a dolphin hears up to 150 kHz!

I’m sure the same sound is heard very differently by them just due to that. If dolphins and pigeons together were the “ruling” species, then speakers and music would be recorded with a very different technology 😁😁
 
The thread is an example of the (apparently) most innocuous, naive question leading to complicated answers, the apotheosis of which has often led to pathbreaking scientific enquiries.
I was stumped the other day by my six-year-old asking why the sky is always blue.
Thanks OP.
That’s the beauty of science, and anybody who brings in facts to disprove without trying to prove is not really a scientist at heart.

There’s the anecdote of one of Newton’s opponents proclaiming (about his theory of gravity) that there was nothing more to learn and that science had already found everything there is, which was the trigger for him to invent calculus to prove he was right.
 
I don’t know about the 0.1 sec, but the sensitivity to frequencies, and which ones get filtered out, does differ, hence the tonality will differ (it differs to a smaller extent from person to person too).
E.g. tiger roars go down to 12 Hz, birds like pigeons can hear down to 0.05 Hz, and a dolphin hears up to 150 kHz!
Of course there are other reasons for animals hearing differently from us, including differing audible frequency range. But I was specifically referring to the temporality/0.1 s stickiness we were discussing. Hence I’d added ‘for this reason’.

If dolphins and pigeons together were the “ruling” species, then speakers and music would be recorded with a very different technology
Lol, yeah!
 
Of course there are other reasons for animals hearing differently from us, including differing audible frequency range. But I was specifically referring to the temporality/0.1 s stickiness we were discussing. Hence I’d added ‘for this reason’.


Lol, yeah!
Not sure if other animals use sound for enjoyment (other than perhaps whales and dolphins), as it’s mostly for basic communication, and hence this may not be something which developed to that extent.
It’s perhaps in frequency range/sensitivity that several are so well developed.
 
If it gives instantaneous values, then as per our earlier conclusion (based on @sandeepsasi’s illustrations), it should only be a single frequency at any point in time. If we continue to follow that conclusion, then it must be giving consolidated values.

If you follow the earlier graphs where multiple waves (500 Hz, 1000 Hz, etc.) are shown, at each instant the output amplitude is a cumulative value of the multiple component frequencies. The speaker has to output only that single value in that instant. Same with our ear. Of course, as you state subsequently, our ear/brain has a sticky period, and it accumulates and makes us feel the continuum rather than discrete sound levels.
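The point above can be sketched numerically, using the 500 Hz and 1000 Hz tones from the earlier graphs (the amplitudes and sample rate here are arbitrary choices for illustration): at every instant the composite wave, and hence the cone's position, is just one number, the sum of the component sine waves at that instant.

```python
import numpy as np

# At each instant, the cone's displacement is the SUM of all component
# sine waves -- a single number per instant, even though the signal
# "contains" many frequencies. (Amplitudes and sample rate are arbitrary.)
sample_rate = 8000                        # samples per second
t = np.arange(0, 0.01, 1 / sample_rate)   # 10 ms of time

tone_500 = np.sin(2 * np.pi * 500 * t)          # 500 Hz component
tone_1000 = 0.5 * np.sin(2 * np.pi * 1000 * t)  # 1000 Hz component

composite = tone_500 + tone_1000   # the single wave the cone traces out

# At any one sample instant the cone is at exactly one position:
instant = 7
print(composite[instant] == tone_500[instant] + tone_1000[instant])  # -> True
```

The composite array is what actually drives the cone: one displacement value per sample, with the "multiple frequencies" only recoverable from how those values change over time.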
 
Is that why the part of the brain that receives and processes the audio signal is also called the ‘temporal’ lobe? 😊 Temporal = to do with time. Perhaps the brain, as you conjecture, senses/interprets the audio not at any single point in time (like, say, a meter would measure the frequency), but through the sequence over time (even if milliseconds)? Let’s hope @Analogous and other medicos and medical scientists in the forum enlighten us on this.

Having said that about the ‘meter’ above, it made me think further. The app on my mobile - how does it give me a frequency graph (at any moment) if the cone has combined all the tones into a complex wave (which should have only one frequency at any instant)?

Or do we get a clue here? The mobile’s mic and other related hardware and software would also have a least count when it comes to time… so what we see as the frequency graph is the sum total of the frequencies of the unified wave during that least-count time? And if so, could something similar be happening within the least count that our hardware and software (the brain) have?
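That 'least count' conjecture matches how spectrum-analyser apps generally work: they take a short window of the single composite wave and run a Fourier transform over it, recovering the component frequencies from the shape of the wave over time rather than from any single instant. A minimal sketch (the window length and tones are my assumptions, not anything a particular app documents):

```python
import numpy as np

# Build a composite wave from two known tones, then recover them by
# analysing a short time window (the "least count") with an FFT.
sample_rate = 8000
window = 0.1                                # 100 ms analysis window
t = np.arange(0, window, 1 / sample_rate)

composite = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(composite))
freqs = np.fft.rfftfreq(len(composite), 1 / sample_rate)

# The two strongest bins sit at the original component frequencies:
top_two = sorted(int(round(f)) for f in freqs[np.argsort(spectrum)[-2:]])
print(top_two)  # -> [500, 1000]
```

Note the trade-off this implies: a longer window gives finer frequency resolution but a coarser notion of "now", which is exactly the instantaneous-versus-consolidated tension discussed earlier in the thread.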
Hi @sachinchavan 15865. Good to see you back!
This helps to illustrate the basic signal path of auditory pathways in simple terms.
It does not cover the deeper points raised by you and @arj

This interesting research explores the evolutionary aspect of our ability to appreciate and enjoy music.

True, that in bold. The shape of our ear actually helps in height perception of sound, which includes the external ear cartilage and the internal acoustics. All of that helps in the collection and perception of sound and its conversion into electrical signals.

Not sure of how much or how little the brain does. The way I understood it, the filtration and conversion is done by the ear, but the brain is the only one which processes/discards/amplifies and in some cases reconstructs the signals.

We have some medical folks here; maybe they can educate us... @Analogous ?
I would agree.
It may help to look at the process as two stages.
The external and middle ear - The mechanical part
The inner ear and the brain - The electrochemical part
The first section is physically activated by sound waves (20 Hz to 20 kHz in young people’s ears).
The inner ear and brain processes are more electrochemical. The auditory cortex has connections with several other parts of the brain that are responsible for memory, pleasure, emotions, intellect and more.
With the widespread availability of MRI there is a large amount of new discoveries on different parts of the brain that are activated when test subjects listen to music, speech and other sounds.
 
I always wondered how a speaker (driver) makes sound. No, I am not looking for an explanation of the electromagnetic mechanism behind it, just how the cone makes the sound. I mean, when you look at the frequency graph of any track being played, you see that hundreds of (technically an infinite number of) frequencies are amplified at any point in time, producing the various vocals, instruments and other artefacts on the track. My question is, how does the seemingly simple, uniform cone produce all these frequencies? Do different parts of it vibrate at different frequencies at the same time (something difficult to visualise)? If so, how does that happen in what seems to be a continuous cone? If not, then how are the different sound frequencies produced simultaneously?

I know that the answer, when one of you gives it, might make me look stupid for asking this question. 😊 But I had to ask it nevertheless! There might be some other techno-novices like me who have wondered the same sometime and would benefit from your answers.
Just to add my 2 cents to the discussion so far.
There is nothing fundamentally different between how a speaker makes sounds vs any other natural mechanism of producing sound.
E.g. our vocal cords don’t create a single-tone sine wave - even basic speech is a multi-frequency complex waveform, created by our relatively low-rigidity vocal cords. A speaker diaphragm, with its greater rigidity, is actually better suited to producing a multi-tonal response.

Yet trained artistes can coax their vocal cords to produce a wider frequency spectrum than the average Joe (so can a lot of songbirds).

Taking it back to a typical audio setup, we have three such objects, one each for bass, midrange and high frequencies…
Perhaps the best way to imagine it could be a travelling troupe of musicians with an elephant, a human (or a parrot) and a nightingale working in unison :)

The only reason to bring in this parallel is to show how a single vibrating object has been capable of reproducing a complex tonal response, all seemingly at the same time, since the dawn of time.
 
Just to add my 2 cents to the discussion so far.
There is nothing fundamentally different between how a speaker makes sounds vs any other natural mechanism of producing sound.
E.g. our vocal cords don’t create a single-tone sine wave - even basic speech is a multi-frequency complex waveform, created by our relatively low-rigidity vocal cords. A speaker diaphragm, with its greater rigidity, is actually better suited to producing a multi-tonal response.

Yet trained artistes can coax their vocal cords to produce a wider frequency spectrum than the average Joe (so can a lot of songbirds).

Taking it back to a typical audio setup, we have three such objects, one each for bass, midrange and high frequencies…
Perhaps the best way to imagine it could be a travelling troupe of musicians with an elephant, a human (or a parrot) and a nightingale working in unison :)

The only reason to bring in this parallel is to show how a single vibrating object has been capable of reproducing a complex tonal response, all seemingly at the same time, since the dawn of time.
@superczar, thanks for pitching in. But it doesn’t explain ‘how’ the single-piece speaker cone (or the vocal cords, for that matter) produces the multi-tonal sound wave (vibrations at multiple frequencies at the same time). Anyway, @sandeepsasi’s technical explanation followed by the discussion with @arj in this thread has thrown some illumination, and the rest has been filled with intuitive hypotheses. I am satisfied with that for now.
 
Here is a well-articulated explanation of the mechanisms involved in human voice production.

For a bit more technical understanding: (this one goes a bit beyond the scope of this discussion)
 
An early excerpt from this interesting review/article “I used to work regularly with old reel-to-reel films, and the first time one of them snapped I was aghast. But I soon learned that it was no disaster, because you can use the same clever little machine to mend broken filmstrips as you do to cut and splice film in the traditional editing process. It’s called a guillotine splicer, and it looks like a cross between a Sellotape dispenser and an embossing stamp. You take the broken film, cut out the frame exactly where the break occurred, then place the two ends of the strip together in the machine. Strong, clear tape joins the ends together without a gap, and the stamper presses new sprocket perforations through the tape. The film is then ready to be played again. Only one frame has been lost; watch it at the classical rate of 24 frames per second and you won’t be able to tell anything is missing.
You won’t be able to tell with the eye, that is. But after I had done this a few times, I learned something new about the perceptiveness of the ear. When you cut out the broken frame, you also lose a frame’s worth of the magnetic strip that holds the soundtrack. And while a missing 1/24 of a second is undetectable to the eye, it turns out that 1/24 of a second in lost sound is impossible to miss: there is a tic in the music, a skip in the background noise, or a word that has a bite taken out of it. You can’t see the lost frame, but you can hear it. At 24 frames per second, the eye loses track and registers seamless animation, but the ear is counting time. This extraordinary sensitivity perhaps explains why the decision of a young rap producer, hunched over a drum machine in his mother’s basement in Detroit in the early 1990s, to alter the timing of a snare drum hit by just 5/192 of a musical measure was enough to change the sound of modern music.”

Basement Beats
Francis Gooding
Dilla Time: The Life and Afterlife of J Dilla, the Hip-Hop Producer Who Reinvented Rhythm by Dan Charnas. Swift, 458 pp., £20, April
Lacing up a Steenbeck editing desk to watch a reel of 35mm film is a delicate process. With the reel placed flat on the left-hand friction plate, you thread the filmstrip carefully around a pinball-like maze of rollers and sprockets. Arms clack into place, holding the strip in front of the bulb while a mirrored prism reflects the illuminated image onto the small viewing screen. Finally, the strip is teased around another course of rollers and fixed to the empty receiving reel. Once threaded, the film can then be started and stopped as needed by using a lever. But sometimes, if the film is old or damaged or the reel is very heavy, the strain will be too great and the friable celluloid suddenly snaps.
I used to work regularly with old reel-to-reel films, and the first time one of them snapped I was aghast. But I soon learned that it was no disaster, because you can use the same clever little machine to mend broken filmstrips as you do to cut and splice film in the traditional editing process. It’s called a guillotine splicer, and it looks like a cross between a Sellotape dispenser and an embossing stamp. You take the broken film, cut out the frame exactly where the break occurred, then place the two ends of the strip together in the machine. Strong, clear tape joins the ends together without a gap, and the stamper presses new sprocket perforations through the tape. The film is then ready to be played again. Only one frame has been lost; watch it at the classical rate of 24 frames per second and you won’t be able to tell anything is missing.
You won’t be able to tell with the eye, that is. But after I had done this a few times, I learned something new about the perceptiveness of the ear. When you cut out the broken frame, you also lose a frame’s worth of the magnetic strip that holds the soundtrack. And while a missing 1/24 of a second is undetectable to the eye, it turns out that 1/24 of a second in lost sound is impossible to miss: there is a tic in the music, a skip in the background noise, or a word that has a bite taken out of it. You can’t see the lost frame, but you can hear it. At 24 frames per second, the eye loses track and registers seamless animation, but the ear is counting time. This extraordinary sensitivity perhaps explains why the decision of a young rap producer, hunched over a drum machine in his mother’s basement in Detroit in the early 1990s, to alter the timing of a snare drum hit by just 5/192 of a musical measure was enough to change the sound of modern music.
Almost every notable development in popular music has been built on an innovation in rhythm. Sometimes a musical style arrives on the wings of a brand-new beat: think of ska, the early days of rock’n’roll or Tony Allen’s afrobeat rhythms. Sometimes what’s involved is a novel adaptation of an existing beat, a shift that makes it faster or combines it with another beat – as jungle did in setting double-time funk loops over dub foundations – or which slows it down to find new spaces in it, as rocksteady did with ska or kwaito did with house. All manner of scene-specific rhythmic tweaks, minute differences of tempo and emphasis, organic evolution and individual breakthrough can be found in between. Very few of them, however, could be spun as a reinvention of rhythm itself.
[Image: J Dilla, courtesy of B+]
Yet this is the claim that Dan Charnas makes for the music of James Dewitt Yancey, aka Jay Dee, aka J Dilla – the most mythologised rap producer of all time, who died in 2006 at the age of 32. And the claim rests on that tiny change in the timing of a snare drum, a deviation of about 65 milliseconds from a strictly measured beat. ‘That small adjustment,’ Charnas writes, ‘was big enough to cause a cascade of rhythmic consequences.’ As his fellow musicians picked up on what he was doing, everyone from underground rap crews to Michael and Janet Jackson wanted music that had the distinctive feel of what Charnas calls ‘Dilla Time’. Revered hip-hop producers, fearing they’d been left behind, went back to the drawing board; drummers relearned their instrument; bass players and keyboardists had to rethink their phrasing and chord progressions; critics tried and are still trying to puzzle out the mysteries of the J Dilla sound.
Dilla Time opens with an anecdote. Sometime in 1994, Ahmir ‘Questlove’ Thompson, drummer with Philadelphia outfit The Roots, was leaving a show in North Carolina. The Roots are now famous – they’re the backing band on the Tonight Show with Jimmy Fallon – but then they were touring as support for The Pharcyde, who were promoting their second album, Labcabincalifornia. Questlove had a college radio interview to do, so he hadn’t hung around to listen to the headliners. As he climbed into his car, however, and The Pharcyde began their set, he heard something strange: the song they were performing didn’t seem to have a regular drumbeat. The muffled sounds coming from inside the club ‘sounded … wrong’. The kick drums were hitting in unpredictable places, the snares seemed to be dragging behind the beat. And why wasn’t the rhythm pattern repeating itself? For a drummer like Questlove, who had learned to play in a manner that emulated the precise regularity of the hip-hop rhythms generated by drum machines, it was perturbing. He went back inside to find out what was going on. What he had heard from the car park was a new song, ‘Bullshit’, which had been made for The Pharcyde by Yancey, then an unknown kid from Detroit.
Before getting involved with The Pharcyde, Yancey had mostly been working with childhood friends and local rappers in his hometown. Born in Detroit in 1974, he was the eldest of four children. His mother, Maureen Hayes, had been trained as a classical singer, but had long since put away thoughts of singing professionally. His father, Beverly Dewitt Yancey, worked at the Ford plant; he was a jazz bassist, who moonlighted as a songwriter for some of the city’s many soul groups. Despite a few credits here and there, he had found no real success, but he encouraged Maureen to sing, and the Yancey house in Conant Gardens, a historically middle-class Black suburb, was filled with music.
James Dewitt Yancey was a quiet child, with a stutter. He played drums, cello and saxophone at school, and sang at home and in church. He was also an obsessive listener, scouring the record collections of his parents and relatives as he grew up, building a mental library of artists, recordings and sounds. By his early teenage years he was DJing at school parties under the name Silk, rapping in a crew with schoolfriends and cousins, and experimenting with hip-hop beats on a set-up jerry-rigged from dismantled and reassembled cassette decks. A local producer with his own studio, Joseph Anthony Fiddler, known as Amp Fiddler, let Yancey stop by to use his equipment; he taught him how to use multi-tracked reel-to-reel tapes and sampling drum machines. Yancey began to produce more sophisticated music, and to record his friends. Amp Fiddler, listening to the tracks he was making and noting the originality with which he approached both sounds and equipment, was the first of many professional musicians to realise that Yancey was something special. He also realised that Yancey was cutting class to make music in his studio, so he visited the boy’s parents to reassure them that what their son was doing was something creative and useful. Maureen had wanted him to attend a prestigious aviation college; after a few years, he’d stubbornly demanded to be allowed to join his friends at the local high school. But by that time he was already spending his days and nights immersed in music, and soon dropped out of school altogether. Instead, he sequestered himself in the basement of the family home with his drum machines and turntables.
It didn’t take long for his music – spare, heavy and inflected with an unplaceable musical sensibility – to find its way to discerning ears. Amp Fiddler arranged a meeting with Q-Tip of rap pioneers A Tribe Called Quest. Q-Tip listened to the demo tape, thought about what he was hearing, and played it to others. When Pete Rock, one of New York’s star producers, heard the tape his first thought was ‘I am out of a job.’ Dave ‘Trugoy’ Jolicoeur of De La Soul told Q-Tip that Yancey’s beats were like Tip’s, ‘but better’. Q-Tip ushered Yancey into the highest echelons of East Coast rap. Over the next few years, his music played a central role in defining the hip-hop sound of the late 1990s, appearing on albums by artists including De La Soul, Common, Busta Rhymes, Erykah Badu, The Roots and Mos Def. Meanwhile, his own group, Slum Village, in which he was joined by two old Detroit running mates, rappers R.L. ‘T3’ Altman III and Titus ‘Baatin’ Glover, was waiting in the wings.
It was a Slum Village demo tape that Q-Tip had listened to, and it was the music that Dilla was making for his own group that made him a local hero in Detroit as well as a cult figure among a select network of producers and artists. A cassette, Fan-Tas-Tic Vol. I, was released in 1997. ‘I’ve NEVER secluded myself more for any album,’ Questlove wrote later. ‘I can recall every landmark record I’ve ever purchased, from Songs in the Key of Life to Off the Wall to It Takes a Nation of Millions … But this shit? WHOOOOOOOOO.’ Fan-Tas-Tic had taken Dilla just three days to make.
The tape quickly became the tonic note at the Electric Lady studios in Greenwich Village, originally set up by Jimi Hendrix, where the singer D’Angelo, Questlove and a cast of other top musicians were recording the music that would become D’Angelo’s platinum-selling Voodoo. Though Dilla himself was present only intermittently, it was in these sessions that his sound began to leak into live playing. Listening to Voodoo, you can hear the direct effect he was having on musicians, and experience the loping, disorientating Dilla feel in music played on instruments. The Welsh bass player Pino Palladino, who has worked with everyone from Phil Collins, Eric Clapton and The Who to Nine Inch Nails, was at first surprised by D’Angelo’s request that he play so severely behind the beat, but came to consider the sessions ‘transformative and liberating’ according to Charnas. At Electric Lady, Questlove – who had taken to calling Dilla ‘the God’ – learned to splinter his snare sound into a ragged double hit and to rush it forward by milliseconds. Everything loosened up. Unglued from the metronome, Voodoo found the key to the hidden soundworld that Dilla’s music had revealed, a world fractionally but decisively divergent from ours.
Dilla Time is in large part a straightforward biography, whose subject doesn’t come out of it altogether sympathetically. Dilla had a notoriously short temper, could treat the women in his life badly and was so focused on his own path that everyone around him, even his oldest friends and collaborators, could be summarily dropped if he felt they had become a hindrance. Yet he had no shortage of friends and lovers, and no one speaks ill of him, on record at least. His musical pre-eminence sweeps all else aside. Charnas’s book isn’t only, or even chiefly, about the complexities of the man, though it makes room for them. It is mostly about the complexities of his music. Charnas doesn’t get too technical, but his analysis of those basement-crafted beats is incisive. Though Dilla’s music was recognised as special, it was not well understood – there was just something about his sound, something very hard to pin down. Aside from his brilliant ear and unmatched facility for disassembling a sample, his music seemed to contain a new kind of space, an unresolved drift, that was at once compelling and disconcerting. A Dilla beat never quite settles, or lets you settle. There are strange tensions at work, effervescing at the border between conscious perception and bodily feel: a sense of being held in uneasy suspension, like that moment at the top of a swing when the upward momentum fails, the weight comes off the chains, and you seem to float for a fraction of a second before gravity pulls you back down. Dilla found a way to hold everything there: not weightlessly, but with the expectation of the fall intact. A perfectly balanced imbalance.
What’s usually said is that Dilla turned off the timing functions on his drum machine and played the drum parts by hand using the trigger pads, thereby giving his music a ‘loose’ or ‘human’ feel. He certainly did this on occasion, but that isn’t the whole story. As a hip-hop producer, Dilla worked principally with samples from old records, processed with a drum machine, the Akai MPC 3000. For Charnas, what he was doing with it, and what made his music new, comes down to the differences between regular musical time and swing time, and the way Dilla exploited the in-built capacities of the MPC to play them off against each other. The result was ‘Dilla Time’.
Musical time in 20th-century popular music comes in two basic flavours: straight and swung. Straight time, the principal time of the European academy, counts a direct beat, with no deviation: 1, 2, 3, 4. Swing gives musical time a different feel, closer to the variable, intimate rhythms of the body. In swing time, the syncopations and ghost polyrhythms in jazz and blues work against the strict pulse of straight time. Every second beat is slightly shortened or lengthened to the taste of the performer, and the music walks with a bop (think of the tssh-tck-a, tssh-tck-a cymbal ride of jazz time). Via the music of Black America, the openness and immediacy of swing time came to dominate modern pop. Everybody understands the swing feel, and Charnas demonstrates its ubiquity with a neat example, the shift in rhythm between the first faux-operatic section of Queen’s ‘Bohemian Rhapsody’ and the second rock section: ‘Begin listening at the 3:40 mark … We are still in straight time at 3:57 when the chorus sings the words “Beelzebub has a devil put aside for me.” They will repeat the words “for me” two times. By the end of the second repetition, at 4:05, we are in swing time.’
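The straight-versus-swing distinction described above can be made concrete with a few lines of arithmetic. In straight time, eighth notes split each beat 50/50; in a common 2:1 'triplet' swing (the ratio here is an illustrative assumption; as the review notes, players vary it by taste), the first of each pair is held for two thirds of the beat and the offbeat arrives late:

```python
# Straight vs swung eighth-note onsets over one 4/4 bar, in beat units.
# The 2:1 triplet swing ratio is an illustrative assumption.
beats = 4
straight = [i / 2 for i in range(beats * 2)]   # offbeats at the halfway point

swung = []
for i in range(beats * 2):
    beat, offbeat = divmod(i, 2)
    swung.append(beat + (2 / 3 if offbeat else 0.0))  # offbeats land late

print(straight)  # -> [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
print(swung)     # offbeats at 2/3 of each beat: 0.0, 0.667, 1.0, 1.667, ...
```

Every downbeat stays in place in both lists; only the offbeats shift, which is why swing reads as a feel laid over the same underlying pulse rather than a different tempo.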
The first drum machines, by contrast, played very straight: a hard-edged machine time, marked with exact precision. The early years of hip-hop and electro deployed this harshness to great effect, making use of the robotic timing to dream inner-city electric dreams. But even before this, at the end of the 1970s, a young musician called Roger Linn had designed a machine that could emulate the natural swing of a live drummer. Linn wanted to produce demo tracks to pitch to other artists; he could play most instruments himself, but not drums. Feeling that the rhythms produced by commercially available drum machines were too stiff, he decided to design his own, the LM-1, and he worked out how to make it swing. By making micro-adjustments to the clock that triggered the drum sounds, he could produce grooves that were slightly off straight time, as a drummer might play them – grooves that sounded more natural.
Drum machine companies were soon copying the ‘swing’ functionality of the LM-1. And when the electronics manufacturer Akai decided to enter the market, they asked Linn to design them an even more flexible machine. By 1987, he had created the MPC 60, forerunner of Dilla’s MPC 3000. On the LM-1, as on other popular drum machines, the ‘swing’ function worked as an overall command: you chose the degree of swing, and all the drum elements would move together by the same increment. But the MPC was more sophisticated: each individual element – each ‘track’ – had its own clock, so its degree of swing could be adjusted independently. Linn also added a new feature, ‘shift timing’, by which each track could be moved by tiny, precise intervals in relation to the others. All these features were no doubt intended to enable producers to make music that sounded more and more like live drumming. But Dilla saw something else hidden in the functionalities of the MPC – the potential ‘to make a kind of rhythm that no drummer had ever made before’.
Dilla is often said to have reintroduced error into the sound of hip-hop. But that isn’t really the case. His technique wasn’t aleatory, it was precise. He used the MPC’s swing and shift functions to pull some of the drum tracks slightly out of position, into swung time, while leaving other elements of the track in straight time. Snare drums in rap are expected to arrive sharp on the second and fourth beats of the bar. Dilla moved them fractionally forward, so they sounded rushed; he let bass kicks lag and pulled basslines far behind the beat. He kept other parts of the track in strict time, setting up sustained, swirling conflicts between elements. It may not sound like much, but it was revolutionary. What Charnas calls ‘Dilla Time’ is ‘the deliberate juxtaposition of multiple expressions of straight and swing time simultaneously, a conscious cultivation of rhythmic friction for maximum musicality and maximum surprise’. This is nothing like a human, live instrument sound. Drummers don’t do it (not unless they’ve been studying Dilla), and it can’t be achieved by accident.
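As a sanity check on the figures quoted earlier in the review (a shift of 5/192 of a musical measure, described as roughly 65 milliseconds), here is the arithmetic, assuming a 4/4 bar at about 96 BPM; the tempo is my assumption, since the review does not state one:

```python
# Convert "5/192 of a measure" into milliseconds.
# Assumption: 4/4 time at 96 BPM (tempo not stated in the review).
bpm = 96
beats_per_measure = 4

measure_seconds = beats_per_measure * 60 / bpm   # 2.5 s per measure
shift_ms = measure_seconds * 5 / 192 * 1000      # the snare shift

print(round(shift_ms, 1))  # -> 65.1
```

At slower tempos the same 5/192 fraction stretches to a larger absolute delay, which is one reason a fixed grid offset feels different from track to track.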
Dilla rarely gave interviews and so, like King Tubby, another tight-lipped pioneer of music made via machines, he left no account of exactly why or how he came to his innovations. The most we have is a simple assertion: ‘This is my natural rhythm. It’s how I bob my head.’ Detroiters like Dilla, Charnas suggests, ‘had a natural affinity for unnatural sounds’, something that reached back at least as far as Berry Gordy writing the first Motown hits to the industrial rhythms of the Ford production line. And as the age of electronic music dawned the city’s Black club scene had also produced techno, one of the hardest of dance music styles – futuristic, thumping, edged with silver. In some ways it makes perfect sense that Dilla would have used new digital technologies to produce music that didn’t sound like anything a human drummer would make.
But Dilla’s wasn’t a music of pounding regular rhythms, clocked in on the production line. It was music made in the ruins of Detroit – a city of vacant lots, decayed industrial zones, depopulation and poverty. It is a machine sound, but from an out-of-kilter machine, loping its way through a ghost town. Berry Gordy’s super-productive hit factory belonged to the Detroit of Ford, General Motors and Chrysler; Dilla’s uneasy, post-Fordist music was made by digitally splicing fragments of old records, themselves mass-produced industrial products which by the late 1990s were considered as defunct as Detroit’s car factories.
And in so far as Dilla’s music is a body sound – the way he bobbed his head – perhaps it is the sound of a body that was different: the sound of a kid who had learned to master a stutter, then a volatile young man delivering coarse raps in which the stress fell in unexpected places, and finally a mature musician who felt that he inhabited the world with a time signature slightly different from everyone else’s. Dilla died in Los Angeles in 2006 from complications arising from TTP (thrombotic thrombocytopenic purpura), a rare blood disorder that causes uncontrolled clotting and can lead to organ failure and death, likely triggered by the auto-immune disease lupus. His body had an internal conflict of its own; his fingers used to blister and bleed if he spent too long punching at the pads of his drum machines.
The death of James Dewitt Yancey caused an enormous outpouring of sorrow. No other rap producer has had such an influence on the way that modern music is played, and his status in hip-hop is sainted. Orchestras and jazz groups perform his works; universities and music conservatories run courses on him (Charnas teaches one). The National Museum of African American History and Culture put his MPC 3000 on display. His final work, Donuts, was released three days before his death – forty-five minutes of instrumental beats, segued together without rapping. In essence, it was little more than a glorified version of the kind of demo tape Dilla sent to rappers as a matter of course, so they could choose backing tracks. But presented like this, the magic of his workaday practice was clear to all; Donuts let a wider audience in on the quicksilver brilliance that Q-Tip and Questlove had recognised more than a decade earlier. What Charnas doesn’t explain is quite why all this happened. Why did Dilla’s music – a lot of it is quite strange music, not at all straightforwardly appealing – have such a profound effect on people?
Rhythm is mysterious and powerful. It affects the body in complex ways, from causing your head to bob to inducing trance and possession. It can change the speed of your heart. The dancing body feels patterns that would take a musician a lifetime to learn to play. Intervening in the fundamentals of rhythm is no small thing. Dilla opened popular music up to unknown pleasures, working in the minute spaces that the ear and body sense with obscure sureness, and building powerful microrhythms at the smallest scales. ‘His beats held the DNA of hip-hop,’ De La Soul said. They were right: a hidden dimension of sound, rhythm and feeling, coiled inside a drum machine, revealed by clocking in the snare a fraction of a second early.
Francis Gooding is a staff writer at the LRB.
 