humblebee
Active Member
So, we know that lossy audio formats (MP3, AAC, etc.) encode limited data and the rest is discarded.
How much data is encoded is given by the equation:
Bitrate = channels x bit depth x sampling rate (twice the bandwidth)
So,
320 kbps = 2 x 16 x 10 kHz
And a 10 kHz sampling rate will only encode frequencies up to 5000 Hz (Nyquist).
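To sanity-check that arithmetic, here's a trivial sketch (just the plain PCM math from the formula above, nothing MP3-specific):

```python
# PCM bitrate arithmetic from the formula above.
channels = 2
bit_depth = 16            # bits per sample
sample_rate = 10_000      # Hz

bitrate = channels * bit_depth * sample_rate
print(bitrate)            # 320000 bits/s = 320 kbps
print(sample_rate / 2)    # Nyquist limit: 5000.0 Hz
```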
Now, my question is:
When we play back audio in, say, Winamp or PotPlayer, the software shows us 44.1 kHz. This means the audio being rendered can contain frequencies up to 22.05 kHz.
So what gets filled in from 5 kHz to 22.05 kHz?
Who decides whether anything gets filled in? The decoder API? Like the FFmpeg decoder for MP3?
I mean, a decoder could fill in anything there if it wanted to.
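For reference, one way to actually look at what a decoder puts out: decode the MP3 to WAV first (e.g. `ffmpeg -i input.mp3 decoded.wav`), then inspect the spectrum of the result. A rough sketch, assuming 16-bit PCM output and the placeholder filename `decoded.wav`:

```python
# Inspect the spectrum of a decoded file to see where the energy actually stops.
import wave
import numpy as np

with wave.open("decoded.wav", "rb") as w:
    rate = w.getframerate()      # sample rate reported by the container, e.g. 44100
    channels = w.getnchannels()
    raw = w.readframes(w.getnframes())

# Assumes 16-bit PCM; samples are interleaved, so keep only the first channel.
samples = np.frombuffer(raw, dtype=np.int16)[::channels].astype(np.float64)

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(samples.size, d=1.0 / rate)

# Highest frequency bin with amplitude within 60 dB of the peak.
threshold = spectrum.max() / 1000.0
print("energy up to ~%.0f Hz" % freqs[spectrum > threshold].max())
```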