Rockbox mail archive
Subject: Re: buffer overflow in dsp.c when playing low-frequency stereo files
From: Frederic Devernay <frederic.devernay_at_m4x.org>
Date: Wed, 10 Aug 2005 19:22:55 +0200
After spending hours on the dsp code, I understand that
dsp_input_size(output_size) should return a buffer size which, after being
processed, _fits_ into output_size. The real output size is returned by
dsp_process() anyway, so who cares about leaving a few bytes of the output
buffer unused? Writing too many bytes, on the other hand, leads to a buffer
overflow. The safe way is to round towards zero.
And in fact, astonishingly, the number of input bytes doesn't change with the
phase. The exact computation would be (note the parentheses around the mask,
since & binds looser than + in C):
size = ((resample_data.phase & 0xffff) +
        (unsigned long)size * resample_data.delta) >> 16;
But since only the first 16 bits of the phase are used, this simplifies to:
size = ((unsigned long)size * resample_data.delta) >> 16;
Which is the expression I use. QED
In contrast, the number of output bytes for a given input size does change
with the phase, but there I simply preferred to return an upper bound.
Magnus Holmgren wrote:
> Frederic Devernay wrote:
>> In fact, I could rewrite dsp_input_size() so that it gives the _exact_
>> input buffer size, using the pre-computed delta used for upsampling
>> AND downsampling. Of course, I made sure there's no yield() between
>> this and dsp_process(), so that the value cannot change.
> You are aware that the number of bytes written by dsp_process varies a
> little, depending on how the delta "moves" within the current chunk of
> data (i.e., the value of phase when dsp_process is called)? So you
> should return a maximum input size...
Received on 2005-08-10