Subject: Re: Buffering Explanation
From: Bryan Jacobs <no_at_landwarsin.asia>
Date: Sat, 18 Jul 2009 20:20:24 -0400
On Sun, 19 Jul 2009 01:30:09 +0200
Rafaël Carré <rafael.carre_at_gmail.com> wrote:
> Thanks for this document!
> I think it is very helpful to represent the current state of buffering
Good to know this is appreciated. I didn't try to document the whole
of the code - for instance, I didn't describe under what circumstances
shrink_buffer (or even shrink_handle) gets called, and so on. I just
tried to give enough of an overview that people would know what I was
talking about.
> > In the event that these are too controversial, we
> > can tone it down and eliminate the issue of fragmentation while
> > still serving Wavpack's needs by only allowing "chunky" allocations
> > to extend at buf_widx - this will mimic the behavior of one file
> > having the size of both the normal and correction files.
> I have a question : would the chunky allocations happen
> BUFFERING_DEFAULT_FILECHUNK bytes at a time ? (currently 32kB, but I
> think this should be lowered for some -short on memory- targets)
Maybe? You can choose different "default chunk sizes" for different
files/buffer sizes/days of the week. It doesn't have to be
consistent; it's a tunable variable.
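
To make that concrete, here's one way it could look - just a sketch,
assuming the MEMORYSIZE config macro (RAM size in MB) is available from
the target's config headers; the cutoff and the 8 kB value are made up:

    /* Sketch: pick the fill chunk size per target instead of
     * hard-coding 32 kB everywhere. */
    #if MEMORYSIZE <= 2
    #define BUFFERING_DEFAULT_FILECHUNK (1024*8)    /* low-memory targets */
    #else
    #define BUFFERING_DEFAULT_FILECHUNK (1024*32)   /* current default */
    #endif
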
> The down side you mentioned (codecs relying on reading small amounts
> of data repeatedly from the buffer) doesn't make sense to me, I would
> think a chunk would be de-allocated as soon as another chunk has been
> read for the same file.
> Here I suppose codecs never seek backwards in the bitstream, but I may
> be wrong since my knowledge in codecs bitstreams is very close to 0.
Some codecs are almost random access. The issue with small reads is
that currently codecs are guaranteed that there will be
BUFFERING_DEFAULT_FILECHUNK bytes available after they advance the
buffer. So if they read small amounts from two files in alternation,
you get really terrible performance because every bufadvance requires a
memcpy. If we relax this requirement, there's no issue - you just free
the chunks when you're done with them.
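
To illustrate the pattern I mean - purely a sketch, with the
bufgetdata()/bufadvance() signatures written from memory and the two
handles (wv_handle/wvc_handle) hypothetical:

    #include "buffering.h"   /* bufgetdata(), bufadvance() */

    /* Two handles - say the .wv data and the .wvc correction data - read
     * a few bytes at a time in alternation.  Under the current guarantee,
     * each advance may have to memcpy data around so that
     * BUFFERING_DEFAULT_FILECHUNK contiguous bytes stay available at the
     * handle's new read position. */
    static void interleaved_small_reads(int wv_handle, int wvc_handle)
    {
        void *p;
        int i;
        for (i = 0; i < 1000; i++) {
            bufgetdata(wv_handle, 16, &p);     /* peek 16 bytes of .wv   */
            bufadvance(wv_handle, 16);         /* may force a memcpy     */
            bufgetdata(wvc_handle, 16, &p);    /* peek 16 bytes of .wvc  */
            bufadvance(wvc_handle, 16);        /* may force another one  */
        }
    }

With the relaxed scheme, each of those advances would just mark a chunk
as free instead of shuffling data around.
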
> > We can also do the codec-specific buffering code. At any rate, the
> > summer is only half over and we're making steady progress, so things
> > are in good shape.
> That is an option, we should really measure the performance loss and
> relate it to the complexity added to buffering for handling two
> different buffering schemes.
It's very difficult to empirically benchmark the buffering system as it
stands. Maybe I should pull the buffering code out and hook it up to a
harness that lets us run "workloads" on it? That would (a) be pretty
simple to do, (b) help in ensuring the correctness of a new buffering
system, and (c) put to rest any questions of performance ("here are the
graphs!").
> I have understood that few people understand the current buffering
> code, so it is very nice to see you working on it, and makes me want
> to continue reading it until I can understand what happens behind the
> scenes.
The best way to find out what's going on is to look at
apps/buffering.c; everything is in there :-).