Rockbox mail archive
Subject: Re: Playlist Handling Alg
From: Paul Suade <paul.suade_at_laposte.net>
Date: Mon, 3 Jun 2002 17:18:44 +0200
----- Original Message -----
From: Lion Templin <ltemplin_at_leonine.com>
Sent: Monday, June 03, 2002 12:17 PM
Subject: Re: Playlist Handling Alg
> > According to my calculations, if we assume that there are "weirdos" with
> > hard drives, they will still almost never have even 10000 songs. So, a
> > memory area to support this number of songs would need merely 40000 bytes.
> > I think that since we would still need to design the memory sizes for the
> > worst case, we can use the worst case always. imho.
> Don't underestimate what users will do. :) There may be a case where
> someone has thousands of tiny files, or something else that would result in
> a large number of individual files. Know well that if there are limits
> somewhere in software, users will find them and attempt to violate them.
> The more robust and capable you make code, the less you have to kludge it
> later when someone needs to go beyond what you've written.
> And I know that there will be people that get 40G or larger drives. I
> bought mine for many reasons, and one was its potential extended lifetime
> through the use of larger drives.
OK, an MP3 file is typically about 2~6 MB; let's say the average is 4 MB. For
compatibility, a FAT directory cannot hold more than 65535 directory entries,
whatever the size of the filenames. Since most filenames are long, each file
consumes several entries, so the real number of files in a FAT directory is
more like a third or a quarter of the entry count. Now compute the worst case:
65535 * 4 MB => 64 K * 4 MB => 256 GB. And remember that when you read "20 GB"
on a hard disk, it is really 20 000 000 000 bytes and not a true 20 GB in the
binary sense (1 GB should be 1024 MB in the data domain). So to exceed this
file-count limit we would need at least a ~275 GB hard disk.
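The arithmetic above can be checked with a short sketch. The 65535-entry limit
is the FAT directory maximum; the 4 MiB average file size is the estimate from
this thread, not a measured figure:

```c
#include <stdint.h>

/* Bytes of music needed to exhaust one FAT directory, assuming the
   worst case of one directory entry per file and a 4 MiB average
   MP3 (the thread's estimate). */
uint64_t fat_dir_capacity(void)
{
    const uint64_t max_entries = 65535;            /* FAT directory entry limit */
    const uint64_t avg_file = 4ULL * 1024 * 1024;  /* 4 MiB average MP3 */

    return max_entries * avg_file;                 /* ~274.9 GB in decimal GB */
}
```

Since drive makers quote decimal gigabytes, the result (about 274 900 000 000
bytes) means a drive of roughly 275 GB before the limit is even reachable.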
I don't think the argument over a list of arrays versus a simple array really
matters for the moment...
> > It would make the song list easily addressable and it would give us a
> > "perfect" random.
Which should be a priority.
> Considering the limits on memory and cpu the device imposes upon us, the
> benefits of an O(n/2) alg to shuffle seem to outweigh the ideal of a
> "perfect" random. The ADT I have described allows for fast acceptable
> randomization with several other good side effects, like its scalability.
> Of course, what I propose is a somewhat ideal case. The code is not
> terribly difficult, though it is slightly more complex than a simple array.
> My assertion (from years of experience) is that it's better to take the
> effort to implement a more robust alg now, than it is to rewrite it later.
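The ADT under discussion isn't shown in this thread, but for comparison, the
simple-array approach can be shuffled in one linear pass with a standard
in-place Fisher-Yates shuffle. This is a sketch only (the function names are
mine, not Rockbox code), and the modulo-based `rand_below` is known to carry a
slight bias that a real implementation might want to remove:

```c
#include <stdlib.h>

/* Return a pseudo-random value in [0, bound). Modulo bias is
   ignored in this sketch. */
static unsigned rand_below(unsigned bound)
{
    return (unsigned)rand() % bound;
}

/* In-place Fisher-Yates shuffle of n track indices: one pass over
   the array, each permutation equally likely given an unbiased
   random source. */
void shuffle_playlist(int *tracks, int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = (int)rand_below((unsigned)i + 1);
        int tmp = tracks[i];
        tracks[i] = tracks[j];
        tracks[j] = tmp;
    }
}
```

With 10000 songs at 4 bytes per index this stays within the 40000-byte budget
mentioned above, and the shuffle itself needs no extra memory.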
Just do it. If your code is compatible with the current one, let people choose
the best code for their hard disk size at compile time.
Received on 2002-06-03