#rockbox log for 2024-09-07

01:00
01:01:49 Join dconrad [0] (~dconrad@152.117.104.217)
01:06:15 Quit dconrad (Ping timeout: 252 seconds)
01:28:26***Saving seen data "./dancer.seen"
01:38:29 Quit pixelma (Quit: .)
01:38:30 Quit amiconn (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
01:41:20 Join pixelma [0] (marianne@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
01:41:20 Join amiconn [0] (jens@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
01:55:45 Quit othello7 (Ping timeout: 246 seconds)
02:00
02:52:58 Quit bpye (Quit: Ping timeout (120 seconds))
02:53:31 Join bpye [0] (~bpye@user/bpye)
03:00
03:15:02 Join dconrad [0] (~dconrad@152.117.104.217)
03:19:14 Quit dconrad (Ping timeout: 248 seconds)
03:28:27***Saving seen data "./dancer.seen"
03:48:46 Join lebellium [0] (~lebellium@2a01cb0405d07f001c2c3fa2ce7cf0b1.ipv6.abo.wanadoo.fr)
04:00
04:02:58 Quit Bobathan_ (Quit: ZNC 1.8.2+deb2+b1 - https://znc.in)
04:03:15 Join Bobathan [0] (~admin@syn-065-029-248-157.res.spectrum.com)
04:20:27 Join dconrad [0] (~dconrad@152.117.104.217)
05:00
05:04:09 Quit jacobk (Ping timeout: 260 seconds)
05:04:54 Join jacobk [0] (~quassel@47-186-105-237.dlls.tx.frontiernet.net)
05:28:28***Saving seen data "./dancer.seen"
05:30:20 Quit dconrad (Remote host closed the connection)
05:36:09 Join berber_l5174 [0] (~berber@2a03:4000:7:4e0::)
05:36:16 Join rogeliodh9101 [0] (~rogeliodh@rogeliodh.dev)
05:36:37 Join amiconn_ [0] (jens@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
05:36:37 Quit amiconn (Killed (iridium.libera.chat (Nickname regained by services)))
05:36:37 Nick amiconn_ is now known as amiconn (jens@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
05:37:15 Join Bobathan- [0] (~admin@syn-065-029-248-157.res.spectrum.com)
05:37:26 Join jj5_ [0] (~jj5@100.80.216.139.dynamic.dsl.dv.iprimus.net.au)
05:38:23 Join XDjackieXD [0] (~jackie@banana-new.kilobyte22.de)
05:38:48 Join [Pokey] [0] (~pokey@spikeyCactus/hoosky)
05:44:28 Quit Bobathan (*.net *.split)
05:44:28 Quit Pokey (*.net *.split)
05:44:28 Quit rogeliodh910 (*.net *.split)
05:44:28 Quit jackie (*.net *.split)
05:44:29 Quit berber_l517 (*.net *.split)
05:44:29 Quit jj5 (*.net *.split)
05:44:30 Nick berber_l5174 is now known as berber_l517 (~berber@2a03:4000:7:4e0::)
05:44:46 Join dconrad [0] (~dconrad@152.117.104.217)
05:45:06 Nick jj5_ is now known as jj5 (~jj5@100.80.216.139.dynamic.dsl.dv.iprimus.net.au)
05:49:15 Quit dconrad (Ping timeout: 252 seconds)
07:00
07:28:31***Saving seen data "./dancer.seen"
07:56:37 Join OlsroFR [0] (~OlsroFR@user/OlsroFR)
07:57:55 Join dconrad [0] (~dconrad@152.117.104.217)
07:58:56OlsroFRHello everybody! I just finished a big project: https://gerrit.rockbox.org/r/c/rockbox/+/5919
07:58:57OlsroFRThis allows formatting entries in any playlist viewer using ID3 tags, which is especially important on dual-booted iPod setups. With this patch, if you enable one of the new options, readability in every playlist viewer becomes excellent. More context is available in the description of the merge request :)
08:00
08:02:26 Quit dconrad (Ping timeout: 252 seconds)
08:07:07 Join Moriar [0] (~moriar@107-200-193-159.lightspeed.stlsmo.sbcglobal.net)
08:10:31OlsroFRThank you for your rewrite _bilgus, I am gonna test it on my iPods and review the code
08:32:58OlsroFRYour implementation is very fast and adds very little code; it's indeed clean and pretty easy to understand, but there is a cost: you are skewing the randomness even more. I now have around 30000 songs for a limit of 2000. I have run many tests and the last song in particular was never picked (it seems very unlikely to ever be picked at all).
08:32:58OlsroFRIt seems like the songs at the very end are very unlikely to ever be picked, which is a problem.
08:34:42OlsroFRIt is a problem my last implementation does not have, because an equal share of each segment is always chosen, from start to end
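For illustration only, not the code from either gerrit change: a filler that keeps a fixed accept ratio and stops as soon as the playlist is full exhibits the kind of tail bias described above, because whenever the playlist fills early the remaining tracks are never considered at all. Helper names here are hypothetical.

    #include <stdlib.h>

    /* Hypothetical biased filler: fixed accept ratio, stop when full. */
    static void fill_biased(int total_tracks, int playlist_limit,
                            void (*add_track)(int index))
    {
        int added = 0;
        for (int i = 0; i < total_tracks && added < playlist_limit; i++)
        {
            /* the accept ratio never adapts to how many slots are left */
            if ((rand() % total_tracks) < playlist_limit)
            {
                add_track(i);
                added++;
            }
        }
        /* if the playlist fills before i reaches total_tracks, every
         * remaining track is skipped outright, so entries at the very end
         * of the db are the ones most likely to be left out */
    }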
08:37:33 Join jjs0077018310196 [0] (~jjs007@host86-191-158-3.range86-191.btcentralplus.com)
08:43:01OlsroFRI just want to point out that my implementation has an obvious small code/memory cost, but the performance cost is still much lower than the previous workaround for the user, which was to build very large playlists (like a 32000-song playlist, or even worse, by manually editing the config file) to be able to do this. Not saying that
08:43:02OlsroFRyou will not be able to build perfect code on your simpler code base (and if you can, it will be awesome), but without storing precise segment state somewhere you may be limited or have to make other compromises on the user experience.
08:44:26_bilgusOlsroFR, here is my issue: yes, perfect randomness is nice, but in practice the number of tracks changes. Have you tried adding only 20 songs to a playlist with your current code?
08:44:49_bilgusbeyond a certain bound it loses its mind
08:45:31_bilgusnow, is it fixable? sure, with more code
08:46:07OlsroFRmmm my random code should never trigger with something like 20 songs, because the system option cannot be lower than 1000 (except if you manually edit your config file)
08:46:28_bilgusremember max - current amount?
08:46:41_bilgusonly 20 slots left
08:47:38OlsroFRWell, my code has evolved a lot, but I remember that my first versions triggered like this (by comparing n with the system limit directly), and that was intentional
08:47:59_bilgusnow the other part is you wanting to add yet more code to fix a rounding error
08:49:33_bilgusand trying to get your way by bringing up concerns of mine you had no care for, then finally telling me you had matched the code style after, I assume, not even reading the doc I linked; I found out after taking your word and then needed to spend an hour fixing it
08:50:26_bilgusthen I save space you could have, and you now take issue with the size and want to remove my code for your vision; uh no, I'm a little mad at you
08:52:18_bilgusdo I hate you? no. do I want you to go away? no. but I do want you to remember we aren't on PCs with a GB of RAM here
08:53:16_bilgusand there are multiple facets and features that will share this code space, kinda like the contributors, so remember it's not just your feature that matters
08:55:53_bilgusNo, I try to be accommodating to all, and I don't want to upset you, but I also don't want to make rockbox anything but better, and I think this code is just adding complication and corner cases that will come back to bite us
08:57:02_bilgusand already has bit me twice
08:57:06_bilgusin a week
08:57:44OlsroFRIf I had no concern for your work, I would not have tested it and I would not have analysed your code. Since you are a maintainer of this project, you have the power to ignore my feedback and merge it anyway, like you did with your previous merge request.
08:57:44OlsroFR"telling me you had matched the codestyle after I assume not even reading the doc I linked": well, I did it using Git and made changes all around, but not in the core of the function where the functional changes occurred. The first patches for this feature were made directly from the code I produced on my private branch. It was kind of my first huge
08:57:45OlsroFRC project. I was also struggling with Git/Gerrit. You could have just left me a comment and withheld review until I had also fixed the code style in the core function. Right now, I am very careful to produce code that follows the guidelines as closely as possible.
08:58:50 Join dconrad [0] (~dconrad@152.117.104.217)
08:59:58_bilgusthat's fine, and you are new, so I let it slide, but that's what brought me to look closer and throw together the tests
09:00
09:00:35_bilguslook I love the idea just not the implementation
09:02:03OlsroFRAlso, remember that we already divided the memory consumption by 256 compared to the first version by using smaller segments. I understand your point of view that the extra lines of code add complexity. I was also curious about your new, simpler method and wanted to read your code and new proposal. But to me, the quality of the randomness matters; the
09:02:03OlsroFRfeature has to be reliable, and that reliability has a cost. In coding, it's a common rule that 10% of the work makes something 90% perfect, and the remaining 90% of the work takes it from 90% to 99% perfect...
09:02:24_bilgusand it does get all the tracks. I get your wanting perfect probability, and the code isn't that much bigger than the naive implementation in that patch set
09:04:55_bilgusyeah, but the first one could be 1 gig and the argument would be the same; just because the first one was 256x larger doesn't make this one better
09:05:38OlsroFRI just checked the current code that is already in production:
09:05:38OlsroFR    bool fill_randomly = false;
09:05:39OlsroFR    if (playlist == NULL)
09:05:39DBUGEnqueued KICK OlsroFR
09:05:39OlsroFR    {
09:05:40OlsroFR        bool will_exceed = n > max_playlist_size;
09:05:40***Alert Mode level 1
09:05:40OlsroFR        fill_randomly = will_exceed;
09:05:41***Alert Mode level 2
09:05:41OlsroFR    }
09:05:43_bilgusagreed, and that's why I asked if you had heard the phrase "perfect is the enemy of good"
09:06:13OlsroFRSo, fill_randomly happens only when you want to create a playlist that exceeds the system limit. It does not depend on how full the playlist currently is, and this was intentional
09:07:58_bilgusok, fair point stopping the random but its just another corner
09:09:08_bilguswhereas the less probabilistic one has 1
09:11:02_bilgusand it will give 20 tracks spread across
09:12:07_bilgusand the same probability to small ram devices
09:12:16OlsroFRmmm, that was another problem with that kind of implementation that I already discussed previously. That kind of small implementation has to use very small segments to work,
09:12:31OlsroFRwhich means it will probably always take one song from each album
09:13:59_bilguswell thats the thing about probability sometimes thats the right thing
09:14:40_bilgusbut I do see very similar distribution in the tests I did outside the codebase
09:15:42***Alert Mode OFF
09:15:55_bilgusSO IDK atm but i'm not adding another array to fix it
09:15:58OlsroFRmost of the time (almost all the time) it will happen. That's why we took larger segments; with a segment of 1024, there's a chance that some albums will be completely ignored. That's theoretical though; you may not notice or care during a listening session.
09:16:19_bilgusi'd say the same with both
09:16:37_bilgusi don't think the end user will notice
09:17:06_bilgus(unless its got terrible randomness)
09:17:10OlsroFREven 1024 is biased, yes. It was a (necessary) compromise to avoid a static array of 128k or 256k...
09:19:32OlsroFRMost end users will not know about our technical discussions and will just expect the thing to work, and in my opinion it will work well enough that they won't ask questions about it. But you know, even when there are real problems that are much more than a nuisance, most users tend to accommodate anyway. Look at how theme makers have pushed the theme
09:19:32OlsroFRtoolbox to do things like lock screens, by pushing the theme toolbox/tags/etc very far
09:21:06 Join othello7 [0] (~Thunderbi@pool-100-36-176-164.washdc.fios.verizon.net)
09:21:16OlsroFRI can't consider myself an average user; I am too demanding and get frustrated by details like this. Just like I spent a lot of time finding the best audio compression settings even though I was not sure I could really hear a difference with lower settings...
09:23:14OlsroFRUsers who have a library of thousands of tracks will, on average, probably never notice that the last song(s) never play under a random shuffle scenario, because they already have so many songs that it still "feels" random. But the problem exists...
09:23:27_bilgusno worries there, but what I'm saying is that we will likely be revisiting this in the context of a bug, and then it's complicated code versus simple code
09:24:09_bilgusfor slightly 'better' randomness
09:25:35_bilgusand the code I pushed last night should easily get you the last track since it oversamples the available tracks
09:26:20_bilgusoh wait didn't push it
09:26:42OlsroFRhttps://gerrit.rockbox.org/r/c/rockbox/+/5918/3/apps/tagtree.c What I reviewed and analysed was this
09:26:45_bilgusyeah it did patchset 3
09:27:17_bilgusif ((rand() % remaining_tracks) >= slots_remaining)
09:28:27_bilgusOlsroFR, I'm not pushing that yet we'll see where we are in a few days I want to hear from someone else first
09:28:34***Saving seen data "./dancer.seen"
09:28:37_bilgusseveral
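The one-line filter quoted above resembles classic one-pass selection sampling (Knuth's Algorithm S). A minimal sketch of that textbook technique, with hypothetical helper names and with both counters decremented as the scan advances; this is an illustration of the idea, not the patch under review:

    #include <stdlib.h>

    /* Walk the db once and pick exactly `slots` of `total` tracks, each with
     * (nearly) equal probability; rand() % n keeps a small modulo bias. */
    static void pick_tracks(int total, int slots, void (*add_track)(int index))
    {
        int remaining_tracks = total;
        int slots_remaining = slots;

        for (int i = 0; i < total && slots_remaining > 0; i++)
        {
            /* take track i with probability slots_remaining / remaining_tracks */
            if ((rand() % remaining_tracks) < slots_remaining)
            {
                add_track(i);
                slots_remaining--;
            }
            remaining_tracks--;
        }
    }

Decrementing both counters is what lets the scan reach the end of the db when it has to: once remaining_tracks equals slots_remaining, every remaining track is taken, so the final entries are not starved.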
09:29:36OlsroFRRight now, for me, the loop always exits before I get to the last songs. I did many tests. It seems very unlikely that the last one can ever be picked.
09:30:37dconradspeachy: any chance you've gotten a minute to take a look at g#5914? I'm also curious if you could take a look at g#5912, how polished we want OF patcher scripts to be - I'd like to get these to the point where we have prepatched OF update files for each hw rev of each brand in rbutil
09:30:44rb-bluebotGerrit review #5914 at https://gerrit.rockbox.org/r/c/rockbox/+/5914 : erosqnative: Give erosqnative_v3 its own target ID and modelname by Dana Conrad
09:30:44rb-bluebotGerrit review #5912 at https://gerrit.rockbox.org/r/c/rockbox/+/5912 : erosqnative: OF patcher script by Dana Conrad
09:31:06_bilgusprobably with a small difference between the available tracks and the playlist size to get it close to the end
09:31:41OlsroFRAlso, if you want to step back from all of this shuffle business, I just finished a new project here: https://gerrit.rockbox.org/r/c/rockbox/+/5919
09:31:42OlsroFRThis is completely unrelated. It especially makes dual-booted iPods that are synced with iTunes much more enjoyable. It was a long-standing user experience issue
09:34:09OlsroFR_bilgus But this probably means that sometimes your code will produce playlists smaller than the max. In my opinion that is an acceptable compromise; much better than ignoring the last songs and never (or almost never) picking them.
09:37:33_bilgusyour other patch looks ok; I had, and removed, something in a similar vein a while back but really didn't like the performance, so hopefully yours helps with that
09:39:36_bilgus g#5813
09:39:39rb-bluebotGerrit review #5813 at https://gerrit.rockbox.org/r/c/rockbox/+/5813 : [Revert] id3 title display playlist_viewer.c by William Wilgus
09:44:35OlsroFRInteresting. I did try something similar to that and noticed that the playlist viewer took ages to open. Around 10 seconds from what I can remember, and the same amount of time every time you scroll into the next segment that has to be loaded. It was terrible. My implementation is completely different and gives a completely different
09:44:36OlsroFRfeeling; it does not freeze like that for several seconds.
09:45:34OlsroFRYou do feel that there is overhead and that the disk has to spin up more regularly each time you scroll to new songs, no miracles, but it feels pretty good in my opinion.
09:47:36OlsroFRWell, reading the code you reverted makes me think that I do not check in my code whether a title tag has been set. I should do this, to properly show files that have no title metadata...
10:00
10:03:42OlsroFRjust pushed another patch to improve error handling when tags are absent. If the album tag is absent, I now replace it with LANG_TAGNAVI_UNTAGGED
10:05:51_bilgusI think album_artist has a weird behavior too
10:08:39OlsroFRI am just using Title & Album, but I was using them without checking whether they were actually tagged, which could lead to ugly blank spaces on the user's screen
10:09:15OlsroFRNow when title is absent, it will continue to show the entire file name. If title is present and album is absent, it will show "Title - <Unknown>"
10:09:35OlsroFRTitle - <Untagged> exactly
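A minimal sketch of the fallback formatting described above, assuming the usual entry format is "Title - Album"; format_entry is a hypothetical helper, not the code in the gerrit change, and the real patch uses the LANG_TAGNAVI_UNTAGGED string rather than a literal:

    #include <stdio.h>

    /* No title: keep the plain file name.  Title but no album: fall back to
     * an "untagged" placeholder instead of leaving a blank. */
    static void format_entry(char *buf, size_t bufsize, const char *filename,
                             const char *title, const char *album)
    {
        if (title == NULL || title[0] == '\0')
            snprintf(buf, bufsize, "%s", filename);
        else if (album == NULL || album[0] == '\0')
            snprintf(buf, bufsize, "%s - <Untagged>", title);
        else
            snprintf(buf, bufsize, "%s - %s", title, album);
    }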
10:20:05 Quit advcomp2019 (Read error: Connection reset by peer)
10:20:09 Join advcomp2019_ [0] (~advcomp20@user/advcomp2019)
10:26:27OlsroFRTo go back to random shuffle, I feel like we could remove a lot of complexity from my code by accepting not to fill exactly the requested number of songs. It would not fill exactly 2000 songs but something like 1980 to 2000, which is about the same, while keeping perfectly balanced randomness. It's another matter of personal preference and conviction here, as I also
10:26:28OlsroFRfeel that most users will not care about losing that small amount of available space when starting a random mix.
10:26:28OlsroFRNot saying we should do exactly that now, but it is another option we have to reduce the code complexity. It is another possible compromise.
10:29:20 Quit Natch (Remote host closed the connection)
10:32:18 Quit jacobk (Ping timeout: 248 seconds)
11:00
11:23:41 Quit Moriar (Quit: Leaving.)
11:28:37***Saving seen data "./dancer.seen"
11:32:19_bilgusif you could get that to be exactly n songs short, maybe, but having it be randomly undersized is just the implementation details leaking out to the user
11:32:39_bilgusanother corner..
11:33:23_bilgusand since you can't get random after that, when you try to fill it you will get repeats of the first n songs in the db
11:33:51_bilgusand now I have the potential to have two repeats in the playlist
11:34:01_bilguswell or 20
11:34:54_bilguswhich is what annoyed me in the first place; then trying to get it to be random for 20 songs sends it into an endless loop
11:36:26dconradat the risk of wading into this: is it the same songs which will be missed each time you re-randomize, or different songs? missing a handful of songs in your entire library seems entirely reasonable if it's different every time
11:36:31OlsroFRyes, that was clearly another thing that complicated the production of a good patch. Any new iteration over the db is also a very heavy operation that multiplies the processing time each time you do it again
11:36:44_bilgusso in the end it feels like a step backwards but I love the idea
11:37:33_bilgusit has to do with seeding with the current tick, so it should be different; it's just that the probability isn't perfect
11:37:40_bilguswell TBF neither is perfect
11:39:28_bilgussee, by doing it OlsroFR's way you get guarantees that the naive implementation does not have
11:39:32OlsroFRWith your implementation, theoretically, the only problematic songs are the last candidates at the end of the view, because your code will always (or almost always) break out of the loop before it has had time to iterate to the end
11:40:34_bilguswell see, you could actually do some extra processing to make that range larger, but it's still never as perfect as yours
11:41:31_bilgusbut what I'm saying is that in the end it won't matter; you will eventually get those last tracks because the range is over the tracks in the db, not the slots available
11:42:03_bilgusso it's constantly doing rand on 0-n, not 0 to playlist size
11:43:38_bilgusand since we have a sequence coming from retrieve that's already free of duplicates, the reduced randomness isn't going to be very noticeable over 1000 iterations
11:44:57_bilgusuntil it's something like needing to fill 2000 tracks when you have 2020 in the db; that's where I'd see it breaking down
11:45:46_bilgusdconrad yes, please weigh in, we need some consensus
11:46:00_bilguseither direction
11:46:42dconradwell if I understand correctly, this is only concerning shuffling the entire database, right? Not where you're shuffling a preexisting playlist, like a single album?
11:47:05_bilguscorrect totally independent
11:47:14OlsroFRThis only happens when you try to insert songs into the current playlist from a database view that has more than your system limit
11:47:34_bilgusbasically, you want 2000 tracks and you fill from a db with 20000; you only ever get the first 2000
11:48:07dconradlike, the first 2000 sorted a-z?
11:48:11_bilgusthis pushes that out to randomly fill 2000 tracks with 2000 evenly distributed through the 20000
11:48:12OlsroFRHistorically, the only way for the user was to increase their system limit to an absurd amount to be able to fill a "huge" playlist, then shuffle that huge playlist. This was a very slow process, especially on old iPods
11:48:49dconradoh, yeah so the 2000 is equally distributed within the larger set of the entire database?
11:48:51_bilgusyes so this would be kinda weird in the sorted a-z case
11:49:12_bilgusexactly
11:49:19dconradI gotcha
11:50:16_bilguswe are in agreement there its great, just the implementation details
11:50:25dconradwell, I would say that you get through those 2000 songs, and then you're just going to pick another 2000 songs, which will be a different set of songs, possibly including some from the last set but probably missing different ones as well
11:51:42_bilgusoh yeah, no concern about repeats in that context; like 50 that you had in the last playlist would be a perfectly acceptable result
11:52:24_bilgusOlsroFR, the concern is that without good enough randomness you would never get some tracks with any number of iterations
11:52:25dconradyeah, and the set of missing songs will be different from last time, so like... you'll hit all songs eventually, which seems reasonable to me
11:52:42dconradoh, I see
11:52:45OlsroFRdconrad That's kinda my current implementation, but with segments of a fixed size that is 1024 at maximum (because I allocated a bool array to store the states).
11:52:59_bilgusbecause the limitations of rand cause some numbers to have a lower probability
11:53:17OlsroFRsegments are also often a little smaller than 1024, to keep the exact same number of songs in each segment; each segment should have the same size for a balanced distribution across all segments
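A minimal sketch of the segment scheme described above, under the constraint (mentioned later in the log) that the db can only be walked forward: the offsets to take from the next segment are chosen up front in a bool array, and the sequential scan then adds the flagged entries. Names and structure are hypothetical, not the gerrit patch:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_SEGMENT 1024

    /* Mark `wanted` distinct offsets inside a segment of `seg_len` tracks
     * (0 < wanted <= seg_len <= MAX_SEGMENT).  The forward scan over the db
     * then adds entry (seg_start + i) whenever used[i] is set, so disk
     * access stays strictly sequential. */
    static void mark_segment_picks(bool used[MAX_SEGMENT], int seg_len, int wanted)
    {
        memset(used, 0, MAX_SEGMENT * sizeof(bool));
        while (wanted > 0)
        {
            int i = rand() % seg_len;
            if (!used[i])
            {
                used[i] = true;
                wanted--;
            }
        }
    }

Taking the same number of picks from every equal-sized segment is what gives the even spread across the whole db; the cost is the extra array and the bookkeeping the simpler approach avoids.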
11:54:27_bilgusit's just all the extra code to get it to act right..
11:55:47_bilgusso: segments, better potential randomness; rand over n tracks, weaker guarantees and simpler code
11:55:58OlsroFRthe code of your last implementation, _bilgus, is fine. The segments are small, but that's not the big deal. The big deal is dealing with the end; I don't know how you can solve this to get a decent probability of picking the last candidate while still always respecting the system limit exactly, considering you don't have the right to ever iterate again over
11:55:58OlsroFRthe db, just once
11:56:34_bilgusyou can, but then you need more memory for storage because you might get repeats
11:57:14_bilgusor go looking back through the playlist; tried that, it does not have great performance
11:57:27OlsroFRIterating over the db is very costly on my iPod Mini, which has around 30000 songs, I promise. I already tried implementations like this that I did not push.
11:58:15OlsroFRBasically, the current 10 seconds of total processing will become (at least) around 20 seconds
11:58:33OlsroFRYes, and doing it in reverse is terrible; you can only go forward
11:58:46_bilgusyeah, because of the way the db is read
11:59:49OlsroFRMy very, very first implementation (not pushed) was even simpler than your naive one; it was just "pick one random song at each iteration". I didn't even care about repeats. That's how I learnt (the hard way) that the db doesn't like random accesses haha
12:00
12:00:29OlsroFRDoing randomness this way was very, very slow to fill the 2000-song playlist; it was maybe even slower than just building a full 20000-song playlist...
12:01:33_bilgusso I can oversample the rand function to get a better distribution, and it's still not going to be perfect (and potentially a bit slower)
12:02:46_bilgusbut that should make the issue pretty much unnoticeable; I'll have to look this evening
12:02:58 Quit npmania (Read error: Connection reset by peer)
12:03:46OlsroFRI can't see any solution except if you accept building playlists that are not exactly equal to the system limit but sometimes (and randomly) a bit smaller, depending on the randomness. It will be a bit ugly, but it will have a chance to include all possible candidates from the db
12:04:16_bilgusOlsroFR, I'm sure you probably already discovered the reason, but the previous entry's seek sets the next item, so if you want to do it randomly you first need to walk the whole thing and store offsets
12:04:53OlsroFRI did not investigate further and kept myself focused on iterating over it the right way
12:05:03OlsroFRIt's also a nice way to debug the code in my opinion
12:05:29_bilgusyeah, that's easier, but for future info there is already code that will do it
12:05:39OlsroFRthanks
12:06:31_bilgusI'm ok with reduced random for those final tracks no one will notice
12:06:44_bilgusexcept you
12:06:46_bilgus:p
12:07:15OlsroFRIf you cheat by always including the final tracks, you may have a randomness issue with the tracks just before the ones you cheated to always include... the problem remains the same
12:07:19 Join npmania [0] (~npmania@210.123.73.190)
12:07:58_bilgusno, if we temper the range you throw away most of the non-distributed-ness
12:09:43OlsroFRYou know I am not that rigid; I agreed to push a patch that at first created playlists not exactly equal to the system limit but close to it, and I was very happy with it but you were not. We are both rigid, just not about the same exact concerns hehe ;)
12:09:49_bilgussomething like do random = rand() / (RAND_MAX / range); while (random >= range); but you need a bit more code to make sure it completes in a reasonable time
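A self-contained variant of that idiom: instead of retrying whenever the divided value lands on range, reject only draws from the incomplete top bucket, which keeps the result unbiased and bounds the expected retries to about two. This is a generic sketch of standard rejection sampling, not code from either patch:

    #include <stdlib.h>

    /* Unbiased random index in [0, range), assuming 0 < range <= RAND_MAX. */
    static int rand_below(int range)
    {
        /* largest multiple of `range` not exceeding RAND_MAX */
        int limit = RAND_MAX - (RAND_MAX % range);
        int r;
        do {
            r = rand();
        } while (r >= limit);   /* acceptance rate is at least about 50% */
        return r % range;
    }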
12:10:51_bilgusI'm quite open to whatever, but I'm rigid about the overall project; been here something like 10 years
12:11:17OlsroFRIf your code requires much more time to execute, it will be a massive regression in terms of user experience, even if there are fewer code lines to maintain. Ping me if you need me to run tests on real hardware, I will do it with pleasure
12:11:27OlsroFRoh 10 years, that's a lot
12:12:16_bilgusI don't want to degrade anyone's experience, even on those little players, till we decide to cull them
12:12:41_bilgusand if I keep the firmware size down, that buys them time
12:13:14_bilgusso it's a balancing act: what's good enough within our constraints
12:14:02_bilgusIt's actually what draws me here; it's very fun and rewarding to puzzle over
12:14:05OlsroFRthere is a cost/benefit ratio here too, clearly. As I said at the top, before this the only way to do it was with huge playlists, and low-memory players probably struggled hard with that way of operating
12:14:47_bilgusNow, am I the best programmer here? HELL no, I'm a mechanic
12:15:18_bilgusbut I get great use out of my dap at work
12:16:00_bilgusanyway I'll revisit tonight and we'll figure it out
12:16:12OlsroFRI can't say I am either; I also personally do not like low-level language coding very much.
12:17:06OlsroFRI am here more because I enjoyed using my iPod a lot over the last months, and continuing to research and fiddle with it, and running into some frustrations, led me here to tinker with things to make it better
12:17:20_bilgusI hope you learn to love it :) we need more regular contributors, so I apologise for seeming like an ass
12:17:27OlsroFRBetter for me only at first, before I decided to come here and submit my patches so more people could benefit from them
12:18:00_bilgusit's just care for the project, and just a little being an ass
12:18:21OlsroFRand it's also a way to put some human interaction around all of that coding. I could develop things that worked for me, but working on a huge project with a 20-year history and getting to know the people here is another adventure
12:18:46_bilgusplease do :)
12:21:00OlsroFRI am still amazed at how Rockbox can give you the feel of a modern music player on old iPods with a black & white screen. I learnt so many things, even about obscure audio formats like Musepack
12:22:35OlsroFReven with those little frustrations I had as a user and fought to solve, it's still miles ahead of the stock OS
12:23:30OlsroFRI also find it very rewarding that when something from my work is merged, it directly impacts so many devices instantly, not just one kind of iPod
12:28:17OlsroFRAfter all of this, the only thing still annoying me is that I cannot manage to make my iFlash CF adapter work on my iPod Mini with power management enabled in Rockbox. I spent so many hours on trial and error with this, without any success unfortunately, and I gave up.
12:28:17OlsroFRRight now the only way around this is to use a real CompactFlash card. Then I found the amazing Musepack compression algorithm and I use it rather than straight FLACs; it's a good compromise, the quality is excellent. But I feel frustrated at not being able to use a 1-terabyte iPod Mini with Rockbox that could still get 15 hours of battery life,
12:28:18OlsroFRit would be so amazing...
12:37:15 Quit OlsroFR (Quit: Client closed)
13:00
13:28:39***Saving seen data "./dancer.seen"
13:37:52speachydconrad: not yet. $real_life stuff taking precedence. but the weather is going to drive me indoors shortly and it's on my to-do list.
13:38:36dconradexcellent, just wanted to ping you to make sure you didn't forget :)
13:40:10speachyabout to do a landfill run... and look, it's now raining. good timing..
13:40:48 Join OlsroFR [0] (~OlsroFR@user/OlsroFR)
13:43:09OlsroFRJust pushed another revision of my shuffle patch: https://gerrit.rockbox.org/r/c/rockbox/+/5917 I re-read all of my code to check for possible simplifications or improvements. It satisfies me now as it is.
13:45:45 Quit OlsroFR (Quit: Client closed)
13:47:45 Quit TheEaterOfSouls (Read error: Connection reset by peer)
14:00
14:09:26 Join macaronus_ [0] (~macaronus@user/MaCaRoNus)
14:10:14 Join jacobk [0] (~quassel@2603:8080:b200:7b02:77cb:6304:f9db:dda1)
14:12:24 Quit macaronus (Ping timeout: 276 seconds)
14:26:42 Quit jacobk (Ping timeout: 276 seconds)
14:35:16 Quit dconrad (Remote host closed the connection)
14:49:36 Join Natch [0] (~natch@c-9e07225c.038-60-73746f7.bbcust.telenor.se)
14:56:13 Join paulk-bis [0] (~paulk@vpn-0-22.aquilenet.fr)
14:56:38 Quit paulk (Read error: Connection reset by peer)
15:00
15:04:19 Join dconrad [0] (~dconrad@152.117.104.217)
15:17:10 Quit Natch (Ping timeout: 244 seconds)
15:18:40 Join Natch [0] (~natch@c-9e07225c.038-60-73746f7.bbcust.telenor.se)
15:28:44***Saving seen data "./dancer.seen"
16:00
16:02:58 Join ThreeeePac [0] (~macaronus@user/MaCaRoNus)
16:06:09 Quit macaronus_ (Ping timeout: 276 seconds)
16:56:06 Quit paulk-bis (Quit: WeeChat 3.0)
16:56:14 Join paulk [0] (~paulk@vpn-0-22.aquilenet.fr)
16:56:14 Quit paulk (Changing host)
16:56:14 Join paulk [0] (~paulk@about/aquilenet/user/paulk)
17:00
17:12:08 Join Moriar [0] (~moriar@107-200-193-159.lightspeed.stlsmo.sbcglobal.net)
17:28:46***Saving seen data "./dancer.seen"
17:51:48 Quit lebellium (Quit: Leaving)
19:00
19:28:50***Saving seen data "./dancer.seen"
20:00
20:03:54 Join jacobk [0] (~quassel@47-186-105-237.dlls.tx.frontiernet.net)
20:05:58 Quit dconrad (Remote host closed the connection)
20:38:15 Join dconrad [0] (~dconrad@152.117.104.217)
21:00
21:28:53***Saving seen data "./dancer.seen"
22:00
22:21:16 Quit Moriar (Quit: Leaving.)
22:25:50 Quit dconrad (Remote host closed the connection)
23:00
23:23:14 Join dconrad [0] (~dconrad@152.117.104.217)
23:27:49 Quit dconrad (Ping timeout: 260 seconds)
23:28:54***Saving seen data "./dancer.seen"
