--- Log for 07.09.124 Server: osmium.libera.chat Channel: #rockbox ---
Nick: rb-logbot Version: Dancer V4.16 Started: 13 days and 23 hours ago
01.01.49 Join dconrad [0] (~dconrad@152.117.104.217)
01.06.15 Quit dconrad (Ping timeout: 252 seconds)
01.28.26 *** Saving seen data "./dancer.seen"
01.38.29 Quit pixelma (Quit: .)
01.38.30 Quit amiconn (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
01.41.20 Join pixelma [0] (marianne@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
01.41.20 Join amiconn [0] (jens@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
01.55.45 Quit othello7 (Ping timeout: 246 seconds)
02.52.58 Quit bpye (Quit: Ping timeout (120 seconds))
02.53.31 Join bpye [0] (~bpye@user/bpye)
03.15.02 Join dconrad [0] (~dconrad@152.117.104.217)
03.19.14 Quit dconrad (Ping timeout: 248 seconds)
03.28.27 *** Saving seen data "./dancer.seen"
03.48.46 Join lebellium [0] (~lebellium@2a01cb0405d07f001c2c3fa2ce7cf0b1.ipv6.abo.wanadoo.fr)
04.02.58 Quit Bobathan_ (Quit: ZNC 1.8.2+deb2+b1 - https://znc.in)
04.03.15 Join Bobathan [0] (~admin@syn-065-029-248-157.res.spectrum.com)
04.20.27 Join dconrad [0] (~dconrad@152.117.104.217)
05.04.09 Quit jacobk (Ping timeout: 260 seconds)
05.04.54 Join jacobk [0] (~quassel@47-186-105-237.dlls.tx.frontiernet.net)
05.28.28 *** Saving seen data "./dancer.seen"
05.30.20 Quit dconrad (Remote host closed the connection)
05.36.09 Join berber_l5174 [0] (~berber@2a03:4000:7:4e0::)
05.36.16 Join rogeliodh9101 [0] (~rogeliodh@rogeliodh.dev)
05.36.37 Join amiconn_ [0] (jens@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
05.36.37 Quit amiconn (Killed (iridium.libera.chat (Nickname regained by services)))
05.36.37 Nick amiconn_ is now known as amiconn (jens@p200300ea87462b00305e95fffec66ff3.dip0.t-ipconnect.de)
05.37.15 Join Bobathan- [0] (~admin@syn-065-029-248-157.res.spectrum.com)
05.37.26 Join jj5_ [0] (~jj5@100.80.216.139.dynamic.dsl.dv.iprimus.net.au)
05.38.23 Join XDjackieXD [0] (~jackie@banana-new.kilobyte22.de)
05.38.48 Join [Pokey] [0] (~pokey@spikeyCactus/hoosky)
05.44.28 Quit Bobathan (*.net *.split)
05.44.28 Quit Pokey (*.net *.split)
05.44.28 Quit rogeliodh910 (*.net *.split)
05.44.28 Quit jackie (*.net *.split)
05.44.29 Quit berber_l517 (*.net *.split)
05.44.29 Quit jj5 (*.net *.split)
05.44.30 Nick berber_l5174 is now known as berber_l517 (~berber@2a03:4000:7:4e0::)
05.44.46 Join dconrad [0] (~dconrad@152.117.104.217)
05.45.06 Nick jj5_ is now known as jj5 (~jj5@100.80.216.139.dynamic.dsl.dv.iprimus.net.au)
05.49.15 Quit dconrad (Ping timeout: 252 seconds)
07.28.31 *** Saving seen data "./dancer.seen"
07.56.37 Join OlsroFR [0] (~OlsroFR@user/OlsroFR)
07.57.55 Join dconrad [0] (~dconrad@152.117.104.217)
07.58.56 # Hello everybody! I just finished a big project: https://gerrit.rockbox.org/r/c/rockbox/+/5919
07.58.57 # This allows formatting entries in any playlist viewer using ID3 tags, which is especially important on dual-booted iPod setups. With this patch, if you enable one of the new options, readability in every playlist viewer becomes excellent. More context is available in the description of the merge request :)
08.02.26 Quit dconrad (Ping timeout: 252 seconds)
08.07.07 Join Moriar [0] (~moriar@107-200-193-159.lightspeed.stlsmo.sbcglobal.net)
08.10.31 # Thank you for your rewrite _bilgus, I am gonna test it on my iPods and review the code
08.32.58 # Your implementation is very fast and adds very little code; it's indeed clean and pretty easy to understand, but there is a cost. You are skewing the randomness even more. I now have around 30000 songs for a limit of 2000. I have done many tests, and the last song especially was never picked (and it seems very, very unlikely to be picked at all).
08.32.58 # It seems like the last songs at the end are very unlikely to ever be picked, which is a problem.
08.34.42 # It is a problem that my last implementation does not have, because an equal part of each segment is always chosen, from start to end
08.37.33 Join jjs0077018310196 [0] (~jjs007@host86-191-158-3.range86-191.btcentralplus.com)
08.43.01 # I just want to point out that my implementation has a small code/memory cost, that is evident, but the performance cost is still much lower than the previous workaround for the user, which was to build very large playlists (like a 32000-song playlist, or even worse by editing the config file manually) to be able to do this. Not saying that
08.43.02 # you will not be able to build a perfect solution with your simpler code base (and if you can, it will be awesome), but without storing precise segment states somewhere you may be limited, or have to make other compromises on the user experience.
08.44.26 # <_bilgus> OlsroFR, here is my issue: yes, perfect randomness is nice, but in practice the amount of tracks changes. have you tried adding only 20 songs to a playlist with your current code?
08.44.49 # <_bilgus> beyond a certain bound it loses its mind
08.45.31 # <_bilgus> now is it fixable, sure, with more code
08.46.07 # mmm my random code should never trigger itself with something like 20 songs, because the system option cannot be lower than 1000 (except if you edit your config file manually)
08.46.28 # <_bilgus> remember max - current amount?
08.46.41 # <_bilgus> only 20 slots left
08.47.38 # Well my code evolved so much, but I remember that my first versions triggered like this (by comparing n with the system limit directly) and this was intentional
08.47.59 # <_bilgus> now the other part is you wanting to add yet more code to fix a rounding error
08.49.33 # <_bilgus> and trying to get your way by bringing up concerns of mine you had no care for, then finally telling me you had matched the codestyle after, I assume, not even reading the doc I linked, finding out after taking your word and then needing to spend an hour fixing it
08.50.26 # <_bilgus> then I save space you could have and you now take issue with the size and want to remove my code for your vision, uh no, I'm a little mad at you
08.52.18 # <_bilgus> do I hate you, no; do I want you to go away, no; but I do want you to remember we aren't on PCs with a GB of RAM here
08.53.16 # <_bilgus> and there are multiple facets and features that will share this codespace, kinda like the contributors, so remember its not just your feature that matters
08.55.53 # <_bilgus> No I try to be accommodating to all and I don't want to upset you but I also don't want to make rockbox anything but better, and I think this code is just adding complication and corner cases that will return to bite us
08.57.02 # <_bilgus> and it already has bit me twice
08.57.06 # <_bilgus> in a week
08.57.44 # If I had no concern for your work, I would not have tested it and I would not have analysed your code. Since you are a maintainer in this project, you have the power to ignore my feedback and merge it anyway, like you did with your previous merge request.
08.57.44 # "telling me you had matched the codestyle after I assume not even reading the doc I linked": Well I did it using Git and did changes all around but not in the core of the function where the functional changes occured. The first patches about this feature were made directly from the code I produced from my private branch. It was kinda my first huge 08.57.45 # C project. I was also struggling with Git/Gerrit. You could just put me a comment and not review until I would have changed also the code style on the core function. Well, right now, I am very careful to produce code as close as possible to the guidelines. 08.58.50 Join dconrad [0] (~dconrad@152.117.104.217) 08.59.58 # <_bilgus> thats fine and you are new I let you slide but thats what brought me to look closer and throw together the tests 09.00.35 # <_bilgus> look I love the idea just not the implementation 09.02.03 # Also, remember that we already divided the memory consumption by 256 compared to the first version using smaller segments. I understand your PoV that the extra lines of codes is adding complexity. I am also curious about your new simpler method and was curious to read your code and new proposal. But to me, the quality of randomness matters; the 09.02.03 # feature has to be reliable and that reliability will have a cost. In coding, this is common fact that 10% of the work is to make something 90% perfect, and 90% of the work is about to make the thing between 90 to 99% perfect at the end... 09.02.24 # <_bilgus> and it does get all tracks I get your wanting perfect probability and the code isn't that much bigger than the naive implementation in that patch set 09.04.55 # <_bilgus> yeah but the first one could be 1 gig and the argum,ent would be the same just because the first one was 256x larger doesn't make this better 09.05.38 # I just checked the current code that is already in production: 09.05.38 #     bool fill_randomly = false; 09.05.39 #     if (playlist == NULL) 09.05.39 DBUG Enqueued KICK OlsroFR 09.05.39 #     { 09.05.40 #         bool will_exceed = n > max_playlist_size; 09.05.40 *** Alert Mode level 1 09.05.40 #         fill_randomly = will_exceed; 09.05.41 *** Alert Mode level 2 09.05.41 #     } 09.05.43 # <_bilgus> agreed and thats why I asked if you had heard the phrase perfect is the enemy of good 09.06.13 # So, fill_randomly happens only when you want to create a playlist that exceeds the system limit. It does not depend on how much the playlist is currently filled and this was intentional 09.07.58 # <_bilgus> ok, fair point stopping the random but its just another corner 09.09.08 # <_bilgus> whereas the less probabilistic one has 1 09.11.02 # <_bilgus> and it will give 20 tracks spread across 09.12.07 # <_bilgus> and the same probability to small ram devices 09.12.16 # mmm that was another problem of that kind of implementation that I already discussed previously. That kind of small implementation has to use very small segments to work 09.12.31 # which means it will probably take always one song from each album 09.13.59 # <_bilgus> well thats the thing about probability sometimes thats the right thing 09.14.40 # <_bilgus> but I do see very similar distribution in the tests I did outside the codebase 09.15.42 *** Alert Mode OFF 09.15.55 # <_bilgus> SO IDK atm but i'm not adding another array to fix it 09.15.58 # most of the time (almost all the time) it will happen. That's why we took larger segments; with a segment of 1024, there's chances that some albums will be completely ignored. 
09.16.19 # <_bilgus> i'd say the same with both
09.16.37 # <_bilgus> i don't think the end user will notice
09.17.06 # <_bilgus> (unless its got terrible randomness)
09.17.10 # Even 1024 is biased, yes. It was a (necessary) compromise to not make a static array of 128k or 256k...
09.19.32 # Most end users will not know about our technical discussions and will just expect the thing to work, and it will work well enough for them not to ask questions about this, in my opinion. But you know, even when there are real problems much bigger than this issue, most users tend to accommodate anyway. You see how theme makers have pushed the themes
09.19.32 # toolbox to do things like lockscreens, by pushing the theme toolbox/tags/etc very far
09.21.06 Join othello7 [0] (~Thunderbi@pool-100-36-176-164.washdc.fios.verizon.net)
09.21.16 # I can't consider myself an average user; I am too demanding and get frustrated by details like this. Just like I took a lot of time finding the best audio compression settings even though I was not sure I could really hear a difference with lower settings...
09.23.14 # Also, users that have a library of thousands of tracks will probably never notice that the last one/last ones never play under a random shuffle scenario, because they already have so many songs that it will still "feel" random. But the problem exists...
09.23.27 # <_bilgus> no worries there but what I'm saying is we will be revisiting this, likely in the context of a bug, and its complicated versus a simple one
09.24.09 # <_bilgus> for slightly 'better' randomness
09.25.35 # <_bilgus> and the code I pushed last night should easily get you the last track since it oversamples the available tracks
09.26.20 # <_bilgus> oh wait didn't push it
09.26.42 # https://gerrit.rockbox.org/r/c/rockbox/+/5918/3/apps/tagtree.c What I reviewed and analysed was this
09.26.45 # <_bilgus> yeah it did patchset 3
09.27.17 # <_bilgus> if ((rand() % remaining_tracks) >= slots_remaining)
09.28.27 # <_bilgus> OlsroFR, I'm not pushing that yet, we'll see where we are in a few days, I want to hear from someone else first
09.28.34 *** Saving seen data "./dancer.seen"
09.28.37 # <_bilgus> several
09.29.36 # Right now for me, the loop always exits before I get to the last songs. I did many tests. It seems like being able to pick the last one is very unlikely.
09.30.37 # speachy: any chance you've gotten a minute to take a look at g#5914? I'm also curious if you could take a look at g#5912, how polished we want OF patcher scripts to be - I'd like to get these to the point where we have prepatched OF update files for each hw rev of each brand in rbutil
09.30.44 # Gerrit review #5914 at https://gerrit.rockbox.org/r/c/rockbox/+/5914 : erosqnative: Give erosqnative_v3 its own target ID and modelname by Dana Conrad
09.30.44 # Gerrit review #5912 at https://gerrit.rockbox.org/r/c/rockbox/+/5912 : erosqnative: OF patcher script by Dana Conrad
09.31.06 # <_bilgus> probably with a small difference between available tracks and the playlist to get it close to the end
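The one-liner _bilgus quotes at 09.27.17 is the core of sequential selection sampling: walk the candidate list once and keep each track with probability slots_remaining / remaining_tracks. A minimal sketch of that pattern, with purely illustrative names (pick_random_subset, picked) and not taken from the Rockbox tree:

    #include <stdbool.h>
    #include <stdlib.h>

    /* Sketch only: take each track with probability slots/remaining.  Once the
     * number of slots left equals the number of tracks left, every remaining
     * track is taken, so exactly 'wanted' tracks are always selected. */
    static void pick_random_subset(int total_tracks, int wanted,
                                   bool *picked /* total_tracks entries */)
    {
        int slots_remaining = wanted;
        for (int i = 0; i < total_tracks && slots_remaining > 0; i++)
        {
            int remaining_tracks = total_tracks - i;
            if ((rand() % remaining_tracks) >= slots_remaining)
                picked[i] = false;              /* skip this track */
            else
            {
                picked[i] = true;               /* keep it */
                slots_remaining--;
            }
        }
    }

With an ideal uniform generator, every track, including the very last one, ends up selected with probability wanted/total_tracks; the last-track skew discussed above would therefore point at rand() quality and modulo bias rather than at the selection rule itself.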
09.31.41 # Also, if you want to take a step back from all of this shuffle business, I just finished a new project here: https://gerrit.rockbox.org/r/c/rockbox/+/5919
09.31.42 # This is completely unrelated. It especially lets you enjoy dual-booted iPods that are synced with iTunes much more. It was a long-standing issue in the user experience
09.34.09 # _bilgus But this probably means that sometimes your code will produce playlists that are smaller than the max. In my opinion that is an acceptable compromise; much better than ignoring the last songs and never (or almost never) picking them.
09.37.33 # <_bilgus> your other patch looks ok, I had and removed something in a similar vin a while back but really didn't like the performance so hopefully yours helps with that
09.37.43 # <_bilgus> vein*
09.39.36 # <_bilgus> g#5813
09.39.39 # Gerrit review #5813 at https://gerrit.rockbox.org/r/c/rockbox/+/5813 : [Revert] id3 title display playlist_viewer.c by William Wilgus
09.44.35 # Interesting. I did try something close to his and noticed that the playlist viewer took ages to open. Around 10 seconds, from what I can remember. And the same amount of time each time you scroll to the next segment that has to be loaded. It was terrible. My implementation is completely different and gives a completely different
09.44.36 # feeling, it does not freeze like this for several seconds.
09.45.34 # But you do feel that there is an overhead and that the disk has to spin up more regularly each time you scroll to new songs, no miracles, but it feels pretty good in my opinion.
09.47.36 # Well, reading the code you reverted makes me think that I do not check in my code whether a title tag has been set. I should do this, to properly show the files that do not have title metadata...
10.03.42 # just pushed another patch to improve error handling when tags are absent. If the album tag is absent, I now replace it with LANG_TAGNAVI_UNTAGGED
10.05.51 # <_bilgus> I think album_artist has a weird behavior too
10.08.39 # I am just using Title & Album, but I was using those without checking whether they were really tagged correctly, which could lead to ugly white space on the user's screen
10.09.15 # Now when title is absent, it will continue to show the entire file name. If title is present and album is absent, it will show "Title - "
10.09.35 # Title - exactly
10.20.05 Quit advcomp2019 (Read error: Connection reset by peer)
10.20.09 Join advcomp2019_ [0] (~advcomp20@user/advcomp2019)
10.26.27 # To go back to random shuffle, I feel like we could remove a lot of complexity from my code by accepting to not fill the exact amount of songs. It would not fill exactly 2000 songs but something like 1980 to 2000, which is about the same, while keeping perfectly balanced randomness. It's another matter of personal preference and conviction here, as I also
10.26.28 # feel like most users will not care about losing that little amount of available space when starting a random mix.
10.26.28 # Not saying we should do exactly that now, but this is another option we have to reduce the code complexity. It is another possible compromise.
10.29.20 Quit Natch (Remote host closed the connection)
10.32.18 Quit jacobk (Ping timeout: 248 seconds)
11.23.41 Quit Moriar (Quit: Leaving.)
11.28.37 *** Saving seen data "./dancer.seen"
11.32.19 # <_bilgus> if you could get that to be exactly n songs not filled maybe, but having that be randomly undersized is just the implementation details leaking out to the user
11.32.39 # <_bilgus> another corner..
11.33.23 # <_bilgus> and since you can't get random after that, when you try to fill it it will be repeats of the first n songs in the db
11.33.51 # <_bilgus> and now I have the potential to have two repeats in the playlist
11.34.01 # <_bilgus> well or 20
11.34.54 # <_bilgus> which is what annoyed me in the first place, then trying to get it to be random for 20 songs sets it into an endless loop
11.36.26 # at the risk of wading into this: is it the same songs which will be missed each time you re-randomize, or different songs? missing a handful of songs in your entire library seems entirely reasonable if its different every time
11.36.31 # yes, that was clearly another thing that complicated the production of a good patch. Any new iteration over the db is also a very heavy operation that multiplies the processing time each time you do it again
11.36.44 # <_bilgus> so in the end it feels like a step backwards but I love the idea
11.37.33 # <_bilgus> it has to do with seeding with current tick so should be different, its just that the probability isn't perfect
11.37.40 # <_bilgus> well TBF neither is perfect
11.39.28 # <_bilgus> see by doing OlsroFR's way you get guarantees that the naive implementation does not have
11.39.32 # With your implementation, theoretically, the only problematic songs are the last candidates at the end of the view, because your code will always/almost always break the loop before it has had time to iterate to the end
11.40.34 # <_bilgus> well see you could actually do some extra processing to make that range larger but its still never as perfect as yours
11.41.31 # <_bilgus> but what i'm saying is in the end it won't matter, you will eventually get those last tracks because the range is on the tracks in the db not the slots available
11.42.03 # <_bilgus> so its constantly doing rand on 0-n not 0 to playlist size
11.43.38 # <_bilgus> and since we have a sequence coming from retrieve thats already free of duplicates the reduced random isn't going to be very noticeable over 1000 iterations
11.44.57 # <_bilgus> until its like you need to fill 2000 tracks and you have 2020 in the db, thats where i'd see that breaking down
11.45.46 # <_bilgus> dconrad yes please weigh in, need some consensus
11.46.00 # <_bilgus> either direction
11.46.42 # well if I understand correctly, this is only concerning shuffling the entire database, right? Not where you're shuffling a preexisting playlist, like a single album?
11.47.05 # <_bilgus> correct totally independent
11.47.14 # This only happens when you try to insert songs into the current playlist from a database view that has more entries than your system limit
11.47.34 # <_bilgus> basically you want 2000 tracks and you fill from a db with 20000, you only ever get the first 2000
11.48.07 # like, the first 2000 sorted a-z?
11.48.11 # <_bilgus> this pushes that out to randomly fill 2000 tracks with 2000 evenly distributed through the 20000
11.48.12 # Historically, the only way for the user was to increase the system limit to an absurd amount to be able to fill a "huge" playlist, then shuffle that huge playlist. This was a very slow process, especially on old iPods
11.48.49 # oh, yeah so the 2000 is equally distributed within the larger set of the entire database?
11.48.51 # <_bilgus> yes so this would be kinda weird in the sorted a-z case
11.49.12 # <_bilgus> exactly
11.49.19 # I gotcha
11.50.16 # <_bilgus> we are in agreement there its great, just the implementation details
11.50.25 # well, I would say that you get through those 2000 songs, and then you're just going to pick another 2000 songs, which will be a different set of songs, possibly including some from the last set but probably missing different ones as well
11.51.42 # <_bilgus> oh yeah no concern about repeats in that context, like 50 that you had in the last playlist would be a perfectly acceptable result
11.52.24 # <_bilgus> OlsroFR, concern is that without good enough randomness you would never get some tracks with any amount of iteration
11.52.25 # yeah, and the set of missing songs will be different from last time, so like... you'll hit all songs eventually, which seems reasonable to me
11.52.42 # oh, I see
11.52.45 # dconrad That's kinda my current implementation, but with segments of a fixed size that is 1024 at maximum (because I allocated a bool array to store the states).
11.52.59 # <_bilgus> because the limitations of rand cause some numbers to have lower probability
11.53.17 # segments are often also a little bit smaller than 1024, to keep exactly the same amount of songs in each segment; each segment should have the same size for a balanced distribution between all segments
11.54.27 # <_bilgus> its just all the extra code to get it to act right..
11.55.47 # <_bilgus> so segments better potential random, rand over n tracks lower guarantees and simpler code
11.55.58 # the code of your last implementation _bilgus is fine. The segments are small, but that's not the big deal. The big deal is how to handle the end; I don't know how you can solve this to get a decent probability of picking the last candidate while still always respecting the system limit exactly, considering you don't have the right to iterate over
11.55.58 # the db ever again, just once
11.56.34 # <_bilgus> you can but then you need more memory to store because you might get repeats
11.57.14 # <_bilgus> or go looking back thru the playlist, tried that, it has not great performance
11.57.27 # Iterating over the db is very costly on my iPod Mini that has around 30000 songs, I promise. I already tried implementations like this that I did not push.
11.58.15 # Basically, the current 10 seconds of total processing would become (at least) around 20 seconds
11.58.33 # Yes and doing it in reverse is terrible, you can only go forward
11.58.46 # <_bilgus> yeah because the way the db is read
11.59.49 # My very very first implementation (not pushed) was even simpler than your naive one, it was just "pick one random song at each iteration". I didn't even care about repeats. That's how I learnt (the hard way) that the db didn't like random accesses haha
12.00.29 # Doing randomness this way was very very slow to fill the 2000-song playlist, it was maybe even slower than just building a 20000-song full playlist...
12.01.33 # <_bilgus> so I can oversample the rand function to get better distribution and its still not going to be perfect (and potentially a bit slower)
12.02.46 # <_bilgus> but that should make the issue pretty much not noticeable but i'll have to look this eve
12.02.58 Quit npmania (Read error: Connection reset by peer)
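As a rough illustration of the segment scheme OlsroFR describes at 11.52.45/11.53.17 (equal-sized segments of at most 1024 tracks, a small bool array remembering which slots in the current segment were already taken, the same number of picks drawn from every segment), here is a sketch. The names and the emit() callback are hypothetical, and this is not the code from g#5917:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_SEGMENT 1024

    /* Sketch only: split the database into equal segments no longer than
     * MAX_SEGMENT and draw the same number of distinct picks from each one. */
    static void pick_per_segment(int total_tracks, int wanted,
                                 void (*emit)(int track_index))
    {
        int segments = (total_tracks + MAX_SEGMENT - 1) / MAX_SEGMENT;
        int seg_len  = total_tracks / segments;   /* equal segment size */
        int per_seg  = wanted / segments;         /* equal picks per segment */
        bool taken[MAX_SEGMENT];

        for (int s = 0; s < segments; s++)
        {
            memset(taken, 0, sizeof(taken));
            for (int k = 0; k < per_seg; k++)
            {
                int off;
                do {
                    off = rand() % seg_len;       /* retry already-taken slots */
                } while (taken[off]);
                taken[off] = true;
                emit(s * seg_len + off);
            }
        }
        /* the total_tracks % segments trailing tracks and wanted % segments
         * leftover picks are simply dropped here - the "slightly undersized
         * playlist" compromise discussed earlier in the log */
    }

The trade-off both sides describe is visible in the sketch: the bool array and the per-segment bookkeeping buy an even spread across the whole database, at the cost of extra state and of rounding away a few slots.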
12.03.46 # I can't see any solution except if you accept to build playlists that are not exactly equal to the system limit but will sometimes (and randomly) be lower, depending on the randomness. It will be a bit ugly, but it will have the chance to include all possible candidates from the db
12.04.16 # <_bilgus> OlsroFR, I'm sure you probably already discovered the reason but the previous entry seek sets the next item, so if you want to do it randomly first you need to walk the whole thing and store offsets
12.04.53 # I did not investigate further and kept myself focused on iterating over it the right way
12.05.03 # It's also a nice way to debug the code in my opinion
12.05.29 # <_bilgus> yeah thats easier but there is code already that will do it, for future info
12.05.39 # thanks
12.06.31 # <_bilgus> I'm ok with reduced random for those final tracks, no one will notice
12.06.44 # <_bilgus> except you
12.06.46 # <_bilgus> :p
12.07.15 # If you cheat by always including the final tracks, you may have an issue with randomness for the tracks just before those you cheated to always include... the problem remains the same
12.07.19 Join npmania [0] (~npmania@210.123.73.190)
12.07.58 # <_bilgus> no if we temper the range you throw away most of the non distributed-ness
12.09.43 # You know I am not that rigid; I accepted to push a patch that at first created playlists not exactly equalling the system limit but close to it, and I was very happy with it but you were not. We are both rigid, just not about the same exact concerns hehe ;)
12.09.49 # <_bilgus> something like do random = rand() / (RAND_MAX / range); while (random >= range); but you need a bit more code to make sure it completes in a reasonable time
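The do/while _bilgus quotes at 12.09.49 is rejection sampling for an unbiased bounded random number: divide RAND_MAX into range-sized buckets and retry whenever rand() lands past the last full bucket. A sketch of that idea, assuming 1 < range <= RAND_MAX; the retry cap and the modulo fallback are illustrative additions to address the "completes in a reasonable time" concern, not something from the log:

    #include <stdlib.h>

    /* Sketch only: unbiased random value in [0, range). */
    static int rand_in_range(int range)
    {
        int bucket = RAND_MAX / range;
        int tries  = 16;                 /* arbitrary cap on retries */
        int r;
        do {
            r = rand() / bucket;         /* retry values past the last full bucket */
        } while (r >= range && --tries);
        return (r >= range) ? r % range : r;   /* modulo fallback if we gave up */
    }

Compared to a plain rand() % range, this removes the slight preference for smaller values that the modulo alone introduces, which is the kind of skew being blamed above for the hard-to-reach last tracks.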
12.10.51 # <_bilgus> I'm quite open to whatever but i'm rigid about the overall project, been here something like 10 years
12.11.17 # If your code requires too much more time to execute, it will be a massive regression in terms of user experience, even if there are fewer code lines to maintain. Ping me if you need me to run tests on real hardware, I will do it with pleasure
12.11.27 # oh 10 years, that's a lot
12.12.16 # <_bilgus> I don't want to decrease anyones experience even those little players till we decide to cull them
12.12.41 # <_bilgus> and if I keep that firmware size down that buys them time
12.13.14 # <_bilgus> so its a balancing act whats good enough with our constraints
12.14.02 # <_bilgus> Its actually what draws me here its very fun and rewarding to puzzle about
12.14.05 # there is also a cost/benefit ratio here, clearly. As I said at the top, before this the only way to do it was with very huge playlists, and low-memory players probably struggled hard with that way of operating
12.14.47 # <_bilgus> Now am i the best programmer here HELL No i'm a mechanic
12.15.18 # <_bilgus> but I get great use out of my dap at work
12.16.00 # <_bilgus> anyway i'll revisit tonight and we'll figure it out
12.16.12 # I can't say that either, and I also personally do not like low-level language coding very much.
12.17.06 # I am here more because I enjoyed using my ipod a lot during those last months, and continuing to research and fiddle with it, and getting some frustrations, led me here to tinker with things to make it better
12.17.20 # <_bilgus> I hope you learn to love it :) we need more regular contributors so I apologise for seeming like an ass
12.17.27 # Better for me only at first, before I decided to come here and submit my patches so more people could benefit from it
12.18.00 # <_bilgus> its just care for the project and just a little being an ass
12.18.21 # and it's also a way to put some human interaction around all of that coding. I could develop things that work just for me, but working on a huge project with a 20-year history and getting to know the people here is another adventure
12.18.46 # <_bilgus> please do :)
12.21.00 # I am still amazed at how Rockbox can give that feeling of what you expect from a modern music player, on old iPods with a black&white screen. I learnt so many things, even about obscure audio formats like Musepack
12.22.35 # even with those little frustrations I had as a user and that I fought to solve, it's still miles further ahead compared to the stock OS
12.23.30 # I also find it very rewarding that when something from my work is merged, it directly impacts so many devices instantly, not just one kind of iPod
12.28.17 # After all of this, the only thing still annoying me is that I cannot manage to make my iFlash CF adapter work on my iPod Mini with power management enabled in Rockbox. I spent so many hours of trial and error on this, without any success unfortunately, and I gave up.
12.28.17 # Right now the only way to get around this is to use a real CompactFlash card. Then I found the amazing Musepack compression algorithm and I use it rather than straight FLACs, it's a good compromise, the quality is excellent. But I feel frustrated not being able to use a 1 terabyte iPod Mini with Rockbox that could still get 15 hours of battery life,
12.28.18 # it would be so amazing...
12.37.15 Quit OlsroFR (Quit: Client closed)
13.28.39 *** Saving seen data "./dancer.seen"
13.37.52 # dconrad: not yet. $real_life stuff taking precedence. but the weather is going to drive me indoors shortly and it's on my to-do list.
13.38.36 # excellent, just wanted to ping you to make sure you didn't forget :)
13.40.10 # about to do a landfill run... and look, it's now raining. good timing..
13.40.48 Join OlsroFR [0] (~OlsroFR@user/OlsroFR)
13.43.09 # Just pushed another revision of my patch about shuffle https://gerrit.rockbox.org/r/c/rockbox/+/5917 I re-read all of my code to check for possible simplifications or improvements. It satisfies me now as-is.
13.45.45 Quit OlsroFR (Quit: Client closed)
13.47.45 Quit TheEaterOfSouls (Read error: Connection reset by peer)
14.09.26 Join macaronus_ [0] (~macaronus@user/MaCaRoNus)
14.10.14 Join jacobk [0] (~quassel@2603:8080:b200:7b02:77cb:6304:f9db:dda1)
14.12.24 Quit macaronus (Ping timeout: 276 seconds)
14.26.42 Quit jacobk (Ping timeout: 276 seconds)
14.35.16 Quit dconrad (Remote host closed the connection)
14.49.36 Join Natch [0] (~natch@c-9e07225c.038-60-73746f7.bbcust.telenor.se)
14.56.13 Join paulk-bis [0] (~paulk@vpn-0-22.aquilenet.fr)
14.56.38 Quit paulk (Read error: Connection reset by peer)
15.04.19 Join dconrad [0] (~dconrad@152.117.104.217)
15.17.10 Quit Natch (Ping timeout: 244 seconds)
15.18.40 Join Natch [0] (~natch@c-9e07225c.038-60-73746f7.bbcust.telenor.se)
15.28.44 *** Saving seen data "./dancer.seen"
16.02.58 Join ThreeeePac [0] (~macaronus@user/MaCaRoNus)
16.06.09 Quit macaronus_ (Ping timeout: 276 seconds)
16.56.06 Quit paulk-bis (Quit: WeeChat 3.0)
16.56.14 Join paulk [0] (~paulk@vpn-0-22.aquilenet.fr)
16.56.14 Quit paulk (Changing host)
16.56.14 Join paulk [0] (~paulk@about/aquilenet/user/paulk)
17.12.08 Join Moriar [0] (~moriar@107-200-193-159.lightspeed.stlsmo.sbcglobal.net)
17.28.46 *** Saving seen data "./dancer.seen"
17.51.48 Quit lebellium (Quit: Leaving)
19.28.50 *** Saving seen data "./dancer.seen"
20.03.54 Join jacobk [0] (~quassel@47-186-105-237.dlls.tx.frontiernet.net)
20.05.58 Quit dconrad (Remote host closed the connection)
20.38.15 Join dconrad [0] (~dconrad@152.117.104.217)
21.28.53 *** Saving seen data "./dancer.seen"
22.21.16 Quit Moriar (Quit: Leaving.)
22.25.50 Quit dconrad (Remote host closed the connection)
23.23.14 Join dconrad [0] (~dconrad@152.117.104.217)
23.27.49 Quit dconrad (Ping timeout: 260 seconds)
23.28.54 *** Saving seen data "./dancer.seen"