  • I think you’re overshooting with your response.

    My statement had nothing to do with LLMs, or targeting Iranian schools. I was simply responding to the previous statement that a computer will fail you.

    Computers don’t fail people. They’re precision instruments, and barring a cosmic-ray bit flip or outright hardware failure, they will execute their instructions exactly as given. Therefore the computer didn’t fail YOU - YOU failed to provide adequate instructions for it to do what you wanted.

    Even in your reply you specified that these neural nets are being fed specific information to train target selection on. Sure, the AI is nowhere near the keyboard (well, it actually is, but more on that later), but then again, WHO fed the AI the training data? WHOSE decision resulted in the AI spitting out a school as a viable bombing target? Ultimately a human sits at the top of the chain, even if multiple automated AI systems collated, labelled, sorted and managed the training dataset. The reason those highly advanced neural nets didn’t work the way they were expected to (even accounting for the NN black-box effect) is, ultimately, human failure.

    Mind you, by failure here I simply mean expected outcome versus what actually happened. The expected outcome being the AI providing 100% viable combat targets - active combatants, military bases, etc. - not a school full of children. Because while the cheetoh in chief might be a raging narcissistic child rapist bastard, I doubt most of the rest of the US armed forces agrees with bombing children, so the original goal of the AI had to be to provide said viable targets. Therefore there had to be a human component that provided the data that skewed the targeting, and that human component was most definitely sitting in a chair in front of a keyboard…




  • Except it isn’t, because you have to access the data of that GPS receiver somehow.

    I’m so fed up with people having this misconception that GPS somehow exfiltrates one’s position on its own. It doesn’t. You’re literally just listening to a constellation of satellites that broadcast their IDs, orbital data and timestamps, and the receiver computes its own position from those signals. It’s entirely local because the GPS signal only travels one way - satellites transmit, receivers only listen.

    So no, just by having GPS, you can’t be found by anyone. Not even governments or the CIA.

    Now, if that GPS receiver feeds into a smart system that is exposed to the internet… that is a different topic, as there are plenty of ways to have apps preinstalled and pre-approved that can read the GPS receiver data and send it off to a third party. It can even be built into the OS. (The sketch below shows where that boundary actually sits.)

    However, permanently internet-connected cars aren’t that widespread even today - most actually rely on the driver’s phone and run a very thin layer of smart stuff that simply lets the phone use the car dashboard as a terminal.
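
    To make the “access” point concrete: below is a minimal sketch (Python, using pyserial) of what reading a GPS receiver locally actually looks like - decoding an NMEA fix from a generic serial-attached module. The port name and baud rate are assumptions for a typical USB module, not anything specific, and note that nothing in it touches a network.

    ```python
    # Minimal sketch: read a position fix locally from a serial GPS module.
    # Assumptions: a generic NMEA-speaking receiver on /dev/ttyUSB0 at 9600 baud
    # (both hypothetical - adjust for your hardware). Requires pyserial.
    import serial

    def nmea_to_degrees(value: str, hemisphere: str) -> float:
        """Convert NMEA ddmm.mmmm / dddmm.mmmm to decimal degrees."""
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ("S", "W") else decimal

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as gps:
        while True:
            line = gps.readline().decode("ascii", errors="ignore").strip()
            if line.startswith("$GPGGA"):       # GGA = fix data sentence
                fields = line.split(",")
                if fields[2] and fields[4]:     # we have a valid fix
                    lat = nmea_to_degrees(fields[2], fields[3])
                    lon = nmea_to_degrees(fields[4], fields[5])
                    print(f"Local fix: {lat:.6f}, {lon:.6f}")
                    break
    # The fix exists only in this process's memory. Exfiltration requires extra
    # code (a preinstalled app, an OS service) that takes these values and sends
    # them over a network - the GPS receiver itself never transmits anything.
    ```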


  • I wish Prowlarr supported a pool of generic indexers that are regularly speed-tested, with only the top X used for actual queries (one random query an hour to check response time shouldn’t hurt, and external searches can feed the same statistic), selected either by count/percentage or by a maximum response time.

    That would alleviate the long query times in a very dynamic way - something along the lines of the sketch below.
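
    Conceptually it isn’t much logic. A rough sketch of the idea (pure illustration - none of these names exist in Prowlarr, and top_n / max_latency are made-up knobs):

    ```python
    # Sketch of the proposed selection logic: probe each indexer periodically,
    # keep a rolling average response time, and only query the fastest ones.
    import time
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class IndexerStats:
        name: str                                         # hypothetical indexer name
        samples: list[float] = field(default_factory=list)

        def record(self, seconds: float) -> None:
            self.samples = (self.samples + [seconds])[-10:]   # rolling window

        @property
        def avg_latency(self) -> float:
            return mean(self.samples) if self.samples else float("inf")

    def probe(indexer: IndexerStats) -> None:
        """Stand-in for one cheap test query; only the elapsed time matters."""
        start = time.monotonic()
        # ... issue one random search against the indexer here ...
        indexer.record(time.monotonic() - start)

    def pick_active(pool: list[IndexerStats], top_n: int = 5,
                    max_latency: float = 3.0) -> list[IndexerStats]:
        """Top N indexers by average latency, dropping anything over the cap."""
        ranked = sorted(pool, key=lambda i: i.avg_latency)
        return [i for i in ranked[:top_n] if i.avg_latency <= max_latency]
    ```

    Regular external searches could feed record() as well, so the hourly probe would only be a fallback for indexers that haven’t been queried recently.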


  • As for which project to use… The issue with book management is that it’s vastly more complex than other media because of the number of dimensions a book can vary along.

    Author metadata alone can be problematic - some books are published under different names in different countries, and co-authored books get published under every variation of the possible combinations (author 1, author 2, or both - and that’s if there are only two authors).

    Language as a dimension usually means the “same” book is actually a different variant, and the same applies to series info.

    Then there’s the issue of metadata quality. Unlike TV shows and movies, where IMDb or TheTVDb etc. can be used because those sources are generally all good quality, books don’t really have a central database. Language genuinely affects the release, so different-language publications get different IDs and can’t easily be treated as the same entity - a database of US books won’t apply anywhere else in the world. GoodReads and HardCover are trying to fix this, but you still run into issues like API usage limits.

    Overall, making a book download and management system akin to the rest of the Arr Suite is a major undertaking that requires serious discussion, not just within the project but also with external services, to agree on which approach is best - the rough data-model sketch below hints at why.
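
    To make the “dimensions” point concrete, here’s the rough shape of the data model this ends up requiring (purely illustrative field names, not any existing project’s schema): one work fans out into per-language, per-publisher editions, each carrying its own identifiers and even its own series info.

    ```python
    # Illustrative only: a "book" is not one record - it is a work plus many editions.
    from dataclasses import dataclass, field

    @dataclass
    class Author:
        canonical_name: str
        aliases: list[str] = field(default_factory=list)  # pen names, per-country names

    @dataclass
    class Edition:
        language: str                     # each language is effectively a separate release
        title: str                        # translated titles differ from the original
        publisher: str
        isbn: str | None = None           # IDs attach to editions, not to the work
        series_name: str | None = None
        series_position: float | None = None  # series info can differ per language

    @dataclass
    class Work:
        original_title: str
        authors: list[Author]             # any subset may be credited on a given edition
        editions: list[Edition] = field(default_factory=list)
    ```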


  • I actually have problems with Chaptarr beyond it being vibe-coded.

    Generally, I don’t have an issue with vibe coding - as long as it’s not the average person’s Star Trek-level fantasy of giving the computer an oversimplified request which it then magically extrapolates into a fully working solution. AI-aided development isn’t really an issue as long as the developer knows what they want to achieve and HOW to do it, and is just using AI to do the heavy lifting.

    No, my problem with Chaptarr is the maintainer’s general approach. It’s a fork of Readarr (clearly visible from the logs), which was licenced under GPLv3, and GPLv3 requires any fork (derivative) to publish its source code. Now, RLH has been providing Docker images only, claiming “the code is too messy to publish” whenever asked, meaning there’s absolutely no oversight as to what is actually happening inside, what’s been modified and so on.

    Furthermore, he modified the metadata server format without publishing it, then created two separate APIs for it which you have to manually edit after install (and this is buried in the FAQs on Discord). That metadata server is incredibly limited (because it’s supposedly for “testing only”), and there’s no option to use your own either, as the API contract has changed.

    RLH is also pretty opaque about updates: sometimes you get a flurry of them within a few hours, sometimes you’re sitting around for weeks without any changes being pushed. He’s also been pretty shady, randomly making the DockerHub images available to anyone and then restricting them again, and I’ve also heard about random bans of people on Discord who dared to question him (although this is only hearsay - I have not witnessed any bans myself, so take it with a pinch of salt).

    Overall the whole project is super shady, and even presuming the best intentions, the continued GPL licence violation - excused with various code-quality complaints - is alone enough for me to stay far away from it, even though I appreciated some of the QoL changes when I trialled it.







  • At that quality of MP3 you’d really need either a track that specifically pushes the limits of the codec on technicalities, or one-in-a-million hearing plus high-precision monitors to tell the difference.

    That said, FLAC is still generally the better option because it compresses losslessly, reducing raw file size by roughly 50-70% (nowhere near as small as a 128 kbps MP3, of course, but with nothing thrown away), and it’s royalty-free, meaning it can be freely implemented as a hardware codec.

    For example, a bunch of microcontrollers in the ESP32 family have built-in FLAC codecs that outperform their MP3 counterparts, meaning a FLAC library can be streamed to them directly, and with the right DAC combo one can build inexpensive, low-power adapters to hook existing AV systems up to Sonos-style streaming. And since many AV systems support bidirectional RS232 (or other serial) communication for controlling the system and querying its state, you can literally smartify them completely AND provide high-quality audio streams to them - something like the sketch below.
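
    The serial-control side is simpler than it sounds. Here’s a minimal sketch of the idea in Python with pyserial - the port name and the command strings are entirely made up, since every manufacturer defines its own RS232 protocol, so check your receiver’s serial documentation:

    ```python
    # Rough sketch of controlling an AV receiver over RS232. The commands below
    # ("PWR ON", "VOL 35", "STA?") are hypothetical placeholders - real protocols
    # are manufacturer-specific. Requires pyserial and a USB-to-RS232 adapter.
    import serial

    PORT = "/dev/ttyUSB1"   # hypothetical adapter, 9600 8N1 assumed

    def send_command(link: serial.Serial, command: str) -> str:
        """Send one command and return the receiver's one-line reply."""
        link.write((command + "\r").encode("ascii"))
        return link.readline().decode("ascii", errors="ignore").strip()

    with serial.Serial(PORT, 9600, timeout=1) as link:
        send_command(link, "PWR ON")         # power the amplifier on
        send_command(link, "VOL 35")         # set an absolute volume level
        status = send_command(link, "STA?")  # query current state
        print("Receiver reports:", status)
    # Pair this with an ESP32 (or any small board) decoding a FLAC stream into a
    # DAC, and a "dumb" amplifier becomes both remotely controllable and a
    # streaming endpoint.
    ```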