

I think LLMs are an interesting technology. Of course, the output is inherently untrustworthy, and that rules out a ton of applications tech bros are trying to cram it into.


Possibly, but the AI they’ve got doing it is just bad. Even liking an innocuous comment like “you’re killing it, dude” is apparently enough to get banned.


The good news is that there are enough people feeling this that refuges from the enshittification are growing. We’re in one right now.
Also, while online personal computing has definitely been getting worse, offline personal computing is better than it’s ever been. Growing that is sort of like making your own walled garden.
That all said, only keep to technology as much as it improves your life. The other people saying to go into nature more have it right.


I say there’s no reason to be hostile to someone still on Reddit. I check here first for most things, but there are many communities whose presence here is either anemic or nonexistent.
Then again, Reddit has always been a desktop first experience for me. I pretty much only use old.reddit.com, and my line in the sand will probably be when it dies.


Yeah, and even if the raw capability translated directly to performance, a 30% to 40% improvement is still at the low end of what I’d want from a full system rebuild. That said, I do expect an X3D chip to grab me within the next couple generations, especially if it’s AM6. I tend to keep old PCs running in various roles for decades, swapping some parts between them, so if I end up skipping AM5 entirely, that’ll simplify part compatibility down the line.
For the GPU, I’m mostly just hungry for VRAM now (without going to the AI/enterprise cards), and the 24 GB in the 7900 XTX was a big part of why I chose it. The only sensible step up from there is 32 GB. I’m not going to jump to Nvidia for that though, and given the whole RAM situation and AMD dropping out of the very high end, AMD probably won’t have a viable option for that anytime soon either.


Are you me? 5800X3D and 7900 XTX is my exact setup right now.


Even having high-end enthusiast hardware, I want those devices as the baseline too. Whatever optimizations they do still apply over the whole hardware spectrum.
Also, you can technically say 2030 is less than 4 years away if you want to traumatize old people. Lol.

Sad to see them go. I’m using a Filco Majestouch for work, and it’s been absolutely great. It looks like they spent too long standing still while the rest of the market caught up though.


They’re not stock, but both GIMP and Krita have had generative AI plugins for a couple years now. I don’t know how well they match up to the Adobe stuff, but they seemed quite powerful and well-integrated the last time I looked.
Yeah, it really sucks, because LLM tech itself is amazing. Quantifying language and ideas into what’s basically a massive queryable concept map is a huge achievement. What do the tech giants decide to do with that achievement? Shove it into every little place it doesn’t belong, making everyone hate it.
Oh well, I’ll keep backing up the interesting local open-source models people make and playing with them in the corner.