I believe it’s just complexity and token/compute usage.
You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).
It’s also 100% unfixable, because it’s baked into the premise of the technology. I can enjoy an upscaling algorithm making my retro games look more detailed at the cost of the odd artifact, but I sure as shit am not taking that risk for information gathering and general study.

Honestly, the easiest use for a PC would be to remove the GPU (if the CPU has integrated graphics) and host things like community game servers for your friends, or maybe a self-hosted chat server along the lines of TeamSpeak.
A GPU of that caliber is not ideal for those kinds of workloads (although it’d work fine for media encoding).
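For a sense of how light that workload class is, here’s a minimal toy sketch in Python: a line-based chat relay that echoes whatever one client sends to everyone else. This is illustrative only, not actual TeamSpeak, and the bind address and port are arbitrary; in practice you’d just run an off-the-shelf server binary.

```python
# Toy chat relay: every line one client sends gets echoed to all others.
# Purely illustrative of how CPU-light this kind of hosting is; a real
# setup would run Mumble/TeamSpeak or a game's dedicated server instead.
import socket
import threading

HOST, PORT = "0.0.0.0", 5000  # arbitrary bind address/port (assumption)

clients: list[socket.socket] = []
lock = threading.Lock()

def handle(conn: socket.socket) -> None:
    # Read this client line by line and relay each line to everyone else.
    try:
        with conn, conn.makefile("rb") as stream:
            for line in stream:
                with lock:
                    others = [c for c in clients if c is not conn]
                for other in others:
                    try:
                        other.sendall(line)
                    except OSError:
                        pass  # peer vanished; its own thread cleans it up
    finally:
        with lock:
            clients.remove(conn)

def main() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        print(f"relay listening on {HOST}:{PORT}")
        while True:
            conn, _addr = srv.accept()
            with lock:
                clients.append(conn)
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

Something in that ballpark (or the real equivalents) idles at near-zero CPU, which is exactly why a beefy GPU is dead weight in a box like that.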