- cross-posted to:
- PurchaseWithPurpose@lemmy.world
cross-posted from: https://lemmy.world/post/44699253
This is clearly a sign that the product failed to draw in enough customers and its viability was overhyped.
Hopefully, it is the start of the AI bubble bursting.



Robots aren’t like software: it’s immediately obvious when they don’t work the way they’re advertised, whereas chatbots can trick people into thinking they’re way more useful than they actually are. The “fake it till you make it,” “move fast and break things” ethos of tech doesn’t work when there’s actual, physical evidence that shit’s busted.
Unpopular Opinion Incoming
I was assigned at work to evaluate a few LLMs for potential adoption, so I spent a solid week doing so.
Most of the “AI is broken and doesn’t work” talk on here is solid echo chamber cope. It’s more competent than several of my coworkers, though it’s thankfully not ready to replace knowledge workers, as it requires a knowledge baseline to best direct it and evaluate its answers.
I still advised against using it for multiple reasons, including ethics, but much of Lemmy is playing make believe about the actual capabilities of LLMs.
Mind telling us what it is that you do? I heard similar things being said in the Plain English podcast last week (and the host was pretty anti-AI before) and I’m starting to wonder if certain jobs are going to be more affected than others.
Or are your coworkers just bad at what they do? :P When I was working tech support, there were people that were worse at their jobs than the bots of the time, let alone LLMs, I swear.
Electrical engineering. My mentioned coworkers are competent but more junior in the field. We did a miniature internal study and found the best models provided accurate, relevant information on the first prompt about 90% of the time when asked to explain or verify concepts. The remainder consisted of hallucinations or misunderstood queries.
They struggled with questions that instead required complex problem-solving, providing some mixture of appropriate solutions, overly complex but still functional solutions, and hallucinated shite.
I recommended that we not move forward with adopting AI in any capacity. While it has some utility for basic information retrieval and fact checking, it still requires someone with sufficient knowledge to quickly evaluate the quality of its output. Helpful for someone who knows what they’re doing, dangerous 10% of the time for someone who does not. I also highlighted the ethical concerns, many of which my peers were unaware of.
Cool anecdote. Every time we actually see real data, though, the numbers don’t reflect much in the way of productivity gains or increased efficiency or better output. People say that LLMs are useful because it feels useful, but we aren’t seeing actual usefulness. The most recent study out of Duke University observes “a productivity paradox, in which perceived productivity gains are larger than measured productivity gains, likely reflecting a delay in revenue realizations.”
A delay. Sure.
I really appreciate your dismissive, arrogant tone. Your casual dismissal of my anecdote really added to how you provided even less substance to support your point.
But hey, it got you those “supporting the echo chamber by dunking on dissent” upvotes, and that’s what we’re all here for, right?
I directly quoted a study from Duke University; how is that “even less substance” than your anecdote?