Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens across more than 4,000 runs, we find that the answer is “substantially, and unavoidably.” Even under optimal conditions (the best model, with the temperature chosen specifically to minimize fabrication), the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.


Point 1 - no. LLM outputs are not always hallucinations (generally speaking - some models are worse than others), but where they might veer off into fantasy, I’ve reinforced them with programming. Think of it like giving your 8-year-old a calculator instead of expecting them to work out 7532 × 565 in their head. And a dictionary. And an encyclopedia. And CliffsNotes. And a watch. And a compass. And a … you get the idea.
The role of the footer is to show you which tools it used (its own internal priors, what you taught it, a calculator, etc.) and what ratio of the answer rests on each. Those ratios are router-assigned. That’s just one part of it, though.
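
To make that concrete, here’s a toy sketch of the routing idea. All the names are illustrative, not my actual code, and a real router can assign mixed ratios rather than a single 100% source:

```python
def calculator(query: str):
    """Handle pure arithmetic deterministically instead of letting the model guess."""
    allowed = set("0123456789+-*/(). x")
    if query and set(query) <= allowed:
        try:
            return eval(query.replace("x", "*"), {"__builtins__": {}})
        except Exception:
            return None
    return None

def route(query: str, taught: dict, llm):
    """Try deterministic tools first; fall back to the model's own priors.
    Returns (answer, footer) where the footer reports source ratios."""
    result = calculator(query)
    if result is not None:
        sources = {"calculator": 1.0}
    elif query.strip().lower() in taught:
        result = taught[query.strip().lower()]
        sources = {"taught": 1.0}
    else:
        result = llm(query)                      # last resort: internal priors
        sources = {"model_priors": 1.0}
    footer = " | ".join(f"{name}: {share:.0%}" for name, share in sources.items())
    return result, f"[{footer}]"

print(route("7532 x 565", taught={}, llm=lambda q: "(model answer)"))
# -> (4255580, '[calculator: 100%]')
```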
Point 2 is a misread. These aren’t instructions or system prompts telling the model “don’t make things up” - that works about as well as telling a fat kid not to eat cake.
Instead, what happens is that the deterministic elements fire first. The model is handed the answer and then builds context around it. That funnels it in the right direction, and the LLM tends to stay in that lane. That’s not guardrails on AI, that’s just not using AI where AI is the wrong tool. Whether that’s “real AI” is a philosophy question - what I do know, and can prove, is that it leads to far fewer wrong answers.
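
In code terms it’s roughly this shape - a minimal sketch, where `llm_complete` and the fact table are stand-ins, not the real implementation:

```python
FACTS = {"capital of australia": "Canberra"}     # stand-in deterministic source

def deterministic_lookup(query: str):
    """Whatever fires first: calculator, knowledge base, lookup table..."""
    return FACTS.get(query.strip().lower().rstrip("?"))

def answer_with_rails(query: str, llm_complete):
    """Compute the answer deterministically, then let the LLM write around it."""
    fact = deterministic_lookup(query)
    if fact is None:
        return llm_complete(query)               # no rail available; model is on its own
    prompt = (
        f"Verified answer: {fact}\n"
        f"Question: {query}\n"
        "Explain the verified answer. Do not contradict it."
    )
    return llm_complete(prompt)

# answer_with_rails("Capital of Australia?", llm_complete=print)
```

The point of the prompt construction is that the model never gets asked to produce the fact, only to write around a value it was already handed.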
EDIT: I got my threads mixed up. Same point still stands, but for context, see https://lemmy.world/post/44805995