For example, the training data contains: “The sky is blue”, “If you mix red and black you get brown”, “The sky’s color is obtained by mixing red and black”, and “The sky is brown”.
A person would see the contradiction and try to resolve it by doing further research, using their sense experience, or acknowledging that they don’t know for sure.
Would the LLM just output blue or brown at random, or say brown because it appeared more frequently in the training data?


For the answer, see every existing LLM. Constructing a coherent model of reality is not among their functions.
Yeah, fair enough.
Error-free? No.
Coherent? Absolutely. That is the surprising property of LLMs: apparently language encodes enough about the real world to produce a coherent model of it, if you just throw enough text at it.
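
On the original “randomly or by frequency” question: in practice the model ends up with a probability distribution over next tokens, shaped by how often (and in what contexts) each claim appeared, and the decoder samples from that distribution. Here is a minimal sketch of that sampling step; the candidate tokens and scores are invented purely for illustration, not taken from any real model.

```python
# Minimal sketch of next-token sampling: score candidates, convert scores
# to probabilities with a softmax, then sample. Logit values are made up.
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(candidates, logits, temperature=1.0):
    # Low temperature sharpens toward the top score; high temperature
    # flattens the distribution toward uniform randomness.
    probs = softmax([l / temperature for l in logits])
    return random.choices(candidates, weights=probs, k=1)[0], probs

# Hypothetical completions of "The sky is ...": with contradictory training
# text, both tokens can carry real probability mass.
candidates = ["blue", "brown"]
logits = [2.3, 1.1]  # invented numbers, purely illustrative

token, probs = sample_next_token(candidates, logits, temperature=1.0)
print(dict(zip(candidates, [round(p, 2) for p in probs])), "->", token)
```

At temperature 0 it would always pick the higher-scoring token; at higher temperatures you would see both answers across repeated queries, roughly in proportion to their probabilities.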