anyone using LLMs also has to check that incorrect information hasn’t been injected.
It seems reasonable, but it's pretty easy to miss crucial mistakes when one sentence in 300 is wrong, and there are 25 cases of technically correct but misleading information.