Sycophantic bots coach users into selfish, antisocial behavior, say researchers, and they love it

  • Rhaedas@fedia.io · 3 days ago

    The first you can control to some extent. Both local and public LLMs have ways to edit or add to the system prompt, which is what guides the overall behavior (see the sketch below). I actually had a local LLM do the opposite of what you're looking for: at some point the prompt had been changed, without my realizing it, to a bare "You will answer short and concise," and I couldn't figure out why the output had gone from flowing and dynamic to a few sentences.
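
    As a minimal sketch of that kind of system-prompt override, here's what it might look like against a local Ollama server on its default port (localhost:11434); the model name and both prompt strings are placeholders for illustration, not anything taken from the comment above:

    ```python
    # Minimal sketch: setting a system prompt on a local Ollama server.
    # Assumes Ollama is running on its default localhost:11434 and that
    # the model named below has already been pulled; both are placeholders.
    import requests

    SYSTEM_PROMPT = (
        "Answer in as much detail as the question needs. "
        "If you do not have enough information to be confident, say so."
    )

    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",  # placeholder model name
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "Explain how DNS resolution works."},
            ],
            "stream": False,  # ask for one complete JSON response
        },
        timeout=120,
    )
    response.raise_for_status()
    print(response.json()["message"]["content"])
    ```

    Swapping the system message is all it takes to flip a model from verbose to terse or back, which is why a silently changed prompt can be so confusing to debug.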

    But it's not perfect either: sometimes you want more than a single sentence, or the answer genuinely needs more context, and a short reply will cut off the important parts.

    As for fixing the second one: for them to be right more often, they would have to understand what they're outputting, which is exactly what we don't have yet. I'd rather have a model admit when it doesn't have enough information to be confident in an answer. That doesn't happen because they're trained first and foremost to always produce an answer, since that's more marketable than a model that says it doesn't know.

    • Juice@midwest.social · 3 days ago

      This is 100% my experience. AI simply cannot solve problems. It isn't capable of thinking objectively at all, and it has no sense of permanence beyond the immediate task. I have found it educational in the sense that un-fucking something AI has put together can teach me a lot about a system I was previously unfamiliar with.

      It is a machine that outputs huge amounts of useless garbage with little practical value.