• AnUnusualRelic@lemmy.world
    2 days ago

    Which of course was turned into:

    “A computer can never be held accountable, that’s why it’s perfect for management decisions”

    • NihilsineNefas@slrpnk.net
      1 day ago

      It’s kinda amazing how the news corps love to say it’s “Insert LLM brand name”'s fault instead of naming and shaming the company that created the program, stole the data it’s trained on, wrote all the ‘safeguards’, and slapped a terms of service on it that somehow absolves them of things like convincing mentally unwell people to kill themselves, spreading misinformation, conducting mass surveillance, creating tracking profiles that can discern who you are from the way you type, or providing targeting data for genocidal maniacs (or providing a smokescreen for the powers that be to target whatever they want).

      The propaganda worked; now they can scapegoat any level of horror by blaming the computer.

      • prenatal_confusion@feddit.org
        1 day ago

        Uuh, I’d rather blame the person using it to make those decisions.

        I know it’s akin to “guns don’t kill people,” but in this case I see a difference.

        • NihilsineNefas@slrpnk.net
          1 day ago

          You and me both. It’s one of the main reasons why I’m anti “AI”.

          In this case, the program was built to do these things, so I blame the company that built it for its actions. That’s not to say the people and governments using the program are without blame either.