• UnspecificGravity@piefed.social · 30 days ago

    The difference being that the Hindenburg was a perfectly functioning rigid airship that had a lot of inherent risks due to the nature of its design.

    AI isn’t good enough at its actual job to be in this position. The risk of AI is people pretending that it works when it doesn’t. It would be like if you made a blimp and filled it with carbon dioxide and people kept buying tickets and just sitting there waiting for it to take off.

  • XLE@piefed.social · 30 days ago

    “It’s the classic technology scenario,” he said. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”

    Is it promising though, Michael Wooldridge? Have you recently attended any magic shows and become excited by the potential of invisibility technology?

    • Zink@programming.dev · 29 days ago

      Oh touché, not Michael Wooldridge! The technology has created an entire segment of the economy worth many trillions of dollars based on NOTHING BUT promises! We are living in a promise-based economy!

      /s but not really

  • footprint@lemmy.world · 30 days ago

    This would be a good comparison if all it took for the Hindenburg to explode was just asking it to role-play as a ship that could explode. Conscious effort had to be expended to make the thing fail, but most models start to fail spectacularly if you use them in good faith for more than like 30 minutes.

  • ReverendIrreverence@lemmy.world · 30 days ago

    Except for the one person on the ground, the only people harmed in the Hindenburg disaster were the ones on board. If you’re not “on board” when the AI bubble pops and burns, I expect you will not be hurt as much as those blindly taking that ride.

    • GreenBeard@lemmy.ca · 30 days ago

      Unfortunately, we’re not all the ones who decide whether we’re on board or not; our employers are. We live in a world where profits are privatized and losses are socialized, so when this goes, it’s going to hurt the general public a lot more than it will ever hurt the Epstein Class.

  • tal@lemmy.today · 29 days ago

    Wooldridge sees positives in the kind of AI depicted in the early years of Star Trek. In one 1968 episode, The Day of the Dove, Mr Spock quizzes the Enterprise’s computer only to be told in a distinctly non-human voice that it has insufficient data to answer. “That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” he said. “Maybe we need AIs to talk to us in the voice of the Star Trek computer. You would never believe it was a human being.”

    Hmm. That’s probably a pretty straightforward modification for existing LLMs, at least at the token level.

    You can obtain token probabilities, so you can give some out-of-band estimate of confidence in a response, down to the token level. You don’t really need to change anything for that, just expose some data.
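
    For instance, a minimal sketch with the Hugging Face transformers library, using GPT-2 as a small stand-in model (the model and prompt here are just illustrative):

    ```python
    # Sketch: expose per-token confidence from an off-the-shelf LLM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")

    # Ask generate() to hand back the raw logits for every generated step.
    out = model.generate(
        **inputs,
        max_new_tokens=5,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,
    )

    # For each emitted token, softmax that step's logits and look up the
    # probability the model assigned to the token it actually chose.
    new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
    for token_id, step_logits in zip(new_tokens, out.scores):
        probs = torch.softmax(step_logits[0], dim=-1)
        print(f"{tokenizer.decode(token_id)!r}: p = {probs[token_id].item():.3f}")
    ```

    Hosted APIs generally expose the same information through a logprobs option, so a client could flag low-confidence tokens without touching the model at all.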

    And you could make the AI aware of its own neural net’s confidence level, feed the confidence back into the neural net for subsequent tokens, and see if you can get it to take that information into account.

    https://en.wikipedia.org/wiki/Recurrent_neural_network

    In artificial neural networks, recurrent neural networks (RNNs) are designed for processing sequential data, such as text, speech, and time series,[1] where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences.
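
    To make that concrete, here’s a toy PyTorch sketch of the feedback loop I mean, where each step’s input carries the previous step’s confidence as an extra feature (everything here is made up for illustration; real LLMs are transformers, not simple RNNs):

    ```python
    import torch
    import torch.nn as nn

    FEATURES = 8                       # ordinary input features per step
    rnn_cell = nn.RNNCell(input_size=FEATURES + 1, hidden_size=16)
    readout = nn.Linear(16, FEATURES)  # maps hidden state to next-step logits

    hidden = torch.zeros(1, 16)
    confidence = torch.ones(1, 1)      # start fully "confident"
    x = torch.randn(1, FEATURES)       # stand-in for the first token's features

    for step in range(5):
        # Recurrent connection: previous hidden state, plus previous confidence
        # appended to the input so the network can condition on it.
        hidden = rnn_cell(torch.cat([x, confidence], dim=1), hidden)
        probs = torch.softmax(readout(hidden), dim=-1)
        confidence = probs.max(dim=-1, keepdim=True).values  # fed back next step
        x = torch.randn(1, FEATURES)   # next input (random stand-in)
        print(f"step {step}: confidence = {confidence.item():.3f}")
    ```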

    • ThirdConsul@lemmy.zip · 29 days ago

      You can obtain token probabilities, so you can give some out-of-band estimate of confidence in a response, down to the token level.

      That means literally nothing. You can get a wrong answer with 100% token confidence, and a correct one with 0.000001% confidence.

      • tal@lemmy.today · 29 days ago (edited)

        You can get a wrong answer with 100% token confidence, and a correct one with 0.000001% confidence.

        If everything that I’ve seen in the past has said that 1+1 is 4, then sure — I’m going to say that 1+1 is 4. I will say that 1+1 is 4 and be confident in that.

        But if I’ve seen multiple sources of information that state differing things — say, half of the information that I’ve seen says that 1+1 is 4 and the other half says that 1+1 is 2, then I can expose that to the user.

        I do think that Aceticon does raise a fair point, that fully capturing uncertainty probably needs a higher level of understanding than an LLM directly generating text from its knowledge store is going to have. For example, having many ways of phrasing a response will also reduce confidence in the response, even if the phrasings are semantically compatible. Being on the edge between saying that, oh…an object is “white” or “eggshell” will also reduce the confidence derived from token probability, even though the two responses are semantically more-or-less identical in the context of the given conversation.
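
        Toy numbers (entirely made up) showing why the two cases look the same at the token level: a genuine disagreement and a harmless split across synonymous phrasings both read as low top-token confidence, and only grouping semantically equivalent tokens separates them:

        ```python
        # Case 1: genuine disagreement in what the model has seen.
        disagreement = {"4": 0.50, "2": 0.50}

        # Case 2: agreement on meaning, split across synonymous phrasings.
        phrasing_split = {"white": 0.45, "eggshell": 0.40,
                          "blue": 0.10, "green": 0.05}

        for name, dist in [("disagreement", disagreement),
                           ("phrasing split", phrasing_split)]:
            print(f"{name}: top-token confidence = {max(dist.values()):.2f}")

        # Both report roughly 0.5, but grouping semantically equivalent tokens
        # recovers the real confidence in the second case:
        off_white = phrasing_split["white"] + phrasing_split["eggshell"]
        print(f"'off-white' treated as one answer: {off_white:.2f}")  # 0.85
        ```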

        There’s probably enough information available to an LLM to do heuristics as to whether two different sentences are semantically-equivalent, but you wouldn’t be able to do that efficiently with a trivial change.

        • ThirdConsul@lemmy.zip · 29 days ago

          You do realise that prompts to and responses from the LLM are not as simple as the “1+1=?” you wrote. The context window is growing for a reason. And LLMs don’t have a two-dimensional probability of the next token?