• skisnow@lemmy.ca · 2 days ago

    An AI trained on Facebook comments would be stupider than an AI trained on nothing at all

    • wonderingwanderer@sopuli.xyz · 13 hours ago

      Imagine if LLMs had remained mostly an academic interest for just a few years longer before going commercial. How many issues could’ve been worked out by researchers and engineers with an eye towards scientific advancement rather than monetization?

      Imagine if AI models were trained exclusively on peer-reviewed datasets, each one specialized in a single discipline, and maybe others specialized in interdisciplinary studies.

      They might not be able to synthesize new ideas due to their fundamental architecture, but they could at least streamline certain tasks like literature reviews and metadata collation. They could provide sanity checks before submitting for review. Machine Learning models could even perform more complex data analysis tasks than LLMs would be capable of.

      But no, instead we have Artificial Idiocy injected into everything, deepfakes and disinformation proliferating, and people going crazy from using chatbots to replace therapy…

    • 404found@lemmy.zip · 2 days ago

      I can’t help but feel like this is happening with all AI. Social media comments from Facebook, Reddit, X, etc. are low effort and flooded with bots.

      • paul@lemmy.org · 1 day ago

        It was predicted early on that LLMs would eventually push the information ecosystem into a feedback loop, where each AI feeds off other AIs’ hallucinations and they all go downhill fast.

          • badgermurphy@lemmy.world · 13 hours ago

            I don’t remember having heard any practical solutions to the problem so far. They work best on real data, but they rapidly grew to the point where they are generating dramatically more artificial data than humans are generating real data, so they have hopelessly polluted their own well.

            It’s a very difficult problem with no obvious solutions that are at all cheap, easy, or even feasible, so someone is going to need a really, really smart idea to get over that hurdle. Add to that the fact that the types of AI most impacted by this problem, the LLMs, are the ones currently most heavily subsidized by venture capital. So not only are they facing increasing technical hurdles, they are about to get increasingly expensive to operate just as the seed funding is used up and they have to switch to a revenue-positive business model.
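The "polluted well" feedback loop described above (often called model collapse in the research literature) can be sketched with a toy simulation. This is purely illustrative and nothing like real LLM training: each generation fits a Gaussian to the previous generation's samples and then releases only synthetic samples for the next generation to train on, so estimation error compounds.

```python
import random
import statistics

def retrain_on_synthetic(data, n):
    """'Train' by fitting a Gaussian to the data, then release n
    synthetic samples drawn from the fitted model."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 200
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data
spread = [statistics.pstdev(data)]

for generation in range(100):
    # Each new model sees only the previous model's synthetic output.
    data = retrain_on_synthetic(data, n)
    spread.append(statistics.pstdev(data))

print(f"spread at generation 0:   {spread[0]:.3f}")
print(f"spread at generation 100: {spread[-1]:.3f}")
```

Because every generation estimates its parameters from a finite sample of the previous generation's output, the fitted distribution tends to drift and narrow over generations instead of staying anchored to the original data, which is a crude analogue of models feeding on each other's output.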

            • 404found@lemmy.zip · 10 hours ago

              Interesting. Maybe they will have to start proactively surveying massive numbers of people instead of relying on free social media data.

              I don’t understand the appeal of AI for most things. The amount of incorrect information it gives is already too high, making it unreliable. The benefit seems limited to brainstorming ideas or working with fiction.