• eletes@sh.itjust.works · 4 hours ago

    There should be a Wikipedia LLM whose sole purpose is to check that the tone of the text is objective and matches Wikipedia standards.

    The LLM should flag any changes it would make, and if the changes exceed a threshold, the edit should be flagged for further review by a human.
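    The threshold idea could be sketched roughly like this (a minimal, hypothetical sketch: the LLM call itself is stubbed out, and `REVIEW_THRESHOLD`, `change_ratio`, and `needs_human_review` are invented names, not part of any real Wikipedia tooling):

```python
import difflib

REVIEW_THRESHOLD = 0.2  # flag edits where more than ~20% of the text changed

def change_ratio(original: str, revised: str) -> float:
    """Fraction of the text the suggested rewrite altered (0.0 = identical)."""
    return 1.0 - difflib.SequenceMatcher(None, original, revised).ratio()

def needs_human_review(original: str, revised: str,
                       threshold: float = REVIEW_THRESHOLD) -> bool:
    """True if the rewrite changed enough that another human should look at it."""
    return change_ratio(original, revised) > threshold

# A small tweak stays below the threshold...
assert not needs_human_review("The cat sat on the mat.",
                              "The cat sat on a mat.")
# ...while a wholesale rewrite gets flagged for human review.
assert needs_human_review("The cat sat on the mat.",
                          "Felines typically prefer elevated resting spots.")
```

    A real gate would need a smarter distance measure than raw character diffing, but the shape of the pipeline is the same: measure how much changed, and escalate past a threshold.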

  • SchwertImStein@lemmy.dbzer0.com · 1 day ago

    First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy.

    translation assistance

    • UnderpantsWeevil@lemmy.world · 21 hours ago

      The former I’m still looking sideways at.

      The latter, probably the only truly benevolent use of LLMs. And even then, you’ll get plenty of grumbling.

      • Holytimes@sh.itjust.works · 4 hours ago

        Honestly anything is an improvement over the subpar translation tools we had before. Still ain’t great but we can give a W where it’s earned.

      • ThunderComplex@lemmy.today · 20 hours ago

        Eh, I think this sounds OK. If you prompt an AI to improve your text, submit that, and another human reviews it (and maybe asks you to make changes), it should be fine. I can see this giving more people the ability to make edits (e.g. non-native speakers).

        • Nalivai@lemmy.world · 7 hours ago

          The problem is, it doesn’t improve text, it worsens it. And if your grasp of the language isn’t good enough, you can edit a page in your own language, or ask the nerds in the discussion section to help you: it will be better written, they will be happy, and you might learn something.
          Asking a slop generator to generate some slop about what you wanted to write will make things worse.

          • mirshafie@europe.pub · 5 hours ago

            This is a bit alarmist I think. It’s about how you use it. If your prompt is “please write a funny story about a bunny” you’ll get slop. If you write a full-ass Wikipedia article and ask it to simplify and punctuate long passages for increased legibility you can get valuable feedback.

            • Angrydeuce@lemmy.world · 4 hours ago

              It truly blows my mind that people need to use AI to write coherent sentences with proper punctuation at all. The shit that I receive in my inbox from people making far more money than me, who have multiple advanced degrees no less… it makes me weep for a future where no one is able to function without a computer holding their hand through the entire interaction.

              We’re going to get to the point where it’s all AIs talking to each other and humans are merely pressing the send button.

            • Nalivai@lemmy.world · 54 minutes ago (edited)

              If you can write a full-ass Wikipedia article you don’t need slopogen to smooth it into a paste of averages. You already wrote a full-ass Wikipedia article: good, done. Nerds from all over the world will fix your wording if it’s appropriate; that’s why it’s collaborative, that’s what made it good.
              We all know that’s not how people use slopogen. People use it instead of thinking, instead of working, instead of writing. And if it’s not banned completely, that’s what people will be doing with it, all the time, because people like to not spend any effort.

          • teuniac_@lemmy.world · 7 hours ago

            I think it’s more nuanced than that. It all depends on what you’re asking it to do (and a bit of luck that it complies as intended). Using a thesaurus can also either improve or worsen a text.

            I’m not a native English speaker, but have lived in an English speaking country for many years now. I still make mistakes, but there is no point in me asking for help with English writing as my mistakes are subtle and I don’t realise I made them. Getting an AI to detect clumsy use of English and grammar mistakes has worked quite well for me before publishing reports. While I don’t always use the correct grammar while writing, I’m very capable of judging whether an LLM suggested improvement is actually better.

            Of course, letting an LLM rewrite a whole text is much riskier in terms of the original meaning getting lost. But that’s not the only way to use it.

            • ThunderComplex@lemmy.today · 6 hours ago

              There’s definitely a lot of nuance in this topic. I think discarding the whole thing and saying “And if your grasp of the language isn’t good enough, you can edit a page in your own language” is a bit naïve. English is the lingua franca of the world, so if you have knowledge about something that should be in Wikipedia but isn’t, adding or appending to an English page will reach the widest audience. Ideally you’d then do the same for your native language as well.

              As long as there are humans at the beginning and end of the pipeline I at least hope that this won’t negatively affect the quality.

  • ZILtoid1991@lemmy.world · 1 day ago

    There should be only one exception: In case someone needs an example of an AI-generated text.

    • UnderpantsWeevil@lemmy.world · 21 hours ago

      LLMs are excellent tools for mapping one set of words and phrases to another, which is more or less exactly what you need out of a language translator.

  • infeeeee@lemmy.zip · 2 days ago

    Saved you a click:

    After much debate, the new policy is in effect: Wikipedia authors are not allowed to use LLMs for generating or rewriting article content. There are two primary exceptions, though.

    First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, it’s being treated like any other grammar checker or writing assistance tool. The policy says, “LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”

    The second exemption for LLMs is with translation assistance. Editors can use AI tools for the first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn’t been injected.

    • arcine@jlai.lu · 4 hours ago

      Treating it like a tool instead of treating it like a God. What a novel idea!

    • Rioting Pacifist@lemmy.world · 2 days ago

      AIbros: we’re creating God!!!

      AI users: it can do translation & reformatting pretty well, but you’ve got to check it’s not chatting shit

      • halcyoncmdr@piefed.social · 2 days ago

        The takeaway from all LLM-based AI is the user needs to be smart enough to do whatever they’re asking anyway. All output needs to be verified before being used or relied upon.

        The “AI” is just streamlining the process to save time.

        Relying on it otherwise is stupid and just proves instantly that you are incompetent.

        • Zagorath@quokk.au · 2 days ago

          the user needs to be smart enough to do whatever they’re asking anyway

          I’m gonna say that’s ideal but not quite necessary. What’s needed is that the user is capable of properly verifying the output. Which anyone who could do it themselves definitely can, but it can be done more broadly. It’s an easier skill to verify a result than it is to obtain that result. Think: how film critics don’t necessarily need to be filmmakers, or the P=NP question in computer science.
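          A toy illustration of verify-vs-obtain, in the same spirit as the P=NP point (function names made up for the example): confirming a claimed factorisation takes one multiplication, while producing one takes a search.

```python
def verify_factors(n: int, p: int, q: int) -> bool:
    """Anyone can confirm a claimed answer with a single multiplication."""
    return p > 1 and q > 1 and p * q == n

def find_factors(n: int):
    """Obtaining the answer requires a search (trial division here)."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    return None  # n is prime

assert verify_factors(589, 19, 31)    # cheap: one multiplication
assert find_factors(589) == (19, 31)  # costly: a loop over candidates
```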

          • Aralakh@lemmy.ca · 22 hours ago

            This is where domain expertise would come in, no? It’s speeding up the work but it usually outputs generic content, and whatever else it injects while hallucinating. Therefore the validation part holds up I’d say.

          • Pyro@programming.dev · 2 days ago

            But if the output has issues, what’re you going to do, prompt it again? If you are only able to verify but not do the task, you cannot correct the AI’s mistakes yourself.

            • fartographer@lemmy.world · 21 hours ago

              If you’re unable to brute-force verification (research, testing, consulting the ancient texts), there’s where you stop what you’re doing, and take a breath. Then, consult an expert. Just like the film critic analogy, it’s easier to verify than to create, so you’re saving the expert time and effort while learning about something that you were obviously already passionate enough about to have started this endeavor.

            • Zagorath@quokk.au · 2 days ago

              At the risk of sounding like an overly obsequious AI… You know what, you’re completely right. I’m honestly not sure what use case I was imagining when I wrote that last comment.

              • EldritchFemininity@lemmy.blahaj.zone · 21 hours ago

                You were thinking logically about a normal production chain. In that case, QA or whoever says “This is wrong, rework it and correct the issue” and that’s that. With AI, it does the whole thing over again and may or may not come back with the same issue or an entirely new one.

    • MissesAutumnRains@lemmy.blahaj.zone · 2 days ago

      Seems pretty reasonable to use it as a grammar checker. As long as it’s not changing content, just form or readability, that seems like a pretty decent use for it, at least with a purely educational resource like Wikipedia.

    • FauxPseudo @lemmy.world · 2 days ago

      Seems like there should be a third exception. For those occasions where the article is about LLM generated text. They should be able to quote it when it’s appropriate for an article.

      • Zagorath@quokk.au · 2 days ago

        That is a reasonable exception to no-AI policies in research papers and newspaper articles, but not for Wikipedia. As a tertiary source, Wikipedia has a strict “no original research” policy. Using AI to provide examples of AI output would be original research, and should not be done.

        Quoting AI output shared in primary and secondary sources should be allowed for that reason, though.

        • ricecake@sh.itjust.works · 24 hours ago

          Eh, that’s not quite original research. There are plenty of other examples of images and sound files created for Wikipedia. A representative example isn’t research, it’s just indicating what something is.

          The Wikipedia article on AI slop and generative AI has a few instances of content that’s representative to illustrate a sourced statement, as opposed to being evidence or something.

          It’s similar to the various charts and animations.

  • SpaceNoodle@lemmy.world · 2 days ago

    An extremely measured and level-headed response. Kudos to Wikipedia for maintaining high standards.

      • banshee@lemmy.world · 1 day ago

        Does anyone like LLM summaries in pages? This seems like a better fit for a browser extension to generate a summary on demand instead of wasting resources generating it for everyone. Google’s documentation is absolutely littered with the mess.

  • Sunless Game Studios@lemmy.world · 2 days ago

    I know at least one writing major who won an award from his volunteer work at Wikipedia. He did it as a hobby. They don’t really need AI, they need people like him.

  • webp@mander.xyz · 2 days ago

    Why do they need AI at all? Wikipedia had existed long before it and was doing fine.

    • AmbitiousProcess (they/them)@piefed.social · 2 days ago

      You could make that argument about any tool Wikipedia editors use. Why should they need spellcheck? They were typing words just fine before.

      …except it just makes it easier to spot errors or get little suggestions on how you could reword something, and thus makes the whole process a little smoother.

      It’s not strictly necessary, but this could definitely be helpful to people for translation and proofreading. Doesn’t have to be something people are wholly reliant on to still be beneficial to their ability to edit Wikipedia.

    • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 2 days ago

      Why should we use (insert tool) when we did just fine before?

      Because when used correctly it can be great for helping you be more productive, and for finding errors and making improvements. One of the two exceptions is grammar, which AI handles surprisingly well. Would you have gotten mad if they’d used Grammarly more than 5 years ago? Having it rewrite an entire article is gonna be a bad idea, but asking it to rephrase a sentence, or check your phrasing for potential issues, is a much safer thing. Not everyone who speaks Spanish uses it the same way. Some words are innocuous in some regions, but offensive in others.