• LainTrain@lemmy.dbzer0.com · edited · 10 days ago

    Idk, but I don’t see why commits of shit code from AI are any different from commits of shit code from fleshbags.

    Shit code is shit code.

    If the project’s maintainers have their review game on point, then shit code will not make it into the repo; if they don’t, then, AI or not, shit code will be in the repo.

    So, I see no reason to panic and raise alarm about AI commits.

    If anything, hopefully some LLM assistance can take the weight off the absolute saints among us who are the unpaid maintainers of crucial FOSS repos, as in, for instance, the whole XZ situation.

    Vibecoding, or outsourcing your brain to proprietary tech, is a choice, in the same way that using an assembly plant to stab yourself in the balls is a choice. You can choose to use tools in non-idiotic ways as well.

    I’d be far more concerned over stuff like Immich getting bought out by a company with all sorts of links to the shadiest blokes going amongst the ultra-rich.

    Edit:

    Please also see the excellent rebuttal to my take below. I’ve changed my mind.

    • Piatro@programming.dev · 11 days ago

      The issue is that the barrier to entry for creating shit PRs has almost vanished, while the cost of having a human being review those PRs for quality hasn’t, which pushes an undue burden onto the maintainers. See the blog posts by Daniel Stenberg (maintainer of curl), for example.

      • LainTrain@lemmy.dbzer0.com · edited · 3 days ago

        Far be it from me to argue with Stenberg; fair enough, I must be wrong.

        I guess I just don’t see how there was ever a barrier in the first place. The number of juniors who couldn’t code their way out of FizzBuzz yet think they’re geniuses has exploded in recent years, and I largely count myself amongst them. With job interviews being as competitive as they are, a big old green commit history being seen as a plus, and people buying stars and such, I just don’t see how this was anything but an eventuality with or without AI, not unlike the endless barely-valid CVE slop.

        As an instance of the latter: if CISA(!) can issue federal government advisories for critical CVEs that don’t exist, based on unverified claims from one empty Chinese GitHub account, I find it odd to think AI would make the situation substantially worse.