• PolarKraken@lemmy.dbzer0.com · 4 points · 3 days ago

    This commenter is pointing out that, definitionally, most parents lack what they need to mount an effective defense, or even to understand that one is needed, because of how the deck is stacked. It isn’t random uninvolved people making the tech addictive and harmful (in contrast to parents as a group); it’s roughly the people on the planet best at making those things damaging who are doing so.

    The commenter is not inviting government overreach, but lamenting that every parent is being asked to defend against this most pernicious force, and it’s unrealistic to expect them to succeed. As we clearly see, they don’t succeed; they lose! The state of mental development for kids in the US, for example, is in absolute shambles.

    Doesn’t seem very controversial at all; kind of just an obvious observation, tbh.

    • pluge@piefed.social · 3 points · 3 days ago

      Ok, but what’s the solution then? Certainly not the age verification pushes we have seen recently. The tech itself should be regulated, not the users.

      • slowcakes@programming.dev · 1 point · 1 day ago

        The solution is simple, but there’s not enough political motivation to do anything; there’s more incentive to do the bare minimum. Regulate marketing on the internet, and enact laws that prohibit intrusive ad platforms.

      • PolarKraken@lemmy.dbzer0.com · 1 point · 3 days ago

        That’s a very difficult question.

        For the record, I am extremely hostile to government privacy violations in the name of “protecting children”, which the approach under discussion clearly is. We all agree about that.

        I don’t have great solutions, but none of mine revolve around shaming parents or insisting they become magically aware of information they lack (and may be flat out unable to really comprehend). That’s not to say you were doing that.

        Community-wise, we can do a lot more to educate about the harms. Legislatively and technologically, zero-trust indications allowing specific categories of content - very coarse categories with a simple binary “allowed / not allowed”, nothing to do with age or PII - would be an approach worth considering.
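        To make the coarse binary idea concrete, here’s a minimal Python sketch of the client side. Everything in it is invented for illustration - the category vocabulary, the idea that sites self-declare a category, and the policy shape - since nothing like this is standardized; the point is just that the device-local decision needs no age data or PII.

```python
# Hypothetical sketch: a device-local filter acting on coarse,
# self-declared content-category labels. No age checks, no PII -
# just a binary allowed / not-allowed policy per category.
# The category names and declaration mechanism are made up.

# Coarse categories a site might self-declare (illustrative only).
CATEGORIES = {"general", "gambling", "adult", "infinite-scroll-feed"}

# Per-device policy, set locally by whoever manages the device.
policy = {
    "general": True,
    "gambling": False,
    "adult": False,
    "infinite-scroll-feed": False,
}

def is_allowed(declared_category: str) -> bool:
    """Return True if the local policy permits this category.
    Unknown or undeclared categories default to blocked (fail closed)."""
    return policy.get(declared_category, False)

print(is_allowed("general"))   # True
print(is_allowed("gambling"))  # False
```

        The fail-closed default matters: a site that declares nothing, or declares an unrecognized category, gets blocked rather than waved through, so there’s no incentive to simply omit the label.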

        But fundamentally, doing these things wrong is at least as harmful as leaving parents to solo the task. I’d prefer it be up to the ill-equipped and wildly varying parents rather than to anything centralized, unless the centralized approach has verifiable transparency and all the right goals and methods (a pipe dream). But if nothing else, we should require our education and government systems to take a clear stance on educating about the harms.