I saw this article, which made me think about it…

Kids under 16 to be banned from social media after Senate passes world-first laws


Seeing what kind of brainrot kids are watching makes me think it’s a good idea. I wouldn’t say all content is bad, but most kids will get hooked on trash content that is intentionally designed to grab their attention.

What would be an effective way to enforce a restriction with the fewest possible side effects? And who should be the one enforcing that restriction in your opinion?

  • Dave@lemmy.nz · 24 days ago

    I can’t remember which article I was reading, probably one on Lemmy, but it said that we know social media algorithms are bad for people and their mental and physical health, that they are divisive, drive extremism, and just in general are not safe for society.

    Drugs are regulated to ensure they are safe, so why aren’t social media algorithms regulated the same way? Politicians not understanding the technical details of algorithms is not an excuse - politicians also don’t understand the technical details of drugs, so they have a process involving experts that ensures they are safe.

    I think I’m on the side of that article. Social media algorithms are demonstrably unsafe in a range of ways, and it’s not just for under 16s. So I think we should be regulating the algorithms, requiring companies wishing to use them to prove they are safe before they do so. You could pre-approve certain basic ones (rank by date, rank by upvotes minus downvotes with time decay like Lemmy, etc). You could issue patents to them like we do with drugs. But all in all, I think I am on the side of fixing the problem rather than pretending to care in the name of saving the kids.
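    To make the “pre-approved basic algorithm” idea concrete, here is a minimal sketch in Python of a rank-by-net-votes-with-time-decay sort. It is loosely in the spirit of Lemmy’s hot rank, but the constants and function name here are illustrative assumptions, not Lemmy’s actual formula:

```python
import math
import time

def hot_rank(upvotes: int, downvotes: int, published_ts: float,
             now_ts: float = None) -> float:
    """Net votes damped by age: older posts need more votes to
    outrank newer ones. Constants are illustrative only."""
    if now_ts is None:
        now_ts = time.time()
    score = upvotes - downvotes
    hours_old = max(0.0, (now_ts - published_ts) / 3600)
    # Log damping stops huge vote counts from dominating forever;
    # the (hours + 2) ** 1.8 denominator is the time decay.
    return math.log(max(1, score + 3)) / (hours_old + 2) ** 1.8
```

    The point of something this simple is that a regulator could actually audit it: a fresh post with a modest score will outrank a two-day-old post with ten times the votes, and there is no engagement-maximising feedback loop to inspect.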

    • orcrist@lemm.ee · 23 days ago

      I recall that some years ago Facebook was looking into their algorithm and they found that it was potentially leading to overuse, which might be what you’re thinking of, but what actually happened is that they changed it so that people wouldn’t be using Facebook as much. Of course people who are opposed to social media ignored the second half of the above statement.

      Anyway, when you say the algorithms are demonstrably unsafe, you know you’re wrong because you didn’t demonstrate anything, and you didn’t cite anyone demonstrating anything. You can say you think they’re unsafe, but that’s a matter of opinion and we all have our own opinions.

      • Dave@lemmy.nz · 23 days ago

        > I recall that some years ago Facebook was looking into their algorithm and they found that it was potentially leading to overuse, which might be what you’re thinking of,

        No, it was recent, and it was an opinion style piece not news.

        > but what actually happened is that they changed it so that people wouldn’t be using Facebook as much.

        Can you back this up? Were they forced to by a court, or was this before the IPO, when Facebook was trying to gain ground and didn’t answer to the share market? I can’t imagine they would be allowed to take actions that reduce profits; companies are legally required to maximise value to shareholders.

        > Anyway, when you say the algorithms are demonstrably unsafe, you know you’re wrong because you didn’t demonstrate anything, and you didn’t cite anyone demonstrating anything. You can say you think they’re unsafe, but that’s a matter of opinion and we all have our own opinions.

        I mean, it doesn’t take long to find studies like “A nationwide study on time spent on social media and self-harm among adolescents”, “Does mindless scrolling hamper well-being?”, or “How Algorithms Promote Self-Radicalization”, but I think this misses the point.

        You’ve grabbed the part where I made a throwaway comment but missed the point of my post. Facebook is one type of social media, and they use a specific algorithm. Ibuprofen is a specific type of drug. Sometimes ibuprofen can be used in a way that is harmful, but it is largely considered safe. Even so, the producers still had to prove it was safe.

        • orcrist@lemm.ee · 23 days ago

          Here’s one example of Facebook adjusting its algorithm several years ago. You can remark that it ought to do more, and I may agree with you, but that’s totally different from saying it doesn’t do anything positive. https://www.washingtonpost.com/technology/interactive/2021/how-facebook-algorithm-works/

          If your argument is that there can be drawbacks to using social media, I think everyone agrees. But remember, we were told horror stories about pinball, pool, comic books, chewing gum, Dungeons and Dragons; the list goes on and on. So with that in mind, I hope you can understand why I’m not convinced by a few studies that social media is net negative in value.

          And the reason we have laws requiring careful drug testing is because of damage that was done in the past, proven damage that actually happened, people whose lives were cut short because they were doing things like imbibing radioactive chemicals. Your suggestion that we ought to treat social media the same way is putting the cart before the horse. The burden of proof is on you, not on social media companies.

          • Dave@lemmy.nz · 23 days ago

            I think we ultimately have different beliefs about how things should work. I think companies should prove their products are safe; you think things should be allowed unless someone can prove they’re unsafe.

            I get it, and I think it’s OK to have different opinions on this.