this post was submitted on 20 Mar 2024
1014 points (98.0% liked)

[–] KneeTitts 19 points 8 months ago (11 children)

most of the content on YouTube, Facebook and Reddit is not generated by the companies themselves

It's their job to block that content before it reaches an audience, but since that's how they make their money, they don't or won't do that. The monetization of evil is the problem, and those platforms are the biggest perpetrators.

[–] [email protected] -1 points 8 months ago (10 children)

It's their job to block that content before it reaches an audience

The problem is (or isn't, depending on your perspective) that it is NOT their job. Facebook, YouTube, and Reddit are private companies that have the right to develop and enforce their own community guidelines or terms of service, which dictate what type of content can be posted on their platforms. This includes blocking or removing content they deem harmful, objectionable, or radicalizing. While these platforms are protected under Section 230 of the Communications Decency Act (CDA), which provides immunity from liability for user-generated content, this protection does not extend to knowingly facilitating or encouraging illegal activities.

There isn't specific U.S. legislation requiring social media platforms like Facebook, YouTube, and Reddit to block radicalizing content. However, many countries, including the United Kingdom and Australia, have enacted laws that hold platforms accountable if they fail to remove extremist content. In the United States, there have been proposals to amend or repeal Section 230 of the CDA to make tech companies more responsible for moderating the content on their sites.

[–] [email protected] 9 points 8 months ago (5 children)

The argument could be made (and probably will be) that they promote those activities by letting their algorithms amplify that content. It's a dangerous precedent to set, but not unlikely given the recent rulings.

[–] FlyingSpaceCow 6 points 8 months ago* (last edited 8 months ago)

Any precedent here regardless of outcome will have significant (and dangerous) impact, as the status quo is already causing significant harm.

For example, Meta/Facebook used to prioritize content that generated an angry-face reaction over content that got a "like", because it resulted in more engagement and revenue.

However, the problem still exists. If you combat problematic content with a reply of your own (because you want to push back against hatred, misinformation, or disinformation), then they have even more incentive to show you similar content. And they justify it by saying, "if you engaged with that content, then you've clearly indicated that you WANT to engage with content like it."
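Roughly how that kind of engagement weighting plays out, as a toy sketch. The weights, field names, and numbers here are made up for illustration; public reporting only said that the angry reaction was, for a time, weighted several times more than a like.

```python
# Toy sketch of engagement-weighted feed ranking (illustrative only;
# the weights and field names below are assumptions, not Meta's actual system).
from dataclasses import dataclass

# Hypothetical per-signal weights: any engagement raises the score,
# and angrier or more reactive engagement raises it more.
WEIGHTS = {
    "like": 1.0,
    "angry": 5.0,   # assumed multiplier, for the sake of the example
    "reply": 3.0,   # replies count as engagement even when they push back
    "share": 4.0,
}

@dataclass
class Post:
    post_id: str
    likes: int = 0
    angry: int = 0
    replies: int = 0
    shares: int = 0

def engagement_score(post: Post) -> float:
    """Score a post purely by weighted engagement volume."""
    return (
        WEIGHTS["like"] * post.likes
        + WEIGHTS["angry"] * post.angry
        + WEIGHTS["reply"] * post.replies
        + WEIGHTS["share"] * post.shares
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    """Highest engagement first; divisive posts float to the top."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("calm-news", likes=120, replies=5),
        Post("outrage-bait", likes=20, angry=60, replies=40, shares=10),
    ]
    for p in rank_feed(feed):
        print(p.post_id, engagement_score(p))
```

Under this scoring, "outrage-bait" ranks above "calm-news" despite having far fewer likes, and every angry reply you leave pushes it higher.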

The financial incentives, as they currently exist, run counter to the public good.
