this post was submitted on 25 Feb 2024
31 points (81.6% liked)

Ask Lemmy


This could be a tool that works across the entire internet, but in this case I'm mostly thinking about platforms like Lemmy, Reddit, Twitter, Instagram etc. I'm not necessarily advocating for such a thing, but mostly just thinking out loud.

What I'm imagining is something like a truly competent AI assistant that filters out content based on your preferences. As content filtering by keywords and blocking users/communities is quite a blunt weapon, this would be the surgical alternative which lets you be extremely specific in what you want filtered out.

Some examples of the kinds of filters you could set:

  • No political threads. Applies only to threads, not comments. Filters out memes as well, based on the content of the media.
  • No political content whatsoever. Hides also political comments from non-political threads.
  • No right/left wing politics. Self-explanatory.
  • No right/left wing politics with the exception of good-faith arguments. Filters out trolls and provocateurs but still exposes you to good-faith arguments from the other side.
  • No mean, hateful or snide comments. Self-explanatory.
  • No karma fishing comments. Filters out comments with no real content.
  • No content from users that have said/done (something) in the past. Analyzes their post history and acts accordingly. For example hides posts from people that have said mean things in the past.
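A rule list like the one above could be sketched as data handed to some classifier. Everything here (the `FilterRule` shape, the `matches` function) is invented for illustration, with the actual AI content analysis stubbed out as a crude keyword check:

```python
from dataclasses import dataclass

@dataclass
class FilterRule:
    description: str     # natural-language rule, e.g. "no political threads"
    applies_to: set      # {"thread"}, {"comment"}, or both
    exceptions: str = "" # e.g. "good-faith arguments"

def matches(rule: FilterRule, item_kind: str, labels: set) -> bool:
    """Placeholder decision: a real system would ask a model whether
    the item's content violates the rule; here we just check that the
    model-assigned `labels` and the rule both mention politics."""
    if item_kind not in rule.applies_to:
        return False
    return "political" in labels and "politic" in rule.description

rules = [
    FilterRule("no political threads", {"thread"}),
    FilterRule("no political content whatsoever", {"thread", "comment"}),
]

# A thread the model has labelled as political is hidden by the first rule,
# but a political comment is only caught by the second, broader rule:
hidden = matches(rules[0], "thread", {"political", "meme"})
```

The point of the sketch is only the shape: declarative user preferences on one side, a model-driven judgment call on the other.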

Now obviously with a tool like this you could build yourself the perfect echo chamber where you're never exposed to new ideas, which probably is not optimal, but it's also not obvious to me why this would be a bad thing if it's something you want. There's way too much content for you to pay attention to all of it anyway, so why not just optimize your feed to only have stuff you're interested in? With a tool like this you could quite easily take a platform that's an absolute dumpster fire, like Twitter or Reddit, clean it up, and all of a sudden it's usable again. This could possibly also discourage certain types of behaviour online, because it means that trolls, for example, could no longer reach the people they want to troll.

all 49 comments
[–] 9point6 15 points 9 months ago (2 children)

The problem with filtering political content is that people are pretty bad at identifying their blind spots. It's a pretty common trap: what some people want as "non-political" conversation is actually just conversation that doesn't challenge their political views.

The same people will then conclude that the tool is making "political" choices in what it's hiding from them.

You're also focusing a lot on left vs right. What about "third way" or centrist politics? What about fringe groups that people don't really consider left or right? How does this work with different countries having different ideas of what's left and what's right (Overton window)? For example, I'd say the US doesn't have a left and right, it has a centre-right and a far-right party.

Finally plenty of people are happy in their echo chambers, despite them being terrible for a person. Challenging and reflecting on the way you fundamentally think about the world (basically what politics boils down to) is hard and sometimes unpleasant. It's easy to see how many people go down the easy road where they double down on existing opinions and seek out echo chambers.

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago) (1 children)

I used the term truly competent AI because obviously something like "no politics" is quite a broad guideline and the AI then has to figure out what you actually mean by it. A non-competent AI would filter out discussions about stuff like vegan food as well, but obviously that's not what you meant. This is just a thought experiment about what it would be like if it actually worked as intended.

What I'm imagining is something that also studies your own behaviour on the platform to learn what you're actually into and if it's not sure it could either ask you or do some sort of A-B testing to see what you engage with and what was that engagement like. This would make it possible to have a platform where the unfiltered experience would be a true wild west but which you then optimize to your liking yourself.

[–] erev 2 points 9 months ago (1 children)

I think the idea that we need to be more efficient in consuming content is quite dystopian. I agree that we should be trying to reduce not only echo chambers but content consumption as a whole. As a chronically online person in cybersecurity, I do not see a tenable future where humans continue to consume content at the rate they are. There needs to be a reduction in internet integration and online consumption.

You're right that there's too much content for one person to reasonably sift through; the reasonable decision then is to reduce the amount of content rather than try to create a sieve. The amount of information that we try to consume on the internet is dangerous and harmful to us, and is destroying the foundations of society. I'm not some traditionalist nut or conspiracy theorist; it's just easy to see that the benefits we get from globalized information sharing are very heavily offset by the constant influx of shit.

I think people should have easy and free access to information and knowledge; I also think the current hierarchy of the internet was a mistake and that the majority of people do not need, and in fact should not have, computers.

Also what you're asking for is an incredibly invasive AI that is used for massive data collection and aggregation to track and serve you the content that is most addictive for you. I see no reasonable world where that is a good thing. It is only a good idea in our current world, which I do not believe is reasonable.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

Personally, the way I think about it is that since I'm going to spend a certain amount of time online anyway, why not at least enjoy that time. For example, I like discussing/debating ideas on platforms like Lemmy and Reddit, but too often I find myself wasting time with someone who's not doing it in good faith; they're not open to having their mind changed and they're not putting any effort into trying to change my mind either. They just want to dunk on what they deem a stupid idea, and more often than not they're performing for their imagined audience. It would probably be better for us both if I didn't engage with people like that to begin with. I really don't need more than one decent person in order to have an interesting discussion. If there are 20 others shouting insults into the void because my content filtering has blocked them, I think that's better than relying on sheer willpower to resist the urge to reply to them.

[–] [email protected] 7 points 9 months ago (1 children)
[–] brygphilomena 4 points 9 months ago (2 children)

We already do that online. But yes, this would make it worse.

Filtering out opposing viewpoints like "no right/left politics" leaves people woefully uninformed and partisan.

[–] [email protected] 3 points 9 months ago

People are already exposed to the opposing side and we're more divided than ever. It's not obvious to me that 1. most people would want to put themselves in a perfect echo chamber like that and 2. if they do, that it would be a bad thing and should be forbidden.

[–] [email protected] 6 points 9 months ago (1 children)

My reasons are probably atypical, but there's no way that I'd use it.

My issues are not competence or the fact that it's AI; it's transparency. I want to know exactly which rules are being used to curate my posts and comments, and I don't trust other people or a filtering algorithm to do it. (Except if I'm the one creating said filtering algorithm out of simple rules).

[–] z00s -1 points 9 months ago (2 children)

The AI would definitely develop an implicit bias, as it has in many implementations already.

Plus, while I understand the motivation, it's good to be exposed to dissenting opinions now and then.

We should be working to decrease echo chambers, not facilitate them.

[–] [email protected] 3 points 9 months ago (1 children)

OP is talking on hypothetical grounds of a "competent AI". As such, let's say that "competence" includes the ability to avoid and/or offset biases.

[–] z00s 1 points 9 months ago (1 children)

Assuming that was possible, I would probably still train mine to remove only extremist political views on both sides, but leave in dissenting but reasonable material.

But if I'm training it, how is it any different than me just skipping articles I don't want to read?

[–] [email protected] 2 points 9 months ago

Even if said hypothetical AI would require training instead of simply telling it what you want to remove, it would be still useful because you could train it once and use it forever. (I'd still not use it.)

[–] Mango 4 points 9 months ago (2 children)

Truly competent AI

Ya nope.

[–] [email protected] 3 points 9 months ago
[–] [email protected] 1 points 9 months ago

They exist, but an LLM is only as good as your prompt engineering, and most people don't know how to do that.

[–] [email protected] 4 points 9 months ago

also not obvious to me why this would be a bad thing if it’s something you want.

Because as a democratic society we rely on consensus or compromise, which is only possible through understanding the other side. Also, never having your ideas challenged can't really be good for personal growth - but that's, obviously, a choice.

[–] [email protected] 2 points 9 months ago

While not exactly the same, BlueSky (a Twitter spinoff that has recently started federating, though not with ActivityPub, so only with other BlueSky instances) has customizable feeds, so you pick the algorithm that suits you (or, I assume, make your own).

Microblogging isn't my thing so I don't know much about it, I just read the BlueSky (regular sized) blog post linked on Lemmy the other day.

[–] Presi300 2 points 9 months ago

While it does sound like a good idea, I feel like most people would use it to make an echo chamber.

[–] [email protected] 2 points 9 months ago (1 children)

I hate AI "discovery" feeds. IMO the best way to curate my feed is to explicitly follow and blocklist things I don't want to see.

Instead of trying to shoehorn AI into doing this, we should let content creators tag their own posts. Then we can filter out specific tags.
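That tag-based filtering is simple enough to sketch; the tag names and posts below are made up:

```python
# A user-maintained blocklist of creator-supplied tags; anything carrying
# a blocked tag is dropped before the feed is rendered. No AI involved.
blocked_tags = {"politics", "us-politics"}

posts = [
    {"title": "Election megathread", "tags": {"politics", "discussion"}},
    {"title": "Sourdough starter tips", "tags": {"baking"}},
]

# Keep only posts whose tag set does not intersect the blocklist.
visible = [p for p in posts if not (p["tags"] & blocked_tags)]
```

The appeal of this approach is exactly the transparency the AI version lacks: you can read the blocklist and predict the outcome.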

I especially don't want an AI that tries to understand "political" posts, because what counts as political is ambiguous and confusing.

Is someone coming out as "they/them" a political statement? Does the person running the AI agree with you?

Does the person running the AI have your enjoyment of the platform as a priority, or just your engagement?

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (1 children)

You're imagining an incompetent AI. That is not what this thread is about. You don't hate AI discovery feeds, you hate bad AI discovery feeds. This thought experiment is about one that actually does what it's supposed to. If you don't believe such a thing could exist, then fair enough, but that's an entirely different discussion.

[–] [email protected] 1 points 9 months ago (1 children)

Fair enough.

Although, to be honest, even if such a magical AI did exist, I'd still be uncomfortable using it. I'm the kind of person who wants to understand and know how things work, and why it chose to show me what it did. But that's probably just me.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago)

Oh absolutely. I feel the same way. I'm sure there are ways around this. For example, you could from time to time see what has been filtered out (and why), and if there's something you have no issue with, you could let it know and thus refine your feed even further. Alternatively, you could set it so that when it's not sure it defaults to allowing the content, and then by downvoting, for example, you could give it more information about your specific preferences.
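That default-to-allow, learn-from-downvotes loop could look something like this; the thresholds and numbers are entirely invented:

```python
# Default-allow filter with a feedback signal: an item is only hidden when
# the (hypothetical) model is very confident it matches a blocked topic.
thresholds = {"politics": 0.9}  # per-topic confidence needed to hide

def should_hide(topic: str, model_confidence: float) -> bool:
    """Hide only above the topic's threshold; unknown topics are never hidden."""
    return model_confidence >= thresholds.get(topic, 1.0)

def downvote(topic: str) -> None:
    """User disliked a shown item: filter this topic more aggressively
    by lowering its threshold (floored so it never hides everything)."""
    thresholds[topic] = max(0.5, thresholds.get(topic, 1.0) - 0.1)
```

An item the model scores at 0.85 for "politics" would initially be shown; after one downvote the threshold drops to 0.8 and the same item gets hidden.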

[–] [email protected] 2 points 9 months ago* (last edited 9 months ago)

There are probably newer packages, but crm114, which I've used before, is a trainable command-line text classifier based on Markov chains.

You could probably get a corpus of political discussion and train it to detect that.
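As a rough stand-in for a trainable text classifier (this is a generic naive-Bayes sketch, not crm114's actual Markov-chain method), with a tiny invented corpus:

```python
from collections import Counter
import math

def train(docs):
    """Count word frequencies per label from (label, text) pairs."""
    counts = {}
    for label, text in docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Return the label whose word distribution best explains the text
    (add-one smoothing, log probabilities, uniform prior)."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        score = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
        if score > best_score:
            best, best_score = label, score
    return best

corpus = [
    ("political", "election vote senator policy campaign"),
    ("political", "parliament bill vote government policy"),
    ("other", "recipe pasta garlic olive oil"),
    ("other", "hiking trail mountain weather boots"),
]
model = train(corpus)
```

With a realistic corpus of political discussion in place of these toy documents, the same train-then-classify loop is what a filter like the one described would do under the hood.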

[–] [email protected] 2 points 9 months ago

Once enough people started using adblockers, companies started blurring the line between ads and content.

I think the line between memes and content has already been blurred quite a bit. Politics and content too.

[–] Markimus 1 points 9 months ago* (last edited 9 months ago)

Consider another market: businesses looking to identify current topics of interest / discussions that are relevant to what they are doing.

The AI could summarise the posts and offer suggestions on what to post, when to post, where to post, etc., with references to the posts / threads that they're basing this information on.

This is all bundled as an online marketing tool, targeted towards small businesses focused on growth.

[–] [email protected] 1 points 9 months ago (1 children)

Well if it's AI based I don't want it.

[–] [email protected] 1 points 9 months ago (1 children)
[–] [email protected] 4 points 9 months ago (1 children)

Because currently it's a marketing buzzword for a not-even-half-matured technology that relies on privacy-invading data scraping. And even with that, the results are very mixed and can't be relied on. So I'd rather not have a tool with a 30-50% false positive rate censoring my content.

[–] [email protected] 1 points 9 months ago

What you're describing is an incompetent AI, and that is not what this thought experiment is about. It's about one that actually does what it's intended for and does it really well. If you don't believe such an AI could exist, then fair enough, but that is not what this thread is about.

[–] theywilleatthestars 0 points 9 months ago

AI doesn't do anything perfectly; it's all based on statistical trends. The most control we could have over our feeds would be chronological displays of the stuff we choose to follow.