this post was submitted on 12 May 2024
398 points (97.6% liked)

World News

[–] [email protected] 13 points 7 months ago* (last edited 7 months ago) (3 children)

I've disabled personalised ads on YouTube and I see this sort of shit all the time. I've given up reporting them because 90% of the time the report is rejected. I don't even understand the rationale for rejecting it, because it's as obvious a scam as a scam can be - AI impersonation, fake endorsement, illegal advertising category. It's a scam, YouTube.

I don't get why these ads even appear. YouTube has transcription and voice / music recognition capabilities. How hard would it be to flag a suspicious ad and require a human to review it? Or search for duplicates under other burner accounts and zap them at the same time? Or have some kind of randomized, trust-based audit where new accounts get reviewed more frequently by experienced reviewers?

[–] [email protected] 12 points 7 months ago (1 children)

No no. This kind of automated "protection" is only used against their users, who are their product. Not the advertisers, who are their customers!

[–] [email protected] 1 points 7 months ago

There are other considerations here though. Google suffers reputational harm if users become victims through their platform. It becomes news, it creates distrust among users, and it generates friction with regulators and law enforcement. Users may be trained to become ad-averse or install ad blockers. In addition, these ads generate reports, which cost time to process even if the complaints are rejected.

At the end of the day these scammers are not high-profile advertisers and they're not valuable. They're burner accounts that pay cents to deliver their ads. They're ephemeral: they get zapped, reappear, and constantly waste time and resources. Given that YouTube can easily transcribe content and watermark it, it makes no sense to me that they wouldn't put some triggers in - e.g. when a new advertiser places an ad whose transcript mentions "Elon Musk", "Quantum AI", or other such markers, flag it for review.
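To be clear about how cheap this would be: a trigger like that is a few lines of code once you have the transcript. A minimal sketch (the marker list, account-age threshold, and function names are all hypothetical; it assumes the ad audio has already been transcribed):

```python
# Hypothetical sketch of a "flag for human review" trigger on new advertisers.
# SCAM_MARKERS and the 30-day "new account" cutoff are assumptions for illustration.
SCAM_MARKERS = {"elon musk", "quantum ai", "guaranteed returns"}

def needs_review(transcript: str, advertiser_age_days: int, new_account_days: int = 30) -> bool:
    """Queue an ad for a human reviewer if it comes from a new account
    and its transcript contains a known scam marker."""
    text = transcript.lower()
    is_new_account = advertiser_age_days < new_account_days
    has_marker = any(marker in text for marker in SCAM_MARKERS)
    return is_new_account and has_marker
```

So a two-day-old burner account running an "Elon Musk Quantum AI" ad gets flagged, while the same phrase from a long-established advertiser (say, a news channel) just goes into the normal pipeline. Obviously the real signal set would be richer, but the point stands: the expensive part is paying the reviewer, not detecting the candidate.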

[–] [email protected] 4 points 7 months ago (1 children)

How hard would it be to flag a suspicious ad and require a human to review it?

Hard? No. But then humans would have to be paid, which would slow down the growth of the dragon hoard.

Better to have a computer analyze the ad that another computer thinks looks real.

[–] [email protected] 1 points 7 months ago

They already have to have a human respond to each and every complaint about that ad. It seems more sensible to automate and flag suspicious ads before the complaints happen.

[–] LeroyJenkins 3 points 7 months ago

they ain't gonna stop their customers from paying them more money