this post was submitted on 06 Apr 2024

BecomeMe


Social Experiment. Become Me. What I see, you see.

[–] [email protected] 17 points 8 months ago* (last edited 8 months ago) (1 children)

Most people already had a hard enough time telling man-made fact from fiction; now they have to tell AI fact from fiction on top of that.

[–] elshandra 6 points 8 months ago (1 children)

Well, AI "fact" in this use has always been a blend of man's fact and fiction. Nobody's been smart enough to make an AI that can reliably separate the two, to my knowledge.

[–] [email protected] 1 points 8 months ago (1 children)

It's all about cleaning datasets. For forecasting models, you sometimes need to remove anomalous historical data to improve accuracy.

The same could work here, but it's obviously at a significantly larger scale and crosses into every interest and discipline.
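As a toy illustration of that kind of cleaning (the data, the 2-sigma threshold, and the naive moving-average forecaster are all invented for this example), dropping one bad historical record can change the forecast dramatically:

```python
import statistics

def clean_series(values, z_thresh=2.0):
    """Drop points more than z_thresh standard deviations from the mean.

    The threshold is arbitrary here; real pipelines tune it per dataset.
    """
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return list(values)
    return [v for v in values if abs(v - mean) / stdev <= z_thresh]

def naive_forecast(values, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    return statistics.mean(values[-window:])

# Hypothetical daily counts with one corrupt record (9999).
history = [10, 12, 11, 13, 12, 9999, 14, 13]
print(naive_forecast(history))                 # skewed by the bad record
print(naive_forecast(clean_series(history)))   # back in a sane range
```

The same logic scales up poorly by hand, which is why the commenter's point about curation effort matters: someone (or something) still has to decide what counts as "bad" data.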

I believe the solution is curated data models, with the top members of the applicable field determining validity, or a Stack Overflow-style model.

We should basically have a "clean" copy of the internet that is always 3-6 months behind, since content is only added once it has been vetted for quality.
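A minimal sketch of that lagging, vetted copy (the class names, the approval rule, and the 90-day embargo are all made up for illustration): submissions sit in a queue, and only human-approved items older than the embargo window appear in the clean corpus.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    text: str
    submitted_day: int       # day number when it entered the queue
    approved: bool = False   # set by human curators

@dataclass
class CuratedCorpus:
    embargo_days: int = 90   # arbitrary ~3-month lag
    queue: list = field(default_factory=list)

    def submit(self, text, day):
        self.queue.append(Submission(text, day))

    def approve(self, text):
        for s in self.queue:
            if s.text == text:
                s.approved = True

    def snapshot(self, today):
        """Return only approved items that have aged past the embargo."""
        return [s.text for s in self.queue
                if s.approved and today - s.submitted_day >= self.embargo_days]

corpus = CuratedCorpus()
corpus.submit("vetted article", day=0)
corpus.submit("fresh rumour", day=100)
corpus.approve("vetted article")
print(corpus.snapshot(today=120))  # the rumour is unapproved and too fresh
```

The design choice worth noting is that the delay and the approval are independent gates: even approved material waits out the embargo, which gives curators time to reverse mistakes.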

[–] elshandra 1 points 8 months ago

> I believe the solution is curated data models with the top members of the applicable field determining validity or a stack overflow model.

I think you're on the right track here, but done this way it will ultimately retain the same flaws.

Personally, I believe the models should be open, with all interested parties having varying degrees of influence over the accepted truth. That's going to be complicated in itself.

By limiting it to "trusted people", an attacker only has to corrupt enough of them, and eventually you end up with the same shitty problems, but with bots too.
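That corruption risk is easy to show with a toy weighted-vote model (the curators, weights, and 0.5 threshold are invented for the example): flip just the two high-weight curators and the "accepted truth" flips with them, regardless of the honest low-weight voters.

```python
def accepted(votes, threshold=0.5):
    """votes maps curator -> (weight, says_true). A claim is accepted
    when the weighted share voting 'true' exceeds the threshold."""
    total = sum(w for w, _ in votes.values())
    yes = sum(w for w, v in votes.values() if v)
    return yes / total > threshold

# Two high-weight curators (a, b) and two low-weight ones (c, d).
honest = {"a": (3, False), "b": (3, False), "c": (1, True), "d": (1, True)}

# Corrupt only the two high-weight curators:
corrupt = {**honest, "a": (3, True), "b": (3, True)}

print(accepted(honest), accepted(corrupt))
```

This is the commenter's point in miniature: concentrating validation weight in a few trusted members makes the system cheaper to capture, which is the argument for spreading influence more widely even though that is messier to govern.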