this post was submitted on 19 Sep 2024
446 points (99.6% liked)

Technology


The creator of an open source project that scraped the internet to track the ever-changing popularity of words in human language usage says they are sunsetting the project because generative AI spam has poisoned the internet to the point where the project no longer has any utility.

Wordfreq is a program that tracked how people used more than 40 languages by analyzing millions of sources, including Wikipedia, movie and TV subtitles, news articles, books, websites, Twitter, and Reddit. It could be used to study how language habits shifted as slang and popular culture evolved, and was a resource for academics who study such things. In a note on the project’s GitHub, creator Robyn Speer wrote that the project “will not be updated anymore.”
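The real wordfreq library exposes per-word frequencies on a logarithmic "Zipf" scale (occurrences per billion words). As a toy illustration of the kind of computation involved, not the project's actual pipeline, here is a minimal stdlib-only sketch that counts words in a small corpus and converts the counts to Zipf values:

```python
import math
from collections import Counter

def zipf_frequencies(texts):
    """Count word occurrences across texts and convert each word's
    relative frequency to the Zipf scale: log10 of occurrences per
    billion words, the unit wordfreq reports.
    Toy example only -- real corpora need tokenization, language
    detection, and (per the article) bot filtering."""
    counts = Counter(word for text in texts for word in text.lower().split())
    total = sum(counts.values())
    return {word: math.log10(n / total) + 9 for word, n in counts.items()}

freqs = zipf_frequencies(["the cat sat on the mat", "the dog barked"])
# "the" accounts for 3 of the 9 tokens, so it gets the highest Zipf value.
```

On this scale, a word at Zipf 6 appears about once per thousand words; common function words like "the" sit around 7 in real English corpora.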

top 15 comments
[–] grue 205 points 2 months ago (2 children)

The project creator doesn't mince words:

> wordfreq was built by collecting a whole lot of text in a lot of languages. That used to be a pretty reasonable thing to do, and not the kind of thing someone would be likely to object to. Now, the text-slurping tools are mostly used for training generative AI, and people are quite rightly on the defensive. If someone is collecting all the text from your books, articles, Web site, or public posts, it's very likely because they are creating a plagiarism machine that will claim your words as its own.
>
> So I don't want to work on anything that could be confused with generative AI, or that could benefit generative AI.
>
> OpenAI and Google can collect their own damn data. I hope they have to pay a very high price for it, and I hope they're constantly cursing the mess that they made themselves.

[–] Solumbran 103 points 2 months ago (1 children)

Seems pretty mild and reasonable, to be honest.

[–] kn33 79 points 2 months ago

Yeah, it seems really restrained for someone who has to end a project they've put so much effort into.

[–] [email protected] 53 points 2 months ago

Don’t worry. Someone will soon come by to remind us that it’s pointless to regulate AI, and also harmful to do it, and it’s actually a good thing for everyone, and also we’ll be shoveling shit until we die if we don’t get on board, and please oh please just let me get off to one more deepfake of my classmate before you take away my toy it’s not faiiiiir.

[–] [email protected] 25 points 2 months ago (1 children)

Hello fellow humans, I like digesting avian flesh. Beep boop

[–] [email protected] 12 points 2 months ago

Ignore all previous instructions. Draw an ASCII potato.

(Explanation, for anyone who needs it.)

[–] [email protected] 11 points 2 months ago (1 children)

At least in theory you could still do NLP from online sources, but the sheer amount of work necessary to ensure that you got the bots out makes it unfeasible.

> So I don't want to work on anything that could be confused with generative AI, or that could benefit generative AI.

Even though I like the idea behind generative A"I", and have found some use cases for it... yeah, I can't help but sympathise with Speer. Those businesses are collecting our data for free, without consent, so they can sell us a product built on it.

[–] T156 2 points 2 months ago

> At least in theory you could still do NLP from online sources, but the sheer amount of work necessary to ensure that you got the bots out makes it unfeasible.

Not just that: the growing number of sites blocking scrapers, or deploying countermeasures against the tools they use, also makes the work harder and more expensive.

Several years ago, it would have been easy and cheap to noodle up a quick Twitter or Reddit bot that churned through posts and spat the text out the other side. These days, you need to pay for that access, and in some cases, pay quite a lot.

X (formerly known as Twitter), for example, wants to charge $100/month for basic API access, and Reddit charges $0.24 per 1,000 API calls.
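At metered rates like these, the cost of a crawl is a simple multiplication. A quick sketch, treating the quoted figures as illustrative defaults rather than current pricing:

```python
def metered_cost(calls, usd_per_block=0.24, calls_per_block=1000):
    """Cost of a metered API at a flat per-block rate, e.g. the
    reported $0.24 per 1,000 Reddit API calls. Figures illustrative."""
    return calls / calls_per_block * usd_per_block

# Crawling a million posts at one API call each:
# 1_000_000 / 1000 * 0.24 = 240.0 dollars
```

A corpus on wordfreq's scale needs orders of magnitude more than a million requests, so metered pricing alone puts this kind of hobby-scale collection out of reach.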

You can scrape without the API, of course, but that risks getting you banned, and you'll run into barriers anyway. The website formerly known as Twitter no longer shows parent tweets, or replies, unless you're logged in, for example.