A pseudonymous coder has created and released an open source “tar pit” that indefinitely traps AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants that trap and consume their prey, can be deployed by website owners to protect their own content from being scraped, or “offensively” as a honeypot to waste AI companies’ resources.

Registration bypass: https://archive.is/3tEl0
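
Purely to illustrate the general idea (this is not Nepenthes' actual code; the /maze/ path, page format, and delay are invented for this sketch), a minimal tarpit in Python with Flask could look like:

    # Every request under /maze/ returns a slowly served page of links to
    # more randomly named pages, so a link-following crawler never runs
    # out of URLs to fetch.
    import random
    import string
    import time

    from flask import Flask

    app = Flask(__name__)

    def random_slug(n=10):
        return "".join(random.choices(string.ascii_lowercase, k=n))

    @app.route("/maze/")
    @app.route("/maze/<path:slug>")
    def maze(slug=""):
        time.sleep(2)  # drip-feed the response to waste crawler time
        links = "".join(
            f'<p><a href="/maze/{random_slug()}">{random_slug()}</a></p>'
            for _ in range(10)
        )
        return f"<html><body>{links}</body></html>"

    if __name__ == "__main__":
        app.run()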

[–] HexadecimalSky 6 points 6 days ago (2 children)

I think the point is it doesn't specifically target "AI trainers" but web crawlers, which are used by more than just A.I. trainers, for example search engines.

[–] [email protected] 3 points 5 days ago* (last edited 5 days ago) (1 children)

Search engine crawlers generally respect robots.txt, so if you add a robots.txt entry to disallow all crawlers from getting into the maze, effectively only AI crawlers will go there.
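
For example, using the illustrative /maze/ path from the sketch above, the entry would be:

    User-agent: *
    Disallow: /maze/

Compliant crawlers (including the major search engines) will then skip /maze/ entirely, while crawlers that ignore robots.txt wander in.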

[–] HexadecimalSky 2 points 5 days ago

Oh, okay, I didn't know that, thank you.

[–] [email protected] -3 points 6 days ago* (last edited 6 days ago) (1 children)

Actually, it does specifically target AI trainers, since it poisons their training data. The web crawlers are just a means to an end.

[–] HexadecimalSky 4 points 6 days ago (1 children)

It affects them, yes, but it doesn't only affect them. It's a poison-the-well tactic that can affect them, but because it isn't specific, even more companies will work to "fix" it. Also, while it can waste resources, in most cases it doesn't stop A.I. training or render the models incompetent.

For example, if I added rat poison to all the local waterways, it would get rid of the pigeon problem, so does that mean it targets pigeons?

[–] [email protected] -1 points 6 days ago* (last edited 6 days ago) (2 children)

The first part of what you said contradicts itself, and the second part is a terrible metaphor, especially considering that the web crawlers that gather AI training data only target that, and this specifically targets AI training web crawlers.

So, it’s more like putting a very specific rat poison in the waterways that is only poisonous to rats.

It seems like you don’t understand how this works.

[–] [email protected] 5 points 6 days ago (1 children)

"And this specifically targets AI training web crawlers."

There's no way to distinguish between an AI training crawler and any other crawler. Per https://zadzmo.org/code/nepenthes/ :

"This is a tarpit intended to catch web crawlers. Specifically, it's targetting crawlers that scrape data for LLM's - but really, like the plants it is named after, it'll eat just about anything that finds it's way inside."

Emphasis mine. Even the person who coded this thing knows that it can't tell what a given crawler's purpose is. They're just willing to throw the baby out with the bathwater in this case, and mess with legitimate crawlers in order to bog down the ones gathering data for LLM training.

(In general, there is no way to tell for certain what is requesting a webpage. The User-Agent header that (usually) arrives with an HTTP(S) request isn't regulated and can contain any arbitrary string. Crawlers habitually claim to be old versions of Firefox, and there isn't much the server can do to identify what they actually are.)
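
For instance, here is a sketch using Python's requests library; nothing stops a scraper from sending whatever User-Agent string it likes:

    # This request claims to be Firefox 68; the server cannot verify
    # that the string matches what the client actually is.
    import requests

    resp = requests.get(
        "https://example.com/",
        headers={
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; rv:68.0) "
                          "Gecko/20100101 Firefox/68.0"
        },
    )
    print(resp.status_code)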

[–] [email protected] 1 points 5 days ago (1 children)

You can specifically target crawlers that ignore robots.txt, which will catch practically every LLM scraper.
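
One common way to do that (sketched here with a hypothetical /trap/ path; this is not necessarily how Nepenthes does it): disallow a path in robots.txt, don't link to it anywhere a human would find it, and flag any client that requests it anyway:

    # Sketch: any client that fetches /trap/ must have ignored robots.txt,
    # since compliant crawlers are told to stay out and humans never see
    # a link to it. Flagged IPs could then be routed into the tarpit.
    from flask import Flask, request

    app = Flask(__name__)
    flagged_ips = set()  # in practice, persist or share this

    @app.route("/robots.txt")
    def robots():
        return ("User-agent: *\nDisallow: /trap/\n",
                200, {"Content-Type": "text/plain"})

    @app.route("/trap/")
    @app.route("/trap/<path:rest>")
    def trap(rest=""):
        flagged_ips.add(request.remote_addr)
        return "nothing to see here"

    if __name__ == "__main__":
        app.run()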

[–] [email protected] 1 points 5 days ago

Well, yeah, but obeying robots.txt is only a courtesy in the first place, so you can't guarantee it'll catch only LLM-related crawlers and no others, although it may lower the false positive rate.

[–] HexadecimalSky 4 points 6 days ago

From reading the article, it just seems to target web crawlers by having infinitely looping URLs. How does it target A.I. training web crawlers specifically?