this post was submitted on 23 Jan 2025
902 points (97.8% liked)

Technology


A pseudonymous coder has created and released an open source “tar pit” that indefinitely traps AI training web crawlers in an infinite, randomly generated series of pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants that trap and consume their prey, can be deployed by website owners to protect their own content from being scraped, or deployed “offensively” as a honeypot trap to waste AI companies’ resources.

“It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.
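The core trick is small enough to sketch. The following is a minimal, hypothetical illustration in Python of a server whose every page links only back to itself; it is not Nepenthes' actual code, and the handler name, port, and link count are invented for the example:

```python
# Minimal tarpit sketch (illustrative only, not Nepenthes itself):
# every request returns a page of random links that resolve back to
# this same handler, so a naive crawler never runs out of URLs.
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_slug(length=12):
    """A random lowercase path segment, e.g. 'qzjwkaplmnbv'."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

class TarpitHandler(BaseHTTPRequestHandler):
    """Answers every GET with links that lead straight back here."""

    def do_GET(self):
        # Ten fresh links per page; each resolves to this same handler,
        # so a naive breadth-first crawler queues them all up forever.
        links = "\n".join(f'<a href="/{random_slug()}">{random_slug()}</a>'
                          for _ in range(10))
        body = f"<html><body>{links}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), TarpitHandler).serve_forever()
```

A real tarpit can go further, for example by trickling responses out slowly so each trapped request also ties up the crawler's connection, not just its URL queue.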

[–] [email protected] 141 points 23 hours ago (4 children)

This showed up on HN recently. Several people who wrote web crawlers pointed out that this won’t even come close to working except on terribly written crawlers. Most just limit the number of pages crawled per domain based on popularity of the domain. So they’ll index all of Wikipedia but they definitely won’t crawl all 1 million pages of your unranked website expecting to find quality content.
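In rough terms, the per-domain cap described here might look like the sketch below (the class, the budget numbers, and the popularity score are all made up; no real crawler is this simple):

```python
# Illustrative sketch of a per-domain crawl budget; real crawlers
# are far more elaborate than this.
from collections import defaultdict
from urllib.parse import urlparse

class CrawlBudget:
    def __init__(self, base_budget=1000):
        self.base_budget = base_budget
        self.pages_fetched = defaultdict(int)

    def popularity(self, domain):
        # Stub: a real crawler would use link-graph rank or traffic
        # data. Unknown domains get a tiny score, so an unranked site
        # full of generated links is cut off after a few pages.
        return 1.0 if domain.endswith("wikipedia.org") else 0.01

    def should_fetch(self, url):
        domain = urlparse(url).netloc
        budget = int(self.base_budget * self.popularity(domain))
        if self.pages_fetched[domain] >= budget:
            return False  # budget spent: ignore further links here
        self.pages_fetched[domain] += 1
        return True
```

Under a budget like that, a tarpit on an obscure domain only costs the crawler the handful of requests its budget allows.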

[–] [email protected] 46 points 14 hours ago (2 children)

Did you read the article? (There is a link to a non-paywalled version.)

Since they made and deployed a proof-of-concept, Aaron B said their pages have been hit millions of times by internet-scraping bots. On a Hacker News thread, someone claiming to be an AI company CEO said a tarpit like this is easy to avoid; Aaron B told 404 Media: “If that's true, I've several million lines of access log that says even Google Almighty didn't graduate” to avoiding the trap.

[–] [email protected] 11 points 10 hours ago* (last edited 10 hours ago) (1 children)

Millions of hits may sound like a lot, but you need to view that in context.

[–] [email protected] 4 points 9 hours ago (1 children)
[–] Warl0k3 6 points 9 hours ago* (last edited 9 hours ago) (1 children)

The modern internet. Millions of hits is very normal: one of my domains is just 30-year-old ASCII art of a penguin, and it gets 2-3 million hits a month from bots/crawlers (nearly all of them trying common exploits). The idea that the Google spider would be notably negatively impacted by this is kinda naive. It could fall fully into the tarpit and probably wouldn't even get flagged as an abnormal resource allocation. The difference in power between desktop and enterprise equipment is, at this point, almost inexpressible.

[–] [email protected] 3 points 5 hours ago* (last edited 2 hours ago)

People think of hacking like a thief with a lockpick. It's oftentimes more like someone methodically checking every door in the neighborhood for any that are unlocked.

[–] ShadowWalker 9 points 12 hours ago

If it is linked to the Internet then it'll be hit by crawlers. Their "trap" isn't about how many bots show up but how long each bot stays on an individual site.

[–] [email protected] 80 points 22 hours ago* (last edited 22 hours ago) (5 children)

Can confirm, I have a website (https://2009scape.org/) with tonnes of legacy forum posts (100k+). No crawlers ever go there.

It's a shame that 404media didn't do any due diligence when writing this.

[–] [email protected] 1 points 9 hours ago

Sorry to tell you, but you are indexed at least by duckduckgo, bing, ecosia, startpage, google, and even one of searx's crawlers has paid you a visit.

[–] affiliate 40 points 21 hours ago

No crawlers ever go there.

if it makes you feel any better, i would go there if i was a web crawler.

[–] Luvs2Spuj 21 points 22 hours ago (1 children)

2009scape!? If it's what I think it is, that is amazing. Legend

[–] [email protected] 18 points 21 hours ago

It is what you think it is, come join ^^. It's a small niche world

[–] [email protected] 7 points 18 hours ago

Why would they? Outrage and meme content sell clicks, in-depth journalism doesn't.

[–] [email protected] 0 points 12 hours ago

I think you may have just misunderstood the post.

It's not intended to trap the web crawlers indexing content for google search.

It's intended to trap AI training bots harvesting sentences in order to improve their LLMs.

I don't really have an answer as to why those bots don't find your content appealing, but that doesn't mean that Nepenthes doesn't work.

[–] Agent641 22 points 20 hours ago (3 children)

Then that's where we hide the good stuff

[–] [email protected] 13 points 14 hours ago (2 children)

Reminds me of burying folders in folders in folders to hide naughty content as a youth.

[–] [email protected] 3 points 14 hours ago* (last edited 14 hours ago)

Totally brilliant and foolproof. Humans can't open folders

[–] [email protected] 1 points 13 hours ago

When I worked as a technician at a computer repair company, it was amazing how many people just put that stuff on the desktop.

[–] [email protected] 1 points 10 hours ago (1 children)
[–] Agent641 2 points 9 hours ago

The best stuff

[–] Donkter 1 points 17 hours ago (2 children)
[–] [email protected] 5 points 16 hours ago (1 children)
[–] [email protected] 3 points 16 hours ago

Rule out the mediocre too, unless it's extremely mediocre, in which case it's OK

[–] the_tab_key 3 points 16 hours ago
[–] [email protected] 3 points 13 hours ago* (last edited 13 hours ago)

I think this rate-limiting mechanism is mostly a niceness rule: you should try not to put too much pressure on any website, and obey the rules defined in its robots.txt.

So I guess this idea is not bad, as it would mostly penalize bad actors.
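For illustration, that politeness logic might look something like the sketch below, using only Python's standard library (the class name, user agent, and one-second delay are invented for the example; this is not any particular crawler's code):

```python
# Sketch of the "niceness" rules described above: consult robots.txt
# and rate-limit requests per host.
import time
import urllib.robotparser
from urllib.parse import urlparse
from urllib.request import urlopen

class PoliteFetcher:
    def __init__(self, user_agent="example-bot", delay_seconds=1.0):
        self.user_agent = user_agent
        self.delay_seconds = delay_seconds
        self.robots = {}      # host -> parsed robots.txt
        self.last_hit = {}    # host -> time of the previous request

    def allowed(self, url):
        host = urlparse(url).netloc
        if host not in self.robots:
            parser = urllib.robotparser.RobotFileParser()
            parser.set_url(f"https://{host}/robots.txt")
            parser.read()
            self.robots[host] = parser
        return self.robots[host].can_fetch(self.user_agent, url)

    def fetch(self, url):
        if not self.allowed(url):
            return None  # robots.txt disallows this path
        host = urlparse(url).netloc
        # Never hit the same host faster than delay_seconds apart.
        wait = self.delay_seconds - (time.time() - self.last_hit.get(host, 0.0))
        if wait > 0:
            time.sleep(wait)
        self.last_hit[host] = time.time()
        return urlopen(url).read()
```

A crawler that follows these rules never hammers a site, which is exactly why a tarpit mostly hurts the impolite ones that don't.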