You can detect pathpoints that come up repeatedly and avoid pursuing them further. Technically that isn't called "infinite loop" detection, but I don't know the correct name. The point is that the software isn't a Star Trek robot that starts smoking and bricks itself when it hears something illogical.
It can detect cycles. From a quick look at the demo of this tool, it (slowly) generates some garbage text and then places 10 random links. Each of these links leads to a newly generated page. So although generating the same link twice will surely happen eventually, the chance that all 10 links on a page have already been generated before is small.
I would simply add links to a list when visited and never revisit any. And that's just simple web crawler logic, not even AI. Web crawlers that avoid problems like that are beginner/intermediate computer science homework.
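For what it's worth, a minimal sketch of the visited-set idea this comment describes, in Python. The `fetch` and `extract_links` functions are hypothetical placeholders, not anything from the tool being discussed; as the replies below point out, this only helps when URLs actually repeat.

```python
from collections import deque
from urllib.parse import urldefrag

def crawl(seed_urls, fetch, extract_links, max_pages=100_000):
    """Breadth-first crawl that never revisits a URL it has already seen."""
    visited = set()                          # URLs we have already fetched
    queue = deque(seed_urls)
    while queue and len(visited) < max_pages:
        url, _ = urldefrag(queue.popleft())  # drop #fragments before comparing
        if url in visited:
            continue                         # skip anything seen before
        visited.add(url)
        page = fetch(url)                    # hypothetical fetch function
        for link in extract_links(page):     # hypothetical link extractor
            if link not in visited:
                queue.append(link)
    return visited
```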
There are no loops or repeated links to avoid. Every link leads to a brand new, freshly generated page with another set of brand new, never-before-seen links. You can go deeper and deeper forever without any loops.
You can limit the visits to a domain. The honeypot doesn't register infinite new domains.
sure, if you have enough memory to store a list of all guids.
It doesn't have to memorize all possible guids, it just has to limit visits to base urls.
What part of "they do not repeat" do you still not get? You can put them in a list, but you won't ever get a hit, so it'd just be wasting memory.
I get that the Internet doesn't contain an infinite number of domains. Max visits to each one can be limited. Hel-lo, McFly?
It's one domain, with infinite pages under it. Limiting max visits per domain is a very different thing from trying to detect loops that aren't there. You are now making a completely different argument. In fact it sounds suspiciously like the only thing I said they could do: have some arbitrary threshold beyond which they give up... because there's no way of detecting it otherwise.
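To make the "arbitrary threshold" point concrete, here is a minimal sketch of a per-domain page budget in Python. The limit of 1,000 pages is a made-up number; this detects nothing, it just gives up after a fixed amount of work on each domain.

```python
from collections import Counter
from urllib.parse import urlsplit

# Hypothetical cap: give up on a domain after this many pages,
# regardless of whether any URL ever repeats.
MAX_PAGES_PER_DOMAIN = 1_000

pages_per_domain = Counter()

def should_visit(url: str) -> bool:
    """Return False once a domain has used up its page budget."""
    domain = urlsplit(url).netloc.lower()
    if pages_per_domain[domain] >= MAX_PAGES_PER_DOMAIN:
        return False
    pages_per_domain[domain] += 1
    return True
```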
I'm a software developer responding to a coding problem. If it's all under one domain then avoiding infinite visits is even simpler: I would create a list of known huge websites like Google and Wikipedia, and limit the visits to any domain that is not on that list. This would eliminate having to track where the honeypot is deployed.
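As a sketch of the allowlist idea in this comment (again hypothetical, with made-up site names and numbers): combine a list of known huge domains with a per-domain budget like the one above, so only allowlisted domains escape the cap. The reply further down notes the catch: a tarpit entrance can be one URL among many legitimate ones on a legitimate domain.

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of sites known to be legitimately huge.
KNOWN_HUGE_SITES = {"google.com", "wikipedia.org", "github.com"}

# Low page budget for everything not on the list (arbitrary number).
DEFAULT_CAP = 500

def page_cap(url: str) -> float:
    """Effectively unlimited crawling for allowlisted domains, a small cap otherwise."""
    domain = urlsplit(url).netloc.lower().removeprefix("www.")
    if any(domain == site or domain.endswith("." + site) for site in KNOWN_HUGE_SITES):
        return float("inf")
    return DEFAULT_CAP
```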
Yes, but now you've shifted the problem again. You went from detecting infinite sites by detecting loops (in an infinite tree that has no loops, only infinite distinct URLs), to somehow keeping a list of all those infinite distinct URLs to avoid visiting one twice (which you wouldn't anyway, because the links never repeat), to assuming you already have a list telling you which sites these are so you can avoid them, and therefore never have to detect them at all (the very thing you started with).
It's ok to admit that your initial idea was wrong. You did not solve a coding problem. You changed the requirements so it's not your problem anymore.
And storing a domain whitelist wouldn't work either, btw. A tarpit entrance is just one URL among lots of legitimate ones, on legitimate domains.
Okay fine, I 100% concede that you're right. Bye now.