I have a robots.txt which does nothing but exclude "GPTBot"
I was coincidentally looking at my logs for unrelated reasons and caught it reading my robots.txt... and then it promptly ignored it and scraped my whole site. Like yeah, okay, cool man.
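For context, a robots.txt that does nothing but exclude GPTBot would be just two lines, using OpenAI's documented user-agent token:

```text
User-agent: GPTBot
Disallow: /
```

Note that robots.txt is purely advisory: it tells crawlers what you'd like them to skip, and compliance is entirely voluntary, which is exactly the problem described above.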
How does Reddit block users? Just by headers and IP addresses? Could the same be done once the GPTBot headers are known?
Every bit of information being sent to your web server can be spoofed. There is nothing you can do about this unless you're willing to exclude an increasing percentage of real users.
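To illustrate how trivial spoofing is (a minimal sketch using only Python's standard library; the URL and user-agent string are placeholders): any client can claim to be an ordinary browser, so a server filtering on the User-Agent header only catches scrapers that don't bother to lie.

```python
import urllib.request

# The scraper chooses its own User-Agent; nothing on the server side
# can verify that this string describes the real client.
req = urllib.request.Request(
    "https://example.com/",  # placeholder URL
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
)
print(req.get_header("User-agent"))
```

IP addresses are harder to forge outright, but residential proxy pools make per-IP blocking a game of whack-a-mole with the same trade-off: the wider you block, the more real users you exclude.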
My web server is constantly barraged by crawlers and bots because I have zero defenses. I've considered intercepting the obvious ones, like the ones targeting WordPress plugins (I don't use WordPress). I could serve them a 200 instead of a 404 and hopefully waste a real human's time if they check the hits manually.
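That idea could be sketched like this (the probe paths and handler are assumptions for illustration; in practice you'd likely do this in the web server config rather than application code):

```python
from http.server import BaseHTTPRequestHandler

# Hypothetical list of common WordPress probe paths seen in scanner logs
PROBE_PREFIXES = ("/wp-login.php", "/wp-admin", "/xmlrpc.php", "/wp-content")

def status_for(path: str) -> int:
    """Return 200 for known probe paths so scanners record a false hit."""
    return 200 if path.startswith(PROBE_PREFIXES) else 404

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        code = status_for(self.path)
        self.send_response(code)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        if code == 200:
            # Looks like a hit in the scanner's report, contains nothing useful
            self.wfile.write(b"<html><body>Login</body></html>")
```

The cost is asymmetric in your favor: the fake 200 is nearly free for you to serve, but a human reviewing the scan results has to open each "hit" to discover it's empty.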
Header spoofing is scraping 101