It would be cool if it worked on podcasts. Seems to be a YouTube-only thing, though?
If Mozilla drops Manifest v2 I might as well switch to a Chrome variant (not Brave).
I don’t want a web monoculture, but it might as well be one if Mozilla just copies Chrome at every step.
It’s a hard problem in the fediverse, and a ticking time bomb of an issue. Imagine I’m on an “everything is your own, we don’t sell your stuff” instance while another instance just copy-pasted Meta’s ToS. By posting a response on my instance, which is then federated to the Meta-style instance, I create something hard to resolve. I can foresee other issues too.
I see your point. I just think it’s a difficult problem.
I don’t think the ToS approach would be invalidated here via your Safe Harbor fork theory.
The ToS could state something like “you give us a worldwide perpetual right to use your content in any way we want including granting this right to whom we designate”
You still own your content but by having an account you agree to the ToS that lets them do what they want.
They just host it and are safe.
I don’t think it’s equivalent to sovereign citizens. OP is the author of their comment and therefore holds the copyright. As the author, one can keep a work all rights reserved or release it under a more permissive license.
OP chooses to license their work under a Creative Commons license.
They’re not forcing you to accept the license, it’s your local government that enforces copyright.
The reason why this might work on Lemmy but not on corporate social media is that corporate social media often have terms of service that require you to grant them ownership/rights/etc. Lemmy has no such ToS.
It’s government reporting data. If you find a better source I say go for it. But I used that data for salary negotiations in the past successfully.
I’m not talking about take home. I’m talking about total annual compensation including things like RSU payouts etc.
Even if we throw out the ones you doubt, there are many $300k to $400k entries with the AI researcher title. Add annualized RSU payouts and we easily clear $500k: e.g. a $350k base plus a $600k grant vesting over four years works out to $500k a year.
At this point, though, you are free to doubt me.
Maybe not with just if statements. But with a heuristic system, I bet any site that runs a tarpit would be caught out very quickly.
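To make that concrete, here’s roughly the kind of heuristic I mean, as a quick Python sketch (the names and threshold numbers are made up, not anything a real crawler is known to use): sample a response for a few seconds, and if the host is dribbling bytes out far below a sane rate, flag it and move on.

```python
# Sketch of a throughput heuristic for spotting tarpits.
# TARPIT_BPS and PROBE_SECONDS are invented thresholds, not tuned values.
import time
import requests

TARPIT_BPS = 1024      # under ~1 KiB/s sustained smells like a tarpit
PROBE_SECONDS = 15     # how long we are willing to sample before judging
blocklist: set[str] = set()

def probe(url: str) -> bytes | None:
    start = time.monotonic()
    body = b""
    try:
        with requests.get(url, stream=True, timeout=10) as resp:
            for chunk in resp.iter_content(chunk_size=4096):
                body += chunk
                elapsed = time.monotonic() - start
                if elapsed > PROBE_SECONDS and len(body) / elapsed < TARPIT_BPS:
                    blocklist.add(url)  # too slow for too long: give up
                    return None
    except requests.RequestException:
        return None
    return body
```

A real system would aggregate this per host rather than per URL, but the point stands: one cheap statistic unmasks the slow-drip trick.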
When I worked in the U.S. I was well above $160k.
When you look at leaks you can see $500k or more for principal engineers. Look at Valve’s lawsuit information. https://www.theverge.com/2024/7/13/24197477/valve-employs-few-hundred-people-payroll-redacted
Meta is paying $400k BASE for AI research engineers, with stock options on top, which in my experience is an additional 300%–600%, vesting over 2 to 4 years. This is for H1B workers, who traditionally are paid less.
Once you get to principal and staff level engineering positions compensation opens up a lot.
https://h1bdata.info/index.php?em=meta+platforms+inc&job=&city=&year=all+years
ROI does not matter when companies are telling investors that they might be first to AGI. Investors go crazy over this. At least they will until the AI bubble pops.
I support people resisting if they want by setting up tar pits. But it’s a hobby and isn’t really doing much.
The sheer amount of resources going into this is beyond what people think.
That, and a competent engineer can probably write something on the BEAM VM that can handle a crap ton of parallel connections. Six figures of concurrent connections, maybe? Being slow-walked means low CPU use, which means more green threads.
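Something like this, say. I’ll sketch it with Python’s asyncio rather than an actual BEAM language (the handler name and timings are mine), but the idea is the same: each stalled connection is one cheap green task that spends almost all of its life asleep.

```python
# Toy slow-drip tarpit: many connections, near-zero CPU.
import asyncio

async def drip(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    writer.write(b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n")
    try:
        while True:
            writer.write(b'<a href="/next">more</a>\n')  # one morsel at a time
            await writer.drain()
            await asyncio.sleep(10)  # the task costs ~nothing while sleeping
    except ConnectionError:
        pass  # scraper gave up; the task just ends
    finally:
        writer.close()

async def main() -> None:
    server = await asyncio.start_server(drip, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```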
I see your point, but I think you underestimate the skill of coders. You make sure your timeout is inclusive of JavaScript run times. Maybe set a memory limit too (see the sketch below). Imagine you wanted to scrape the internet: you could solve all these tarpits. Any capable coder could. Now imagine a team of 20 of the best coders money can buy, each paid €500,000. They can certainly do the same.
Like, I see the appeal of running a tarpit. But I don’t see how they can “trap” anyone but script kiddies.
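Here’s the sketch, to show how little it takes (the limits are my own guesses, not any company’s real code): one wall-clock budget per page plus a byte cap, and the worst a tarpit can cost you is a few seconds.

```python
# Defensive fetch: hard caps on total time and total bytes per page.
# MAX_BYTES and BUDGET_S are invented limits for illustration.
import time
import requests

MAX_BYTES = 2 * 1024 * 1024  # 2 MiB is plenty for real HTML
BUDGET_S = 20                # total budget; JS rendering would count too

def fetch_capped(url: str) -> bytes | None:
    start = time.monotonic()
    body = b""
    try:
        with requests.get(url, stream=True, timeout=(5, 5)) as resp:
            for chunk in resp.iter_content(chunk_size=8192):
                body += chunk
                if len(body) > MAX_BYTES or time.monotonic() - start > BUDGET_S:
                    return None  # oversized or slow-walked: skip the page
    except requests.RequestException:
        return None
    # a real pipeline would render JS here, charged against the same budget
    return body
```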
That’s crazy. Where do you live, roughly? In Germany and in the U.S. I don’t see any WEP stuff.
And I can’t even imagine 802.11b/g being considered to “work” with the modern internet. One mid-bitrate 1080p stream would overwhelm it.
Fair. But I haven’t seen any anti-AI-scraper tarpits that do that. The ones I’ve seen mostly just pipe 10 MB of /dev/urandom out there.
Also, I assume the programmers working at AI companies are not literally mentally deficient. They would certainly add .timeout(10) or whatever to their scrapers, and they probably have something more dynamic than that.
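“More dynamic” could be as simple as this (pure speculation on my part; the class and constants are hypothetical): keep a per-host moving average of response times and derive each deadline from it.

```python
# Per-host adaptive timeout: hosts that answer fast get tight deadlines.
from collections import defaultdict

class AdaptiveTimeout:
    def __init__(self, floor: float = 2.0, ceiling: float = 30.0, k: float = 4.0):
        self.ema = defaultdict(lambda: 5.0)  # optimistic 5 s starting guess
        self.floor, self.ceiling, self.k = floor, ceiling, k

    def observe(self, host: str, seconds: float) -> None:
        # exponential moving average of how long this host usually takes
        self.ema[host] = 0.8 * self.ema[host] + 0.2 * seconds

    def timeout_for(self, host: str) -> float:
        # allow k times the typical response time, within hard bounds
        return min(self.ceiling, max(self.floor, self.k * self.ema[host]))
```

Call observe() after every fetch; the fixed ceiling stops a tarpit from inflating its own average.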
Helix + the appropriate set of LSPs.
It’s like Neovim without the need to manage plugins. That, and it uses select -> action instead of Vim-style action -> select, which makes more sense to me: in Vim you type d then w to delete a word, while in Helix you select the word with w, see exactly what’s selected, then hit d.