Like others have said already, bots could likely learn to play those easily ... but I'm more concerned about people with disabilities or illnesses that would make playing these games hard, painful, or even impossible. Someone who has Parkinson's or arthritis, for example, might be able to click a big square in an image to solve a captcha, but might have trouble fine-tuning their movements fast enough to play a minigame that effectively locks them out of the community if they fail, especially if there is a timer involved.
Training an AI to play Snake or other simple games is not hard. Making it stop at a specific score might make it slightly harder, but not much. Then you just need to read the text from the screen, which is trivial. No, it's not hard for bots to get past. It might slow actual humans more than bots.
It's definitely trivial for an AI to solve the "game" or task, I think an interesting question would be whether you could filter them by checking how efficiently they do so.
I'm thinking something like giving two consecutive math tasks: first you give e.g. 1+1, then you give something like 11 + 7. While nearly all people would spend a small but detectable extra amount of time on the "harder" problem, an AI would have to be trained on "what do humans perceive as the harder problem" in order to be undetectable. That is, even training the AI to have a human-like delay in responding isn't enough; you would have to train it to have a relatively longer delay on "harder" problems.
Another could be:
- Sort the words (ajax, zebra) alphabetically
- Sort the words (analogous, analogy) alphabetically
where the human would spend more time on the second. Do you think such an approach would be feasible, or is there a very good, immediate reason it isn't a common approach already?
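The check described above could be sketched roughly like this. Everything here is a hypothetical illustration: the `MIN_RATIO` and `MAX_TIME` thresholds are made-up values that would need calibration against real human response times.

```python
import time

# Hypothetical thresholds -- real values would need calibrating on human data.
MIN_RATIO = 1.2   # humans should take noticeably longer on the "hard" task
MAX_TIME = 30.0   # give up if either answer takes implausibly long

def timed_answer(prompt):
    """Ask a question and return (answer, seconds taken)."""
    start = time.monotonic()
    answer = input(prompt)
    return answer, time.monotonic() - start

def looks_human(easy_secs, hard_secs):
    """Heuristic: a human should be measurably slower on the harder task,
    but not absurdly slow on either one."""
    if easy_secs > MAX_TIME or hard_secs > MAX_TIME:
        return False
    return hard_secs / easy_secs >= MIN_RATIO
```

The interesting part is that the check looks at the *ratio* of the two times rather than the absolute times, so it doesn't matter whether a given person is fast or slow overall.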
I know a lot of sites now use browser fingerprinting and the like in order to determine how likely a user is to be a bot. The modern web tracks a lot of information about users, and all of that can be used to gauge how 'human' the user is, though this does raise some other concerns. A sufficiently stalkerish site already knows if you're human or not.
This CGP Grey video is great, and covers how many captchas are often used to train the bots. https://www.youtube.com/watch?v=R9OHn5ZF4Uo
With that idea, you (the captcha maker) would also have to write some code that computes how long humans should take to do a task (so that you can time the user and compare that with what your code spits out). Whatever code you write, the bot makers could eventually figure out what you wrote, and copy that.
To put it another way, when you say "humans would spend more time on the second task" with your two examples, you would have to write specific rules about how long humans would take, so that your captcha can enforce those rules. But then the bot makers could use trial and error to figure out what your rules were and then write code that waits exactly as long as you're expecting.
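To make the attack concrete: once the bot maker has guessed your timing rule, mimicking a human is one `sleep` call. The formula below (a base delay plus a per-difficulty increment plus noise) is purely illustrative, not any real captcha's rule.

```python
import random
import time

def expected_delay(difficulty, base=1.5, per_unit=0.8):
    """The bot maker's reverse-engineered guess at how long the captcha
    expects a human to spend, as a function of task difficulty."""
    return base + per_unit * difficulty

def humanlike_delay(difficulty, jitter=0.3):
    """Sleep for the expected time plus some noise before answering."""
    time.sleep(expected_delay(difficulty) + random.uniform(-jitter, jitter))
```

Whatever shape the real rule has, trial and error against the live captcha eventually recovers good-enough values for `base` and `per_unit`.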
It's true that a bot can be specialised to solve it, but i feel that is the case no matter what you do.
To me the appeal of this approach is that it is very simple for a human to make the rules (e.g. "numbers with two digits are harder to add than numbers with one digit", or "the more leading letters two words have in common, the harder they are to sort"), but for a bot to figure out the rules by trial and error (while answering at human-like speed) will take time. So the set of questions can be changed quite often at low cost, making it less feasible to re-train the bot every time.
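The two example rules above are cheap to state in code, which is the point: generating fresh question sets is easy for the captcha maker. A sketch, with made-up scoring functions:

```python
def add_difficulty(a, b):
    """Digit count as a rough proxy for addition difficulty:
    11 + 7 is 'harder' than 1 + 1."""
    return len(str(a)) + len(str(b))

def sort_difficulty(w1, w2):
    """Length of the common prefix as a proxy for sorting difficulty:
    'analogous'/'analogy' share six letters, 'ajax'/'zebra' share none."""
    n = 0
    for c1, c2 in zip(w1, w2):
        if c1 != c2:
            break
        n += 1
    return n
```

Swapping in a new rule (word length, vowel count, carry operations in the addition, ...) is a one-line change for the defender, while the attacker has to rediscover it from the outside.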
Another alternative could be to only give questions that are trivial for a bot but annoyingly difficult for a human, and let them through if they press "reset captcha" a couple of times, though some people might find that annoying.
But note that humans also take a while to learn these games/rules, and each version of these rules is probably going to accidentally lock people out (and those people will probably get angry). There's a nonzero cost to making people do new things, even when those things are a net positive (think about a favorite game that had a UI patch or some such).
I wonder if you can detect whether the player is a bot regardless. Most captchas also double as ML training data, if I remember correctly.
There are two issue posts on the Lemmy GitHub about the captcha options they considered. It's an interesting read. I had no idea there were so many types, or that embedded options even existed; I thought all captchas were third-party and most were Google's, but I was wrong. Still, there are recent Lemmy posts by the devs basically saying the only option to effectively control the bots is requiring a valid email for account creation.
With AI capabilities now, surely it's pretty easy for an AI to follow a set of instructions like: create an email, check the email, click the link in the email, etc. Is that correct? Or put another way: why would email verification stump bots so consistently if they can be trained to create email accounts and go through the process?
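For context on what "email verification" usually means server-side: the site doesn't need to store anything per signup if it signs the address and checks the signature when the link is clicked. This is a generic sketch of that pattern, not Lemmy's actual implementation; the secret and function names are made up.

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this lives in config, not source.
SECRET = b"server-side-secret"

def make_token(email):
    """Sign the address so the link in the email can be verified statelessly."""
    return hmac.new(SECRET, email.encode(), hashlib.sha256).hexdigest()

def verify_token(email, token):
    """Constant-time check that the clicked link matches the signed address."""
    return hmac.compare_digest(make_token(email), token)
```

The point in the thread stands either way: none of this is hard for a bot to automate, it just raises the cost per account, since each signup needs a working mailbox somewhere.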
I think a reverse Turing test is much harder for a computer to fake. Stopping general bots is not hard. Stopping bots written specifically for the interface is hard.
We could have a "report bot" function for astroturfing and advertising; then the admin can look at the post/vote history and flag the account if it's bad. Maybe some of it can be automated, e.g. posting 24 hours a day, or many individual users reporting it.
Rather than kicking the account, add a read only flag “likely spam”, and then users can turn off visibility of all of these accounts in their preferences.
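The two automatic signals mentioned above (posting around the clock, and many independent reports) are simple to compute. A sketch, where both thresholds are arbitrary placeholders an admin would tune:

```python
def active_hours(post_hours):
    """Number of distinct hours-of-day (0-23) the account has posted in."""
    return len(set(post_hours))

def likely_spam(post_hours, report_count, hour_threshold=20, report_threshold=5):
    """Hypothetical heuristic: flag accounts that post at nearly every hour
    of the day, or that many users have independently reported."""
    return active_hours(post_hours) >= hour_threshold or report_count >= report_threshold
```

An account flagged this way would just get the read-only "likely spam" marker, so a false positive is cheap to undo compared to a ban.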
15 seconds to log into my bank account! Nah.
This would only stop humans, bots could do it easily