this post was submitted on 23 Aug 2023
83 points (96.6% liked)
Programming
you are viewing a single comment's thread
With very few exceptions, yes. There should be no restrictions on characters used/length of password (within reason) if you're storing passwords correctly.
And if a site does have such restrictions, it could be an indication that they store passwords in plaintext rather than hashed.
A very high max of something like 500 characters, just to make sure you don't get DoSed by folks hitting your endpoint with huge packets of data, is about the most I would expect in terms of length restrictions. I'm not a security expert or anything, though.
That's a misunderstanding of DDoS. Zero-byte packets are actually worse than large packets.
Which is why most DDoS (at least historically) used extremely slow zero-byte requests until the server throttled or crashed under the sheer number of requests.
E: Consider this. Are you more likely to throttle a bandwidth of terabytes/petabytes with a couple million 1 GB requests, or break it entirely by sending >4294967295 zero-byte requests that effectively never stop being requested from the server?
It depends on what the DoS is targeting. If hashing is done with an expensive hash function, you can absolutely cause a lot of resource usage (CPU or memory, depending on the hash) by sending long passwords. That said, this likely isn't a huge concern, because only the first round needs to process the whole submitted data; the later rounds only work on the previous round's fixed-size output.
Simple empty requests or connection opening attempts are likely to be stopped by the edge services such as a CDN and fleet of caches which are often over-provisioned. A targeted DoS attack may find more success by crafting requests that make it through this layer and hit something that isn't so overprovisioned.
So yes, many DoS attacks are request or bandwidth floods, but this is because they are generic attacks that work on many targets. That doesn't mean all DoS attacks work this way. The best attacks target specific weaknesses in the target rather than relying on pure brute-force floods.
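The point above about only the first round processing the full input can be sketched with a toy iterated hash (illustrative only; real password hashes like bcrypt or PBKDF2 use more elaborate constructions):

```python
import hashlib

# Toy iterated hash: round 1 is the only round whose cost grows with the
# input length; every later round works on a fixed-size 32-byte digest.
def iterated_sha256(data: bytes, rounds: int) -> bytes:
    digest = hashlib.sha256(data).digest()        # cost proportional to len(data)
    for _ in range(rounds - 1):
        digest = hashlib.sha256(digest).digest()  # constant cost per round
    return digest
```

So a huge password inflates the cost of exactly one round, which bounds how much damage a long input can do to an iterated design.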
Well, to be fair, if they're hashing server-side, they were doomed to begin with.
But yeah, there's a lot of ways to DDoS, and so many tools that just make it a 1 button click.
Who isn't hashing server-side? That just turns the hash into the password which negates a lot of the benefits. (You can do split hashing but that doesn't prevent the need to hash server-side.)
Hashing on the client side is both more private and more secure. All the user ever submits is a combined hash (auth/pubkey) of their username + password.
If the server has that hash? Check the DB if it requires 2FA, and if the user sent a challenge response. If not, fail the login.
Registering is pretty much the same. User submits hash, server checks DB against it, fail if exists.
Edit: If data is also encrypted properly in the DB, it doesn't even matter if the entire DB is completely public, leaked, or secured on their own servers.
This means that the submitted hash is effectively a password. You get a minor benefit in that it obscures the original password in case it contains sensitive info or is reused. But the DB is now storing the hash password in plain text. This means that if the DB leaks anyone can just log in by sending the hash.
If you want to do something like this you would need some sort of challenge to prevent replay attacks.
This scheme would also benefit from some salt. Although the included username does act as a form of weak salt.
Per your edit, the DB being "encrypted properly" just means "hashing server side". There's little benefit (though not necessarily zero) to encrypting the entire database, since the key has to live in plaintext somewhere on the same system. It's also making the slowest part of most systems even slower.
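The replay problem mentioned above is usually handled with a nonce-based challenge. This is a minimal sketch, not a full protocol; the credential derivation mirrors the username+password hash from the scheme above, and all names are illustrative:

```python
import hashlib, hmac, os

# Credential as in the scheme above: a hash of username + password.
stored_credential = hashlib.sha256(b"alice" + b"hunter2").digest()

# Server issues a fresh random nonce for each login attempt...
nonce = os.urandom(16)

# ...and the client proves knowledge of the credential by HMACing the nonce
# with it, so the value on the wire is never reusable for a later login.
response = hmac.new(stored_credential, nonce, hashlib.sha256).digest()

# Server recomputes and compares in constant time.
expected = hmac.new(stored_credential, nonce, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))
```

Note this only stops replay of sniffed traffic; as pointed out above, anyone who steals the stored credential itself can still log in, which is why the server still needs its own slow hash on top.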
Very true and a good explanation of DDoS but I was talking about DoS generally, not specifically DDoS. In my (admittedly pretty limited) experience, a single mega request which is not blocked or rejected by your server can cause it to choke. If you don't have sufficient redundancy or if you get several of these requests coming through it can take down some of your backend services.
It's a good point though, there are lots of different attack vectors each fun in their own way that you need to watch out for.
Right, that's why I put the "within reason" in my comment. You still need to guard against malicious inputs so ultimately there is some max length limit, but it should be way beyond what a reasonable password length would be.
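A "within reason" limit can be enforced before any expensive work happens. A minimal sketch, where `MAX_PASSWORD_BYTES` and the iteration count are illustrative values, and PBKDF2 stands in for whatever slow hash the server actually uses:

```python
import hashlib

# Illustrative cap: generous enough for any real passphrase, small enough
# to bound the work an attacker can force per request.
MAX_PASSWORD_BYTES = 512

def safe_hash(password: str, salt: bytes) -> bytes:
    data = password.encode("utf-8")
    if len(data) > MAX_PASSWORD_BYTES:
        # Reject oversized input before it reaches the expensive hash.
        raise ValueError("password too long")
    return hashlib.pbkdf2_hmac("sha256", data, salt, 100_000)
```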
My password is the bee movie script
The best way to handle passwords IMO, is to have the browser compute a quick hash of the password, and then the server compute the hash of that. That way the "password" that is being sent to the server is always the same length.
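A sketch of that two-layer idea, using only the standard library (function names are illustrative, and PBKDF2 stands in for bcrypt/argon2; the iteration count is illustrative too):

```python
import hashlib, hmac, os

def client_prehash(password: str) -> bytes:
    # Quick hash on the client: whatever the user typed, the server
    # always receives a fixed-length 32-byte value.
    return hashlib.sha256(password.encode("utf-8")).digest()

def server_store(prehash: bytes) -> tuple[bytes, bytes]:
    # Server-side slow, salted hash of the client's digest.
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", prehash, salt, 100_000)
    return salt, stored

def server_verify(prehash: bytes, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", prehash, salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = server_store(client_prehash("the entire bee movie script"))
print(server_verify(client_prehash("the entire bee movie script"), salt, stored))
```

The pre-hash also sidesteps length limits in the server-side algorithm, since the server only ever hashes 32 bytes.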
Underappreciated fact: Bcrypt has a maximum of 72 bytes. It'll truncate passwords longer than that. Remember that UTF8 encoding of special characters can easily take more than one byte.
That said, this is rarely a problem in practice, except for some very long passphrases.
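The byte-vs-character distinction is easy to demonstrate without bcrypt itself; this just shows how quickly a passphrase exceeds 72 bytes, and that UTF-8 byte length and character count differ (the passphrase is an arbitrary example):

```python
# 29-character phrase repeated 3 times = 87 characters, all ASCII,
# so 87 bytes: anything past byte 72 would be silently ignored by bcrypt.
passphrase = "correct horse battery staple " * 3
print(len(passphrase.encode("utf-8")))

# Special characters widen the gap between characters and bytes:
print(len("päßwörd"))                   # 7 characters...
print(len("päßwörd".encode("utf-8")))   # ...but 10 bytes on the wire
```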
Interesting: https://en.wikipedia.org/wiki/Bcrypt#Maximum_password_length
Makes me question if bcrypt deserves to be widely used. Is there really no superior alternative?
Not only that, bcrypt can be run on GPUs and FPGAs, which makes it more prone to brute-forcing attacks.
There are two modern alternatives: scrypt and Argon2. Both require a substantial amount of memory, so GPU and custom-hardware attacks are no longer feasible.
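Of the two, scrypt is even available in Python's standard library (via `hashlib.scrypt`, when Python is built against OpenSSL 1.1+). A minimal sketch; the cost parameters below are illustrative and should be tuned for your hardware:

```python
import hashlib, os

# Memory-hard password hash with scrypt. With n=2**14, r=8 this round
# needs about 16 MiB of memory, which is what defeats GPU/ASIC farms.
salt = os.urandom(16)
key = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1, dklen=32)
print(key.hex())
```

Store the salt alongside the derived key; verification is just re-deriving with the same salt and parameters and comparing.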