Genetic testing giant 23andMe is reportedly turning the blame back on its customers for its recent data breach
(www.businessinsider.com)
I’m still quite on the fence about this one. If you have a weak password that you reuse everywhere, and someone logs into your Gmail account and leaks your private data, is it Google’s fault?
If we take it a step further - if someone hacks your computer because you click on every link imaginable, and then steals your session cookies, which they then use to access such data, is it still the company’s fault for not being able to detect that kind of attack?
Yes, the company could have done more to prevent such an attack, mostly by forcing MFA (any other defense against credential stuffing is easily bypassed via a botnet, unless it’s “always-on CAPTCHA” - and good luck convincing anyone to use that), but the blame is still mostly on users with weak security habits, and in my opinion (as someone who works in cybersecurity), we should focus on blaming them instead of the company.
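One concrete defense a service can deploy short of mandatory MFA is refusing credentials that already appear in public breach dumps, which is exactly what makes stuffing work. A minimal offline sketch, assuming a locally held set of SHA-1 digests (real services typically query the Have I Been Pwned k-anonymity range API instead of keeping a corpus):

```python
import hashlib

def is_breached(password: str, breached_sha1_hexes: set[str]) -> bool:
    """True if this password's SHA-1 digest appears in a known breach corpus.

    Breach corpora are commonly distributed as SHA-1 hex digests, one per
    line; checking at signup/login time stops reused leaked credentials
    before a stuffing botnet can replay them.
    """
    digest = hashlib.sha1(password.encode()).hexdigest()
    return digest in breached_sha1_hexes
```

A service would run this on every password change and either reject the password or force a step-up (reset, MFA enrollment) when it matches.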
Not because I want to defend the company or anything - they have definitely done some things wrong (though nowhere near as wrong as the users) - but because of security awareness.
Shifting the blame solely onto the company for “not doing enough” only lets the users - who, through their poor security habits, caused the private data of millions to be leaked - get away with it, and lets them go on thinking “They hacked the stupid company, it’s not my fault.” No. It’s their fault. Get a password manager FFS.
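The whole point of a password manager is that every site gets a unique, unguessable password, so one breach can’t cascade into others. Generating one takes a few lines of standard library Python (this is just a sketch of what managers do internally, not a recommendation to roll your own):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation.

    secrets (unlike random) draws from the OS CSPRNG, so the result is
    suitable for security-sensitive use.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```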
Headlines like “A company was breached and leaked the private data of 7 000 000 users” will probably go mostly unnoticed. A headline like “14 000 people with weak passwords caused the leak of 7 000 000 users’ private data” may at least spread some awareness.
As someone else who dabbles in cybersecurity - hard disagree. If developers and alleged IT professionals got their shit together, most data breaches wouldn't be a significant problem. Looking at the OWASP top ten, every single item on that list can be boiled down to either 1) negligence, or 2) industry professionals negotiating with terrorist business leaders who prioritize profits over user safety.
Proper engineers have their standards, laws, and ethical principles written in blood. They are much less willing to bend safety requirements than the typical jr. developer who sees no problem storing passwords as unsalted MD5 hashes.
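For contrast with the unsalted-MD5 anti-pattern, this is roughly what doing it properly looks like: a per-user random salt plus a memory-hard KDF. A sketch using `hashlib.scrypt` from the standard library; the cost parameters here are illustrative, not a tuning recommendation:

```python
import hashlib
import hmac
import os

# Illustrative scrypt cost parameters (CPU/memory cost, block size, parallelism)
N, R, P = 2**14, 8, 1

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the salt is unique per user."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=N, r=R, p=P)
    return hmac.compare_digest(candidate, digest)
```

The salt defeats precomputed rainbow tables, and the memory-hard KDF makes brute-forcing each individual hash expensive - exactly the two properties unsalted MD5 lacks.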
I get what you’re getting at, and I agree with it - a world where every product followed security best practices, instead of prioritizing user convenience in places where it’s definitely not worth it in the long term, would be awesome (and speaking from the experience of someone who does red team engagements, we’re slowly getting there - lately the engagements have been getting more and more difficult, at least for the larger companies).
But since we’re definitely not there yet, and probably never will be for the vast majority of websites and services, I think it’s important to educate users and hammer proper security practices into them, even outside their safe environment. Pragmatically speaking, in a case like this, where you can illustrate the impact a personal lack of security practices can have, I think it’s better to focus on the users’ fault instead of the company’s. Just for the sake of security awareness (and that is my main goal), because I still think that headlines about how “14 000 people caused the private data of millions to be leaked”, if framed properly, will have a better overall impact than just another “company is dumb, they had a breach”.
Also, I think that going with “let’s force users into an environment that is really annoying to use” by policies alone isn’t really what you want, because the users will only get more and more frustrated that they have to use stupid smart cards, remember a password that’s basically a sentence and change it every month, or take the time to sign emails and commits, and they will start taking shortcuts. The ideal outcome would be to convince them that this is what they want to do, and that they really understand the importance of and reasoning behind all the uncomfortable security annoyances. And this story could be, IMO, a perfect lesson in security awareness, if it weren’t being turned into “company got breached”.
But just as with what you were saying about what the company should be doing but isn’t, this point of view unfortunately has the same problem - we’ll probably never get there, so you can’t rely on other users being as security-aware as you are, and thus you need the company to force it onto them. And vice versa: many companies won’t do that, so you also need to rely on your own security practices. But in this case, I think it serves as a better lesson in personal security than in corporate security, because from what I’ve read, the company didn’t really do that much wrong as far as security is concerned - their only mistake was not forcing users to use MFA. And tbh, I don’t think we even include “users are not forced to use MFA” in pentest reports, although that may have changed - I haven’t done a regular pentest in quite some time (but it’s actually a great point, and I’ll make sure it’s in our findings database if it isn’t there already).