[–] [email protected] 22 points 2 days ago* (last edited 2 days ago) (3 children)

Shah and Curry's research that led them to the discovery of Subaru's vulnerabilities began when they found that Curry's mother's Starlink app connected to the domain SubaruCS.com, which they realized was an administrative domain for employees. Scouring that site for security flaws, they found that they could reset employees' passwords simply by guessing their email address, which gave them the ability to take over any employee's account whose email they could find. The password reset functionality did ask for answers to two security questions, but they found that those answers were checked with code that ran locally in a user's browser, not on Subaru's server, allowing the safeguard to be easily bypassed. “There were really multiple systemic failures that led to this,” Shah says.
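To make that concrete, here's a minimal sketch of the anti-pattern the article describes; every name, route, and data structure is hypothetical (this is not Subaru's actual code). The security-question check lives only in browser JavaScript, so the server handler resets the password unconditionally, and an attacker who POSTs to the endpoint directly never encounters the questions at all.

```python
# Hypothetical sketch of the anti-pattern described above; not Subaru's code.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Toy in-memory stand-in for the real employee accounts.
USERS = {"employee@example.com": {"password": "old", "answers": ("a1", "a2")}}

@app.route("/admin/reset-password", methods=["POST"])
def reset_password():
    data = request.get_json()
    # The browser JS compared the typed security answers against the stored
    # ones and only submitted this request on a match. But an attacker never
    # runs that JS; a direct POST with an email and new password succeeds.
    USERS[data["email"]]["password"] = data["new_password"]  # no server-side check
    return jsonify({"status": "ok"})
```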

Yeah, this kinda bothers me with computer security in general. So, the above is really poor design, right? But that emerges from the following:

  • Writing secure code is hard. Writing bug-free code in general is hard; we haven't even solved that one yet. And specifically for security bugs, you have someone down the line potentially actively trying to exploit the code.

  • It's often not very immediately visible to anyone how secure code actually is. Not to customers, not to people at the company using the code, and sometimes not even to the code's author. It's not even very easy to quantify security -- I mean, there are attempts to do things like security certification of products, but...they're all kind of limited.

  • Cost -- and thus limitations on time expended and the knowledge base of whoever you have working on the thing -- is always going to be present. That's very much going to be visible to the company. Insecure code is cheaper to write than secure code.

In general, if you can't evaluate something, it's probably not going to be very good, because it won't be taken into account in purchasing decisions. If a consumer buys a car, they can realistically evaluate its 0-60 time or its trunk space. But they cannot realistically evaluate how well their data is protected. And it's kinda hard to evaluate how secure code is. Even if you look at a history of exploits (software package X has had more reported security issues than software package Y), different code gets different levels of scrutiny, so raw counts of reported issues don't tell you much.

You can disincentivize insecure code via market regulation with fines. But that's got its own set of issues, like encouraging companies not to report actual problems where they can get away with it. And it's not totally clear to me that companies are really able to effectively evaluate the security of the code they have.

And I've not been getting more comfortable with this over time, as compromises have gotten worse and worse.

thinks

Maybe do something like we have with whistleblower rewards.

https://www.whistleblowers.org/whistleblower-protections-and-rewards/

  • The False Claims Act, which requires payment to whistleblowers of between 15 and 30 percent of the government’s monetary sanctions collected if they assist with prosecution of fraud in connection with government contracting and other government programs;
  • The Dodd-Frank Act, which requires payment to whistleblowers of between 10 percent and 30 percent of monetary sanctions collected if they assist with prosecution of securities and commodities fraud; and
  • The IRS whistleblower law, which requires payment to whistleblowers of 15 to 30 percent of monetary sanctions collected if they assist with prosecution of tax fraud.

So, okay. Say we set something up where fines exist for security flaws that expose certain data or provide access to certain controls, and white hat hackers get a mandatory N percent of that fine if they report the flaw to the appropriate government agency. That creates an incentive to have an unaffiliated third party looking for problems. That's a more antagonistic relationship with the target than currently exists -- today, we just expect white hats to report bugs for reputation or maybe, at companies that have one, for a reporting reward. This shifts things so that you have a bunch of people effectively working for the government. But it's also a market-based approach -- the government is just setting incentives.
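To put rough numbers on that incentive (everything here is invented: the proposal above deliberately leaves N open, and the 15 to 30 percent band is just borrowed from the False Claims Act figures quoted earlier):

```python
# Illustrative arithmetic only; the fine amount and percentages are made up.
def whistleblower_reward(fine: float, share: float) -> float:
    """Mandatory payout to the reporting white hat, as a fraction of the fine."""
    if not 0.0 <= share <= 1.0:
        raise ValueError("share must be a fraction between 0 and 1")
    return fine * share

fine = 5_000_000  # hypothetical fine for an exposed-data flaw
print(whistleblower_reward(fine, 0.15))  # 750000.0  at a 15% floor
print(whistleblower_reward(fine, 0.30))  # 1500000.0 at a 30% ceiling
```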

Because otherwise, the incentives are set so that the company involved doesn't care all that much, and the hackers out there go do black hat stuff instead: things like ransomware and espionage.

I'd imagine that an insurance market covering fines of this sort would also show up, with insurers developing and mandating their own best practices for customers.

The status quo for computer security is just horrendous, and as more data is logged and computers become increasingly present everywhere, the issue is only going to get worse. If not this, then something else really does need to change.

[–] [email protected] 1 points 16 hours ago

Yeah, this kinda bothers me with computer security in general. So, the above is really poor design, right? But that emerges from the following:

  • Writing secure code is hard. Writing bug-free code in general is hard; we haven't even solved that one yet. And specifically for security bugs, you have someone down the line potentially actively trying to exploit the code.
  • It's often not very immediately visible to anyone how secure code actually is. Not to customers, not to people at the company using the code, and sometimes not even to the code's author. It's not even very easy to quantify security – I mean, there are attempts to do things like security certification of products, but…they're all kind of limited.
  • Cost – and thus limitations on time expended and the knowledge base of whoever you have working on the thing – is always going to be present. That’s very much going to be visible to the company. Insecure code is cheaper to write than secure code.

There is nothing wrong with your three points in general. But I think that in this particular case there are some very visible weak points before you even get into the source code:

  • You should not have connections from the cars to the customer support domain at all. There should be a clear delineation between functions, and a single (redundant if necessary) connection gateway for the cars. This is to keep the attack surface small.

  • Authentication always happens server side; passwords and reset-question answers are the same in that regard. Putting that check in client-side code was the wrong design from the start.

  • Resetting a password should involve verifying continued access to the associated email account. (A rough sketch of these last two points follows below.)
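Here's that sketch: standard library only, all names hypothetical, and one way to do it rather than the only one. The answer comparison happens on the server, and the reset completes only after a one-time token mailed to the registered address is presented back.

```python
# Hypothetical counter-sketch: checks on the server, reset gated on email.
import hashlib
import hmac
import secrets

RESET_TOKENS: dict[str, str] = {}  # email -> outstanding one-time token

def hash_answer(answer: str) -> bytes:
    # Store only a digest of the normalized answer, never the plaintext.
    return hashlib.sha256(answer.strip().lower().encode()).digest()

def answers_match(stored_digest: bytes, given: str) -> bool:
    # Server-side, constant-time comparison; never done in the browser.
    return hmac.compare_digest(stored_digest, hash_answer(given))

def begin_reset(email: str) -> None:
    # Issue a one-time token and mail it to the registered address,
    # which proves continued access to the mailbox.
    token = secrets.token_urlsafe(32)
    RESET_TOKENS[email] = token
    send_email(email, f"Password reset token: {token}")  # stubbed below

def complete_reset(email: str, token: str, new_password: str) -> bool:
    # The reset succeeds only if the emailed token comes back.
    expected = RESET_TOKENS.pop(email, None)
    if expected is None or not hmac.compare_digest(expected, token):
        return False
    set_password(email, new_password)  # stubbed below
    return True

def send_email(to: str, body: str) -> None:
    print(f"[mail to {to}] {body}")  # stand-in for a real mailer

def set_password(email: str, new_password: str) -> None:
    print(f"[db] password updated for {email}")  # stand-in for persistence
```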

So it seems to me that the fundamental design here was not done securely, well before we get to the hard parts of avoiding writing bugs or finding the bugs that were written.

This could have something to do with existing structures. E.g., the CS platform was an external product and someone bolted the password reset onto it later in a bad way. Or the CS department needed access to details about cars during support calls, and instead of going through the service that usually communicates with the cars, it was simpler to implement a separate direct connection to them. (I'm just guessing, of course.)

Maybe, besides cost, there is also the issue that nobody in the organization has overall responsibility, or the power to enforce a sensible design, for the interactions between the various systems.

[–] simplejack 4 points 2 days ago

The thing with bullet point 1 is that finding exploits is becoming MUCH easier with LLMs. That said, it's now an arms race: can you deploy AI to pressure-test your systems and find the gaps before the bad actors do the same?

[–] Eheran 4 points 2 days ago

The same thing happens in science: verifying or reproducing results needs to be incentivised.