Thinking about cybersecurity: does this kind of openness mean that bad actors could now design malicious behaviour into the hardware, and no scanner software would ever be able to detect it, because it is only a software scanner?
That sounds like a lot of extra work, when current CPU manufacturers have already built that hidden space in. Intel Management Engine is a great example.
Security through obscurity is a bad practice.
It's better to be transparent and let everyone analyze your design; the more eyes on it, the better. Even the proprietary and obscured Intel CPUs have had security vulnerabilities in the past.
I don't think it's so much "security by obscurity" as it's an issue of a much lower bar for chip production. Intentional back doors or malware represent a huge risk for a product line, so manufacturers won't put them in without someone like the NSA leaning on them. It's a simple risk/benefit calculation.
But the risk is much lower if you can snag a processor design off the 'net, make your modifications, send it off to a fab and sell it under a fly-by-night operation. If it's ever discovered, you take the money and run.
I don't see it as irrational. You're thinking about it the wrong way round.
Manufacturers buy chips from proven sources, where the chip can be traced back to the fab that made it. The entire system of trust is built on the assumption that the chip designers and fabs are trustworthy and that the shady stuff happens elsewhere in the supply chain.
When the designers can't be trusted, it breaks everything. Up until now it hasn't been a problem except in extremely sensitive areas like military equipment - only governments can force a company to risk everything by compromising their own products. But take the risk away - make it cheap enough to design new microcontrollers - and what's to stop a chip designer from taking money from (for example) the Russian mafia? IoT is huge, everywhere, and RISC-V is ideally suited for it.
Do you mean that someone could take the design, add a hardware vulnerability, and sell it? Sure, but that doesn't require RISC-V; there are already vulnerable CPUs on the market. People have found such vulnerabilities in reputable Intel CPUs, for example (look up Spectre).
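For anyone who hasn't looked it up: Spectre v1 boils down to a mispredicted bounds check being executed speculatively. A rough sketch of the victim pattern, adapted from the public write-ups (names are illustrative, not any vendor's actual code):

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];
size_t  array1_size = 16;

void victim_function(size_t x) {
    if (x < array1_size) {              /* branch the CPU may mispredict */
        /* Executed speculatively even when x is out of bounds: the
           out-of-bounds byte selects which cache line of array2 gets
           loaded, and an attacker can later recover its value by timing
           accesses to array2 (a cache side channel). */
        uint8_t secret = array1[x];
        volatile uint8_t tmp = array2[secret * 4096];
        (void)tmp;
    }
}
```

The point being: nothing in that code is a deliberate back door, yet the hardware leaks data anyway, open ISA or not.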
Dell iDRAC comes to mind as well.
iDRAC is specifically designed for remote management of servers. Calling it a back door is silly when it's more of a front door: it's how Dell intends for you to manage the server.
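To underline the "front door" point: iDRAC exposes a documented Redfish REST API for exactly this kind of management. A minimal sketch of what routine access looks like, assuming libcurl and placeholder hostname/credentials (not Dell's official tooling):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Redfish system resource on the iDRAC's management interface (placeholder host) */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://idrac.example.local/redfish/v1/Systems/System.Embedded.1");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "root:changeme");   /* placeholder credentials */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);         /* iDRACs often ship self-signed certs */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 0L);

    CURLcode res = curl_easy_perform(curl);                     /* dumps the JSON system inventory to stdout */
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}
```

It's a designed-in, credentialed management path on a separate network interface, which is the whole argument for calling it a front door rather than a back door.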
That's the same train of thought I had when telnet was declared a back door in Huawei devices.
https://www.theregister.com/2019/04/30/huawei_enterprise_router_backdoor_is_telnet/
During the heyday I passed the HCNA-RS, and the first thing we were taught was to use telnet as a means to enable SSH, then log back in and disable telnet.
Moral of the story: do not underestimate a nation state's use of global tech media to effect a global drop of a product or manufacturer from the market.
LUL. So you're right, but one of the horror stories I tell around campfires is how many folks don't know about that front door.
So how about we agree to “surprise feature” for iDRAC? And, yes yes, I can feel the “they shouldn’t be admins” coming.
It has to be enabled, right? So if someone enabling iDRAC doesn't know that it exists...
The person enabling it isn’t always still at the company.
MFW a so-called cyber security researcher learns about IPMI
Don't downvote this person, they're just asking a question.