this post was submitted on 23 Jun 2024
88 points (66.3% liked)
Technology
Putting the claim instead of the reality in the headline is journalistic malpractice. 2x for free is still pretty great tho.
Just finished the article, it's not for free at all. Chips need to be designed to use it. I'm skeptical again. There's no point IMO. Nobody wants to put the R&D into massively parallel CPUs when they can put that effort into GPUs.
Not every problem is amenable to GPUs. If it has a lot of branching, or needs to fetch back and forth from memory a lot, GPUs don't help.
Now, does this thing have exactly the same limitations? I'm guessing yes, but it's all too vague to know for sure. It sounds like they're doing what superscalar CPUs have done for a while: on x86 that goes back to the original Pentium in 1993, and Cray's machines were doing it in the '60s. What are they doing to supercharge this idea?
Does this avoid some of the security problems that have popped up with superscalar archs? For example, when kernel code at ring 0 runs alongside userspace code on the same core, speculative execution can end up leaking ring 0 data to userspace.