this post was submitted on 26 Jul 2024
311 points (98.1% liked)

Technology

you are viewing a single comment's thread
[–] BeatTakeshi 8 points 4 months ago (2 children)

What did they mess up from 12th gen?

[–] Cort 14 points 4 months ago (2 children)

13th & 14th Gen were just 12th Gen with higher voltages, higher clock speeds, and longer boost time limits. It seems like they just overdid it.

[–] CatZoomies 7 points 4 months ago

Holy crap I barely escaped. I needed an upgrade years ago and settled on the i7-12700k. After I ride this chip out I’m switching to AMD.

I really hope customers get justice in this debacle. We need a lawsuit now.

[–] [email protected] 4 points 4 months ago

Money grab because they didn't have anything new to actually bring to the table this time.

[–] PM_Your_Nudes_Please 6 points 4 months ago* (last edited 4 months ago) (1 children)

13th and 14th gen are essentially the same hardware as 12th gen, but with boosted clock speeds and power requirements. Basically, Intel is struggling to develop new hardware, as they're beginning to be limited by things like atom size and the speed of light across the width of the chip. So instead of developing new hardware, they just slapped new settings onto the 12th gen chips and called them a new generation.

But they made the rookie mistake of not adequately accounting for heat dissipation (an easy mistake to make when overclocking), and chips are burning out.
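For a rough sense of why bumping voltage and clocks hits heat so hard: dynamic switching power in CMOS scales roughly with C·V²·f, so the voltage increase counts twice. A minimal sketch of that relationship (the ratios here are illustrative numbers, not Intel's actual figures):

```python
def relative_dynamic_power(v_ratio, f_ratio):
    """Dynamic power scales roughly with C * V^2 * f, so the relative
    change is v_ratio**2 * f_ratio (capacitance C cancels out)."""
    return v_ratio ** 2 * f_ratio

# Hypothetical example: +10% voltage and +15% clock
# yields roughly 39% more dynamic power to dissipate.
print(relative_dynamic_power(1.10, 1.15))  # → 1.3915...
```

That compounding is why a "same silicon, more voltage" generation leans so heavily on the cooling and power-delivery margins of the original design.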

[–] [email protected] 5 points 4 months ago* (last edited 4 months ago) (1 children)

I don't think that the voltage issue is simply heat, not unless it is some kind of extremely-localized or extremely-short-in-time issue internal to the chip. I hit the problem with a very hefty water cooler that didn't let the attached processor ever get very warm, at least as the processor reported temperatures.

Wendell, at Level1Techs, who did an earlier video with Steve Burke talking about this, looked over a dataset of hundreds of machines. They were running with conservative speed settings, in a datacenter where all temperatures were being logged, and he said that the hottest he ever saw on any hotspot on any processor in his dataset was, IIRC, 85 degrees Celsius, and normally they were well below that. He saw about a 50% failure rate.

Given that we hit the problem on our well-cooled CPUs: if the CPU simply getting hot were the cause, I'd have expected people running them in hotter environments to have slammed into the thing immediately. Ditto for Intel -- I'd guess (I'd hope) that part of their QA cycle involves running the processors in an industrial oven, as a way to simulate more-serious conditions. Those things are supposed to be fine at 100 degrees Celsius, at which point they throttle themselves.
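On Linux, the package and per-core temperatures the processor reports are exposed through the hwmon sysfs tree (e.g. the coretemp driver) as millidegree-Celsius values, so you can log them yourself to check whether you're anywhere near the throttle point. A minimal sketch (paths follow the standard hwmon layout; availability depends on your kernel and drivers):

```python
import glob
import os

def max_temp_c(millideg_readings):
    """hwmon reports temperatures in millidegrees Celsius; return the
    hottest reading converted to plain degrees Celsius."""
    return max(millideg_readings) / 1000.0

def read_hwmon_temps(root="/sys/class/hwmon"):
    """Collect every temp*_input value under the hwmon sysfs tree.
    Returns raw millidegree integers; empty list if none are readable."""
    readings = []
    for path in glob.glob(os.path.join(root, "hwmon*", "temp*_input")):
        try:
            with open(path) as f:
                readings.append(int(f.read().strip()))
        except (OSError, ValueError):
            pass  # sensor may vanish or be unreadable; skip it
    return readings

if __name__ == "__main__":
    temps = read_hwmon_temps()
    if temps:
        print(f"hottest sensor: {max_temp_c(temps):.1f} °C")
```

Note these are the same sensors the commenters above are relying on: if degradation is driven by a hotspot smaller than any sensor covers, a log like this can look perfectly healthy while damage accumulates.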

[–] trolololol 1 points 4 months ago

It's not about the CPU package getting too hot, it's about a specific set of transistors getting too hot. I think I read they're between the processing units and the cache. The combined size of these transistors is probably around a couple of square millimeters. Unless you etch back the package you can't measure them precisely. And if you etch it, you can't dissipate the heat well enough to run the CPU at maximum load.