this post was submitted on 23 May 2024
79 points (94.4% liked)
Technology
It doesn't really make much sense to go faster than silicon node changes unless there's a lot of architectural optimisation that needs doing. Historically, the refreshes between nodes were largely pointless: small benefits, and preparing them took development effort away from the big changes. It's progress in silicon that matters and brings the performance improvements, and moving to a faster cadence hasn't historically worked out well.
I wonder when AI will be designing its own chips. Or parts of its chip.
AI is already being incorporated into chip design tools from vendors like Synopsys. TechTechPotato has an interesting interview with Aart de Geus that is relevant.
AI is far off from making high-level design improvements, but it can greatly reduce the workload for place and route and other design steps.
The great thing about blanket terms like "AI" is that you can slap it on everything.
Yeah, the lack of a formal definition of what is and is not considered AI definitely muddies the waters when talking about applications and capabilities.