"AI Explorer will utilize next-gen neural processing unit (NPU) hardware to process these machine learning and generative AI experiences locally on the device with low latency."
"The feature is also said to be exclusive to devices powered by Qualcomm's upcoming Snapdragon X series chips,"
TL;DR: You should read the article first.
If you actually read Qualcomm's white paper for their NPU, you'd know OP's concern about Windows resource bloat is still reasonable, at least on the memory side of things.
The memory on their SoC is shared between the CPU, the GPU, and, critically, the NPU. This saves memory bandwidth, since the same data won't have to be copied up to three times. However, it also means that adding more AI bloat into Windows will clog up precious memory that could be used elsewhere.
That SoC, the Snapdragon X Elite, supports eight channels of LPDDR5X, which means the lowest amount of memory we could see is 16 GB, assuming all eight channels are populated with 2 GB memory packages (the smallest size JEDEC allows for LPDDR5X).
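To make that arithmetic explicit, here's a quick sketch; the channel count comes from Qualcomm's spec and the 2 GB package size from the JEDEC claim above, everything else is just illustration:

```python
# Minimum unified memory on an 8-channel LPDDR5X configuration,
# assuming each channel gets the smallest allowed package (2 GB).
channels = 8
min_package_gb = 2  # smallest JEDEC LPDDR5X package density, per the claim above

min_total_gb = channels * min_package_gb
print(f"Minimum unified memory: {min_total_gb} GB")  # -> 16 GB
```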
Is that really going to be enough for everything? In my opinion, no: the GPU will take a chunk, Windows will take a chunk, the NPU will take a chunk, and the storage will try to take a chunk for caching if there's anything left.
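As a rough back-of-envelope for why 16 GB gets tight fast, here's a sketch where every figure is a hypothetical placeholder, not a measured value:

```python
# Hypothetical budget for a 16 GB unified-memory SoC where the CPU,
# GPU, and NPU all draw from one pool. All numbers are illustrative
# guesses, not measurements.
total_gb = 16.0

budget_gb = {
    "Windows + background services": 4.0,
    "GPU (framebuffers, textures)":  2.0,
    "NPU (resident AI models)":      4.0,
    "User applications":             4.0,
}

remaining = total_gb - sum(budget_gb.values())
for consumer, gb in budget_gb.items():
    print(f"{consumer:34s} {gb:4.1f} GB")
print(f"{'Left over for storage caching':34s} {remaining:4.1f} GB")
```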
TL;DR: You should read the white papers instead of the marketing fluff articles.
While that is compelling evidence, I offer a counterpoint:
He owns an air fryer.
The article said "The feature is also said to be exclusive to devices powered by Qualcomm's upcoming Snapdragon X series chips, at least at first.
And even if it requires a PC with at least 16 GB of RAM, those computers will still feel slower than they should.