Can someone quickly explain what this is about?
This is the first time I'm hearing about this, but this is how they describe it on their product page:
But based on the examples they have on GitHub, it sounds like it might be useful for running generic AI compute stuff. I haven't seen any details about what memory it uses, which matters because LLMs in particular require large amounts of fast memory. If it can use all the system RAM it might provide medium-fast inference of decent models, similar to M1/M2 Macs. If it has dedicated RAM it'll probably be even faster but possibly extremely limited in what you can do with it.
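To put rough numbers on that (weights only; the KV cache and activations add more on top), a quick back-of-envelope in Python:

```python
# Rough weight-memory math for running an LLM locally (weights only;
# KV cache and activations are extra).
def weights_gb(params_billion, bytes_per_weight):
    return params_billion * 1e9 * bytes_per_weight / 1e9

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    print(f"{name}: {weights_gb(params, 2):.0f} GB at fp16, "
          f"{weights_gb(params, 0.5):.1f} GB at 4-bit")
```

So a 7B model fits in roughly 14 GB at fp16 (or ~3.5 GB quantized to 4-bit), while a 70B model needs roughly 140 GB / ~35 GB, which is why whether it can reach all of system RAM matters so much.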
Yeah, I get what you mean -- if I can throw 128GB or 256GB of system memory and parallel compute hardware together, that'd enable use of large models, which would let you do some things that can't currently be done other than (a) slowly, on a CPU or (b) with far-more-expensive GPU or GPU-like hardware. Like, you could run a huge model with parallel compute hardware in a middle ground for performance that doesn't exist today.
It doesn't really sound to me like that's the goal, though.
https://www.tomshardware.com/news/amd-demoes-ryzen-ai-at-computex-2023
That sounds like the goal is providing low-power parallel compute capability. I'm guessing stuff like local speech recognition on laptops would be a useful low-power application that could take advantage of parallel compute.
The demo has it doing facial recognition, though I don't really know where there's a lot of demand for doing that with limited power use today.
Facial recognition could be used for better face-unlock login features as well as "identifying" people in photos - not necessarily by name, but saying "these 400 photos out of the 10,000 given contain the same person". And without reliance on any external services.
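As a sketch of that idea, here's roughly what it can look like today with the off-the-shelf face_recognition package, purely as a stand-in for whatever local model an NPU would run (the file names and folder are made up for illustration):

```python
# Find every photo in a folder that appears to contain the same person as a
# reference photo, entirely locally. Assumes the reference photo has exactly
# one detectable face.
import glob
import face_recognition

known = face_recognition.face_encodings(
    face_recognition.load_image_file("person_of_interest.jpg"))[0]

matches = []
for path in glob.glob("photos/*.jpg"):
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        # compare_faces checks whether two encodings are within a distance tolerance
        if face_recognition.compare_faces([known], encoding)[0]:
            matches.append(path)
            break

print(f"{len(matches)} photos appear to contain that person")
```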
Yeah, but does either of those match the aims?
If you have a face unlock, you only rarely need to run it. It's not like you're constantly doing face recognition on a stream of video. You don't have the power-consumption problem.
If you have an archive of 10000 photos, you probably don't need to do the computation on battery power, and you probably don't mind using a GPU to do it.
I mean, I can definitely imagine systems that constantly run facial recognition, like security cameras set up to surveil and identify large crowds of people in public areas, but:
- I suspect that most of them want access to a big database of face data, and I don't know how many cases involve a disconnected system with a large database of face data.
- I doubt that most of those need to be running on a battery.
The reason I mention speech recognition is that I can legitimately see a laptop user wanting to constantly process an incoming audio stream, like taking in voice commands (the "Alexa" model, but without sending recorded snippets elsewhere).
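For a sense of what that might look like fully locally, a rough sketch using openai-whisper and sounddevice as stand-ins (the fixed 5-second chunking is a simplification, real wake-word handling would be more involved, and none of this is tied to this particular NPU):

```python
# Continuously record short chunks from the microphone and transcribe them
# locally, with nothing leaving the machine.
import sounddevice as sd
import whisper

model = whisper.load_model("base.en")  # small English-only model

while True:
    # 5 seconds of 16 kHz mono audio, the sample rate Whisper expects
    audio = sd.rec(int(5 * 16000), samplerate=16000, channels=1, dtype="float32")
    sd.wait()
    text = model.transcribe(audio.flatten(), fp16=False)["text"].strip()
    if text:
        print("heard:", text)
```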
Probably overkill for facial unlock (though it might make it more accurate), but for just facial recognition/processing there are still plenty of use cases, including:
- Security systems: imagine it doesn't just recognize faces but can go "analyse the face in this time frame and correlate it to anyone seen in the last two weeks" to determine that the guy who robbed you is actually the same dude who delivered a package or was supposedly doing door-to-door sales a week ago.
- Home automation systems, with per-resident configurations. Maybe it's not unlocking your computer but rather your house, via a good camera and a ZigBee lock.
- Better face mapping: this could be for face substitution (deep-fake'ish stuff), but also de-aging, better real-time mapping of facial expressions onto online avatars, etc.
And going beyond faces:
- AI mapping could greatly improve stuff like generating 3D models from photogrammetry. Take a video walk-around of your room, car, or shed and let AI assist in building the model.
- VR currently relies on either bouncing fast-moving invisible beams of light off sensors on a headset/controllers, or recognition of controllers/hands from optics on a headset. These tend to suffer from a lack of identification of other extremities such as legs (hence Facebook's legless avatars). Kinect worked by identifying humanoid objects with visible light + IR and mapping out a body, including leg movement, but suffered from lag on fast movement. An AI could probably sort that out faster (rough sketch after this comment), reducing lag for smoother full-body movement via camera, and leaving the headset to just be a visor or high-res viewing device, maybe with a small eye-tracking camera.
All of these could be done local-only, without needing the cloud or LLMs, which improves security and privacy and removes dependence on somebody else's system in the cloud (which may not exist, or cost the same, in the future).
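On the body-tracking point, here's a rough sketch of camera-based full-body tracking using MediaPipe's pose model purely as a stand-in (any similar local model would do; nothing here targets this NPU specifically):

```python
# Track full-body landmarks (legs and feet included) from a webcam, locally.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(model_complexity=1)
camera = cv2.VideoCapture(0)

while camera.isOpened():
    ok, frame = camera.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 landmarks covering the whole body, including ankles and feet
        ankle = results.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.LEFT_ANKLE]
        print(f"left ankle at ({ankle.x:.2f}, {ankle.y:.2f})")

camera.release()
```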
I can imagine that there would be people who do want cheap, low-power parallel compute, but speaking for myself, I've got no particular use for that today. Personally, if they have resources available for Linux, I'd rather they go towards improving support for beefier systems like their GPUs, doing parallel compute on Radeons. That's definitely an area I've seen people complain about being under-resourced on the dev side.
I have no idea if it makes business sense for them, but if they can do something like an 80GB GPU (well, compute accelerator, whatever) that costs a lot less than $43k, that'd probably do more to enable the kind of thing that @[email protected] is talking about.
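For what it's worth, some of that Radeon compute path already works through PyTorch's ROCm builds, which reuse the CUDA-style API; a quick sanity check looks something like this:

```python
# On a ROCm build of PyTorch with a supported Radeon, the GPU shows up
# through the usual "cuda" device name (torch.version.hip is set on ROCm builds).
import torch

if torch.cuda.is_available():
    print("backend:", "ROCm/HIP" if torch.version.hip else "CUDA")
    x = torch.randn(4096, 4096, device="cuda")
    print((x @ x).sum().item())  # a large matmul running on the GPU
else:
    print("no supported GPU found")
```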
The demo is just face detection and not recognition. But usually if something can run one, then it can run the other.
IMO they'd be idiots not to go hard on this. More efficiency in the computing needed for AI can quickly scale to any application developed in the future.
We are heading into a future of limited resources and expensive energy, and that's a short-term problem.
It's also mentioned here that it's based on Xilinx IP, so it's very likely some sort of FPGA to accelerate matrix multiplications and convolutions usually found in neural networks (likely similar to Tensor/Matrix cores found in Nvidia Turing (or later) GPUs and AMD's recent Instinct GPUs, or Google's "Edge TPU" that is included in the Tensor chips of recent Pixel phones). So it's unlikely it has any dedicated memory.
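For anyone curious what "accelerate matrix multiplications and convolutions" means concretely, here's a tiny NumPy sketch of the im2col trick that lowers a convolution to a single matmul, which is the kind of operation these matrix engines are built to speed up (pure NumPy, nothing specific to Ryzen AI):

```python
# A convolution rewritten as one big matrix multiplication (im2col).
import numpy as np

def conv2d_direct(x, w):
    """Naive convolution: x is (C, H, W), w is (K, C, kh, kw), stride 1, no padding."""
    C, H, W = x.shape
    K, _, kh, kw = w.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[k])
    return out

def conv2d_as_matmul(x, w):
    """The same convolution, lowered to a single matmul over unfolded patches."""
    C, H, W = x.shape
    K, _, kh, kw = w.shape
    Ho, Wo = H - kh + 1, W - kw + 1
    # Flatten every kh x kw patch into one column of a big matrix
    cols = np.stack([x[:, i:i + kh, j:j + kw].reshape(-1)
                     for i in range(Ho) for j in range(Wo)], axis=1)  # (C*kh*kw, Ho*Wo)
    return (w.reshape(K, -1) @ cols).reshape(K, Ho, Wo)

x = np.random.rand(3, 8, 8)      # a tiny 3-channel "image"
w = np.random.rand(4, 3, 3, 3)   # 4 output channels, 3x3 kernels
assert np.allclose(conv2d_direct(x, w), conv2d_as_matmul(x, w))
```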