this post was submitted on 22 Oct 2023
47 points (88.5% liked)

Apple

[–] abhibeckert 4 points 1 year ago* (last edited 1 year ago) (1 children)

> To date there’s no locally runnable generative LLM that comes close to the gold standard, GPT-4.

True, but iPhones now run a local language model as part of the keyboard. It's definitely not GPT-4 quality, but that's to be expected given it runs on a tiny battery and executes every single time you tap the keyboard. Apple has shown that a useful language model can run locally on the slowest hardware it sells. I'm not aware of anyone else who's done that.

> Even coming close to GPT-3.5-turbo counts as impressive.

Llama 2 is roughly GPT-3.5-Turbo quality, and it runs well on modern Macs, which have a lot of very fast unified memory. Even Apple's smallest fanless laptop can be configured with 24GB, and it's fast memory too, at 800Gbps (100GB/s). That's not quite enough to run the largest Llama 2 model, but it's close. The more expensive laptops have more memory, and faster memory; they can run the 70-billion-parameter Llama 2 without breaking a sweat.
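As a rough sanity check on those memory figures, here's a back-of-envelope sketch. The parameter counts match the published Llama 2 sizes; the bytes-per-weight values are illustrative assumptions, and real runtimes add KV-cache and activation overhead on top of the weights:

```python
# Rough memory-footprint estimates for Llama 2 checkpoints at two
# common precisions. Only the weights are counted; actual inference
# needs a few extra GB for the KV cache and activations.

GiB = 1024 ** 3

def weight_memory_gib(n_params: float, bits_per_weight: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return n_params * bits_per_weight / 8 / GiB

models = {"llama2-7b": 7e9, "llama2-13b": 13e9, "llama2-70b": 70e9}

for name, n in models.items():
    fp16 = weight_memory_gib(n, 16)
    q4 = weight_memory_gib(n, 4)
    print(f"{name}: ~{fp16:.0f} GiB at fp16, ~{q4:.0f} GiB at 4-bit")
```

By this estimate the 70B model needs roughly 33GiB even at 4-bit quantization, which is why 24GB is "close but not quite enough", while the 7B and 13B models fit comfortably.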

And on the desktop, Apple sells Macs with 192GB of unified memory that's far faster still, at 6.4Tbps (800GB/s). That's more memory, for a lot less money, than the most expensive data-center GPU NVIDIA sells (the NVIDIA part is faster at compute operations, but LLM inference is often limited by available memory, not compute speed).
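The "limited by memory" point can be made concrete: during single-stream decoding, every weight is streamed from memory once per generated token, so bandwidth divided by model size gives a rough tokens-per-second ceiling. A sketch, using the 800GB/s figure quoted above and an assumed ~35GB 4-bit 70B model (both numbers are back-of-envelope, ignoring compute and the KV cache):

```python
# Autoregressive decoding reads all weights once per token, so memory
# bandwidth sets a hard ceiling on single-stream generation speed.

def max_tokens_per_sec(bandwidth_gb_s: float, weight_gb: float) -> float:
    """Upper bound on tokens/sec: one full weight read per token."""
    return bandwidth_gb_s / weight_gb

# 800 GB/s bandwidth, ~35 GB of 4-bit 70B weights:
print(max_tokens_per_sec(800, 35))  # ceiling of roughly 23 tokens/sec
```

This is why a machine with slower compute but enough fast memory can still decode a large model at usable speeds, while a faster GPU without enough memory can't run it at all.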

[–] [email protected] 1 points 1 year ago

You can even run Llama 2 locally on Android phones.