this post was submitted on 10 Jan 2025
12 points (92.9% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.


Do I need industry-grade GPUs, or can I scrape by getting decent tps with a consumer-level GPU?

[–] breakingcups 2 points 2 weeks ago

I still don't understand why you can't distribute a large LLM over many different processors, each holding a section of the parameters in memory.

Because the output of every neuron in a layer feeds into every neuron in the next layer. If you split a single layer across machines, partial results have to be exchanged and synchronized constantly, so the bandwidth (and latency) requirements are enormous, and regular networking is insufficient for that.
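To put rough numbers on that, here's a back-of-envelope sketch in Python. The model dimensions and the two-all-reduces-per-layer figure are illustrative assumptions, not measurements; the point is just how the two ways of splitting a model differ in traffic:

```python
# Back-of-envelope inter-device traffic when splitting an LLM across machines.
# All numbers below are illustrative assumptions, not any specific model.
HIDDEN = 8192   # hidden dimension (assumed)
LAYERS = 80     # transformer layers (assumed)
BYTES = 2       # fp16 activation size

# Pipeline split (different layers on different machines): only the
# activation vector crosses each boundary, once per token.
pipeline_bytes_per_token = HIDDEN * BYTES

# Tensor split (each layer sharded across machines): partial results must be
# combined (all-reduced) within every layer, assume ~2 exchanges per layer.
tensor_bytes_per_token = HIDDEN * BYTES * LAYERS * 2

print(pipeline_bytes_per_token)  # 16384 bytes, ~16 KB per token
print(tensor_bytes_per_token)    # 2621440 bytes, ~2.6 MB per token
```

So the traffic volume itself isn't always huge; the killer is that the tensor-split case needs a synchronization round every layer, and the per-hop latency of ordinary Ethernet stalls the whole forward pass each time.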