this post was submitted on 02 Oct 2023
28 points (96.7% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

 

Trying something new: I'm going to pin this thread as a place for beginners to ask what may or may not be stupid questions, to encourage both the asking and the answering.

Depending on activity level I'll either make a new one once in a while, or I'll just leave this one up forever as a place to learn and ask.

When asking a question, try to make it clear what your current knowledge level is and where you may have gaps; that should help people provide more useful, concise answers!

[–] drekly 5 points 1 year ago (2 children)

What can I run on a 1080 Ti, and how does it compare to what's available in general?

[–] [email protected] 8 points 1 year ago (1 children)

There's a Hugging Face Space where you can select a model and your graphics card and see whether you can run it, or how many cards you'd need to run it: https://huggingface.co/spaces/Vokturz/can-it-run-llm
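
If you just want a rough number before opening the Space: weight memory is roughly parameter count times bits per weight. A minimal back-of-the-envelope sketch in Python (it covers the weights only and ignores the KV cache and runtime overhead, so treat it as a lower bound):

```python
# Rough VRAM estimate for model weights only (ignores KV cache and overhead).
def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1024**3

print(f"7B @ 4-bit: {weight_vram_gb(7, 4):.1f} GB")   # ~3.3 GB, fits an 11 GB 1080 Ti
print(f"7B @ fp16 : {weight_vram_gb(7, 16):.1f} GB")  # ~13.0 GB, would not fit
```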

You should be able to do inference on any 7B-or-smaller model with quantization.
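
As a concrete sketch of what that looks like in practice (one common approach, not the only one): a 4-bit quantized 7B model in GGUF format can be loaded with llama-cpp-python and offloaded to the GPU. The model filename below is a placeholder; substitute whichever quantized file you actually download.

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with CUDA support)

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # placeholder: any ~4-bit 7B GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=2048,       # context window; larger values use more VRAM for the KV cache
)

out = llm("Q: What does quantization do to a model? A:", max_tokens=128)
print(out["choices"][0]["text"])
```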

[–] drekly 5 points 1 year ago

Wow, thank you, I'll look into it!