[–] sudo22 7 points 1 year ago (2 children)

Can this be easily self-hosted?

[–] Speculater 0 points 1 year ago (2 children)

The problem is most of these models need hundreds of gigabytes of VRAM... And consumers have about 8-24GB.
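For scale, model memory is roughly parameter count times bytes per parameter. A minimal back-of-envelope sketch (the 20% overhead factor for activations and the KV cache is an illustrative assumption, not a measurement):

```python
def estimate_vram_gb(params_billions: float, bits_per_param: int,
                     overhead: float = 1.2) -> float:
    """Weights-only VRAM estimate in GB, with a flat overhead multiplier."""
    bytes_per_param = bits_per_param / 8
    return params_billions * bytes_per_param * overhead

for bits in (16, 8, 4):
    print(f"13B model @ {bits}-bit: ~{estimate_vram_gb(13, bits):.1f} GB")
# 16-bit: ~31.2 GB, 8-bit: ~15.6 GB, 4-bit: ~7.8 GB
```

This is why 4-bit quantization is what brings a 13B model into consumer-GPU territory.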

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

Old news, pal! 😄

[4/27] Thanks to the community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as few as 12GB VRAM! Try it out here.
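For reference, 4-bit loading in the wider ecosystem looks roughly like this with Hugging Face transformers and bitsandbytes. This is a generic sketch, not LLaVA's own loading code, and the model ID is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization via bitsandbytes: weights are stored in 4 bits
# and dequantized on the fly, cutting VRAM to roughly a quarter of fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "some-org/some-13b-model"  # placeholder, not a real repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spreads layers across available GPU(s) and CPU
)
```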

[–] [email protected] 2 points 1 year ago (1 children)

12GB of VRAM is still an upgrade away for most people, and a 4-bit quantized 13B model is barely going to be more than a tech demo. When open-source AI is proclaimed to be near, on par with, or better than GPT-4, they're talking about nothing other than their biggest models in a prime environment.

[–] just_another_person 1 points 1 year ago (1 children)

Sure, but not for standard cloud instances that are very affordable for companies wanting to get away from OpenAI.

[–] [email protected] 1 points 1 year ago

I usually don't think much about companies and cloud instances when it comes to FOSS AI, but fair enough.

For me it's all about locally run consumer models. If we cannot achieve that, it means we will always need to rely on the whims and decisions of others to access the most transformative technology ever invented.

[–] sudo22 2 points 1 year ago (1 children)
[–] Speculater 4 points 1 year ago (1 children)

This specific one says it'll run on 24GB actually. But some are just crazy big.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

There are smaller models that can run on most laptops.

https://www.maginative.com/article/stability-ai-releases-stable-lm-3b-a-small-high-performance-language-model-for-smart-devices/

In benchmarks this looks like it is not far off GPT-3.5.
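As a sketch of what running a small quantized model on a laptop CPU can look like with llama-cpp-python (the GGUF filename is a placeholder for whatever quantized build you download):

```python
from llama_cpp import Llama

# Load a quantized GGUF model; a 3B model at 4-bit is roughly 2 GB,
# which fits in most laptops' RAM and runs on CPU alone.
llm = Llama(
    model_path="./stablelm-3b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,      # context window
    n_threads=8,     # CPU threads; tune to your machine
)

out = llm("Q: What is quantization? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```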

[–] BetaDoggo_ 1 points 1 year ago

It's not even close: less than half of 3.5's 85.5% on ARC. Some larger open models are competitive on HellaSwag, TruthfulQA, and MMLU, but ARC is still a major struggle for small models.

3B models are kind of pointless right now, because any machine with a processor fast enough to run them at a usable speed probably has enough memory to run a 7B anyway.
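For anyone wanting to verify numbers like these, such scores usually come from EleutherAI's lm-evaluation-harness. A minimal sketch of its Python API, assuming harness v0.4+ (the model ID is a placeholder, and keyword names may differ between versions):

```python
import lm_eval

# 25-shot ARC-Challenge, the setting used by the Open LLM Leaderboard.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=some-org/some-3b-model",  # placeholder
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```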