this post was submitted on 24 Jan 2025
30 points (94.1% liked)

Free Open-Source Artificial Intelligence

[–] [email protected] 5 points 6 days ago (1 children)

How? I thought you needed huge amounts of VRAM on exorbitantly priced GPUs to run LLMs with decent capacity. Are they just running a really small model, or is it hyper-parametrised? Or is the "thinking" process just that effective that it can make up for a weak LLM?

[–] cm0002 8 points 6 days ago

Even though it is the smallest of the distilled models, that model still outperforms GPT-4o and Claude 3.5 Sonnet.

The 7B-parameter models crush the older models on performance benchmarks. The 14B-parameter model is very competitive with OpenAI o1-mini on many metrics.
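As a rough sketch of why a 7B-parameter model fits on consumer hardware: the weights alone at common quantization levels take only a few GiB. The bit widths below are illustrative assumptions, and the estimate deliberately ignores KV-cache, activations, and runtime overhead, which add more memory on top.

```python
# Back-of-the-envelope VRAM needed just to hold model weights.
# Sketch only: ignores KV-cache, activations, and framework overhead.

def weight_vram_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate GiB required to store the weights at a given precision."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 2**30

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_vram_gb(7, bits):.1f} GiB")
# 16-bit: ~13.0 GiB, 8-bit: ~6.5 GiB, 4-bit: ~3.3 GiB
```

So a 4-bit-quantized 7B model sits comfortably inside an ordinary consumer GPU's VRAM, which is why these distills run locally at all.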

Yeah, sounds like it's their smallest model.