this post was submitted on 31 Jul 2023
9 points (90.9% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

submitted 1 year ago* (last edited 1 year ago) by cll7793 to c/[email protected]
 

Leaderboard scores can often be a bit misleading, since there are other factors to consider:

  • Censorship: Is the model censored?
  • Verbosity: How concise is the output?
  • Intelligence: Does the model know what it is talking about?
  • Hallucination: How often does the model make up facts?
  • Domain Knowledge: What specializations does the model have?
  • Size: The best models at each scale (70B, 30B, 7B).

And much more! What models do you use and would recommend to everyone?

The model that has personally caught my attention the most is the original 65B LLaMA. It seems genuine and truly has a personality. Everyone should chat with the original, non-fine-tuned version if they get a chance. It's an experience that is quite unique within the sea of "As an AI language model" OpenAI-style tunes.

top 7 comments
[–] [email protected] 5 points 1 year ago

I've been very partial lately to anything Orca-tuned. I'm not sure if it's placebo, but they always feel just that much smarter, with a bit more ability to think things through.

For instance, I have a character in oobabooga, and in its description/pre-prompt I told it to ask questions about what it doesn't know: "It only answers questions it knows the answer to, choosing to ask for additional context when information is unclear." Anything that's tuned on Orca is 10x more likely to actually consider what it doesn't know and ask for context rather than hallucinate information.
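Roughly like this, if you're driving it through llama-cpp-python instead of the oobabooga UI (the character name, model path, and exact wording here are just illustrative, not my actual setup):

```python
# Rough sketch of the character pre-prompt idea, using llama-cpp-python.
# Character name, model path, and prompt wording are illustrative only.
from llama_cpp import Llama

llm = Llama(model_path="path/to/orca-tuned-13b.gguf")  # any local Orca-tuned model

PRE_PROMPT = (
    "Ada is a careful assistant. It only answers questions it knows the "
    "answer to, choosing to ask for additional context when information "
    "is unclear.\n\n"
)

def ask(question: str) -> str:
    prompt = PRE_PROMPT + f"USER: {question}\nADA:"
    # Stop when the model starts the next user turn.
    out = llm(prompt, max_tokens=256, stop=["USER:"])
    return out["choices"][0]["text"].strip()

print(ask("How do I mount that drive we talked about?"))
```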

Lately I've been playing with Dolphin, which is LLaMA-1-based, and it's an absolute pleasure: https://huggingface.co/ehartford/dolphin-llama-13b

[–] [email protected] 3 points 1 year ago

Uncensored WizardLM 33B has been quite helpful for some general quick tech help.

[–] [email protected] 2 points 1 year ago

I haven't tried out many models, but Vicuna has been the best one so far.

[–] Audalin 1 points 1 year ago

Wizard-Vicuna-30B-Uncensored works pretty well for most purposes. It feels like the smartest of all the models I've tried. Even when it hallucinates, it gives me enough to refine a Google query on some obscure topic. As usual, hallucinations are also easily counteracted by light, non-argumentative gaslighting.

It isn't very new though. What's the current SOTA for universal models of similar size? (both foundation and chat-tuned)

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

What kind of prompt (format) do you use when chatting with something like the original base LLaMA?

[–] [email protected] 1 points 1 year ago (1 children)

I thought the original LLaMA was not particularly good for conversation-format interaction, as it is not instruction fine-tuned? I thought its mode of operation is always "continue the provided text" (so, for example, instead of saying "please write an article about..." you would have to write the title and opening paragraph of an article, and then it would continue the article).
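Something like this is how I understood it (a rough sketch with transformers; the model id is just one example of a base checkpoint):

```python
# Minimal sketch of completion-style prompting for a base (non-instruct)
# model. "huggyllama/llama-7b" is just an example of a base checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# No instruction: give it the title and opening of the article you want,
# and the model simply continues the text.
prompt = (
    "Why Run Language Models Locally?\n\n"
    "Running large language models on your own hardware has several "
    "advantages. First,"
)
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.8)
print(tok.decode(out[0], skip_special_tokens=True))
```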

[–] [email protected] 1 points 1 year ago

I thought its mode of operation is always “continue the provided text”

I haven't played with trying to use it for conversation-like stuff, so I can't say anything about whether it's "particularly good" or not. However, "continue the provided text" doesn't preclude conversational stuff. If you give it enough of an example of the "conversation", even non-conversation-tuned models will complete it. They'll write both sides of the conversation if you let them, but you can use stuff like reverse prompts to return control when it's your "turn".
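Something like this, roughly, with llama-cpp-python (the model path and speaker tags are just placeholders):

```python
# Sketch of the reverse-prompt idea: seed a base model with a short
# example transcript, then stop generation whenever the model starts
# writing the user's tag, handing control back to you.
from llama_cpp import Llama

llm = Llama(model_path="path/to/base-llama.gguf")

transcript = (
    "A chat between USER and ASSISTANT.\n"
    "USER: Hello, who are you?\n"
    "ASSISTANT: Hi! I'm an assistant. How can I help?\n"
)

while True:
    transcript += f"USER: {input('you> ')}\nASSISTANT:"
    # "USER:" acts as the reverse prompt: generation halts there,
    # so it's our turn again.
    out = llm(transcript, max_tokens=200, stop=["USER:"])
    reply = out["choices"][0]["text"].strip()
    print("bot>", reply)
    transcript += f" {reply}\n"
```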

I'd also guess the chat-tuned models are more aimed at question/answer, and specifically at providing accurate and helpful answers, rather than just dialog in general.