This post was submitted on 20 Jan 2024
413 points (96.0% liked)

Technology

[–] [email protected] 92 points 11 months ago (4 children)

Also check out LM Studio and GPT4All. Both of these let you run private ChatGPT alternatives from Hugging Face off your RAM and CPU (they can also offload to the GPU).
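These desktop apps largely wrap llama.cpp under the hood. A minimal sketch of the CPU-plus-partial-GPU-offload idea, assuming the llama-cpp-python bindings and a GGUF model downloaded from Hugging Face (the model path and the per-layer VRAM cost here are hypothetical placeholder values):

```python
# Sketch: run a local model from RAM/CPU, offloading what fits to the GPU.
# Assumes `pip install llama-cpp-python` and a GGUF file from Hugging Face.

def pick_gpu_layers(vram_gb: float, layer_cost_gb: float = 0.25) -> int:
    """Rough heuristic: offload as many layers as fit in VRAM,
    keeping ~1 GB free for context and overhead."""
    usable = max(vram_gb - 1.0, 0.0)
    return int(usable // layer_cost_gb)

def chat(prompt: str) -> str:
    from llama_cpp import Llama  # imported lazily so the helper above is testable
    llm = Llama(
        model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
        n_ctx=2048,
        n_gpu_layers=pick_gpu_layers(vram_gb=6.0),  # e.g. a 6 GB card
    )
    out = llm(prompt, max_tokens=64)
    return out["choices"][0]["text"]

# chat("Q: Why run an LLM locally?\nA:")  # uncomment once a model is in place
```

With `n_gpu_layers=0` everything stays on the CPU, which is the slow-but-works mode the commenters describe.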

[–] Just_Pizza_Crust 26 points 11 months ago (2 children)

I'd also recommend Oobabooga if you're already familiar with Automatic1111 for Stable Diffusion. I've found that being able to write the first part of the bot's response gets much better results and seems to make it fabricate false info much less.
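The "write the first part of the response" trick is, at bottom, just prompt construction: the assistant's turn is pre-started so the model continues your wording instead of improvising. A sketch with a generic instruction template (the template format is illustrative, not Oobabooga's exact one):

```python
def build_prompt(question: str, response_prefix: str) -> str:
    """Chat-style prompt whose response section is already started;
    the model's completion gets appended after `response_prefix`."""
    return (
        f"### Instruction:\n{question}\n\n"
        f"### Response:\n{response_prefix}"
    )

prompt = build_prompt(
    "List three local LLM front-ends.",
    "Sticking only to tools I am sure exist: 1.",
)
```

Because the model must continue from the prefix, it is nudged away from openers that tend to precede made-up answers.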

[–] [email protected] 10 points 11 months ago (1 children)

There's also koboldcpp, which is fairly newbie friendly.

[–] [email protected] 3 points 11 months ago

And llamafile, which packs a chatbot into a single executable file.

[–] EarMaster 8 points 11 months ago

I feel like you're all making these names up... but they were probably all suggested by an LLM anyway...

[–] [email protected] 10 points 11 months ago (3 children)

Are they as good as ChatGPT?

[–] [email protected] 40 points 11 months ago* (last edited 11 months ago) (4 children)

Mistral is thought to be almost as good. I've used the latest version of Mistral and found its output more or less identical in quality.

It’s not as fast though as I am running it off of 16gb of ram and an old GTX 1060 card.

If you use LM Studio, I'd say it's actually better, because you can give it a pre-prompt so that all of its answers stay within predefined guardrails (e.g. you are Glorb the cheese pirate and you have a passion for mink fur coats).
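A pre-prompt like that maps onto the system message of the OpenAI-style chat format that many local servers expose; a sketch (the message schema is the general convention, not verified against any particular app):

```python
def with_guardrails(persona: str, user_msg: str) -> list:
    """Build an OpenAI-style message list where a system message
    pins the model to a persona before the user ever speaks."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_msg},
    ]

messages = with_guardrails(
    "You are Glorb the cheese pirate. You have a passion for mink fur coats.",
    "Introduce yourself.",
)
```

Every later turn is appended to the same list, so the system message keeps steering the whole conversation.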

There’s also the benefit of being able to load in uncensored models if you would like questionable content created (erotica, sketchy instructions on how to synthesize crystal meth, etc).

[–] [email protected] 8 points 11 months ago (1 children)

I’m sure that meth is for personal use right? Right?

[–] [email protected] 5 points 11 months ago (1 children)

Absolutely. Synthesizing hard drugs is time consuming and a lot of hard work. Only I get to enjoy it.

[–] [email protected] 4 points 11 months ago (3 children)

No one gets my mushrooms either ;)

[–] [email protected] 3 points 11 months ago (9 children)

I just buy my substrate online. I’m far less experimental than most. I just want it to work in a consistent way that yields an amount I can predict.

What I really want to grow is Peyote or San Pedro, but the slow growth and lack of sun in my location would make that difficult.

[–] [email protected] 7 points 11 months ago (3 children)

Something I'm really missing is a breakdown of how good these models actually are compared to each other.

A demo on Hugging Face couldn't tell me the boiling point of water, while the author's own example prompt asked for the boiling point of some chemical.

[–] [email protected] 5 points 11 months ago (5 children)

I can't find a way to run any of these on my home server and access it over HTTP. It looks like it's possible, but you need a GUI to install it in the first place.
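One headless route, for what it's worth: llama.cpp ships a server binary you can build and launch over SSH with no GUI, and it answers plain HTTP. A client sketch, assuming the server's default port 8080 and its `/completion` endpoint; the hostname is a placeholder:

```python
import json
from urllib import request

def build_completion_request(host: str, prompt: str, n_predict: int = 64):
    """Request for a llama.cpp `llama-server` instance, e.g. one started
    over SSH with: ./llama-server -m model.gguf --host 0.0.0.0 --port 8080"""
    url = f"http://{host}:8080/completion"
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    return url, body

def complete(host: str, prompt: str) -> dict:
    url, body = build_completion_request(host, prompt)
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# complete("homeserver.local", "Hello")  # once the server is running
```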

[–] stevedidWHAT 79 points 11 months ago (2 children)

Open source good, together monkey strong 💪🏻

Build cool village with other frens, make new things, celebrate as village

[–] stevedidWHAT 7 points 11 months ago

See case in point

[–] Zeon 4 points 11 months ago* (last edited 11 months ago)

It's free/libre software, which is even better, because it gives you more freedom than just "open source" software. Make sure to check the licenses of the software you use. Anything under the GPL, MIT, or Apache 2.0 licenses is free software. Anyways, together monkey strong 💪

[–] TootSweet 64 points 11 months ago (1 children)

It seems like usually when an LLM is called "Open Source", it's not. It's refreshing to see that Jan actually is, at least.

[–] [email protected] 14 points 11 months ago* (last edited 11 months ago)

Jan is just a frontend. It supports various models under multiple licences. It also supports some proprietary models.

[–] blazeknave 7 points 11 months ago

Marsha Marsha Marsha!

[–] wetferret 10 points 11 months ago

I would also recommend faraday.dev as a way to try out different models locally, using either CPU or GPU. I believe they have a build for every desktop OS.

[–] randon31415 8 points 11 months ago (8 children)

I have recently been playing with llamafiles, particularly LLaVA, which, as far as I know, is the first multimodal open-source LLM (others might exist; this is just the first one I have seen). I was having it look at pictures of prospective houses I want to buy and asking it if it sees anything wrong with the house.

The only problem I ran into is that Windows 10 cmd doesn't like the sed command, and I don't know of an alternative.

[–] [email protected] 8 points 11 months ago

Would it help to run it under WSL?

[–] [email protected] 3 points 11 months ago

Might be a good idea to use Windows Terminal or Cmder with WSL instead of the Windows shells.

[–] ElPussyKangaroo 5 points 11 months ago (2 children)

Any recommendations from the community for models? I use ChatGPT for light work like touching up a draft I wrote, etc. I also use it for data-related tasks like reorganization, identification, etc.

Which model would be appropriate?

[–] Falcon 8 points 11 months ago (1 children)

Mistral-7B is a good compromise between speed and intelligence. Grab it as a 4-bit GPTQ quantization.

[–] [email protected] 12 points 11 months ago (1 children)

The question is quickly answered: none is currently that good, open or not.

Anyway, it seems this is just a manager. I see some competitors available that I've heard good things about, like Mistral.

[–] [email protected] 9 points 11 months ago (2 children)

Local LLMs can beat GPT-3.5 now.

[–] Speculater 5 points 11 months ago (1 children)

I think a good 13B model running on 12 GB of VRAM can do pretty well. But I'd be hard-pressed to believe anything under 33B would beat 3.5.

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago) (1 children)

Asking as someone who doesn't know anything about any of this:

Does more B mean better?

[–] [email protected] 5 points 11 months ago

B stands for billion (parameters), IIRC.
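Roughly: the B is billions of weights, and more of them generally helps quality but costs memory and speed. Back-of-the-envelope weight-storage math (real usage adds context-cache overhead on top of this):

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of just the weights, ignoring KV cache etc."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Why quantization matters: a 13B model shrinks from ~26 GB at 16-bit
# to ~6.5 GB at 4-bit, which is how it fits on a 12 GB card.
print(model_size_gb(13, 16))  # 26.0
print(model_size_gb(13, 4))   # 6.5
```

This is also why the earlier commenter pairs "13B" with "12 GB of VRAM": only the quantized versions fit.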

[–] Falcon 5 points 11 months ago* (last edited 11 months ago) (6 children)

Many are close!

In terms of usability, though, they are better.

For example, ask GPT-4 for an example of cross-site scripting in Flask and you'll get an ethics discussion. Grab an uncensored model off Hugging Face and you're off to the races.
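The reflected-XSS textbook example being alluded to fits in a few lines; here it's sketched framework-free with the standard library rather than as actual Flask handlers:

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable: attacker-controlled input lands in the page verbatim,
    # so ?name=<script>... would execute in the victim's browser.
    return f"<h1>Hello {name}</h1>"

def render_greeting_safe(name: str) -> str:
    # Fix: escape user input before interpolating it into HTML.
    return f"<h1>Hello {html.escape(name)}</h1>"

payload = "<script>alert(1)</script>"
```

In real Flask, `render_template` escapes by default; the unsafe pattern is returning interpolated strings directly, as above.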
