Huggingface takes a bit of getting used to, but it's the place to find models and datasets; imo it may be one of the most important websites on the internet today.
But what exactly is a model? Can I download and run it? Do I need to access it through their API? Do I need to pay for some server that has all the needed software already running on it? It seems open and not open at the same time.
It's a bit complex, and you can find a better answer elsewhere, but a model is a set of "weights" and "biases" that make up the connections between the neurons in a neural network.
A model can include other components, but at its core it gives you the ability to run an "AI" like GPT, though models aren't limited to natural language processing.
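To make that a bit more concrete, here's a minimal sketch (not from the thread, just an illustration in PyTorch) of what those weights and biases actually are: a tiny layer whose learned parameters are plain tensors. The big model files on Huggingface are essentially huge collections of tensors like these saved to disk.

```python
import torch
import torch.nn as nn

# One small layer of "neurons": 4 inputs connected to 2 outputs.
layer = nn.Linear(in_features=4, out_features=2)

# The learned parameters are just tensors: a weight matrix and a bias vector.
for name, tensor in layer.state_dict().items():
    print(name, tuple(tensor.shape))
# weight (2, 4)
# bias (2,)

# Model files you download (e.g. .bin / .safetensors) are, at their core,
# many such tensors serialized to disk.
torch.save(layer.state_dict(), "tiny_model.pt")
```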
Yes, you can download the models and run them on your computer. There are usually instructions in each repository, but in general it comes down to downloading the model files (which can be very large) and running them with an existing ML framework like PyTorch.
It's not a place for the layman right now, but with a few hours of research you could make it happen.
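For example, here's roughly what the download-and-run flow looks like with the transformers library on top of PyTorch. "distilgpt2" is just a small example model id, not one mentioned in the thread; the first from_pretrained call fetches the files from Huggingface automatically.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "distilgpt2"  # any text-generation repo id from huggingface.co works here
tokenizer = AutoTokenizer.from_pretrained(model_id)      # downloads the tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_id)   # downloads the weights (can be many GB for larger models)

inputs = tokenizer("Hugging Face is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```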
I personally run several models that I got through Huggingface on my computer; Llama 2, which is similar to GPT-3, is the one I use the most.
The model is the brain; you use something like Kobold.cpp to load it. You'll have to play with the settings and try different models to get the right load on your GPU.
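Kobold.cpp itself is a standalone app you launch and point at a model file, but as a rough sketch of the same idea in Python, llama-cpp-python (built on the same llama.cpp engine that Kobold.cpp uses) lets you load a downloaded GGUF model and choose how many layers to offload to the GPU. The file name and layer count below are placeholders, not values from the thread.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # a quantized model file downloaded from Huggingface
    n_gpu_layers=35,   # tune this to fit your GPU's VRAM; 0 keeps everything on the CPU
    n_ctx=2048,        # context window size
)

out = llm("Q: What is a language model? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The n_gpu_layers setting is the same kind of knob the comment is talking about: raise it until your VRAM is full for speed, lower it if the model won't fit.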