this post was submitted on 19 Jul 2023
53 points (96.5% liked)

Free Open-Source Artificial Intelligence

The AI Horde is a project I started in order to provide access to Generative AI to everyone in the world, regardless of wealth and resources. The objective is a truly open REST API that anyone is free to integrate into their own software and games, letting people experiment without requiring online payment, which is not possible for everyone.
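As a rough sketch of what integrating with such an API might look like, here is a helper that assembles an async generation request. The endpoint URL and field names are illustrative stand-ins, not the Horde's actual schema, and the all-zeros key is a placeholder for an anonymous key:

```python
import json

def build_generation_request(prompt: str, api_key: str) -> dict:
    """Assemble a hypothetical async text-generation request for a
    Horde-style REST API. Field names here are assumptions for
    illustration, not the real API schema."""
    return {
        "url": "https://example-horde.net/api/v2/generate/text/async",  # illustrative URL
        "headers": {"apikey": api_key, "Content-Type": "application/json"},
        "body": json.dumps({"prompt": prompt, "params": {"max_length": 80}}),
    }

# Placeholder anonymous key, so no account or payment is needed to try it.
req = build_generation_request("Hello, Horde!", "0000000000")
```

An actual client would POST this payload and then poll a status endpoint until the generation completes.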

It is fully FOSS and relies on people volunteering the idle compute from their PCs. In exchange, volunteers receive higher priority for their own generations. We already have close to 100 workers, providing everything from Stable Diffusion image generation to 70B LLMs!
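The priority trade works roughly like a credit ledger: serving generations earns credit, and pending requests from users with more credit are served first. This toy model is my own sketch of the idea, not the Horde's real accounting:

```python
class Ledger:
    """Toy priority ledger: workers earn credit for completed jobs,
    and pending requests are ordered by the requester's balance."""

    def __init__(self):
        self.balance: dict[str, int] = {}

    def reward_worker(self, user: str, amount: int) -> None:
        # Credit a volunteer for serving a generation.
        self.balance[user] = self.balance.get(user, 0) + amount

    def queue_order(self, requests: list[tuple[str, str]]) -> list[str]:
        # Sort pending (user, prompt) pairs so higher-balance users go first.
        ranked = sorted(requests, key=lambda r: self.balance.get(r[0], 0), reverse=True)
        return [prompt for _, prompt in ranked]

ledger = Ledger()
ledger.reward_worker("volunteer", 100)
order = ledger.queue_order([("newcomer", "p1"), ("volunteer", "p2")])
# The volunteer's request jumps ahead of the newcomer's.
```

Note that everyone still gets served eventually; credit only affects queue position, not access.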

Also, the Lemmy community is at [email protected]

If you are interested in democratizing access to Generative AI, consider joining us!

[–] INeedMana 2 points 2 years ago* (last edited 2 years ago) (1 children)

Out of pure curiosity: isn't communicating via internet a bottleneck for distributed neural networks implementation? I mean the distribution of the work, not the API part

[–] [email protected] 2 points 2 years ago* (last edited 2 years ago) (1 children)

We're not doing distributed inference. We're doing distributed clustering: the inference runs on individual PCs.

[–] INeedMana 2 points 2 years ago (1 children)

So the learning phase is distributed, but then one prompt goes to one worker?

[–] [email protected] 2 points 2 years ago (1 children)

Yes, each prompt goes to a single worker. The AI Horde also doesn't do training. We only do inference.
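So the "clustering" is at the job level: prompts fan out across the worker pool, but a single inference is never split across machines. A minimal sketch of that dispatch rule (the round-robin selection policy is my assumption; the real scheduler also weighs priority and worker capability):

```python
def dispatch(prompts: list[str], workers: list[str]) -> dict[str, list[str]]:
    """Assign each prompt whole to exactly one worker, round-robin.
    Distribution happens only at the granularity of complete jobs,
    never within a single inference."""
    assignments: dict[str, list[str]] = {w: [] for w in workers}
    for i, prompt in enumerate(prompts):
        assignments[workers[i % len(workers)]].append(prompt)
    return assignments

jobs = dispatch(["a", "b", "c"], ["gpu-1", "gpu-2"])
```

This is why internet latency isn't a bottleneck: only the prompt and the finished result cross the network, never intermediate activations.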

[–] INeedMana 1 points 2 years ago (1 children)

Does that mean that with a big (parameter-wise) model and a not-so-powerful worker, it can take a long time to respond?

Or does the distributed clustering prevent a worker from being choked?

[–] [email protected] 2 points 2 years ago

Yes, it could, but we have a timeout. If a worker is unreasonably slow to respond, it gets put into maintenance, so we expect people to only serve what they can run.
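That guard might look something like the following sketch. The timeout value and the `maintenance` flag mechanics are illustrative assumptions, not the Horde's actual implementation:

```python
TIMEOUT_SECONDS = 300  # illustrative cutoff, not the Horde's real value

class Worker:
    def __init__(self, name: str):
        self.name = name
        self.maintenance = False

def check_worker(worker: Worker, seconds_elapsed: float) -> bool:
    """Put a worker into maintenance if it blew past the job timeout,
    so it stops receiving new jobs until its operator steps in.
    Returns True if the worker is still eligible for jobs."""
    if seconds_elapsed > TIMEOUT_SECONDS:
        worker.maintenance = True
    return not worker.maintenance

fast, slow = Worker("fast"), Worker("slow")
check_worker(fast, 12.5)     # well under the cutoff: stays active
check_worker(slow, 1000.0)   # far over the cutoff: sidelined
```

In effect, operators self-select: a machine that can't serve a model within the timeout quickly drops out of rotation for it.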