this post was submitted on 01 Oct 2023

Free Open-Source Artificial Intelligence

Hey everyone!

I think it's time we had a fosai model on HuggingFace. I'd like to start collecting ideas, strategies, and approaches for fine-tuning our first community model.

I'm open to hearing what you think we should do. We will release more in time. This is just the beginning.

For now, I say we pick a current open-source foundation model and fine-tune it on datasets we all curate together, built around a loose concept: using the fine-tuned LLM to teach ourselves bleeding-edge technologies (and how to build them using technical tools and concepts).

FOSAI is a non-profit movement. You own everything fosai as much as I do. It is synonymous with the concept of FOSS. It is for everyone to champion as they see fit. Anyone is welcome to join me in training or tuning using the workflows I share along the way.

You are encouraged to leverage fosai tools to create and express ideas of your own.


We're Building FOSAI Models! 🤖

Our goal is to fine-tune a foundation model and open-source it. We'll start with one foundation family at smaller parameter counts (7B/13B), then work our way up to 40B (or other sizes), moving to the next size as we vote on which foundation to fine-tune as a community.


Fine-Tuned Use Case ☑️

Technical

  • FOSAI Model Idea #1 - Research & Development Assistant
  • FOSAI Model Idea #2 - Technical Project Manager
  • FOSAI Model Idea #3 - Personal Software Developer
  • FOSAI Model Idea #4 - Life Coach / Teacher / Mentor
  • FOSAI Model Idea #5 - FOSAI OS / System Assistant

Non-Technical

  • FOSAI Model Idea #6 - Dungeon Master / Lore Master
  • FOSAI Model Idea #7 - Sentient Robot Character
  • FOSAI Model Idea #8 - Friendly Companion Character
  • FOSAI Model Idea #9 - General RPG or Sci-Fi Character
  • FOSAI Model Idea #10 - Philosophical Character

OR

FOSAI Foundation Model ☑️


Foundation Model ☑️

(Pick one)

  • Mistral
  • Llama 2
  • Falcon
  • ..(Your Submission Here)

Model Name & Convention

  • snake_case_example
  • CamelCaseExample
  • kebab-case-example

0.) FOSAI ☑️

  • fosai-7B
  • fosai-13B

1.) FOSAI Assistant ☑️

  • fosai-assistant-7B
  • fosai-assistant-13B

2.) FOSAI Atlas ☑️

  • fosai-atlas-7B
  • fosai-atlas-13B

3.) FOSAI Navigator ☑️

  • fosai-navigator-7B
  • fosai-navigator-13B

4.) ?


Datasets ☑️

  • TBD!
  • What datasets do you think we should fine-tune on?
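Since dataset curation here would be community-driven, one possible shape for contributed data is Alpaca-style instruction JSONL, a convention many open fine-tunes use. This is a minimal sketch; the filename, field names, and example content are assumptions, not a settled fosai format.

```python
import json

# Alpaca-style JSONL: one {"instruction", "input", "output"} object per line.
# The field names are a common convention, not an agreed fosai standard.
examples = [
    {
        "instruction": "Explain what a LoRA adapter is in one paragraph.",
        "input": "",
        "output": "LoRA (Low-Rank Adaptation) freezes the base model's "
                  "weights and trains small low-rank matrices instead, "
                  "which makes fine-tuning far cheaper.",
    },
]

# Write the dataset out, one JSON object per line.
with open("fosai_dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Quick validation pass: every line must parse and carry the three fields.
with open("fosai_dataset.jsonl") as f:
    for line in f:
        ex = json.loads(line)
        assert {"instruction", "input", "output"} <= ex.keys()
```

A shared schema like this would let everyone contribute entries independently and merge them with a simple concatenation before training.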

Alignment ☑️

To embody open-source values, I think it's worth releasing both censored and uncensored versions of our models. This is something I will consider as we train and fine-tune over time. Like any tool, you are responsible for your usage and for how you choose to incorporate it into your business and/or personal life.


License ☑️

All fosai models will be licensed under Apache 2.0. I am open to hearing thoughts on whether other licenses should be considered.

This will be a fine-tuned model, so it may inherit some of the permissions and license terms of its foundation model, and may have other implications depending on your country or local law.

Generally speaking, you can expect all fosai models to be commercially usable, both through the choice of foundation family and through the fine-tuning applied afterward.


Costs

I will be personally covering all training and deployment costs. This may change if I choose to put together some sort of patronage, but for now - don't worry about this. I will be using something like RunPod or some other custom deployed solution for training.


Cast Your Votes! ☑️

Share Your Ideas & Vote in the Comments Below! ✅

What do you want to see out of this first community model? What are some of the fine-tuning ideas you've wanted to try, but never had the time or chance to test? Let me know in the comments and we'll brainstorm together.

I am in no rush to get this out, so I will leave this up for everyone to see and interact with until I feel we have a solid direction we can all agree upon. There will be plenty more opportunities to create, curate, and customize the fosai models I plan to release in the future.

Update [10/25/23]: I may have found a fine-tuning workflow for both Llama (2) and Mistral, but I haven't had any time to validate the first test run. Once I have a chance to do this and test some inference, I'll update this post with the workflow, the models, and some sample output with example datasets. Unfortunately, I have run out of personal funds to allocate to training, so it is unclear when I will be able to make another attempt if this first one doesn't pan out. Will keep everyone posted as we approach the end of 2023.
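The specific workflow mentioned above isn't published yet, but for anyone curious why fine-tuning fits a hobbyist budget while pretraining doesn't, here is a toy numeric sketch of LoRA-style adaptation, the approach most community fine-tunes of Llama 2 and Mistral use. The shapes and values are illustrative assumptions, not the actual workflow from this post.

```python
import numpy as np

# LoRA in miniature: the base weight matrix W is frozen, and we train only
# a low-rank update B @ A (rank r << d). Trainable parameters shrink from
# d*d to 2*d*r, which is why these fine-tunes fit on a single rented GPU.
rng = np.random.default_rng(0)
d, r = 64, 4                      # toy hidden size and LoRA rank
W = rng.standard_normal((d, d))   # frozen "pretrained" weight
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))              # B starts at zero, so the update starts at 0

x = rng.standard_normal(d)

def forward(x):
    # Adapted layer output: frozen path plus the low-rank correction.
    return W @ x + B @ (A @ x)

# Before any training, B == 0, so the adapter changes nothing.
assert np.allclose(forward(x), W @ x)

# Parameter counts: the adapter vs full fine-tuning of this layer.
adapter_params, full_params = 2 * d * r, d * d
print(f"adapter trains {adapter_params} params vs {full_params} for full fine-tuning")
```

At real model scale (d in the thousands, dozens of layers) the same ratio holds, which is the core reason a community can fine-tune a 7B model without pretraining-scale hardware.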

[–] [email protected] 2 points 1 year ago (6 children)

Are the Llama 2 models Apache 2.0 compatible? I think they use a custom license with some restrictions, but I could be totally wrong.

[–] Blaed 6 points 1 year ago* (last edited 1 year ago) (2 children)

This will be a fine-tuned model, so it may inherit some of the permissions and license agreements as its foundation model and have other implications depending on your country or local law.

You are correct, if we chose Llama 2 - the fine-tune derivative may be subject to their original license terms. However, Apache 2.0 would apply and transfer to something like a fine-tuned version of Mistral, since its base license is also Apache 2.0.

If there is enough support, I'd be more than open to creating an entirely new foundation model family. This would be a larger undertaking than this initial fine-tuning deployment, but building a completely free FOSAI foundation family of models was the ultimate goal of this project. If this garners enough attention, I could absolutely put energy and focus into creating another Mistral-like model instead of splashing around with fine-tuning.

Whatever would help everyone the most! I like where you're thinking though, I'm going to update the thread to include an option to vote for a new foundation family instead. At the end of the day, it's likely I'll do all of the above - I'm just not sure in what order yet..

[–] fhein 5 points 1 year ago (1 children)

You are correct, if we chose Llama 2 - the fine-tune derivative may be subject to their original license terms

The first time I read through the Llama 2 license I thought it said that any Llama derivative work also had to be licensed under the same license, but reading it again I think the only requirement is that you include a copy of the Llama 2 license text. Though I suppose that if someone uses your Llama 2 fine-tune to create something, it would also count as "Llama 2 derivative work" and thus be affected by the original license. I'm obviously no license lawyer, but personally I wouldn't want to risk a legal battle with a company the size of Meta, so I'd vote for the other options just to be on the safe side.

If there is enough support - I’d be more than open to creating an entirely new foundation model family.

Do you have the resources for this to be a viable option? Llama 2 7B used 184,320 GPU hours on A100-80GB, and while the exact numbers for Mistral haven't been revealed, some articles claim it was around 200k hours (and we don't know whether those were A100 or H100 hours). And if you have that kind of money to spend, are you confident that the end result will be better than Mistral? If not, why spend that much on creating something equivalent or possibly even inferior? Then there's also the question of how long a model stays relevant before some newer model with all the latest innovations is released and makes everything else look outdated. Even if you can create a model which rivals Llama 2 and Mistral now, are you going to create a new one to compete with Llama 3 and Mistral 2 when those come along?
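To put those GPU-hour figures in dollar terms, here's a rough back-of-envelope sketch. The hourly rental rates are assumptions (ballpark 2023 cloud prices), not quotes from any provider.

```python
# Pretraining cost estimate from the GPU-hour figures discussed above.
a100_hours = 184_320      # Llama 2 7B, per Meta's reported figure
a100_rate = 2.00          # assumed $/hr for a rented A100-80GB
h100_rate = 4.00          # assumed $/hr for a rented H100

llama7b_cost = a100_hours * a100_rate

mistral_hours = 200_000   # the ~200k-hour estimate cited above
mistral_low = mistral_hours * a100_rate    # if those were A100 hours
mistral_high = mistral_hours * h100_rate   # if those were H100 hours

print(f"Llama 2 7B: ~${llama7b_cost:,.0f}")
print(f"Mistral 7B: ~${mistral_low:,.0f} to ~${mistral_high:,.0f}")
```

Even with generously cheap rates, pretraining a 7B model lands in the hundreds of thousands of dollars, versus tens to hundreds of dollars for a LoRA fine-tune on a rented GPU.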

Sorry for the negativity but I think creating a base model sounds likely to be a massive waste of resources. If you have a lot of time and money to throw at this project, I think it would be much better spent on fine-tuning existing models.

[–] Blaed 2 points 1 year ago

I wouldn't want to risk a legal battle with a company the size of Meta, so I'd vote for the other options just to be on the safe side.

Completely reasonable, I agree.

Do you have the resources for this to be a viable option?

Where there's a will, there's a way. I could muster the resources for a foundation model, but it's definitely not the most optimal option we have at our disposal. The original plan was (a) fine-tune a small series (short-term), then (b) release a foundation model (long-term). I only recently considered skipping Plan A, but I'm glad the feedback here steered me away from doing so. I would've enjoyed the process nonetheless.

Are you confident that the end result will be better than Mistral? If not, why spend that much on creating something equivalent or possibly even inferior?

Of course not. I don't do this to be the best. I offer to do this to understand. Documenting how to build and release a foundation model from start to finish is knowledge that could be valuable to someone else, which is why I was willing to skip ahead if that was a topic others wanted to dive into. For me, it's more about the friends we make along the way. There is grace in polishing a product and being the best, but I'd like to think there is also something special in doing something just to document it for others. There is something fulfilling about exploring a new frontier with nothing but sheer curiosity.

Then there’s also the question of how long a model is going to be relevant before some other new model with all the latest innovations is released and makes everything else look outdated… Even if you can create a model which rivals llama-2 and mistral now, are you going to create a new one to compete with llama-3 and mistral-2 when those come along?

I also don't do this to be relevant. To be a part of this is enough for me. In my studies, I have found something bigger than myself. I see myself doing this for many years, so I know I'll be around to see it evolve and watch current technologies become irrelevant in time. If you consider existing alongside these models as "competing", then yes, I suppose I would be doing that.

Sorry for the negativity but I think creating a base model sounds likely to be a massive waste of resources. If you have a lot of time and money to throw at this project, I think it would be much better spent on fine-tuning existing models.

Don't worry, it was great feedback. It's exactly why I made this post! I'm glad you made all your points. It's the same logic I had (and the same logic I was willing to set aside for others). At this point, it seems like fine-tuning is what most of you want to see. So fine-tuning it shall be!

[–] [email protected] 2 points 1 year ago (1 children)

I don't have much experience with deep learning; I'm just an enthusiastic spectator. With that said, it seems to me that it would help to build some momentum first with a fine-tuned model based on an existing foundation model. That would make it more feasible to set our eyes on the goal of a new foundation model in the future, with a win under our belt.

Thanks so much for doing this, this seems really cool!

[–] Blaed 2 points 1 year ago (1 children)

I appreciate your comment! It seems like we're going the fine-tuning route. I think it's the best way to do it too. I'm still glad I floated around the foundation model idea. We'll get one of our own eventually!

Welcome to the show! Enthusiast or not, you are part of [email protected]. Your input is valued and your curiosity is encouraged!

[–] [email protected] 1 points 1 year ago

Woohoo! This is exciting :)
