this post was submitted on 01 Jul 2023
35 points (100.0% liked)

Learn Machine Learning


cross-posted from: https://lemmy.intai.tech/post/40699


@Yampeleg: The first model to beat 100% of ChatGPT-3.5. Available on Huggingface.

🔥 OpenChat_8192

🔥 105.7% of ChatGPT (Vicuna GPT-4 Benchmark)

Less than a month ago, the world watched as Orca [1] became the first model ever to outpace ChatGPT on Vicuna's benchmark.

Today, the race to replicate these results in open source comes to an end.

Minutes ago OpenChat scored 105.7% of ChatGPT.
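For context, the "% of ChatGPT" number in a Vicuna-style evaluation is a relative score: a judge model (GPT-4) rates each answer, and the headline figure is the ratio of the candidate's total score to the baseline's total score. The exact rubric below is an assumed sketch, not the official evaluation code:

```python
# Sketch of a Vicuna-style relative score (assumed methodology):
# GPT-4 assigns a quality score to each answer, and the headline
# number is the candidate's score total as a percentage of the
# baseline's (ChatGPT's) score total.
def relative_score(model_scores, baseline_scores):
    """Return model quality as a percentage of the baseline."""
    if len(model_scores) != len(baseline_scores):
        raise ValueError("score lists must cover the same questions")
    return 100.0 * sum(model_scores) / sum(baseline_scores)

# Toy example: the model slightly outscores the baseline overall,
# so the relative score comes out just above 100%.
model = [9, 8, 7, 9]
chatgpt = [8, 8, 7, 8]
print(round(relative_score(model, chatgpt), 1))
```

A score above 100% therefore means "GPT-4 preferred this model's answers, in aggregate, over ChatGPT's."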

But wait! There is more!

Not only did OpenChat beat Vicuna's benchmark, it did so while pulling off a LIMA [2] move!

Training used only 6K GPT-4 conversations selected from the ~90K ShareGPT conversations.
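That curation step amounts to keeping a small, high-quality slice of a much larger dump. A hypothetical sketch of it is below; the `"model"` field name is an assumption for illustration, not the actual ShareGPT schema:

```python
# Hypothetical sketch of the LIMA-style data curation step: keep only
# the GPT-4-attributed conversations out of a larger ShareGPT-like
# dump. Field names here are illustrative assumptions.
def filter_gpt4(conversations):
    """Keep only conversations attributed to GPT-4."""
    return [c for c in conversations if c.get("model") == "gpt-4"]

dump = [
    {"model": "gpt-4", "turns": ["hi", "hello!"]},
    {"model": "gpt-3.5-turbo", "turns": ["hi", "hey"]},
    {"model": "gpt-4", "turns": ["explain LIMA", "..."]},
]
print(len(filter_gpt4(dump)))  # 2 of the 3 conversations survive
```

The LIMA result suggests this kind of aggressive filtering can match far larger datasets, which is what makes the 6K-of-90K ratio notable.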

The model comes in three versions: the basic OpenChat model, OpenChat-8192, and OpenCoderPlus (code generation: 102.5% of ChatGPT).

This is a significant achievement considering that it's the first (released) open-source model to surpass the Vicuna benchmark. 🎉🎉

Congratulations to the authors!!


[1] Orca: the first model to cross 100% of ChatGPT: https://arxiv.org/pdf/2306.02707.pdf
[2] LIMA: Less Is More for Alignment. TL;DR: a small number of very high-quality samples (1,000 in the paper) can be as powerful as much larger datasets: https://arxiv.org/pdf/2305.11206

[โ€“] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Give it 1-2 weeks, someone will post a free one. And I'll post it here.