this post was submitted on 07 May 2023
In many ways, this shouldn't be a surprise to anyone. The current renaissance in open-source LLMs comes hot on the heels of a renaissance in image generation. The similarities are not lost on the community, with many calling this the "Stable Diffusion moment" for LLMs.

In both cases, low-cost public involvement was enabled by a vastly cheaper mechanism for fine-tuning called low-rank adaptation, or LoRA, combined with a significant breakthrough in scale (latent diffusion for image synthesis, Chinchilla for LLMs). In both cases, access to a sufficiently high-quality model kicked off a flurry of ideas and iteration from individuals and institutions around the world. In both cases, this quickly outpaced the large players.
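To make concrete why LoRA is so much cheaper than full fine-tuning, here is a minimal NumPy sketch of the core idea (the shapes, rank, and scaling factor are illustrative assumptions, not taken from any particular model or library):

```python
import numpy as np

# Sketch of low-rank adaptation (LoRA): instead of updating a full
# d_out x d_in weight matrix W during fine-tuning, freeze W and learn
# two small matrices B (d_out x r) and A (r x d_in), with rank r << d.
d_out, d_in, r = 4096, 4096, 8  # illustrative layer size and rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, init 0

def forward(x, alpha=16.0):
    """Adapted layer: W x plus the scaled low-rank update (alpha/r) * B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# Only A and B are trained, so the trainable parameter count collapses:
full_params = d_out * d_in
lora_params = r * (d_out + d_in)
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,} "
      f"({100 * lora_params / full_params:.2f}% of full)")
```

Because B starts at zero, the adapted model is initially identical to the pretrained one, and the update B·A can later be merged back into W at no inference cost.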

These contributions were pivotal in the image generation space, setting Stable Diffusion on a different path from Dall-E. Having an open model led to product integrations, marketplaces, user interfaces, and innovations that didn’t happen for Dall-E.

The effect was palpable: Stable Diffusion rapidly came to dominate in cultural impact, while OpenAI's offering became increasingly irrelevant by comparison. Whether the same thing will happen for LLMs remains to be seen, but the broad structural elements are the same.

The discussion thread for this topic can be found here:

https://news.ycombinator.com/item?id=35813322
