this post was submitted on 27 Jan 2025
136 points (93.0% liked)

The milestone highlights how DeepSeek has left a deep impression on Silicon Valley, upending widely held views about U.S. primacy in AI and the effectiveness of Washington's export controls targeting China's advanced chip and AI capabilities.

Mirror: https://archive.is/2025.01.27-062326/https://www.reuters.com/technology/artificial-intelligence/chinese-ai-startup-deepseek-overtakes-chatgpt-apple-app-store-2025-01-27/

[–] jacksilver 4 points 5 days ago (1 children)

So I'm still on the fence about the AI arms race in general. However, reading up on DeepSeek it feels like they built a model specifically to work well on the benchmarks.

I say this because it's a Mixture of Experts approach, so only parts of the model are active at any given point. The drawback is weaker generalization.
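Roughly, MoE routing works like this (a toy Python sketch, not DeepSeek's actual code; the experts and gate here are made-up scalar functions just to show that only the top-k experts ever run for a given input):

```python
# Minimal Mixture-of-Experts sketch: a gate scores every expert for the
# current input, but only the top-k experts actually execute, so most of
# the model's parameters sit idle on any single forward pass.

def top_k_route(gate_scores, k=2):
    """Return the indices of the k highest-scoring experts."""
    return sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:k]

def moe_forward(x, experts, gate, k=2):
    """Run only the chosen experts and blend their outputs by gate weight."""
    scores = gate(x)
    chosen = top_k_route(scores, k)
    total = sum(scores[i] for i in chosen)
    return sum(scores[i] / total * experts[i](x) for i in chosen)

# Toy setup (hypothetical): four "experts" are simple scalar functions,
# and the gate prefers experts whose index is close to the input value.
experts = [lambda x, s=s: s * x for s in (1.0, 2.0, 3.0, 4.0)]
gate = lambda x: [1.0 / (1.0 + abs(x - i)) for i in range(4)]

y = moe_forward(2.0, experts, gate, k=2)  # only experts 2 and 1 run here
```

The point is the routing: capacity scales with the number of experts, but any one input only exercises a small slice of them, which is also why people worry the unused slices don't generalize.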

Additionally, it isn't a multimodal model, and the only place I've seen real opportunity for workflow automation is with multimodal models. I guess you could use a combination of models, but that's definitely a step back from the grand promise of these foundation models.

Overall, I'm just not sure if this is lay people getting caught up in hype or actually a significant change in the landscape.

[–] [email protected] 8 points 5 days ago (1 children)

they built a model specifically to work well on the benchmarks.

To be fair, I'm pretty sure that's what everyone is doing. If you're not measuring against something, there's no way to tell if you're doing anything at all.

[–] jacksilver 1 points 4 days ago

My point was that a Mixture of Experts model could suffer on generalization. Although, reading more, I'm not sure whether it's the newer R model that has the MoE element.