this post was submitted on 24 Dec 2024

Artificial Intelligence


[email protected] 2 months ago (last edited 2 months ago)

It's certainly not impossible. But my guess is that you learn the new model, and your behavior and expectations change with it. It's a known phenomenon, and I do believe the developers/companies when they say they didn't change anything. It's also quite easy to verify this hypothesis with locally hosted LLMs, since there the weights can't change behind your back. There are probably a few papers covering this already.
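A minimal sketch of that kind of check, assuming you run a fixed set of probe prompts against a locally hosted model with greedy decoding (temperature 0) and compare fingerprints of the outputs over time. The prompts and responses below are placeholders, not real model output:

```python
import hashlib
import json

def fingerprint(responses: list[str]) -> str:
    """Hash a list of model responses so two runs can be compared cheaply."""
    payload = json.dumps(responses, ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Fixed probe prompts you would send to the local model at temperature 0
# (greedy decoding), e.g. via llama.cpp or Ollama -- hypothetical setup.
PROBES = [
    "What is 17 * 23?",
    "Name the capital of France.",
]

# Placeholder responses standing in for the model's actual outputs
# captured on two different dates.
run_in_january = ["391", "Paris"]
run_in_march = ["391", "Paris"]

# Same weights + greedy decoding should give byte-identical outputs,
# so matching fingerprints mean the model itself hasn't changed.
if fingerprint(run_in_january) == fingerprint(run_in_march):
    print("model unchanged")
else:
    print("model (or decoding setup) changed")
```

If the fingerprints ever diverge while the weights on disk are the same, the difference is in your setup (sampling settings, prompt template), not in the model drifting.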

Though it does happen that you get downgraded to a smaller model when using the free tiers of OpenAI, Anthropic, and others. But in my experience that information is always explicit in the UI. Still, it's probably quite easy to miss.

Also, I'm almost exclusively using the free version of Mistral Large (Le Chat), and I've never experienced regression. Mistral also never downgrades you to a smaller model; it just becomes very slow.