this post was submitted on 07 Dec 2024
19 points (72.1% liked)
JetBrains
you are viewing a single comment's thread
view the rest of the comments
Does this work? Given that the LLM doesn't actually know anything or have feelings of uncertainty, surely it just adds a chance that it will say "I don't know" purely at random, without making it any more likely that the answers it does give are correct.
I find it kind of hilarious how almost every prompt I've seen leaked from various apps has a similar clause, as if it had any effect at all on the result.
Seeing engineers resort to what is basically praying and wishful thinking, with no real factual basis, is pretty funny.
"Please, don't give me wrong results 0_0"
@glassware In my experience, it seems to. It's all fuzzy anyway, but I've had it hallucinate APIs less often since adding that to my prompts. I've also done informal A/B tests: sometimes, with Claude, I forget to switch to my own prompt, watch it spew random verbal diarrhea, then switch back and it's noticeably better.
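Concretely, it's just an extra sentence in the system prompt. A rough sketch with the Anthropic Python SDK, since the example above was Claude; the model id, the exact wording of the clause, and the example question are placeholders, not my actual prompt:

```python
# Rough sketch: adding an "admit uncertainty" clause to the system prompt.
# Uses the Anthropic Python SDK (pip install anthropic); the model id,
# clause wording, and example question are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # placeholder model id
    max_tokens=512,
    system=(
        "You are a coding assistant. If you are not certain that an API, "
        "class, or method actually exists, say you don't know instead of guessing."
    ),
    messages=[
        {"role": "user", "content": "Which IntelliJ Platform API lists all open editors?"}
    ],
)

print(message.content[0].text)
```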
It doesn't have feelings, but it does have something like certainty of prediction, in terms of the probability it assigns to each continuation. I think the prompt does influence that.
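You can even see a rough proxy for that in the per-token log probabilities some APIs expose. A minimal sketch with the OpenAI Python SDK, which does return logprobs (the model id and question are placeholders); lower token probabilities roughly indicate a less confident continuation, though they are not a reliability guarantee:

```python
# Minimal sketch: per-token log probabilities as a rough proxy for how
# "certain" the model is about a continuation. Uses the OpenAI Python SDK
# (pip install openai); the model id and question are placeholders.
import math

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model id
    messages=[
        {"role": "user", "content": "Name the Kotlin stdlib function that left-pads a string."}
    ],
    logprobs=True,
    top_logprobs=3,
    max_tokens=64,
)

# Print each generated token with the probability the model assigned to it;
# long runs of low-probability tokens are the "verbal diarrhea" end of the scale.
for tok in resp.choices[0].logprobs.content:
    print(f"{tok.token!r}: p={math.exp(tok.logprob):.3f}")
```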