Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis
(www.theverge.com)
So if I find a single example of an AI doing a reasoning task that isn't in its training material, would you agree that you're wrong and that AI does reason?
You won't find one. LLMs are literally incapable of the kind of reasoning you're talking about. All of their solutions are based on training data, no matter how "original" your problem might seem.
You didn't answer my question.