I’ve noticed this too. It also uses the same turns of phrase across many characters, which makes me think the conversational training data is somewhat limited. One thing you can do is go for the “nuclear” option. I was testing the Strict GM default character on a sci-fi adventure, and the AI became obsessed with putting my party in these crystal caves.
Everything became about the caves. The caves would “feel” my words and “resonate” with every action. I manually edited the caves out of all of its previous replies, then dropped a relevant subject into its last reply.
So in your case, remove everything from its last reply and replace it with just "What is it about video games that you enjoy?" That should nudge it in the right direction. The AI seems to draw randomly from lore, its description, reminders, and recent Q&A, with no sense of which is most important.
As an example, in a recent chat I accidentally ended a sentence with "/" instead of ".", and two replies later the AI ended one of its own sentences with "/".
It took me a while to figure out that the AI does not understand how to match a pronoun with its subject. Too-many-fingers-style aberrations would creep into the text: characters would suddenly swap genders, or one character would magically teleport in to carry out another's actions. It's an annoying problem.