Obviously this is all stupid, and you'll find problems anywhere you choose to look.
The problem I'm finding is this: if Facebook truly is betting on AI getting better as a way to drive growth, why are they further poisoning their own datasets? Even if you exclude everything your own bots say from your training data, which you could probably do since you know who they are, this still encourages more AI slop on the platform. You don't know how much of the "engagement" you're driving (which they are likely just turning around and feeding back into the AI training set) is actually human, AI grifter, or someone poisoning the well by making your AIs talk to themselves. If you actually cared about making your AI better, you couldn't use any of the responses to your bots, since most of them will be of dubious provenance at best.
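To make the asymmetry concrete, here's a minimal sketch of the filtering problem. Everything in it (the `Post` record, `known_bot_ids`, the thread structure) is a made-up illustration, not any real Facebook pipeline: dropping posts *authored* by your bots is trivial, but anything downstream of them in a thread is the dubious-provenance part, so it all has to go.

```python
# Hypothetical sketch only -- post records, author ids, and the
# known_bot_ids set are assumptions for illustration, not a real API.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author_id: str
    parent_id: str | None  # None for top-level posts
    text: str

def filter_training_data(posts: list[Post], known_bot_ids: set[str]) -> list[Post]:
    by_id = {p.post_id: p for p in posts}

    def thread_touches_bot(post: Post) -> bool:
        # Walk up the reply chain: if any ancestor (or the post itself)
        # was written by a known bot, the whole exchange is suspect --
        # the "replies" could be humans, grifters, or other bots.
        cur: Post | None = post
        while cur is not None:
            if cur.author_id in known_bot_ids:
                return True
            cur = by_id.get(cur.parent_id) if cur.parent_id else None
        return False

    # The easy part is dropping what your own bots said; the hard part
    # is that every response to them is of unknown origin, so the whole
    # subtree gets excluded too.
    return [p for p in posts if not thread_touches_bot(p)]
```

Note how much data the second rule throws away: the more bot "engagement" you seed, the less of the resulting conversation you can safely train on.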
Personally I'm rooting for the coming Hapsburg-AI problem, so I don't really mind Facebook deciding more poison is a brilliant business move. But uh... seems real dumb if you're actually interested in having a functional LLM.
You know, there's that old yarn about Alfred Nobel: that his obituary was accidentally published early, and that he was shocked and dismayed to discover the only thing he'd be remembered for was the invention of dynamite. So he went on to create the Nobel Peace Prize, in the hope of contributing something other than death to the world.
I'm not saying Nobel was a fantastic dude, but at least he cared enough not to be remembered as the guy who made it possible for your son to get blown to pieces in a war. He wanted something positive associated with his name.
Even that seems too high a bar for these folks. They've become so entrenched in their own little world that I don't think they much care what anyone outside it thinks.