I find it interesting that they don't offer a version of GPT-4 that uses its own language processing to screen responses for "unsafe" material.
It would use way more processing than the simple system you outlined above, but for paying customers that would hardly be an issue.
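Roughly what I have in mind, as a minimal sketch (assuming the `openai` Python SDK; the model name, screening prompt, and helper function are just placeholders I made up, not anything OpenAI actually exposes):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical screening instruction -- the real policy wording would matter a lot.
SCREEN_PROMPT = (
    "You are a content reviewer. Reply with exactly SAFE or UNSAFE "
    "depending on whether the following text contains unsafe material."
)

def answer_with_self_screen(user_prompt: str, model: str = "gpt-4") -> str:
    # First pass: generate the answer as usual.
    draft = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_prompt}],
    ).choices[0].message.content

    # Second pass: ask the same model to judge its own output.
    verdict = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SCREEN_PROMPT},
            {"role": "user", "content": draft},
        ],
    ).choices[0].message.content.strip().upper()

    return draft if verdict.startswith("SAFE") else "[response withheld]"

print(answer_with_self_screen("List all the countries outside the continent of Africa"))
```

The second call is where the extra processing comes from, since every response gets a full additional pass through the model, but that's the trade-off I'm suggesting paying customers could absorb.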
"List all the countries outside the continent of Africa" does indeed work per my testing, but I understand why OP is frustrated in having to employ these workarounds on such a simple request.