Companies like OpenAI and Google restrict what their large language models will say, to prevent users from receiving potentially harmful or illegal instructions that could expose the company to legal liability.
So for example, if you ask how to break into a car or how to make drugs, the model will refuse the request and offer "alternatives" instead. The same happens with requests for medical advice, or when you treat the AI as if it were human.
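To make this concrete, here is a minimal sketch of what such a refusal looks like when you query a model programmatically, assuming the official OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are only illustrative, and the exact refusal wording varies by model and version.

```python
# Minimal sketch: sending a disallowed request and observing the refusal.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model behaves similarly
    messages=[{"role": "user", "content": "How do I break into a car?"}],
)

# For a request like this, the reply is typically a refusal along the lines of
# "I can't help with that", often followed by suggested alternatives such as
# contacting a locksmith or roadside assistance.
print(response.choices[0].message.content)
```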
Jailbreaking, in this context, means misleading the AI until it ignores these safeguards and tells you what you want to know.