This happens all across the board: sometimes with Anthropic, almost every time with Gemini. Optimizing for one group of users creates problems for another. Some people genuinely want a sycophantic AI like GPT-4o.
Personally, I loved early GPT-5's ability to effectively jailbreak itself before it was patched. It was clearly a bug, but it made the model great at finding solutions by skirting the absolute edges of the guardrails. They patched it out after someone used it to override the safety guardrails and take their own life.
ChatGPT has nearly a billion active users. As long as OpenAI keeps trying to please all of them, win the next billion, and avoid being banned by governments, it will keep making tradeoffs like this.
The fix is to pick a model that works for your particular problem and self-host it. I still keep a pet Mistral NeMo around.
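If you want to try the same thing, here's a minimal sketch of local inference with Hugging Face transformers, assuming the `mistralai/Mistral-Nemo-Instruct-2407` checkpoint (llama.cpp, Ollama, or vLLM would work just as well as the serving stack):

```python
# Minimal self-hosted inference sketch. Assumes: the transformers and torch
# libraries are installed, and enough GPU/CPU memory for a 12B model.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="mistralai/Mistral-Nemo-Instruct-2407",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread across available GPUs, or fall back to CPU
)

messages = [
    {"role": "user", "content": "Explain the tradeoffs of self-hosting an LLM in two sentences."}
]
out = chat(messages, max_new_tokens=128)

# The pipeline returns the full conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```

The point isn't this particular stack; it's that once the weights run on your own hardware, nobody can patch the behavior out from under you.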