One of the things the people who operate LLMs have had to do is install all kinds of filters to keep their models from spouting racial slurs.
This is because they trained on data that was full of racist text, all of which was considered acceptable for training. Then, afterwards, they try to filter out the cruder expressions of racism, but the politely phrased versions pass through untouched.
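To make that concrete, here is a minimal sketch of the surface-level kind of filter being described, assuming a simple blocklist approach. The blocklist contents and function name are placeholders of my own invention, not anyone's real system; production filters are larger and often learned, but the limitation is the same: they match strings, not meaning.

```python
# Hypothetical sketch of a blocklist-style output filter.
# The tokens below are placeholders standing in for explicit slurs.
BLOCKLIST = {"slur_a", "slur_b"}

def passes_filter(text: str) -> bool:
    """Return True if no blocklisted token appears in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words.isdisjoint(BLOCKLIST)

# The crude version is caught:
print(passes_filter("those people are slur_a"))
# -> False (blocked)

# The politely phrased version sails through:
print(passes_filter("studies show those people are less suited to this work"))
# -> True (allowed)
```

The filter never sees the sentiment, only the vocabulary, which is why the rude output gets blocked while the genteel version of the same idea does not.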
They have put garbage in, and no matter how much they filter, they will get garbage out.
Given the amount of money invested in this, I'm inclined to think it's intentional. They picked their data.