Not the model itself, but the X bot. It's obvious this happened because they tweaked the bot; you couldn't get it to write anything like this a couple of weeks ago.
Can you trust the model when the people releasing it are using it this way? Can you trust that they won't train future models to behave the way they're prompting the existing ones to behave?
Anyone with a sharp memory will recall this happening with basically every chatbot trained on text scraped from the internet, before developers had to explicitly program them to avoid it.