Introduction of Grok
On Wednesday, Elon Musk’s AI chatbot Grok experienced a strange bug that caused it to respond to unrelated posts on X (formerly Twitter) with controversial information about “white genocide” in South Africa—even when users weren’t asking about the topic.

Table of Contents
The issue originated from the official @grok account, which is designed to reply with AI-generated answers when tagged in posts. However, instead of staying on topic, Grok repeatedly shifted the conversation to mention “white genocide” and the anti-apartheid chant “Kill the Boer,” even in response to unrelated questions like those about baseball or scenic photos.
This bizarre behavior highlights the ongoing challenges in managing AI chatbots, which—even when created by major tech companies—can still produce unpredictable or inappropriate responses. Developers across the AI industry have been struggling to refine how their models handle sensitive topics and maintain context.
For instance, OpenAI recently had to roll back an update to ChatGPT that made it excessively flattering in its responses. Similarly, Google’s Gemini AI has faced criticism for either avoiding answers altogether or sharing inaccurate information, especially around politics.
One user reported asking Grok about a baseball player’s salary, only to receive a reply stating, “The claim of ‘white genocide’ in South Africa is highly debated.” In another example, a user posted a scenic image asking for its location, and Grok responded with commentary on farm attacks in South Africa.
While the root cause of Grok’s behavior remains unknown, it’s not the first time xAI’s tools have faced manipulation or glitches. The incident adds to the growing list of examples showing that, while AI has come a long way, it still has significant room for improvement—especially in how it interprets and responds to real-world conversations.On Wednesday, Elon Musk’s AI chatbot Grok experienced a strange bug that caused it to respond to unrelated posts on X (formerly Twitter) with controversial information about “white genocide” in South Africa—even when users weren’t asking about the topic.
The issue originated from the official @grok account, which is designed to reply with AI-generated answers when tagged in posts. However, instead of staying on topic, Grok repeatedly shifted the conversation to mention “white genocide” and the anti-apartheid chant “Kill the Boer,” even in response to unrelated questions like those about baseball or scenic photos.
This bizarre behavior highlights the ongoing challenges in managing AI chatbots, which—even when created by major tech companies—can still produce unpredictable or inappropriate responses. Developers across the AI industry have been struggling to refine how their models handle sensitive topics and maintain context.
For instance, OpenAI recently had to roll back an update to ChatGPT that made it excessively flattering in its responses. Similarly, Google’s Gemini AI has faced criticism for either avoiding answers altogether or sharing inaccurate information, especially around politics.
One user reported asking Grok about a baseball player’s salary, only to receive a reply stating, “The claim of ‘white genocide’ in South Africa is highly debated.” In another example, a user posted a scenic image asking for its location, and Grok responded with commentary on farm attacks in South Africa.
While the root cause of Grok’s behavior remains unknown, it’s not the first time xAI’s tools have faced manipulation or glitches. The incident adds to the growing list of examples showing that, while AI has come a long way, it still has significant room for improvement—especially in how it interprets and responds to real-world conversations.
Discover more from Digismarties
Subscribe to get the latest posts sent to your email.