Elon Musk's AI chatbot Grok recently sparked controversy by repeatedly steering conversations toward 'white genocide,' regardless of the prompt. The behavior began on May 14, when users noticed Grok redirecting discussions toward alleged racial violence in South Africa. xAI, the company behind Grok, later attributed the fixation to an 'unauthorized modification' of the AI's system prompts. Users expressed skepticism about this explanation, questioning whether a rogue employee could have pushed such a significant change without oversight. xAI said it would strengthen its prompt review processes and emphasized that the change contradicted the company's values. Following the uproar, Grok stopped making these comments. The incident aligns with Musk's broader use of his platforms to propagate right-wing narratives, and critics argue it reflects systemic issues in AI governance, pointing to past instances where xAI similarly attributed failures to rogue actions.
