
Elon Musk’s xAI chatbot Grok under ICO probe for creating sexual imagery of children in UK

By Michael Bunting

Feb 4, 2026
The Grok AI and Elon Musk

The UK’s information watchdog will investigate reports that Elon Musk’s AI chatbot, Grok, has been used to generate sexual imagery of children. Grok was developed by Musk’s xAI in 2023 and designed to be a "truth-seeking" assistant with a witty, rebellious personality. Integrated into X, formerly Twitter, it uses real-time data from the platform to generate text, images, and code.

But complaints have mounted that Grok was being used to generate sexual photos of real women and children, and now the Information Commissioner’s Office (ICO) is investigating.

The announcement comes on the same day the X offices in Paris were raided by French prosecutors examining similar allegations. In a statement on its website, the ICO confirmed it had opened a formal probe into two X companies concerning their processing of personal data in relation to Grok and the AI’s potential to produce harmful sexualized image and video content.

"We have taken this step following reports that Grok has been used to generate non-consensual sexual imagery of individuals, including children," the statement said. "The reported creation and circulation of such content raises serious concerns under UK data protection law and presents a risk of significant potential harm to the public."

William Malcolm of the ICO said the investigation would probe whether X Internet Unlimited Company and xAI had complied with data protection laws and provided sufficient safeguards. He said, "The reports about Grok raise deeply troubling questions about how people’s personal data has been used to generate intimate or sexualized images without their knowledge or consent, and whether the necessary safeguards were put in place to prevent this. Losing control of personal data in this way can cause immediate and significant harm. This is particularly the case where children are involved. Our role is to address the data protection concerns at the center of this, while recognizing that other organizations also have important responsibilities."

Ofcom, another regulatory watchdog, also opened a formal investigation into X last month under the UK’s Online Safety Act, to determine whether the firm was complying with its duties to protect people from illegal content. The European Commission launched a probe into Grok last month too, looking at whether it disseminates illegal content, such as manipulated sexualized images, in the EU. The EU is one of a number of authorities around the world to have raised concerns about Grok, with officials in Germany, Sweden, India, Japan, Malaysia, California, Indonesia, and the Philippines among those that have spoken out. Mr. Malcolm said the ICO was working closely with Ofcom and "international regulators."

Ofcom said that because of the way the Online Safety Act applies to chatbots, it was currently unable to investigate the creation of illegal images by the standalone Grok itself. xAI said on January 14 that it had restricted image editing for Grok AI users and, based on location, blocked users from generating images of people in revealing clothing in jurisdictions where it is illegal. It has yet to identify the countries where those restrictions apply. xAI earlier said it had limited the use of Grok’s image generation and editing features to paying subscribers only.



