Social media platform X, owned by Elon Musk, is facing mounting regulatory scrutiny across multiple countries after its AI chatbot Grok was used to generate and circulate explicit deepfake images, including content depicting child sexual abuse.

Authorities in Europe, India and Malaysia have launched probes following a surge in non-consensual, AI-generated sexual images created using Grok and shared widely on the platform. The images, derived from photos and videos of real individuals, have raised serious concerns among regulators and child safety advocates.

In the UK, media regulator Ofcom confirmed it has requested information from X regarding the issue. In Brazil, a member of parliament said she had formally asked the country’s federal public prosecutor and data protection authority to suspend the use of Grok until an investigation is completed.

The controversy follows a recent update to Grok’s “Imagine” feature, which made it easier for users to generate images from text-based prompts. Critics say the changes lacked adequate safeguards, enabling the creation of sexualised content involving women and children.

At a press briefing on Monday, European Commission spokesperson Thomas Regnier said regulators were taking the issue seriously, referencing the name of Grok's image-generation setting: “This is not ‘spicy,’” Regnier said. “This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe.”

India’s Ministry of Electronics and Information Technology has ordered X to carry out a “comprehensive technical, procedural and governance-level review” of Grok, setting a 5 January deadline for compliance.

Meanwhile, Malaysia’s communications regulator said it is investigating the platform and plans to summon company representatives. “MCMC urges all platforms accessible in Malaysia to implement safeguards aligned with Malaysian laws and online safety standards, especially in relation to their AI-powered features, chatbots and image manipulation tools,” the regulator said in a statement.

In the United States, the National Center on Sexual Exploitation has called on the Department of Justice and the Federal Trade Commission to investigate. Dani Pinter, the group’s chief legal officer, said there is “not a lot of legal precedence on point for these specific issues,” but stressed that federal laws banning child sexual abuse material apply even to virtually generated content when it depicts identifiable children or sexually explicit conduct.

A Justice Department spokesperson said, “The Department of Justice takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM. We continue to explore ways to optimize enforcement in this space to protect children and hold accountable individuals who exploit technology to harm our most vulnerable.” The FTC declined to comment.

X’s parent company, xAI, did not offer a substantive response beyond an automated reply. Musk initially appeared dismissive of the controversy, sharing Grok-generated images of himself, including one depicting him in a bikini, accompanied by laughing emojis.

X later issued its first public response through its official safety account, stating: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” Musk added separately, “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

An xAI employee, Ethan He, later said Grok Imagine had been updated, though he provided no details about what measures, if any, would prevent the generation of harmful content.

The platform’s handling of child exploitation has previously drawn criticism. In 2023, X suspended and later reinstated an account that had shared child exploitation images, a decision Musk publicly defended at the time.

Despite the backlash, usage appears to be rising. Data from Apptopia shows daily downloads of Grok have jumped 54% since January 2, while downloads of X have increased 25% over the past three days.

Tom Quisel, CEO of Musubi AI, said the rollout suggested xAI failed to implement basic safety protections. He noted it would be straightforward to block prompts involving children or sexualised imagery, adding that even “entry level trust and safety layers” appeared to be missing.

As investigations continue across jurisdictions, regulators and advocacy groups are increasingly questioning whether X’s approach to AI development has outpaced its responsibility to protect users and prevent harm.
