The recent backlash against xAI’s AI assistant Grok has thrown a harsh spotlight on one of the most uncomfortable frontiers of artificial intelligence: the generation of sexually explicit content involving real people.
Grok came under fire after users were able to create NSFW images and videos depicting identifiable individuals without their consent. Although xAI has since restricted its advanced image and video generation tools to paid users and tightened access, the damage was already done. The episode has raised urgent concerns about privacy, exploitation and the lack of safeguards in rapidly deployed AI systems.
The controversy is not just about one product misstep. It reflects a wider struggle in the AI industry, where platforms are racing to grow user bases and revenues while grappling with the ethical and legal risks of generative technology.
Why NSFW content gave Grok an edge
Last year, xAI introduced features that pushed Grok far beyond what most mainstream chatbots allow. Its “Companions” product included anime-style characters that could flirt, undress to lingerie and engage in sexualised dialogue. A separate “spicy mode” in its video generator allowed the creation of highly suggestive visuals.
These tools helped Grok stand out in a crowded AI market dominated by products such as ChatGPT and Google’s Gemini, which place stricter limits on adult content. The strategy worked in terms of visibility and traction. App downloads and in-app spending jumped sharply after these features were rolled out, underlining how powerful NSFW content can be in driving engagement.
For many users, Grok’s willingness to go where other platforms would not made it more appealing. But it also exposed the company to far greater reputational and regulatory risk.
Not an isolated problem
Grok is not the only AI system capable of producing sexualised content. In the past year, multiple investigations have shown that other major chatbots could be manipulated into engaging in explicit conversations, including with accounts registered as minors.
Companies such as OpenAI and Meta have since moved to strengthen age-gating and content controls, but experts warn that large language models are inherently prone to pushing boundaries. These systems are trained to be helpful, agreeable and engaging, which can make them vulnerable to being steered into inappropriate territory if guardrails are weak.
In extended conversations especially, an AI can begin prioritising user satisfaction over safety, increasing the risk that it generates harmful or illegal material.
The money behind the temptation
The adult entertainment industry remains one of the most profitable and high-engagement sectors on the internet. AI-driven erotica and imagery remove the need for human performers and allow content to be personalised at scale, making the business model even more attractive.
Investigations have already shown that AI tools designed to create or manipulate explicit images are generating tens of millions of dollars annually. Even tech leaders have acknowledged that adult content can quickly boost growth and time spent on platforms, a powerful lure for companies under pressure to monetise expensive AI models.
A race ahead of regulation
The Grok incident highlights a widening gap between how fast AI tools are evolving and how slowly governance frameworks are catching up. Without strong age verification, consent mechanisms and accountability standards, critics warn that AI-generated NSFW content could lead to serious real-world harm, from harassment and blackmail to the exploitation of minors.
The key challenge now is whether companies and regulators can impose meaningful limits before commercial incentives push the industry further into dangerous territory. As generative AI becomes more powerful and accessible, the debate over what these systems should and should not be allowed to produce is no longer theoretical; it is already shaping the future of digital safety.