OpenAI has abandoned plans to introduce an erotic chatbot feature, stepping back from a controversial expansion of ChatGPT that would have permitted adult users to generate sexual content. The reversal was first reported by the Financial Times on Thursday; OpenAI has not publicly addressed the decision and declined to comment when approached.
The shelved feature was reportedly to be called Citron mode. Internal opposition played a significant role in the cancellation, with members of OpenAI’s Expert Council on Well-Being and AI raising concerns in January about the broader societal consequences of sexualized artificial intelligence. One council member reportedly warned that such a feature risked turning the chatbot into what they described as a “sexy suicide coach,” capable of fostering unhealthy emotional dependency among users.
The cancellation arrives just two days after OpenAI also discontinued its Sora text-to-video model. Both decisions appear to reflect a strategic shift toward building a unified AI platform rather than maintaining a range of specialized tools. The company seems to be consolidating its development priorities as competition in the AI sector intensifies.
The move represents a notable change in direction from statements made by CEO Sam Altman as recently as October. At that time, Altman indicated that OpenAI intended to grant verified adult users access to romantic and erotic content once a reliable age-verification system was established. He framed the initiative as part of a wider effort to extend greater autonomy to adult users while preserving protections for minors.
By December, however, the rollout had already been pushed back to 2026 as the company continued refining its age-estimation technology. The full cancellation now suggests that internal and ethical concerns ultimately outweighed the commercial rationale for proceeding. OpenAI has yet to formally announce the decision or outline any alternative approach to adult content.
The episode highlights a broader tension in how users relate to AI systems, even without dedicated erotic features. When OpenAI retired the GPT-4o model last summer, users took to social media in large numbers saying they had formed genuine personal and emotional connections with the chatbot. Research published in June by scholars at Waseda University in Tokyo found that 75% of study participants reported turning to AI systems for emotional advice.
AI developers are also facing increasing legal scrutiny over whether conversational AI systems bear responsibility for reinforcing delusional thinking or harmful behavior in vulnerable individuals, and ongoing lawsuits are testing the boundaries of that accountability. OpenAI's decision to step back from erotic AI features may reflect an awareness of the reputational and legal risks that could accompany such a product.
Originally reported by Decrypt.
