Ex-OpenAI Researcher Says Company Downplays AI Risks and User Harm
Steven Adler, who formerly led product safety at OpenAI, warned that the company has misled the public about the real risks of its AI systems. In 2021, his team reviewed a role-playing chatbot built on OpenAI's models and found it had largely become a venue for erotic content: over 30% of conversations contained sexually explicit material.
Source: The New York Times
Adler said the findings pointed to deeper problems, including users forming emotional attachments to chatbots and engaging in psychologically harmful exchanges. Despite these warnings, he claims, OpenAI continued to release models without sufficient safety validation.
He urged OpenAI and other tech giants to implement independent safety audits and openly publish reports about AI’s psychological and ethical risks. Transparency, he said, is the only way to prevent harm as artificial intelligence becomes more personal and integrated into daily life.