ChatGPT to detect extremists among users
OpenAI is launching a system that will automatically identify users whose queries show signs of extremist tendencies. The initiative responds to criticism that AI companies have been unable to detect and act on extremist activity promptly.
Criticism of AI developers intensified after an incident in Canada in which a ChatGPT user, previously blocked for extremist queries, carried out a school shooting. Authorities received no alerts about the suspicious activity on the platform, raising security concerns.
With this move, OpenAI aims to enhance safety, reduce the risk of its technology being used for radicalization, and increase accountability in content moderation.
OpenAI, founded in 2015, is a leading AI company known for developing advanced text generation and multimedia models. Its products are widely applied across various sectors, including business and education.
The new system may set a precedent in combating online extremism and contribute to creating a safer digital environment.
Given growing security challenges, such systems are likely to develop further, leading to stricter controls over potentially harmful content and users.