OpenAI has terminated the accounts of several Chinese users who were found to be using ChatGPT to develop large-scale surveillance tools. According to the company’s security report, these individuals, linked to the Chinese government, used the AI to generate plans for monitoring Western social media platforms (such as Facebook, X, and YouTube) for political, religious, or extremist content. Other reported abuses included seeking tools to track the movements of ethnic minorities and to identify government critics. The incident highlights the challenge of preventing powerful general-purpose AI from being misused for digital espionage and state control, a risk OpenAI is actively combating by enforcing its usage policies.
The rapid evolution of large language models like ChatGPT has brought immense benefits, but also serious risks of misuse. OpenAI, the developer of ChatGPT, recently published an update detailing the measures it has taken to prevent abuse of its technology, highlighting several concerning cases. One of the most significant involved the use of the chatbot to develop sophisticated surveillance tools.
OpenAI confirmed it shut down the accounts of several Chinese users who leveraged ChatGPT to create tools capable of monitoring Western social media platforms. The company reiterated its commitment to developing AI for the benefit of all, explicitly stating its intent to prevent the technology from being used by authoritarian regimes to control their citizens.
The incidents detailed in OpenAI’s report reveal a clear intention to use the AI for large-scale digital espionage. The primary targets were popular Western social media sites and specific political or ethnic groups.
OpenAI noted that while the chatbot in some instances provided only publicly available online information—not sensitive data like names or geographic locations—the intent behind the queries was clearly focused on surveillance and state control.
In response to these security violations, OpenAI acted swiftly, terminating all accounts implicated in the development of surveillance tools.
The company’s October security report also described other forms of illicit use, including the abuse of ChatGPT by malware developers linked to Russian, North Korean, and Chinese groups. Accounts in Cambodia, Myanmar, and Nigeria were also closed over attempted scams and phishing activity.
These cases underscore a critical challenge for all generative AI developers: while the models are designed with safety guardrails to block the generation of malicious content, sophisticated users can often employ clever prompt engineering to bypass these filters, turning a general-purpose AI into a powerful tool for authoritarian surveillance and cybercrime. The continuous cat-and-mouse game between AI security teams and bad actors highlights the urgent need for robust, proactive security measures to prevent technology designed for progress from being weaponised for control.
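To make the idea of a safety guardrail concrete, the snippet below sketches how a third-party application built on the OpenAI API could pre-screen prompts with the Moderation endpoint before passing them to a chat model. It is a minimal illustration only: the model names, refusal message, and decision rule are assumptions for the example, not a description of OpenAI’s internal safety systems, which the report does not detail.

```python
# Minimal sketch of an application-level guardrail built on the OpenAI API.
# Assumptions for illustration: the model names ("omni-moderation-latest",
# "gpt-4o-mini") and the simple flagged/not-flagged decision rule.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def prompt_is_allowed(prompt: str) -> bool:
    """Return True if the Moderation endpoint does not flag the prompt."""
    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    return not moderation.results[0].flagged


user_prompt = "Summarise today's public statements by city officials."

if prompt_is_allowed(user_prompt):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_prompt}],
    )
    print(completion.choices[0].message.content)
else:
    # Whether a given prompt is flagged depends on the moderation model;
    # a real system would log the event and apply its own policy here.
    print("Request refused by the moderation pre-check.")
```

Such pre-checks are only one layer of defence; as the report’s cases show, determined actors probe for phrasings that slip past automated filters, which is why account-level investigation and enforcement remain necessary.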