WUFT-TV/FM | WJUF-FM
A service of the College of Journalism and Communications at the University of Florida.

© 2025 WUFT / Division of Media Properties
News and Public Media for North Central Florida

OpenAI takes down covert operations tied to China and other countries

OpenAI CEO Sam Altman speaks during a conference in San Francisco this week. The company said it has recently taken down 10 influence operations that were using its generative artificial intelligence tools. Four of those operations were likely run by the Chinese government.
Justin Sullivan / Getty Images

Chinese propagandists are using ChatGPT to write posts and comments on social media sites — and also to create performance reviews detailing that work for their bosses, according to OpenAI researchers.

That use of the company's artificial intelligence chatbot to create internal documents, along with another Chinese operation's use of it to create marketing materials promoting its work, comes as China ramps up its efforts to influence opinion and conduct surveillance online.

"What we're seeing from China is a growing range of covert operations using a growing range of tactics," Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, said on a call with reporters about the company's latest threat report.

In the last three months, OpenAI says it disrupted 10 operations using its AI tools in malicious ways, and banned accounts connected to them. Four of the operations likely originated in China, the company said.

The China-linked operations "targeted many different countries and topics, even including a strategy game. Some of them combined elements of influence operations, social engineering, surveillance. And they did work across multiple different platforms and websites," Nimmo said.

One Chinese operation, which OpenAI dubbed "Sneer Review," used ChatGPT to generate short comments that were posted across TikTok, X, Reddit, Facebook and other websites, in English, Chinese and Urdu. Subjects included the Trump administration's dismantling of the U.S. Agency for International Development — with posts both praising and criticizing the move — as well as criticism of a Taiwanese game in which players work to defeat the Chinese Communist Party.

In many cases, the operation generated a post as well as comments replying to it, behavior OpenAI's report said "appeared designed to create a false impression of organic engagement." The operation used ChatGPT to generate critical comments about the game, and then to write a long-form article claiming the game received widespread backlash.

The actors behind Sneer Review also used OpenAI's tools to do internal work, including creating "a performance review describing, in detail, the steps taken to establish and run the operation," OpenAI said. "The social media behaviors we observed across the network closely mirrored the procedures described in this review."

Another operation that OpenAI tied to China focused on collecting intelligence by posing as journalists and geopolitical analysts. It used ChatGPT to write posts and biographies for accounts on X, to translate emails and messages from Chinese to English, and to analyze data. That included "correspondence addressed to a US Senator regarding the nomination of an Administration official," OpenAI said, but added that it was not able to independently confirm whether the correspondence was sent.

"They also used our models to generate what looked like marketing materials," Nimmo said. In those, the operation claimed it conducted "fake social media campaigns and social engineering designed to recruit intelligence sources," which lined up with its online activity, OpenAI said in its report.

In its previous threat report in February, OpenAI identified a surveillance operation linked to China that claimed to monitor social media "to feed real-time reports about protests in the West to the Chinese security services." The operation used OpenAI's tools to debug code and write descriptions that could be used in sales pitches for the social media monitoring tool.

In its new report published on Wednesday, OpenAI said it had also disrupted covert influence operations likely originating in Russia and Iran, a spam operation attributed to a commercial marketing company in the Philippines, a recruitment scam linked to Cambodia, and a deceptive employment campaign bearing the hallmarks of operations connected to North Korea.

"It is worth acknowledging the sheer range and variety of tactics and platforms that these operations use, all of them put together," Nimmo said. However, he said the operations were largely disrupted in their early stages and didn't reach large audiences of real people.

"We didn't generally see these operations getting more engagement because of their use of AI," Nimmo said. "For these operations, better tools don't necessarily mean better outcomes."

Do you have information about foreign influence operations and AI? Reach out to Shannon Bond through encrypted communications on Signal at shannonbond.01

Copyright 2025 NPR

Shannon Bond
Shannon Bond is a correspondent at NPR, covering how misleading narratives and false claims circulate online and offline, and their impact on society and democracy.