It's starting to get harder for humans to detect audio deepfakes.
The term refers to audio created by artificial intelligence that sounds like a real person is talking.
A new study from the University of Florida College of Engineering examines how well people can identify them.
Researchers asked 1,200 participants to judge whether an audio clip was real or a deepfake.
Participants were accurate 73% of the time but struggled with clips featuring foreign accents.
Patrick Traynor, a UF professor, is the study's lead investigator.
"We hope to learn how to effectively train human beings so that we're not just raising alarms," he said. "We're actually giving them the necessary skills to detect deep fakes to be part of a pipeline at stopping this kind of source of misinformation."
According to Deep Media, a media intelligence company, at least 500,000 video and audio deepfakes were shared on social media last year.