Spot “cheap fakes” that undermine trust.
When a video that appeared to show House Speaker Nancy Pelosi slurring her words spread across social media last year, it was quickly revealed as phony. The incident gave many media consumers their first look at an emerging misinformation tactic. “Somebody slowed down the timing of the video, then said [Pelosi] was drunk,” says Britt Paris, who tracks misinformation campaigns as an assistant professor of library and information science at Rutgers University. “People believed it, and it was shared widely.”
Political figures are predictable targets, but any person or organization that relies on its good name—including associations—should be on the lookout as such attacks proliferate, Paris says. She is a coauthor of Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence, a Data & Society report that warns of the lengths to which cyber actors go to manipulate media and mislead people in an effort to break down institutional trust.
Especially in an election year, “deepfakes are becoming a hyped issue,” Paris says. Deepfakes use face-swapping and lip-dubbing tools powered by artificial intelligence that make it nearly impossible to tell that a video or image has been doctored. “It’s a manipulation method designed to be illegible to the common person and creates a picture of the world that isn’t real,” she says.
A ‘cheap fake’ relies on the speed and scale of social media to disseminate and deceive.
While deepfakes are a troubling trend, Paris warns that “cheap fakes” could be the bigger threat to associations and other organizations. “It’s a technically unsophisticated method for developing fake videos and images,” she says. “It relies on the speed and scale of social media to disseminate and deceive.”
Paris outlines three cheap-fake methods that associations should watch out for:
Slowing, speeding, and clipping. Doctored videos like the one that targeted Pelosi are becoming more common, Paris says. In addition to manipulating speed, many use a tactic known as clipping, where segments of recorded speech are broken into snippets, then pieced back together to put words out of context.
Recontextualizing. When an old photo or video is resurfaced and posted with a false narrative or caption, that’s a cheap-fake tactic known as recontextualization. “It has a very low barrier to entry because it takes no technical skill to do,” Paris says.
For the cheap fake to work, the creator just needs a compelling story that can go viral on social media. Associations can guard against recontextualization by running regular media monitoring scans and using reverse image search engines, which help identify where an image was first published and where it has since been reused.
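The monitoring step above can be partially automated in-house. The sketch below is a minimal, assumption-laden illustration: it fingerprints an organization's previously published images and flags exact re-uploads appearing in new contexts. The function names and the sample file name are hypothetical, and a real reverse-image-search service would use perceptual hashing, which also catches resized or re-encoded copies; this exact-hash version only detects byte-identical files.

```python
# Minimal sketch: flag re-circulated copies of previously published images.
# Assumes you hold the raw bytes of your own published assets; all names here
# are illustrative, not part of any specific monitoring product.
import hashlib


def fingerprint(data):
    """Return a SHA-256 fingerprint of an image's raw bytes."""
    return hashlib.sha256(data).hexdigest()


def build_index(published):
    """Map fingerprint -> original context (e.g., first-publication URL)."""
    return {fingerprint(data): source for source, data in published.items()}


def check_reuse(index, candidate):
    """Return the original source if this exact image was published before,
    or None if it is unknown to the index."""
    return index.get(fingerprint(candidate))
```

In practice an association would pair this with a perceptual-hash library or a commercial reverse image search, since cheap-fake creators routinely crop or re-compress images before reposting them.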
Lookalikes and photoshopping. These may pose the biggest threat to associations, Paris says. Lookalikes and photoshopping are used to impersonate an organization’s online identity. “It’s designed to do reputational harm,” she says. She has seen lookalike social media accounts, websites, and even photoshopped images of leaders or brands.
“You need the ability to spot the fake,” Paris says. “Certainly, media literacy, in this day and age, is a valuable and important skill, and unfortunately it’s one of the very few tools we have to protect ourselves.”