Deepfakes: From Entertainment to Enigma
It’s a strange name – but what are deepfakes? There’s a decent chance you’ve seen one without knowing it. The short explanation: they are videos that have been digitally manipulated to make one person appear to be another. The fuller story takes longer to tell, but it’s worth learning where deepfakes came from, how they’ve evolved, and the influence they continue to exert.
The ability to manipulate video has been around longer than you might think. It began with the Video Rewrite program, a research project published in 1997. We would consider it rather primitive now: it simply used existing footage of a person to create a new video of that person speaking different, previously recorded words. The term “deepfakes” itself was coined in 2017, and, not surprisingly for the internet, it was tied to and popularized by pornography. A Reddit community created numerous videos, often placing celebrities’ faces on the bodies of adult film actresses, as well as swapping actors into different movies.
At their inception, one could argue that deepfakes were created largely for simple entertainment. But as the technology evolved, a darker side emerged. The fake pornographic videos of celebrities were used as revenge porn, one way for a scorned lover to strike back at an ex. Before long, users turned to deepfakes to convince others that people, especially politicians, had said almost anything imaginable. Thanks to the growing sophistication of artificial intelligence, spreading disinformation has become remarkably easy; specialized software and advanced skills are no longer necessary. As USC Computer Science Professor Hao Li put it, these new technologies enable creators to produce deepfakes that are “nearly indistinguishable from reality.”
The genie is out of the bottle, and it’s never returning.
It’s a simple truth that humans tend to believe what they see. When a video fits with what viewers already think or believe, the illusion is even more powerful. Whether the deepfake is a doctored video of Donald Trump or of Nancy Pelosi, it can reinforce strongly held beliefs – even when the images are completely false.
Social media companies take a variety of positions on deepfakes. Facebook acknowledged it had a problem. Last year, in an appearance before Congress, Facebook CEO Mark Zuckerberg called deepfakes an “emerging threat.” More recently, Facebook’s VP of Global Policy Management, Monika Bickert, called deepfakes “rare” while also stating the company would “remove misleading manipulated media” if it met certain criteria. On the other hand, social media rivals Snapchat and TikTok wholeheartedly embrace the AI and technology that enable these types of videos, seeming to shrug off the negative potential. Both companies offer highly popular face-swapping features and show no sign of slowing down. TikTok has, however, recently updated its community guidelines, stating that the company will “remove content distributed by disinformation campaigns.”
This past September, Facebook took an unusual step and announced the Deepfake Detection Challenge (DFDC). According to the company, “The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer.” Professor Antonio Torralba of MIT is not part of the Challenge but explains further: “People have manipulated images for almost as long as photography has existed. But it’s now possible for almost anyone to create and pass off fakes to a mass audience. The goal of this competition is to build AI systems that can detect the slight imperfections in a doctored image and expose its fraudulent representation of reality.”
In a world where reality can be altered so easily, each of us faces a decision: accept what’s offered for consumption without question, or dig deeper to determine whether what we see and hear is really the truth.