If you’ve never heard the term “deepfake,” don’t feel bad. After all, it has only been in use since 2017.
In January 2019, CNN Business published a page dedicated to the dangers of deepfake videos. At the top is a section heading, “When seeing is no longer believing.” The page cautioned,
“Deepfake technology could change the game … anyone could have the ability to make a convincing fake video, including some people who might seek to ‘weaponize’ it for political or other malicious purposes.”
Malicious Intent
In 2020, deepfake programs, often used to alter photos or recordings in ways that seriously damage a group’s or individual’s reputation, are popping up all over the Internet.
In August 2019, The New York Times published an Op-Ed titled “This Video May Not Be Real.” It opens with a video clip demonstrating just how convincing video fakes can be.
Julie Smith, a university professor and author of “Master the Media: How Teaching Media Literacy Can Save Our Plugged-In World,” says,
“If a clip we see gives us a strong emotional response, that’s our first clue to check it for authenticity.”
But, human nature being what it is, few people will bother to do that.
And some experts say there may soon be no technical means of telling a fake from a real recording. The fakes will be that good.
The current state of deepfake detection was covered in a Washington Post article dated June 12, 2020, titled “Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned.’”
Goodbye Trust
According to Claire Wardle, author of the New York Times Op-Ed piece,
“The real danger of fakes — deep or shallow — is that their very existence creates a world in which almost everything can be dismissed as false.”
Clearly, that is a dysfunctional state of affairs.
Photo credit: Matrix by Gerd Altmann, License: CC0.