I’m stoked about a session that we have coming up at NAB Show New York. In truth, I’m excited about a bunch of the sessions, but there is one in particular that has me writing this blog. If you follow me at all, you’re probably thinking I’m about to preach the Next Gen TV gospel, but that is another sermon. This time I’m talking about an application at the confluence of computing power, networking, big data, and the rise of Artificial Intelligence (AI) and Machine Learning (ML). This time I’m talking about Deepfakes.
A “deepfake” refers to the process of using computers and AI to manipulate videos or other digital representations so that they seem real, even though they are not. ML algorithms, and generative adversarial networks (GANs) in particular, are becoming so good at creating deepfakes that it can be difficult to tell what is real and what is fake.
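If you are curious about the mechanics behind that, here is a minimal sketch of the adversarial training loop at the heart of a GAN: a generator learns to produce fakes while a discriminator learns to catch them, and each improves by competing against the other. The PyTorch framing, toy dimensions, and random stand-in data are my own illustration, not anything specific to the tools discussed at the session.

```python
# A minimal sketch of the adversarial idea behind a GAN (illustrative only).
# The toy dimensions and random stand-in data are assumptions for this example;
# real deepfake systems train far larger networks on face and video data.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores how "real" a sample looks (as a logit).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(batch, data_dim)       # stand-in for real training data
    fake = G(torch.randn(batch, latent_dim))  # generator's attempt at a fake

    # Discriminator learns to label real samples 1 and fakes 0.
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator learns to make the discriminator call its fakes "real".
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```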
Deepfakes are the smarter, prettier cousin of the cheap fake. Where deepfakes use complex computer models to build very believable outputs, cheap fakes use off-the-shelf software to make minor modifications. The slowed-down Nancy Pelosi video is a great example of a cheap fake that went viral. It racked up millions of views and made her appear to slur her speech as if she were under the influence of drugs or alcohol. You may recall that Facebook refused to take it down even though the company knew it was not real.
Both deepfakes and cheap fakes come with a societal cost. Broadcasters and other media outlets need to be acutely aware of these technologies and how to spot them, and spotting a deepfake will only get harder. What requires a “supercomputer” today will likely be possible on your smartphone in just a few short years. Your iPhone already uses facial recognition, a foundational element in the creation of deepfakes, to unlock the device. Zao, a free face-swapping app that places your likeness into scenes from movies and TV shows, quickly became the #1 app in China, and it is a proof point worth paying attention to.
When combined with smartphones, ubiquitous connectivity, and social media platforms, this fake content can spread like wildfire and potentially find its way into legitimate newscasts. This has serious consequences for journalists, broadcast newsrooms, and our society.
Is That Real? Deepfakes and Trusted Content is a panel where experts will dig into this issue. Join them on Thursday, October 17 at 10 a.m. on Stage 1 of NAB Show New York to learn more about this emerging and important topic.
Register Now

Panelists:
- Hao Li – CEO/Co-founder of Pinscreen, Associate Professor of Computer Science at the University of Southern California, and Director of the Vision and Graphics Lab at the USC Institute for Creative Technologies
- Jareen Imam – Director of Social Newsgathering at NBC News
- Padraic Cassidy – Editor, Automation and News Technology at Reuters
- Clete Johnson (moderator) – Partner, Wilkinson Barker Knauer and former Senior Adviser for Cybersecurity and Technology to the Secretary of Commerce
If you’d like more information, I suggest checking out the following resources.
- Danielle Citron’s TED Talk provides a scary look at how a deepfake was used to attack a journalist.
- Claire Wardle discusses deepfakes and explains the “liar’s dividend” in this New York Times video editorial.
- This Forbes article covers how an audio deepfake was used to steal nearly $250K.
- Data & Society’s Deepfakes and Cheap Fakes – The Manipulation of Audio and Visual Evidence is a detailed primer.
- LikeWar – The Weaponization of Social Media is a book that details how misinformation campaigns originate and spread via social media.
I hope to see you at this session, as well as at other locations and events throughout NAB Show New York.