The snapshots above look like people you’d know. Your daughter’s best friend from college, maybe? That guy from human resources at work? The emergency-room doctor who took care of your sprained ankle? One of the kids from down the street?

Nope. All of these images are “deepfakes” — the nickname for computer-generated, photorealistic media created via cutting-edge artificial intelligence technology. They are just one example of what this fast-evolving method can do. (You could create synthetic images yourself at ThisPersonDoesNotExist.com.) Hobbyists, for example, have used the same AI techniques to populate YouTube with a host of startlingly lifelike video spoofs — the kind that show real people such as Barack Obama or Vladimir Putin doing or saying goofy things they never did or said, or that revise famous movie scenes to give actors like Amy Adams or Sharon Stone the face of Nicolas Cage. All the hobbyists need is a PC with a high-end graphics chip, and maybe 48 hours of processing time.

It’s good fun, not to mention jaw-droppingly impressive. And coming down the line are some equally remarkable applications that could make quick work of once-painstaking tasks: filling in gaps and scratches in damaged images or video; turning satellite photos into maps; creating realistic streetscape videos to train autonomous vehicles; giving a natural-sounding voice to those who have lost their own; turning Hollywood actors into their older or younger selves; and much more.

Deepfake artificial-intelligence methods can map the face of, say, actor Nicolas Cage onto anyone else — in this case, actor Amy Adams in the film Man of Steel.

CREDIT: MEMES COFFEE

Yet this technology has an obvious — and potentially enormous — dark side. Witness the many denunciations of deepfakes as a menace, Facebook’s decision in January to ban (some) deepfakes outright and Twitter’s announcement a month later that it would follow suit.

“Deepfakes play to our weaknesses,” explains Jennifer Kavanagh, a political scientist at the RAND Corporation and coauthor of “Truth Decay,” a 2018 RAND report about the diminishing role of facts and data in public discourse. When we see a doctored video that looks utterly real, she says, “it’s really hard for our brains to disentangle whether that’s true or false.” And the internet being what it is, there are any number of online scammers, partisan zealots, state-sponsored hackers and other bad actors eager to take advantage of that fact.

“The threat here is not, ‘Oh, we have fake content!’” says Hany Farid, a computer scientist at the University of California, Berkeley, and author of an overview of image forensics in the 2019 Annual Review of Vision Science. Media manipulation has been around forever. “The threat is the democratization of Hollywood-style technology that can create really compelling fake content.” It’s photorealism that requires no skill or effort, he says, coupled with a social-media ecosystem that can spread that content around the world with a mouse click.

Digital image forensics expert Hany Farid of UC Berkeley discusses how artificial intelligence can create fake media, how it proliferates and what people can do to guard against it.

CREDIT: HUNNIMEDIA FOR KNOWABLE MAGAZINE

The technology gets its nickname from Deepfakes, an anonymous Reddit user who launched the movement in November 2017 by posting AI-generated videos in which the faces of celebrities such as Scarlett Johansson and Gal Gadot are mapped onto the bodies of porn stars in action. This kind of non-consensual celebrity pornography still accounts for about 95 percent of all the deepfakes out there, with most of the rest being jokes of the Nicolas Cage variety.

But while the current targets are at least somewhat protected by fame — “People assume it’s not actually me in a porno, however demeaning it is,” Johansson said in a 2018 interview — abuse-survivor advocate Adam Dodge figures that non-celebrities will increasingly be targeted, as well. Old-fashioned revenge porn is a ubiquitous feature of domestic violence cases as it is, says Dodge, who works with victims of such abuse as the legal director for Laura’s House, a nonprofit agency in Orange County, California. And now with deepfakes, he says, “unsophisticated perpetrators no longer require nudes or a sex tape to threaten a victim. They can simply manufacture them.”

Then there’s the potential for political abuse. Want to discredit an enemy? Indian journalist Rana Ayyub knows how that works: In April 2018, her face was in a deepfake porn video that went viral across the subcontinent, apparently because she is an outspoken Muslim woman whose investigations offended India’s ruling party. Or how about subverting democracy? We got a taste of that in the fake-news and disinformation campaigns of 2016, says Farid. And there could be more to come. Imagine it’s election eve in 2020 or 2024, and someone posts a convincing deepfake video of a presidential candidate doing or saying something vile. In the hours or days it would take to expose the fakery, Farid says, millions of voters might go to the polls thinking the video is real — thereby undermining the outcome and legitimacy of the election.

Meanwhile, don’t forget old-fashioned greed. With today’s deepfake technology, Farid says, “I could create a fake video of Jeff Bezos saying, ‘I’m quitting Amazon,’ or ‘Amazon’s profits are down 10 percent.’ ” And if that video went viral for even a few minutes, he says, markets could be thrown into turmoil. “You could have global stock manipulation to the tune of billions of dollars.”

And beyond all that, Farid says, looms the “terrifying landscape” of the post-truth world, when deepfakes have become ubiquitous, seeing is no longer believing and miscreants can bask in a whole new kind of plausible deniability. Body-cam footage? CCTV tapes? Photographic evidence of human-rights atrocities? Audio of a presidential candidate boasting he can grab women anywhere he wants? “DEEPFAKE!”

Deepfake video methods can digitally alter a person’s lip movements to match words that they never said. As part of an effort to grow awareness about such technologies through art, the MIT Center for Advanced Virtuality created a fake video showing President Richard Nixon giving a speech about astronauts being stranded on the moon.

CREDIT: HALSEY BURGUND / MIT CENTER FOR ADVANCED VIRTUALITY

Thus the widespread concern about deepfake technology, which has triggered an urgent search for answers among journalists, police investigators, insurance companies, human-rights activists, intelligence analysts and just about anyone else who relies on audiovisual evidence.

Among the leaders in that search has been Sam Gregory, a documentary filmmaker who has spent two decades working for WITNESS, a human-rights organization based in Brooklyn, New York. One of WITNESS’ major goals, says Gregory, is to help people in troubled parts of the world take advantage of dramatically improved cell-phone cameras “to document their realities in ways that are trustworthy and compelling and safe to share.”

Unfortunately, he adds, “you can’t do that in this day and age without thinking about the downsides” of those technologies — deepfakes being a prime example. So in June 2018, WITNESS partnered with First Draft, a global nonprofit that supports journalists grappling with media manipulation, to host one of the first serious workshops on the subject. Technologists, human-rights activists, journalists and people from social-media platforms developed a roadmap to prepare for a world of deepfakes.

More WITNESS-sponsored meetings have followed, refining that roadmap down to a few key issues. One is a technical challenge for researchers: Find a quick and easy way to tell trustworthy media from fake. Another is a legal and economic challenge for big social media platforms such as Facebook and YouTube: Where does your responsibility lie? After all, says Farid, “if we did not have a delivery mechanism for deepfakes in the form of social media, this would not be a threat that we are concerned about.”

And for everyone there is the challenge of education — helping people understand what deepfake technology is, and what it can do.

First, deep learning

Deepfakes have their roots in the triumph of “neural networks,” a once-underdog form of artificial intelligence that has re-emerged to power today’s revolution in driverless cars, speech and image recognition, and a host of other applications.

Although the neural-network idea can be traced back to the 1940s, it began to take hold only in the 1960s, when AI was barely a decade old and progress was frustratingly slow. Trivial-seeming aspects of intelligence, such as recognizing a face or understanding a spoken word, were proving to be far tougher to program than supposedly hard skills like playing chess or solving cryptograms. In response, a cadre of rebel researchers declared that AI should give up trying to generate intelligent behavior with high-level algorithms — then the mainstream approach — and instead do so from the bottom up by simulating the brain.

To recognize what’s in an image, for example, a neural network would pipe the raw pixels into a network of simulated nodes, which were highly simplified analogs of brain cells known as neurons. These signals would then flow from node to node along connections: analogs of the synaptic junctions that pass nerve impulses from one neuron to the next. Depending on how the connections were organized, the signals would combine and split as they went, until eventually they would activate one of a series of output nodes. Each output, in turn, would correspond to a high-level classification of the image’s content — “puppy,” for example, or “eagle,” or “George.”
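
For the curious, that signal flow looks something like the toy Python sketch below — made-up weights and a handful of nodes, nothing like the millions of connections in a real network:

```python
import numpy as np

# A toy network: pixel values flow through two layers of simulated nodes
# (neurons) via weighted connections, ending at output nodes that stand
# for labels such as "puppy," "eagle" or "George."
rng = np.random.default_rng(0)

pixels = rng.random(64)               # a tiny 8x8 "image," flattened into 64 numbers
W1 = rng.normal(size=(16, 64))        # connections from the inputs to a hidden layer
W2 = rng.normal(size=(3, 16))         # connections from the hidden layer to the outputs
labels = ["puppy", "eagle", "George"]

hidden = np.maximum(0, W1 @ pixels)   # each node combines its incoming signals, then "fires"
scores = W2 @ hidden                  # one score per output node
probs = np.exp(scores - scores.max()) / np.exp(scores - scores.max()).sum()

print("best guess:", labels[int(np.argmax(probs))])
```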

The payoff, advocates argued, was that neural networks could be much better than standard, algorithmic AI at dealing with real-world inputs, which tend to be full of noise, distortion and ambiguity. (Say “service station” out loud. They’re two words — but it’s hard to hear the boundary.)

And better still, the networks wouldn’t need to be programmed, just trained. Simply show your network a few zillion examples of, say, puppies and not-puppies, like so many flash cards, and ask it to guess what each image shows. Then feed any wrong answers back through all those connections, tweaking each one to amplify or dampen the signals in a way that produces a better outcome next time.
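
In code, that train-by-flash-card loop can be sketched as follows — a deliberately tiny Python example that learns a made-up “puppy” rule, but the guess-compare-nudge cycle is the same one real networks run at vastly larger scale:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "flash cards": 200 feature vectors standing in for images,
# labeled 1 for puppy and 0 for not-puppy by a made-up rule.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(10)                           # the connection weights, untrained at first
for step in range(500):
    guess = 1 / (1 + np.exp(-(X @ w)))     # the network's current answers, between 0 and 1
    error = guess - y                      # how wrong each guess was
    w -= 0.1 * (X.T @ error) / len(y)      # feed the errors back, nudging every connection

print(f"flash cards answered correctly: {((guess > 0.5) == y).mean():.0%}")
```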

The earliest attempts to implement neural networks weren’t terribly impressive, which explains the underdog status. But in the 1980s, researchers greatly improved the performance of networks by organizing the nodes into a series of layers, which were roughly analogous to different processing centers in the brain’s cortex. So in the image example, pixel data would flow into an input layer; then be combined and fed into a second layer that contained nodes responding to simple features like lines and curves; then into a third layer that had nodes responding to more complex shapes such as noses; and so on.

Later, in the mid-2000s, exponential advances in computer power allowed advocates to develop networks that were far “deeper” than before — meaning they could be built not just with one or two layers, but dozens. The performance gains were spectacular. In 2009, neural network pioneer Geoffrey Hinton and two of his graduate students at the University of Toronto demonstrated that such a “deep-learning” network could recognize speech much better than any other known method. Then in 2012, Hinton and two other students showed that a deep-learning network could recognize images better than any standard vision system — and neural networks were underdogs no more. Tech giants such as Google, Microsoft and Amazon quickly started incorporating deep-learning techniques into every product they could, as did researchers in biomedicine, high-energy physics and many other fields.

Graphic compares a simplified diagram of a 1980s-era neural network, which has only one processing layer of nodes between input and output, with today’s more robust ones, which may have three or many more layers.
The neural-network approach to artificial intelligence is designed to model the brain’s neurons and links with a web of simulated nodes and connections. Such a network processes signals by combining and recombining them as they flow from node to node. Early networks were small and limited. But today’s versions are far more powerful, thanks to modern computers that can run networks both bigger and “deeper” than before, with their nodes organized into many more layers.

Yet as spectacular as deep learning’s successes were, they almost always boiled down to some form of recognition or classification — for example: Does this image from the drone footage show a rocket launcher? It wasn’t until 2014 that a PhD student at the University of Montreal, Ian Goodfellow, showed how deep learning could be used to generate images in a practical way.

Goodfellow’s idea, dubbed the generative adversarial network (GAN), was to gradually improve an image’s quality through competition — an ever-escalating race in which two neural networks try to outwit each other. The process begins when a “generator” network tries to create a synthetic image that looks like it belongs to a particular set of images — say, a big collection of faces. That initial attempt might be crude. The generator then passes its effort to a “discriminator” network that tries to see through the deception: Is the generator’s output fake, yes or no? The generator takes that feedback, tries to learn from its mistakes and adjusts its connections to do better on the next cycle. But so does the discriminator — on and on they go, cycle after cycle, until the generator’s output has improved to the point where the discriminator is baffled.
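
The two-network contest can be sketched in a few dozen lines of code. The example below assumes the PyTorch library and substitutes a simple one-dimensional toy distribution for a collection of face images, but it follows the same alternating generator-versus-discriminator cycle described above:

```python
import torch
from torch import nn

# Toy stand-in for "real images": numbers drawn from a bell curve centered at 4.0.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for cycle in range(5000):
    # Discriminator's turn: learn to call real samples "real" (1)
    # and the generator's output "fake" (0).
    real = real_batch()
    fake = generator(torch.randn(64, 8))
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: adjust its connections so the discriminator is fooled.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After enough cycles, the generator's fakes should cluster near the "real" value of 4.0.
print("fakes now centered near:", generator(torch.randn(1000, 8)).mean().item())
```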

The images generated for that first GAN paper were low-resolution and not always convincing. But as Facebook’s AI chief Yann LeCun later put it, GANs were “the coolest idea in deep learning in the last 20 years.” Researchers were soon jumping in with a multitude of variations on the idea. And along the way, says Farid, the quality of the generated imagery increased at an astonishing rate. “I don’t think I’ve ever seen a technology develop as fast,” he says.

Diagram shows the process by which one computer synthesizes fake images and another tries to tell whether those are real or fake. Millions of cycles of back and forth can lead to more realistic images.
A generative adversarial network (GAN) learns to generate images by pitting two neural networks against each other. The generator network tries to create an image that looks real, while a discriminator network tries to see through the deception — and both try to learn from their mistakes. After enough cycles, the results can look startlingly realistic.

Make it till you fake it

What few researchers seemed to have anticipated, however, was that the malicious uses of their technology would develop just as rapidly. Deepfakes, the Redditor, kicked things off on November 2, 2017, with the launch of the subreddit /r/deepfakes as a showcase for GAN-generated porn. Specifically, he announced that he was using open-source AI software from Google and elsewhere to implement previously scattered academic research on face-swapping: putting one person’s face onto another person’s body.

Face swapping was nothing new. But in conventional applications such as Photoshop, it had always been a digital cut-and-paste operation that looked pretty obvious unless the replacement face had the same lighting and orientation as the original. The GAN approach made such swapping seamless. First, the generator network would be trained on thousands of images and videos of, say, the film star Gadot, producing a 3D model of her head and face. Next, the generator could use this model to map Gadot’s face digitally onto the target, giving it the same alignment, expressions and lighting as an actor in, say, an adult film. And then it would repeat the process frame by frame to make a video.
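
In outline, that frame-by-frame process looks like the skeleton below, which assumes the OpenCV library and a hypothetical input file; the swap_face function is only a stand-in for a trained face-mapping model, not code from any actual deepfake tool:

```python
import cv2  # assumes OpenCV is installed

def swap_face(frame):
    # Placeholder for a trained face-mapping model: detect the target's face and
    # re-render it with the source identity under matching pose, expression and
    # lighting. Here it simply returns the frame unchanged.
    return frame

cap = cv2.VideoCapture("target_clip.mp4")          # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("swapped.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(swap_face(frame))                    # repeat the swap frame by frame

cap.release()
out.release()
```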

Deepfakes’ initial results weren’t quite good enough to fool a careful observer — but still gained a following. He soon had a host of imitators, especially after another anonymous individual packaged Deepfakes’ algorithm into self-contained “FakeApp” software and launched it as a free download in January 2018. Within a month, sensing a lawsuit waiting to happen, Reddit had banned non-consensual deepfake porn videos outright — as did the world’s largest porn sharing site, Pornhub. (“How bad do you have to be to be banned by Reddit and Pornhub at the same time?” says Farid.)

Yet other sites were already coming online, and the flood continued — as did anxieties about the software’s potential use in the fake-news era. Heightening those anxieties were two variations that predated Deepfakes’ face-swap software, and that many experts consider more threatening. “Puppet master,” a facial-reenactment approach pioneered by researchers at Stanford University and others in 2016 with the goal of improving facial motion-capture techniques for video games and movies, allowed an actor to impose her facial expressions on someone in, say, a YouTube video — in real time. “Lip sync,” pioneered at the University of Washington in 2017 with similar goals, could start with an existing video and change the targeted person’s mouth movements to correspond to a fake audio track.

A video describes one of the technologies that could be used to make convincing deepfakes. This approach offers a way to capture an actor’s facial expressions and embed them on an existing video of another person.

CREDIT: ASSOCIATION FOR COMPUTING MACHINERY (ACM)

Legitimate goals or not, these two techniques together could make anyone say or do anything. In April 2018, film director Jordan Peele dramatized the issue with a lip-synced fake video of former President Obama giving a public-service announcement about the dangers of deepfakes on Buzzfeed. And later that same year, when WITNESS’ Gregory and his colleagues started working on the roadmap for dealing with the problem, the issue took the form of a technical challenge for researchers: Find a simple way to tell trustworthy media from fake.

Detecting forgeries

The good news is that this already was a research priority well before deepfakes came along, in response to the surge in low-tech “cheapfakes” made with non-AI-based tools such as Photoshop. Many of the counter-techniques have turned out to apply to all forms of manipulation.

Images show pixel changes generated by computer to transform one image into another.
One approach uses generative adversarial network techniques to turn real images into altered ones. A program called CycleGAN transforms horses into zebras, winter into summer and vice versa.

CREDIT: J. ZHU ET AL / ICCV 2017

Broadly speaking, the efforts to date follow two strategies: authentication and detection.

With authentication — the focus of startups such as Truepic in La Jolla, Calif., and Amber Video in San Francisco — the idea is to give each media file a software badge guaranteeing it has not been manipulated.

In Truepic’s case, says company vice president Mounir Ibrahim, the software looks like any other smartphone photo app. But in its inner workings, he says, the app inscribes each image or video it records with the digital equivalent of a DNA fingerprint. Anyone could then verify that file’s authenticity by scanning it with a known algorithm. If the scan and fingerprint do not match, it means the file has been manipulated.
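
Truepic’s exact scheme is proprietary, but the underlying idea resembles an ordinary cryptographic hash. A minimal Python sketch, using a hypothetical file name:

```python
import hashlib
from pathlib import Path

def fingerprint(path):
    """Digest the file's raw bytes into a short, hard-to-forge fingerprint."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# At capture time, the app records the fingerprint somewhere tamper-proof.
recorded = fingerprint("photo.jpg")        # hypothetical file name

# Later, anyone can re-scan the file and compare.
if fingerprint("photo.jpg") != recorded:
    print("File has been altered since capture")
else:
    print("File matches its original fingerprint")
```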

To safeguard against fakery of the digital fingerprints, the software creates a public record of each fingerprint using the blockchain technique originally developed for cryptocurrencies like Bitcoin. “And what that does is close the digital chain of custody,” says Ibrahim. Records in the blockchain are effectively un-hackable: They are distributed across computers all over the world, yet are so intertwined that you couldn’t change one without making impossibly intricate changes to all the others simultaneously.
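
A toy, single-machine hash chain illustrates why: each record’s fingerprint folds in the one before it, so rewriting any entry breaks every later link. (Real blockchains add distribution across many computers and consensus rules on top of this basic idea.)

```python
import hashlib, json

def chain_append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def chain_valid(chain):
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev": block["prev"]}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False                              # the record itself was altered
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False                              # the link to the previous record is broken
    return True

ledger = []
for fp in ["a1b2...", "c3d4...", "e5f6..."]:          # fingerprints of successive media files
    chain_append(ledger, fp)

print(chain_valid(ledger))                            # True
ledger[1]["record"] = "tampered"                      # try to rewrite history...
print(chain_valid(ledger))                            # False: the chain exposes the edit
```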

Amber Video’s approach is broadly similar, says company CEO Shamir Allibhai. Like Ibrahim, he emphasizes that his company is not associated with Bitcoin itself, nor dependent on cryptocurrencies in any way. The important thing about a blockchain is that it’s public, distributed and immutable, says Allibhai. So you could share, say, a 15-minute segment of a surveillance video with police, prosecutors, defense attorneys, judges, jurors, reporters and the general public — across years of appeals — and all of them could confirm the clip’s veracity. Says Allibhai: “You only need to trust mathematics.”

Skeptics say that a blockchain-based authentication scheme isn’t likely to become universal for a long time, if ever; it would essentially require every digital camera on the planet to incorporate the same standard algorithm. And while it could become viable for organizations that can insist on authenticated media — think courts, police departments and insurance companies — it doesn’t do anything about the myriad digital images and videos already out there.

Thus the second strategy: detecting manipulations after the fact. That’s the approach of Amsterdam-based startup DeepTrace, says Henry Ajder, head of threat intelligence at the company. The trick, he says, is not to get hung up on specific image artifacts, which can change rapidly as technology evolves. He points to a classic case in 2018, when researchers at the State University of New York in Albany found that figures in deepfake videos don’t blink realistically (mainly because the images used to train the GANs rarely show people with their eyes shut). Media accounts hailed this as a foolproof way of detecting deepfakes, says Ajder. But just as the researchers themselves had predicted, he says, “within a couple of months new fakes were being developed that were perfectly good at blinking.”

A better detection approach is to focus on harder-to-fake tells, such as the ones that arise from the way a digital camera turns light into bits. This process starts when the camera’s lens focuses incoming photons onto the digital equivalent of film: a sensor chip that collects the light energy in a rectangular grid of pixels. As that happens, says Edward Delp, an electrical engineer at Purdue University in Indiana, each sensor leaves a signature on the image — a subtle pattern of spontaneous electrical static unique to that chip. “You can exploit those unique signatures to determine which camera took which picture,” says Delp. And if someone tries to splice in an object photographed with a different camera — or a face synthesized with AI — “we can detect that there is an anomaly.”
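
A greatly simplified version of that idea can be sketched in a few lines: pull out the faint noise residual and correlate it against a camera’s known “signature.” Real sensor-noise forensics is far more elaborate; this sketch assumes NumPy and SciPy and uses random arrays as stand-ins for actual images.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image):
    """Approximate the sensor's faint noise pattern: the image minus a smoothed copy."""
    return image - gaussian_filter(image, sigma=2)

def camera_match_score(image, reference_pattern):
    """Correlate the residual with a camera's known noise signature."""
    r = noise_residual(image).ravel()
    p = reference_pattern.ravel()
    return np.corrcoef(r, p)[0, 1]   # near zero suggests this camera didn't take the shot

# Hypothetical inputs: a grayscale image and a reference pattern estimated
# from many known photos taken by the same camera.
img = np.random.rand(256, 256)
ref = noise_residual(np.random.rand(256, 256))
print(camera_match_score(img, ref))
```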

Similarly, when algorithms like JPEG and MPEG-4 compress image and video files to a size that can be stored on a digital camera’s memory chip, they produce characteristic correlations between neighboring pixels. So again, says Jeff Smith, who works on media forensics at the University of Colorado, Denver, “if you change the pixels due to face swapping, whether traditional or deepfake, you’ve disrupted the relationships between the pixels.” From there, an investigator can build a map of the image or video showing exactly which parts have been manipulated.
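
As a toy illustration of that idea — not Smith’s actual method — one can measure how strongly neighboring pixels track each other, block by block, and flag blocks that look out of step with the rest of the frame:

```python
import numpy as np

def neighbor_correlation_map(image, block=16):
    """Toy manipulation map: per-block correlation between horizontally adjacent pixels."""
    rows, cols = image.shape[0] // block, image.shape[1] // block
    cmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i*block:(i+1)*block, j*block:(j+1)*block]
            left, right = patch[:, :-1].ravel(), patch[:, 1:].ravel()
            cmap[i, j] = np.corrcoef(left, right)[0, 1]
    return cmap

# Blocks whose pixel-to-pixel correlation looks very different from the rest of
# the image are candidates for having been pasted in or re-synthesized.
img = np.random.rand(256, 256)                    # hypothetical grayscale frame
cmap = neighbor_correlation_map(img)
suspects = np.abs(cmap - np.median(cmap)) > 3 * cmap.std()
print(f"{suspects.sum()} suspicious blocks out of {cmap.size}")
```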

Unfortunately, Smith adds, neither the noise nor the compression technique is bulletproof. Once the manipulation is done, “the next step is uploading the video to the internet and hoping it goes viral.” And as soon as that happens, YouTube or another hosting platform puts the file through its own compression algorithm — as does every site the video is shared with after that. Add up all those compressions, says Smith, “and that can wash out the correlations that we were looking for.”

That’s why detection methods also have to look beyond the pixels. Say you have a video that was recorded in daylight and want to check whether a car shown was really in the original. The perspective could be off, Smith says, “or there could be shadows within the car image that don’t match the shadows in the original image.” Any such violation of basic physics could be a red flag for the detection system, he says.

In addition, says Delp, you could check an image’s context against other sources of knowledge. For example, does a rainy-day video’s digital timestamp match up with weather-service records for that alleged place and time? Or, for a video shot indoors: “We can extract the 60-hertz line frequency from the lights in the room,” Delp says, “and then determine geographically whether the place you say the video is shot is where the video was actually shot.” That’s possible, he explains, because the ever-changing balance of supply and demand means that the power-line frequency is never precisely 60 cycles per second, and the power companies keep very careful records of how that frequency varies at each location.
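
The hum-tracking idea, at least, is simple enough to sketch: slice the recording into windows, find the exact spectral peak near 60 hertz in each, and compare that sequence of readings against the utility’s logs. The toy Python example below manufactures its own faint hum rather than using a real recording:

```python
import numpy as np

def enf_track(audio, sample_rate, window_s=10.0):
    """Track the exact power-line hum frequency (near 60 Hz) window by window."""
    step = int(window_s * sample_rate)
    readings = []
    for start in range(0, len(audio) - step + 1, step):
        chunk = audio[start:start + step] * np.hanning(step)
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(step, d=1.0 / sample_rate)
        band = (freqs > 59.0) & (freqs < 61.0)      # look only near 60 Hz
        readings.append(freqs[band][np.argmax(spectrum[band])])
    return np.array(readings)

# A made-up one-minute "recording": a faint 60.02 Hz hum buried in noise.
sr = 8000
t = np.arange(sr * 60) / sr
audio = 0.05 * np.sin(2 * np.pi * 60.02 * t) + np.random.randn(len(t))
print(enf_track(audio, sr))   # these per-window readings are what get matched to utility logs
```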

Or maybe you have a video of someone talking. One way to tell if the speaker has been deepfaked is to look at their unconscious habits — quirks that Farid and his colleagues have found to be as individual as fingerprints. For example, says Farid, “when President Obama frowns, he tends to tilt his head downward, and when he smiles he tends to turn his head upward and to the left.” Obama also tends to have very animated eyebrows. “Senator [Elizabeth] Warren, on the other hand, tends to not smile a lot, but moves her head left and right quite a bit.” And President Donald Trump tends to be un-animated except for his chin and mouth, says Farid. “They are extremely animated in a very particular way.”

Farid and his colleagues have analyzed hours upon hours of videos to map those quirks for various world leaders. So if anyone tries to create a deepfake of one of those leaders, he says, “whether it’s a face swap, a lip sync or a puppet master, those properties are disrupted and we can detect it with a fair degree of accuracy.”
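
Conceptually, such a check can be sketched as follows. This is only an illustration of the general idea, not Farid’s actual system, and it assumes some face-tracking tool has already turned each video into per-frame measurements of head and facial movements:

```python
import numpy as np

def mannerism_signature(features):
    """Summarize a speaker's habits as the correlations among tracked movements.

    `features` is a (frames x traits) array -- e.g. head tilt, eyebrow raise and
    smile intensity per frame, as produced by some face-tracking tool (assumed).
    """
    c = np.corrcoef(features, rowvar=False)
    return c[np.triu_indices_from(c, k=1)]       # the pairwise correlations

def consistency(reference_clips, suspect_clip):
    """How far does the suspect clip's signature fall outside the person's usual range?"""
    sigs = np.array([mannerism_signature(clip) for clip in reference_clips])
    mean, std = sigs.mean(axis=0), sigs.std(axis=0) + 1e-9
    z = (mannerism_signature(suspect_clip) - mean) / std
    return np.abs(z).mean()                      # large values suggest the habits are "off"

# Hypothetical data: 20 authentic clips and one suspect clip, 5 tracked traits each.
rng = np.random.default_rng(2)
authentic = [rng.normal(size=(300, 5)) for _ in range(20)]
suspect = rng.normal(size=(300, 5))
print(consistency(authentic, suspect))
```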

Not by technology alone

To spur the development of such technologies, tech companies such as Amazon, Facebook and Microsoft have joined with WITNESS, First Draft and others to launch a Deepfake Detection Challenge: Participants submit code that will be scored on how well it can detect a set of reference fakes. Cash prizes range up to $500,000.

In the meantime, though, there is a widespread consensus in the field that technology will never solve the deepfake challenge by itself. Most of the issues that have surfaced in the WITNESS roadmap exercises are non-technological. What’s the best way of raising public awareness, for example? How do you regulate the technology without killing legitimate applications? And how much responsibility should be borne by social media?

In April 2018, director Jordan Peele worked with Buzzfeed Media to demonstrate the power of deepfakes. They created a video of former President Barack Obama (voiced by Peele) giving a public-service announcement about the dangers of deepfakes — an announcement that Obama never made.

CREDIT: BUZZFEEDVIDEO

Facebook partially answered that last question on January 6, when it announced it would ban any deepfake intended to deceive viewers. It remains to be seen how effective that ban will be. And in any case, there are other platforms still to be heard from, and many issues still unresolved.

On the technological front, should the platforms start demanding a valid authentication tag for each file that’s posted? Or failing that, do they have a responsibility to check each file for signs of manipulation? It won’t be long before the detection algorithms are fast enough to keep up, says Delp. And the platforms already do extensive vetting to check for copyright violations, as well as for illegal content such as child pornography, so deepfake checks would fit right in. Alternatively, the platforms could go the crowdsourcing route by making the detection tools available for users to flag deepfakes on their own — complete with leaderboards and bounties for each one detected.

There’s a gnarlier question, though — and one that gets caught up in the larger debate over fake news and disinformation in general: What should sites do when they discover bogus posts? Some might be tempted to go the libertarian route that Facebook has taken with deceptive political ads, or non-AI video manipulation: Simply label the known fakes and let the viewers beware.

Unfortunately, that overlooks the damage such a post can do — given that a lot of people will believe it’s real despite the “fake” label. Should, then, sites take down deepfakes as soon as they are discovered? That’s pretty clear-cut when it comes to non-consensual porn, says Farid: Not only do most mainstream sites ban pornography of any kind, but even those that don’t can see there’s a specific person being hurt. But there is a reason that Facebook’s ban carves out a large exception for Nicolas Cage-style joke videos, and deepfakes that are clearly intended as satire, parody or political commentary. In those cases, says Farid, “the legislative and regulatory side is complex because of the free-speech issue.”

A lot of people are afraid to touch this question, says Kavanagh, the RAND political scientist. Particularly in the Silicon Valley culture, she says, “there’s often a sense that either we have no regulation, or we have a Ministry of Truth.” People aren’t thinking about the vast gray area in between, which is where past generations hammered out workable ways to manage print newspapers, radio and television. “I don’t necessarily have a good answer to what we should do about the new media environment,” Kavanagh says. “But I do think that the gray areas are where we need to have this conversation.”