Deepfakes are getting biologically realistic
If you’ve ever seen a deepfake video online, whether a politician saying something they never said or a celebrity starring in a movie they never filmed, you know how eerie and convincing they can be. Just a few years ago, these synthetic videos were clunky, with strange blinking patterns or stiff facial movements that made them easy to spot. But today, thanks to advances in artificial intelligence, deepfakes are becoming so realistic that even trained experts can be fooled.
Now, a group of researchers from Berlin has made a startling discovery: high-quality deepfakes don’t just look real; they can also “borrow” something very human: a heartbeat.
Published in Frontiers in Imaging, the study “High-quality deepfakes have a heart!” comes from a team at the Fraunhofer Heinrich-Hertz-Institute and Humboldt University in Berlin. Led by Clemens Seibold, Eric L. Wisotzky, and Professor Peter Eisert, the researchers focused on a long-standing assumption in digital forensics: that while deepfakes may look convincing, they can’t reproduce the subtle biological signals present in real human faces.
It turns out that assumption is no longer true.
Each heartbeat pushes blood through the vessels in your face, causing minute color changes in your skin. These fluctuations are invisible to the naked eye, but they can be detected by a technique called remote photoplethysmography (or rPPG). Using just a camera, rPPG algorithms can pick up tiny variations in facial color caused by blood flow, effectively estimating your heart rate without needing to touch you.
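To make the idea concrete, here is a minimal rPPG sketch in Python. It is not the study’s pipeline, just a rough illustration of a common textbook approach: average the green channel over the detected face, band-pass filter that signal to the plausible human pulse range, and read the heart rate off the dominant frequency. The OpenCV and SciPy calls are standard; the function itself is an assumption-laden sketch.

```python
# Minimal rPPG sketch (not the study's pipeline): estimate heart rate
# from a face video by tracking tiny color changes in the skin.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(video_path: str) -> float:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    greens = []  # mean green-channel value of the face region, per frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        roi = frame[y:y + h, x:x + w]
        # The green channel carries the strongest pulse signal
        greens.append(roi[:, :, 1].mean())
    cap.release()

    signal = np.asarray(greens) - np.mean(greens)
    # Band-pass to the plausible human pulse range (0.7-3 Hz, i.e. 42-180 bpm)
    b, a = butter(3, [0.7, 3.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)
    # The dominant frequency of the filtered signal is the heart-rate estimate
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    return 60.0 * freqs[np.argmax(spectrum)]
```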
For years, forensic experts believed this was a smoking gun for spotting deepfakes. Real faces should have a natural pulse signal; fake ones shouldn’t. A famous project called FakeCatcher, for instance, built a deepfake detector based on this very principle.
But when the Berlin team tested modern deepfakes, they found something unexpected: the fakes had heartbeats too.
To dig deeper, the researchers built a new analysis pipeline that could extract rPPG signals from videos, filter out background noise, and compare the results with ground-truth heart rates recorded via electrocardiogram (ECG).
Where other studies rely on existing datasets, which often contain low-quality or heavily compressed videos, the Berlin team created its own high-quality dataset in a controlled studio. Volunteers of different ages and backgrounds were recorded under even lighting while their heart activity was simultaneously tracked with medical sensors.
With this setup, the team could compare three things: real videos with real heartbeats, deepfakes generated from those videos, and older public datasets for comparison.
Their findings show how quickly deepfake technology is evolving. The deepfakes, whether made with advanced academic techniques or with publicly available tools like DeepFaceLive, showed valid heart rate signals. Even more intriguingly, the pulse in the fake wasn’t random: it closely matched the heartbeat of the person in the original “driver” video that powered the fake.
In other words, the deepfake wasn’t generating a fake heartbeat; it was inheriting a real one.
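As a rough illustration of how such an inheritance check might look (a hypothetical sketch, not the authors’ code, with placeholder file names), one could reuse the estimate_heart_rate helper sketched earlier:

```python
# Hypothetical check of the "inherited heartbeat" effect: the pulse
# recovered from the deepfake should track the pulse of the driver video
# that animated it. File names are placeholders.
driver_bpm = estimate_heart_rate("driver_actor.mp4")
fake_bpm = estimate_heart_rate("deepfake_output.mp4")

# A small discrepancy (threshold chosen purely for illustration) suggests
# the fake carried over the driver's physiological signal.
if abs(driver_bpm - fake_bpm) < 3.0:
    print(f"Deepfake pulse ({fake_bpm:.1f} bpm) matches the driver ({driver_bpm:.1f} bpm)")
else:
    print("Pulse signals diverge; the fake did not inherit the driver's heartbeat")
```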
For years, digital forensics experts have looked for biological inconsistencies as a way to separate the real from the fake. Heartbeat signals seemed like a promising path: a physiological truth that deepfake algorithms couldn’t replicate.
But this study shows that the line of defense has been breached. As Professor Eisert and his colleagues explain, global heart rate analysis is no longer enough to catch high-quality fakes. Today’s deepfakes carry over the subtle biological rhythms of the original footage, making them appear more authentic than ever.
That doesn’t mean all hope is lost. The researchers propose shifting from global pulse detection to a more detailed analysis of how blood flow is distributed across different regions of the face. Real faces have plausible, anatomically consistent blood flow patterns. Deepfakes, while good at copying overall rhythms, may still struggle to reproduce these fine-grained local details.
Preliminary experiments using this approach already look promising, with the team reporting strong results when training detectors on localized rPPG maps.
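To give a flavor of what “localized” analysis could mean in practice, here is a hypothetical sketch, not the published method: divide the face into a grid of patches, measure the pulse strength in each, and produce a spatial map that a detector could be trained on.

```python
# Illustrative sketch of a localized rPPG map (not the study's method):
# divide the face ROI into a grid and extract a per-patch pulse strength,
# yielding a spatial map a detector could be trained on.
import numpy as np

def local_pulse_map(face_frames: np.ndarray, fps: float, grid: int = 4) -> np.ndarray:
    """face_frames: array of shape (T, H, W, 3) holding aligned face crops."""
    t, h, w, _ = face_frames.shape
    pulse_map = np.zeros((grid, grid))
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)  # plausible human pulse range
    for i in range(grid):
        for j in range(grid):
            patch = face_frames[:, i * h // grid:(i + 1) * h // grid,
                                   j * w // grid:(j + 1) * w // grid, 1]
            signal = patch.mean(axis=(1, 2))  # per-frame mean green value
            signal = signal - signal.mean()
            spectrum = np.abs(np.fft.rfft(signal))
            # Pulse strength: in-band energy relative to total energy
            pulse_map[i, j] = spectrum[band].sum() / (spectrum.sum() + 1e-9)
    return pulse_map
```

The intuition is that a real face should show anatomically plausible variation across such a map, while a fake that copies only the global rhythm may fail to reproduce those local patterns.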
The discovery that deepfakes can “beat with a heart” strikes at the core of a global challenge: how can we trust what we see in the age of synthetic media? Deepfakes aren’t just internet pranks anymore. They’ve been used to impersonate world leaders like Barack Obama and Donald Trump, to harass private citizens, and even to commit financial fraud. As they become more sophisticated, the stakes for detection rise dramatically.
If biological signals like heartbeats are no longer reliable markers, researchers and policymakers will need new strategies. Possibilities include multi-modal detection, which combines visual, audio, and physiological cues; the localized physiological analysis proposed in this study; authentication at the source, so that videos carry cryptographic signatures proving their origin; and public awareness campaigns to help everyday users understand the risks of synthetic media.
So, what does it mean when fake faces can mimic a heartbeat? On one hand, it’s a fascinating technical achievement that shows how far generative AI has come in reproducing the fine details of human physiology. On the other, it’s a reminder that our defenses against misinformation must evolve just as quickly as the fakes themselves. Looking for a heartbeat is no longer enough. We need to look at where and how that heartbeat appears in the face.
If you want to learn more, the original article titled "High-quality deepfakes have a heart!" is available on Frontiers in Imaging at http://dx.doi.org/10.3389/fimag.2025.1504551.