How Our Brains Detect Deepfake Voices
In the digital age, the ability to distinguish between real and artificially generated content is becoming vital. Recent research led by scientists from the University of Zurich, in collaboration with our very own Dr Thaya Kathiresan of Speech Pathology at The University of Melbourne, has uncovered how our brains process natural versus deepfake voices.
Published in Communications Biology, this groundbreaking study reveals that different brain regions respond uniquely to authentic and deepfake voices, offering new insights into the neural mechanisms behind our ability to detect deceptive information.
Using advanced neuroimaging techniques, the research team identified a central cortical-striatal network in the brain that plays a key role in differentiating between natural speakers and their deepfake counterparts. This network, involving regions such as the auditory cortex and the nucleus accumbens, was found to decode vocal acoustic patterns and register the degree of deepfake manipulation.
To obtain these findings, the researchers used high-quality deepfake technologies to create voice identity clones of natural speakers. In an identity matching task, 25 participants detected the deepfakes in about two-thirds of cases while their brain activity was recorded. The results contribute to our understanding of how the brain processes deepfake information and suggest potential pathways for strengthening human resilience against digital deception.
Citation: Roswandowitz, C., Kathiresan, T., Pellegrino, E. et al. Cortical-striatal brain network distinguishes deepfake from real speaker identity. Commun Biol 7, 711 (2024). https://doi.org/10.1038/s42003-024-06372-6