Deepfakes have come a long way since the term's debut on Reddit in 2017. Recent breakthroughs in AI allow clearer, more convincing video and audio to be fabricated. While these advances hold positive possibilities, deepfake technology also enables more sophisticated fraud against the banking industry, which by its own admission remains unprepared.
In a recent European study of AI-driven identity fraud, researchers found that deepfakes account for 6.5 percent of all fraud attempts against financial institutions, a staggering 2,137 percent increase over the past three years. Even more worrisome, organizations reported that the deck is stacked against them: they lack budget, expertise and time. That concern is shared by 88 percent of U.S. fraud experts, who agree that AI-generated fraud will worsen before effective prevention is established; their conclusions appear in a 2023 report on synthetic fraud from Virginia-based Wakefield Research. Synthetic fraud fabricates identities by combining real Social Security numbers with fake names and addresses; deepfakes now add a new level of complexity.
Though the threat of AI fraud is well understood, confusion reigns over the best prevention measures. Security companies are leveraging AI to build cutting-edge solutions, but that only partially eases the burden on financial institutions. Fraud is costly, both in damages and in defense: one in five finance professionals surveyed by Wakefield Research put the average loss per fraud incident between $50,000 and $100,000, and almost a quarter put it at $100,000 or more. News of a far heftier loss came at the beginning of 2024, when a finance worker at a multinational company transferred $26 million to scammers. Their weapon of choice: deepfaked video calls.
While detection solutions are developed, experts emphasize banks' continued need for airtight security procedures and awareness at the individual level. Andrew Froehlich, founder of InfraMomentum and president of West Gate Networks, both of Loveland, Colo., offers tips to help individuals spot deepfake content. For video, Froehlich suggests examining facial and body movements for discrepancies and trusting unsettled intuitive responses. When a participant is speaking, look for poor synchronization between lip movements and audio. Blinking isn't instinctive for AI simulations, so watch for irregular or nonexistent blinking. Unnatural shadows and reflections can also be a tell; pay attention to background surfaces.
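The blinking cue can even be turned into a rough automated check. The sketch below is a minimal, hypothetical heuristic, not any vendor's actual detector: given a per-frame eye-aspect-ratio (EAR) series, a measure real pipelines typically extract with facial-landmark tools, it counts blinks and flags clips whose blink rate falls outside a normal human range. The threshold and rate bounds are illustrative assumptions, not calibrated values.

```python
# Illustrative blink-rate heuristic (sketch only, not a production detector).
# Assumes a precomputed eye-aspect-ratio (EAR) value per video frame; real
# pipelines would derive EAR from facial landmarks via a vision library.

def count_blinks(ear_series, threshold=0.21):
    """Count blinks: contiguous runs of frames where EAR dips below threshold."""
    blinks = 0
    below = False
    for ear in ear_series:
        if ear < threshold and not below:
            blinks += 1       # falling edge: a blink begins
            below = True
        elif ear >= threshold:
            below = False     # eye has reopened
    return blinks

def blink_rate_suspicious(ear_series, fps, low=8, high=30):
    """Flag clips whose blinks-per-minute fall outside a typical human band.

    The 8-30 blinks/minute band is an assumption for illustration; people
    average roughly 15-20 blinks per minute at rest.
    """
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return True
    rate = count_blinks(ear_series) / minutes
    return not (low <= rate <= high)
```

A clip in which the EAR never dips, meaning the subject never blinks, would be flagged; so would one with implausibly frequent blinking.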
Froehlich also points out that deepfakes commonly add artificial noise to audio to hide inconsistencies. Pairing this knowledge with identity verification can help prevent losses from fraudulent calls, according to generative AI expert Chris Snider, a professor at Drake University in Des Moines, Iowa. In an article from Mills Marketing, Snider stressed the importance of adhering to identification protocols and being willing to hang up and call back at a verified number.
Though AI-driven identity fraud threatens the future of banking, experts say prevention at the local and individual level must not be neglected while large-scale defense measures are under development.