AI-generated fraud isn’t coming; it’s already here. Criminals use synthetic voices to redirect veterans’ benefits. They generate fake medical documents to claim reimbursements. They deploy deepfakes to bypass identity verification. Most notably, British engineering giant Arup was defrauded of $25M using video and voice clones. And unlike traditional fraud schemes, which require individual effort for each attempt, AI lets fraudsters increase their chances of success by scaling, personalizing, and automatically adjusting attacks on the fly.
As a result, the FBI recently warned that criminals “exploit generative artificial intelligence (AI) to commit fraud on a larger scale which increases the believability of their schemes.” Meanwhile, the Department of the Treasury reported that “FinCEN has observed an increase in suspicious activity reporting by financial institutions describing the suspected use of deepfake media in fraud schemes…”
The tools for these attacks are widely available and becoming more sophisticated. In February 2025, ByteDance, the Chinese tech giant behind TikTok, unveiled OmniHuman-1, an advanced AI model capable of creating highly realistic deepfake videos from a single still image. And Consumer Reports recently tested six voice-cloning products and found that they offered limited safeguards against misuse. While deepfake videos are growing more realistic, and are a rising concern for those responsible for identity verification, audio and text attacks have already emerged as immediate threats to federal programs.
Given the significant risk to federal agencies, Booz Allen conducts ongoing research into the specific techniques used in deepfake fraud and develops dynamic defenses against these attacks. In this blog, we share insights into key trends and the emerging best practices federal agencies can adopt to defend themselves.