AI is changing the face of identity fraud. Or at least how that face is generated.
The quality of fraudulent IDs has always varied – from obvious fakes, easily caught by the naked eye, to high-quality counterfeits detectable only with specialized authentication hardware and software. Cheaper production methods, foreign imports, and darkweb knowledge-sharing between fraudsters are increasing both the accessibility and sophistication of fake identity documents, and the ease with which they can be used in complex identity fraud scams.
While the full impact of identity fraud-related costs is difficult to assess, Javelin Research from 2021 suggests the figure is upwards of $50 billion per year for US businesses. And with businesses losing an average of 5% of their revenue as a direct result of fraud, the true cost is far greater. Only by adapting to the latest fraud methods, many of them driven by AI, can businesses stem these losses.
Fraudulent IDs: A New Threat
In 2025, most fake IDs look genuine to the human eye. Businesses of all types, from retailers and financial institutions to car dealerships and casinos, have steadily come to grips with the evolving challenges that fake IDs and deepfakes present. In the US, organizations large and small deploy digital and physical scanning to stop fraudsters using fake IDs. In doing so, they protect their customers, revenues, and brand reputation against fraud by individuals and organized criminal networks.
The rise of AI-generated IDs poses a serious threat to digital transactions for three key reasons.
- The physical and digital processes businesses use to catch fraudulent IDs are not created equal. Less sophisticated solutions may not be advanced enough to identify emerging fraud methods.
- With AI-generated ID images readily available on the dark web for as little as $5, ownership and usage are proliferating.
- IDScan.net research from 2024 found that 78% of consumers pointed to the misuse of AI as their core fear around identity protection, and 55% believe current technology isn’t enough to protect their identities. Left unchallenged, AI fraud will erode consumer trust, alter purchasing behavior, and hurt business bottom lines.
Despite the furor around nefarious, super-advanced AI, generating AI IDs is fairly rudimentary. Darkweb suppliers rely on PDF417 barcode generators and ID image generators, using a degree of automation to place data inputs onto a contextual background. Easy-to-use tools such as Thispersondoesnotexist make it simple for anyone to cobble together a quality fake ID image and a synthetic identity.
To deter would-be buyers of AI-generated fake IDs, the identity verification industry needs to demonstrate that its solutions are advanced enough to spot them, even as they increase in quality.
With this in mind, IDScan.net took to the dark web and purchased 200 ID images, putting them to the test against the latest in identity verification solutions.
AI Fake IDs: Are They Better?
Non-AI-generated fake IDs processed through IDScan.net’s proprietary and third-party checks are caught at an average rate of 95%. In our study, we caught 99.6% of AI-generated fake IDs.
The higher catch rate for AI-generated IDs is, in part, because they aren’t as methodically produced as more traditional fakes, which fraudsters have had a long time to refine. The AI being used to create these fraudulent IDs struggles with the intricacies of the differing templates and data syntax across states.
Each state’s ID has its own system for encoding personal data into the barcode, and even the slightest discrepancy is enough for ID verification systems to flag a fake. When the data is in the wrong order or format, it is a clear indication that the ID is fraudulent.
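To make that concrete, below is a minimal sketch of the kind of syntax check a verification system might run on decoded PDF417 barcode text. The three-letter element IDs (DCS, DAC, DBB, DBA, DAJ) come from the AAMVA DL/ID standard, but the parsing approach, the field subset, and the checks shown here are illustrative assumptions rather than IDScan.net’s production logic.

```python
import re
from datetime import datetime

# A small subset of mandatory AAMVA DL/ID data elements (illustrative only).
# DCS = family name, DAC = first name, DBB = date of birth,
# DBA = expiration date, DAJ = issuing jurisdiction code.
REQUIRED_ELEMENTS = ["DCS", "DAC", "DBB", "DBA", "DAJ"]

def parse_elements(barcode_text: str) -> dict:
    """Split decoded PDF417 text into AAMVA element ID -> value pairs."""
    elements = {}
    for line in barcode_text.splitlines():
        line = line.strip()
        if re.match(r"^[A-Z]{3}", line):
            elements[line[:3]] = line[3:]
    return elements

def syntax_flags(elements: dict) -> list:
    """Return a list of red flags; an empty list means no syntax issues were found."""
    flags = [f"missing mandatory element {e}" for e in REQUIRED_ELEMENTS
             if e not in elements]
    # US jurisdictions encode dates as MMDDCCYY; an impossible or malformed
    # date is a strong indicator of a generated barcode.
    for element_id in ("DBB", "DBA"):
        value = elements.get(element_id, "")
        try:
            datetime.strptime(value, "%m%d%Y")
        except ValueError:
            flags.append(f"{element_id} is not a valid MMDDCCYY date: {value!r}")
    return flags
```

Real verification systems layer many more rules on top of this – header fields, checksums, jurisdiction-specific formats, cross-field consistency – but even simple checks like these trip up barcodes generated without reference to the actual specification.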
Along with data syntax, AI fake ID generators also have trouble with the differing state design templates. ID templates are created at the state level; data placement and design elements vary widely and must appear in exactly the right location and at the right size. If an ID claiming to be from Arizona carries background design elements from New Mexico’s ID, it can be instantly flagged as fraudulent. Suppliers of AI IDs are still honing their techniques to mimic the templates and data syntax of legitimate IDs.
Additionally, the use of AI-generated fake ID images often comes with specific digital behaviors that can be monitored and flagged during the digital identity verification process. These suspicious activities can trigger a deeper review of the ID image and face match, allowing verification technology to stop more fraudsters.
However, AI continues to evolve, so it is not a leap to imagine that the next generation of AI IDs will be more convincing than their predecessors.
Where are identity solutions winning?
When it comes to the specific weaknesses of AI-generated IDs, one factor stood out. Across our study, 24.2% of AI IDs showed evidence of photo tampering. While not always obvious to the human eye, verification systems identified even the smallest signs of document tampering. That is good news for businesses that have implemented a document tampering solution, but it is also evidence that barcode and OCR validation alone may not be enough to catch tampered documents.
Our tests also found a few other key areas that sound the alarm on an AI ID. Blurring across the document was often a tell-tale giveaway. Equally, some AI-generated IDs either lack holograms and watermarks entirely or render them incorrectly, whereas genuine IDs carry these features in specific places and at specific sizes according to the state’s template.
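Blur is one of the easier signals to quantify. The sketch below scores image sharpness using the variance of the Laplacian, a common approach in OpenCV-based pipelines; the threshold value is a placeholder assumption, not a figure from our study, and hologram or watermark checks would require template-specific models that are not shown here.

```python
import cv2

def blur_score(image_path: str) -> float:
    """Variance of the Laplacian: lower values indicate a blurrier image."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(f"could not read image: {image_path}")
    return cv2.Laplacian(image, cv2.CV_64F).var()

# Placeholder threshold: in a real pipeline this would be tuned against a
# sample of known-genuine ID captures rather than hard-coded.
BLUR_THRESHOLD = 100.0

def is_suspiciously_blurry(image_path: str) -> bool:
    return blur_score(image_path) < BLUR_THRESHOLD
```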
Finally, AI ID generators generally lack a sophisticated understanding of barcode data formatting. Every jurisdiction and document type has different expected fields, which today’s AI fails to replicate properly in the majority of cases.
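One way to picture how those expectations can be encoded is a rule table keyed by jurisdiction and document type, as in the toy sketch below. The element IDs are real AAMVA codes, but the table entries are invented placeholders; real rule sets are far larger, distinguish mandatory from optional elements, and are maintained against each jurisdiction’s published barcode specification.

```python
# Toy rule table: expected AAMVA element IDs per (jurisdiction, document type).
# The entries below are placeholders, not the actual requirements of these states.
EXPECTED_FIELDS = {
    ("AZ", "DL"): {"DAQ", "DCS", "DAC", "DBB", "DBA", "DBD", "DAJ", "DCG"},
    ("NM", "DL"): {"DAQ", "DCS", "DAC", "DBB", "DBA", "DBD", "DAJ"},
}

def field_flags(jurisdiction: str, doc_type: str, elements: dict) -> list:
    """Compare decoded element IDs against what this document type should carry."""
    expected = EXPECTED_FIELDS.get((jurisdiction, doc_type))
    if expected is None:
        return [f"no rule set for {jurisdiction}/{doc_type}"]
    present = set(elements)
    flags = [f"missing expected element {e}" for e in sorted(expected - present)]
    flags += [f"unexpected element {e}" for e in sorted(present - expected)]
    return flags
```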
State by State
The diversification of identity documents throughout the US creates an ever-changing cycle of new IDs, introducing risk for businesses that lack the knowledge or software to assess them at a nationwide level. This is not a new problem, but if AI IDs produce an uptick in the volume of fakes, the issue of varying ID formats is compounded, because the telltale signs of a fake differ from state to state.
Our study revealed that the quality of fake IDs, analyzed through various physical and software-based fraud prevention tools, varied state by state. While states such as New York, Texas, and Arizona see the most frequent use of fake IDs, according to IDScan.net’s 2023 and 2024 Annual Fake ID Reports, they were also the easiest to catch. We caught 100% of the fakes from these high-volume states.
The reason for this may come back to quality. In states where the fake ID market is more saturated, and fakes are therefore more common, fraudsters producing AI IDs may operate on smaller margins, turning IDs around more quickly and ultimately putting fewer resources into producing a truly robust fake.
While this is a positive for businesses using verification solutions right now, it should not breed complacency – AI IDs will continue to increase in quality, and the risk of failing to catch them will grow in turn.
As things stand
While our study concluded that AI IDs are currently easier to catch than traditional fakes, we must warn businesses that AI IDs will improve quickly. To mitigate the risk of ever-improving AI fraud, businesses need AI-ready methods of their own, by way of best-in-class identity verification solutions.
Only time will tell how sophisticated AI IDs will become, but if we get ahead of the challenge now, we will be better placed to mitigate their risks in the future.
About the Author

Jimmy Roussel is COO of IDScan.net. We build technology that builds trust. IDScan.net offers the leading AI-powered identity verification platform, focused on age validation and fraud reduction for compliance in both digital and physical environments.