
Fraudulent Users & Risk in Digital Systems: Threats and Prevention in 2025
OnSefy Team
Jul 22, 2025
The rapid evolution of fraud in digital systems—from fake signups and bots to deepfakes and synthetic identities—presents mounting threats to SaaS platforms, financial institutions, and user research. This article synthesizes recent data, trends, and academic frameworks to map the fraud landscape in 2025 and outlines proven strategies for detection and mitigation.
1. The Evolving Fraud Landscape
1.1 Fake signups & bots in SaaS
- In early 2025, 52% of SaaS fraud incidents stemmed from fake registrations using disposable email addresses, spoofed IPs, and virtual phone numbers; bots accounted for 21% of malicious web traffic, and 64% of overall site traffic was automated ([Alloy][1], [Mitek Systems][2], [Wikipedia][4]).
- Roughly 33% of new freemium SaaS users register with disposable email addresses ([onsefy.com][3]); a minimal screening sketch follows below.
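To make the signup-screening idea concrete, here is a minimal Python sketch of a disposable-email and IP-reputation check. The blocklist, thresholds, and the `ip_reputation_score` input are illustrative assumptions, not OnSefy's actual implementation.

```python
# Minimal signup screening sketch: flag registrations that use known
# disposable-email domains. The blocklist is a tiny illustrative sample;
# production systems typically rely on maintained lists or an external
# reputation API.

DISPOSABLE_DOMAINS = {
    "mailinator.com",
    "10minutemail.com",
    "guerrillamail.com",
}

def is_disposable_email(address: str) -> bool:
    """Return True if the email's domain appears on the blocklist."""
    try:
        domain = address.rsplit("@", 1)[1].strip().lower()
    except IndexError:
        return True  # malformed address: treat as suspicious
    return domain in DISPOSABLE_DOMAINS

def screen_signup(email: str, ip_reputation_score: float) -> str:
    """Combine the email check with an (assumed) IP reputation score
    in [0, 1]; higher means riskier. Thresholds are placeholders."""
    if is_disposable_email(email) or ip_reputation_score > 0.8:
        return "reject"
    if ip_reputation_score > 0.5:
        return "review"
    return "accept"

print(screen_signup("bot@mailinator.com", 0.2))  # reject
print(screen_signup("user@example.org", 0.6))    # review
```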
1.2 Credential stuffing & account takeover
- Around 20% of login requests are credential-stuffing attacks, typically replaying credentials leaked in earlier data breaches; a simple velocity-check sketch follows this list.
- Account takeover remains a top threat in SaaS environments; hardening authentication is the primary line of defense ([Paddle][5]).
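As a rough illustration of how credential-stuffing bursts can be caught, the sketch below applies a sliding-window velocity rule to failed logins per source IP. The window length and failure threshold are placeholder values, not tuned recommendations.

```python
# Sliding-window velocity check: many failed logins from one IP within a
# short window is a common credential-stuffing signal. Window length and
# threshold are illustrative, not tuned values.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # 5-minute sliding window
MAX_FAILURES = 20      # failures allowed per IP per window

_failures = defaultdict(deque)

def record_failed_login(ip, now=None):
    """Record a failed login and return True if the IP should be throttled."""
    now = time.time() if now is None else now
    window = _failures[ip]
    window.append(now)
    # Drop events that have fallen out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES

# Simulate a burst of failures from a single IP.
throttled = False
for i in range(25):
    throttled = record_failed_login("203.0.113.7", now=1000.0 + i)
print("throttle:", throttled)  # True once the burst exceeds the threshold
```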
1.3 Synthetic identities & research fraud
- Synthetic identities—AI-generated personas with elaborate backstories—were responsible for a 31% increase in related fraud year-over-year ([Mitek Systems][2]).
- In user research, coordinated fraud rings manipulate screening tools, often spoofing IDs and masking their locations with VPNs, to collect participation incentives ([User Interviews][6]).
1.4 Deepfakes & advanced impersonation
- Deepfake technology is rapidly becoming a tool for financial scams; a major $25M fraud in Hong Kong used deepfake video calls ([Business Insider][7]).
- The “liar’s dividend” effect—where genuine content is dismissed as fake—undermines trust ([TechRadar][8]).
1.5 Fraud-as-a-Service & industrial fraud models
- Fraud is being commoditized through “Fraud-as-a-Service” platforms, which let criminals without advanced technical skills run sophisticated schemes ([Forbes][9]).
- Organized fraud groups operate like tech startups—R&D, tool integration, and identity services ([TechRadar][10]).
2. Risk & Business Impact
- Up to 5% of corporate revenue may be lost to fraud, including hidden costs like compliance, churn, and support ([SEON][11]).
- Fraud is growing faster than revenue for 43% of businesses ([SEON][11]).
- In Q1 2025, consumer-facing platforms reported an 89% rise in fraud exposure year-over-year ([Sift][12]).
- The UK recorded £1 billion in fraud losses in 2025, up 12% from 2023, with fraudsters increasingly relying on AI-enabled tactics ([The Times][13]).
These figures highlight the multifaceted risk—financial, operational, legal, and reputational.
3. Detection & Prevention Strategies
3.1 Real-time AI & layered systems
- 62% of companies are deploying real-time transaction monitoring, aided by ML-powered fraud tools ([SEON][11]).
- Machine learning enables dynamic risk scoring, anomaly detection, and faster adaptation to new fraud patterns ([arXiv][14], [Finance Alliance][15]); a minimal anomaly-scoring sketch follows this list.
- Graph Neural Networks (e.g., detectGNN) improve detection of linked transaction fraud ([arXiv][16]).
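The sketch below illustrates the risk-scoring idea with an off-the-shelf unsupervised anomaly detector (scikit-learn's IsolationForest). The features, synthetic training data, and rescaling are illustrative stand-ins rather than a production model, and graph-based approaches such as detectGNN are not reproduced here.

```python
# Unsupervised anomaly-scoring sketch with scikit-learn's IsolationForest.
# Features (transaction amount, account age in days, logins in the last
# hour) and the synthetic training data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" historical transactions: modest amounts, older
# accounts, low login velocity.
normal = np.column_stack([
    rng.normal(50, 20, 1000),    # amount
    rng.normal(400, 150, 1000),  # account age (days)
    rng.poisson(2, 1000),        # logins in last hour
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

def risk_score(amount, account_age_days, recent_logins):
    """Map the anomaly score to a rough 0..1 risk value (higher = riskier)."""
    raw = model.decision_function([[amount, account_age_days, recent_logins]])[0]
    return float(min(1.0, max(0.0, 0.5 - raw)))  # crude rescaling for illustration

print(risk_score(45, 380, 1))    # typical transaction: lower risk
print(risk_score(5000, 1, 40))   # large amount, brand-new account: higher risk
```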
3.2 Multi-layered authentication & human oversight
- Implementing MFA and role-based access control (RBAC) significantly reduces credential-based attacks ([Forbes][17]); a minimal access-control sketch follows this list.
- Blending automated detection with expert human review minimizes false positives and refines ML models ([Finance Alliance][15]).
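A minimal sketch of the MFA-plus-RBAC gate described above is shown below. The roles, permission strings, and session object are illustrative assumptions; a real deployment would back this with a policy store and a proven MFA library.

```python
# Minimal RBAC + MFA gate sketch. Roles, permissions, and the session
# object are illustrative placeholders.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "viewer":  {"report:read"},
    "analyst": {"report:read", "case:update"},
    "admin":   {"report:read", "case:update", "user:manage"},
}

@dataclass
class Session:
    user_id: str
    role: str
    mfa_verified: bool  # set True only after a successful second factor

def authorize(session: Session, permission: str) -> bool:
    """Allow the action only if MFA completed and the role grants it."""
    if not session.mfa_verified:
        return False
    return permission in ROLE_PERMISSIONS.get(session.role, set())

s = Session(user_id="u-42", role="analyst", mfa_verified=True)
print(authorize(s, "case:update"))  # True
print(authorize(s, "user:manage"))  # False: not granted to analysts
```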
3.3 Identity verification & consortium validation
- Consortium-based identity sharing helps detect “Repeaters”—deepfake profiles used across platforms ([TechRadar][18]).
- AI-driven document verification, biometrics, and device fingerprinting strengthen defenses ([Entrust][19]); a fingerprinting sketch follows below.
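To illustrate device fingerprinting combined with consortium checks, the sketch below hashes a few client attributes into a stable identifier and compares it against a shared list of devices previously linked to fraud. The attribute set and the "consortium" list are illustrative assumptions.

```python
# Device-fingerprint sketch: hash a few client attributes into a stable
# identifier and compare it to a shared list of devices already linked
# to fraud. The attribute set and the "consortium" feed are illustrative.
import hashlib
import json

def device_fingerprint(attrs: dict) -> str:
    """Canonicalize the attributes and return a SHA-256 hex digest."""
    canonical = json.dumps(attrs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

KNOWN_BAD_FINGERPRINTS = {
    # In practice this would come from a shared / consortium feed.
    device_fingerprint({"ua": "HeadlessChrome/124", "tz": "UTC", "screen": "800x600"}),
}

incoming = {"ua": "HeadlessChrome/124", "tz": "UTC", "screen": "800x600"}
fp = device_fingerprint(incoming)
print("blocked" if fp in KNOWN_BAD_FINGERPRINTS else "allowed")
```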
3.4 Organizational adaptability & human training
- 86% of firms now budget more than 3% of revenue for fraud prevention ([SEON][11]).
- 63% of companies train their workforce on fraud-spotting; untrained firms face double the financial losses ([Preczn][20]).
- Continuous sharing of intelligence across platforms enhances preparedness ([Datavisor][21], [TechRadar][18]).
4. FIST: A Structured Threat Framework
Yi‑Chen Dai et al.’s FIST framework offers a structured model that maps technical vectors (bots, exploits, credential abuse) and psychological tactics (social engineering), enabling standardized threat modeling and automated risk scoring. Implemented effectively, FIST bridges academic rigor and enterprise security practice.
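FIST's published scoring method is not reproduced here; the sketch below only illustrates the general idea of mapping technical and psychological attack vectors to weights and aggregating them into a single risk value. The vector names and weights are assumptions for illustration, not the framework's methodology.

```python
# Illustrative structured threat-scoring sketch: observed technical and
# psychological attack vectors are mapped to weights and aggregated into
# a single risk value. Categories and weights are assumed for illustration.

VECTOR_WEIGHTS = {
    # technical vectors
    "bot_automation":   0.30,
    "credential_abuse": 0.35,
    "exploit_attempt":  0.40,
    # psychological / social-engineering vectors
    "phishing_lure":    0.25,
    "impersonation":    0.30,
}

def threat_score(observed_vectors):
    """Aggregate weights of observed vectors, capped at 1.0."""
    return min(1.0, sum(VECTOR_WEIGHTS.get(v, 0.0) for v in observed_vectors))

print(round(threat_score({"bot_automation", "credential_abuse"}), 2))               # 0.65
print(round(threat_score({"exploit_attempt", "impersonation", "phishing_lure"}), 2)) # 0.95
```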
5. Recommendations
| Area | Strategy |
|---|---|
| Detection | Deploy real-time ML systems (e.g., OnSefy) and integrate behavioral analytics |
| Authentication | Enforce MFA, RBAC, and biometric verification; reject disposable emails |
| Verification | Use document/biometric verification, device fingerprinting, and consortium checks |
| Operational Culture | Train staff, allocate budget (>3% of revenue), and join intelligence-sharing networks |
| Governance | Adopt frameworks like FIST; invest in AI research and structured threat mapping |
By 2025, fraud in digital systems has matured into a multi-dimensional, industry-level threat. Combating it effectively requires a layered defense combining AI systems, human expertise, consortium-based identity verification, and comprehensive frameworks like FIST. Organizations that adapt quickly, foster learning cultures, and embrace shared intelligence will best safeguard revenue, data integrity, and user trust.
References
- SEON Global Digital Fraud Report 2025
- User Interviews on research fraud
- Alloy 2025 State of Fraud Report
- Forbes on MFA & RBAC
- DataVisor 2025 fraud trends
- Paddle on SaaS payment fraud
- TechRadar & Business Insider on deepfakes
- UK Finance annual fraud report 2025
- arXiv research: ML fraud detection, detectGNN, FIST
- Wikipedia on layered fraud detection & AI in fraud