Fraud signals and transaction scoring
1) Why score, and how it affects monetization
Anti-fraud scoring determines whether a transaction goes through frictionless, is sent to a 3DS challenge/SCA, or is declined/rerouted to another method. Proper calibration gives:
- ↑ Approval Rate without chargeback growth,
- ↓ SCA/challenge and support costs,
- ↑ LTV through stable COF/MIT payments,
- PSD2 TRA (Transaction Risk Analysis) compliance with providers/banks.
2) Signal map (what to collect)
2.1 Device/session identification
Device fingerprint (canvas/webgl/audio, user-agent, fonts, timezone, languages).
Cookie/LocalStorage/SDK-ID, stable identifiers (privacy-safe).
Emulators/root/jailbreak, proxy/VPN/datacenter-IP, TOR.
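A minimal sketch of collapsing these signals into one stable, privacy-safe identifier; the field names are illustrative assumptions, not a specific SDK's schema.

```python
import hashlib
import json

def device_fingerprint(signals: dict) -> str:
    """Collapse raw client signals into a stable device identifier."""
    # Canonical JSON so the same signals always hash identically.
    canonical = json.dumps(signals, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

fp = device_fingerprint({
    "canvas_hash": "9f2a...",          # illustrative values only
    "user_agent": "Mozilla/5.0 ...",
    "fonts": ["Arial", "Helvetica"],
    "timezone": "Europe/Berlin",
    "languages": ["de-DE", "en-US"],
})
```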
2.2 Geo and network
IP geo vs BIN country vs billing country, network latency/RTT, ASN/provider.
IP/geo change rate, timezone "hops," known "toxic" subnets.
2.3 Payment attributes
BIN: scheme, country, bank, debit/credit/prepaid, commercial/personal.
MCC 7995, amount/currency, attempt velocity per token/card/device/account.
3DS history (frictionless/challenge), normalized AVS/CVV results, network tokens (VTS/MDES/NSPK).
2.4 Behavior and behavioral biometrics
Typing speed/rhythm, copy-paste, field order, CVV/ZIP errors.
Bot patterns (headless browsers, automated clicks), abnormal session cycles.
2.5 Account and connection graph
Account age, KYC status, links to devices/payment instruments.
Graph: shared devices/IPs/cards across accounts, multi-account clusters.
Deposit/withdrawal history, in-game behavior, refunds/disputes.
2.6 External sources
IP/device/BIN blacklists, behavioral signals from anti-fraud services, risky regions/time windows.
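To make the signal map concrete, a hypothetical feature-extraction step turning a few of the signals above into model inputs; all field names are assumptions.

```python
def extract_features(txn: dict, velocity: dict) -> dict:
    """Flatten raw signals into model features (illustrative field names)."""
    return {
        # Geo consistency (section 2.2): country mismatches.
        "ip_vs_bin_mismatch": int(txn["ip_country"] != txn["bin_country"]),
        "ip_vs_billing_mismatch": int(txn["ip_country"] != txn["billing_country"]),
        # Network reputation (sections 2.1-2.2).
        "is_datacenter_ip": int(txn.get("asn_type") == "hosting"),
        "is_vpn_or_tor": int(bool(txn.get("vpn") or txn.get("tor"))),
        # Velocity counters (section 2.3), computed per card/device/account.
        "card_attempts_24h": velocity.get("card_24h", 0),
        "device_attempts_1h": velocity.get("device_1h", 0),
        # Payment attributes (section 2.3).
        "is_prepaid": int(txn.get("card_type") == "prepaid"),
        "amount": float(txn["amount"]),
    }
```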
3) Feature store and data quality
Feature Store: uniform feature definitions, versioning, TTL/time windows (1h/24h/7d/30d).
Online/offline parity: the same transformations in realtime and training.
Data quality control: schema validation, not-null checks, value ranges, leakage prevention.
Labeling: chargeback, confirmed fraud, friendly fraud, and legit labels with timestamps; account for label delay.
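A minimal in-memory sketch of the sliding time windows from the feature-store bullet above; production versions usually live in Redis or the online layer of the feature store.

```python
import time
from collections import defaultdict, deque

class VelocityCounter:
    """Sliding-window attempt counter per key (card/device/account)."""

    def __init__(self):
        self._events = defaultdict(deque)  # key -> event timestamps

    def record(self, key, ts=None):
        self._events[key].append(ts if ts is not None else time.time())

    def count(self, key, window_s):
        """Events for `key` in the last `window_s` seconds."""
        cutoff, events = time.time() - window_s, self._events[key]
        while events and events[0] < cutoff:
            events.popleft()  # expired entries fall out: this is the TTL
        return len(events)

vc = VelocityCounter()
vc.record("card:411111")
attempts_24h = vc.count("card:411111", 24 * 3600)
```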
4) Scoring approaches
4.1 Rules (policy engine)
Fast and explainable: geo mismatch + velocity → 3DS.
Cons: rigidity and a high false-positive rate.
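A sketch of such a policy engine; rule names, feature keys, and thresholds are assumptions.

```python
# Each rule: (name, predicate over the feature dict, action). Illustrative.
RULES = [
    ("geo_mismatch_plus_velocity",
     lambda f: f["ip_vs_bin_mismatch"] and f["card_attempts_24h"] > 3,
     "3ds_challenge"),
    ("anonymous_network", lambda f: f["is_vpn_or_tor"], "decline"),
]

def apply_rules(features):
    """Return the first matching rule's action; None falls through to ML."""
    for name, predicate, action in RULES:
        if predicate(features):
            return action
    return None
```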
4.2 ML models
GBDT (XGBoost/LightGBM/CatBoost) - standard for tabular features; strong interpretability (SHAP).
Graph models (GraphSAGE/GAT) - for device/IP/card connections.
Neural networks (TabNet/MLP) - when there are many non-linearities/interactions.
Ensembles: GBDT + Graph Embedding (node2vec) + Rules.
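A hedged baseline sketch: XGBoost on synthetic stand-in data plus SHAP for per-decision explanations; real features and labels come from the feature store in section 3.

```python
import numpy as np
import xgboost as xgb
import shap

# Synthetic stand-in for feature-store data; replace with real features/labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=5000) > 2).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=5,
                          learning_rate=0.1, eval_metric="aucpr")
model.fit(X, y)
scores = model.predict_proba(X)[:, 1]        # probability of fraud

explainer = shap.TreeExplainer(model)        # per-feature contributions
shap_values = explainer.shap_values(X[:100]) # for logging/explainability
```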
4.3 Anomaly detection
Isolation Forest/LOF/autoencoders for new markets or thin history; used as signals rather than a final verdict.
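A minimal scikit-learn sketch; note the output is fed in as one more feature, not used as a verdict.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))   # stand-in for transaction features

iso = IsolationForest(n_estimators=200, random_state=0).fit(X)
# Lower score_samples = more anomalous; negate so higher = riskier.
anomaly_score = -iso.score_samples(X)
```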
5) Threshold strategy and SCA/3DS
Score → action (example; see the sketch after this list):
- 'score ≤ T1' → approve (in the EEA: TRA exemption at the PSP/bank if available),
- 'T1 < score ≤ T2' → 3DS challenge (SCA),
- 'score > T2' → decline / offer an alternative method (A2A/wallet).
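The same mapping as code; the T1/T2 values here are placeholders to be calibrated as described next.

```python
def decide(score, t1=0.10, t2=0.60, tra_available=False):
    """Map a fraud score to an action. Thresholds are illustrative only."""
    if score <= t1:
        # Low risk: frictionless, with a TRA exemption where supported.
        return "approve_tra_exempt" if tra_available else "approve"
    if score <= t2:
        return "3ds_challenge"            # medium risk: step up to SCA
    return "decline_or_alternative"       # high risk: decline / A2A / wallet
```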
Calibration: set T1/T2 against CBR% and AR% targets, weighing the cost of a challenge against chargeback risk. In PSD2 zones, use TRA at partners where the provider's fraud rate stays below the regulatory reference thresholds.
6) Online decision architecture
1. Pre-auth step: collect device/geo/velocity signals → scoring within ≤ 50-150 ms.
2. Decision: approve / 3DS / decline / alternative routing (PSP-B, another method).
3. 3DS integration: on a soft decline → retry with SCA without re-entering card details.
4. Logging: store 'score', top features (SHAP top-k), the action taken, and the authorization outcome (see the sketch below).
5. Feedback loop: chargebacks/disputes → labels in the feature store.
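A sketch of steps 2 and 4 combined: take the decision, then persist the score with its top SHAP features. `decide` is the sketch from section 5; the other names are hypothetical.

```python
import json
import time

def score_and_log(txn_id, score, shap_row, feature_names, k=5):
    """Decide (step 2) and log score, SHAP top-k, and action (step 4)."""
    action = decide(score)                 # threshold sketch from section 5
    top_k = sorted(zip(feature_names, shap_row),
                   key=lambda kv: abs(kv[1]), reverse=True)[:k]
    record = {"txn_id": txn_id, "ts": time.time(), "score": round(score, 4),
              "action": action,
              "top_features": [(n, round(v, 4)) for n, v in top_k]}
    print(json.dumps(record))              # stand-in for the real event sink
    return action
```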
7) Specific features (cheat-sheet)
Geo/Net: IP-vs-BIN-vs-billing country mismatch, ASN/datacenter/VPN flags, geo-change velocity.
Behavioral: typing speed/rhythm, copy-paste of card fields, headless/bot markers.
Payments: BIN attributes, attempt velocity per token/card/device/account, 3DS history, AVS/CVV results.
Graph: shared devices/IPs/cards across accounts, multi-account cluster size.
8) Explainability and bias control
SHAP/feature importance for decisions near the T1/T2 boundaries.
Safety-net rules over ML: e.g. 'CVV = N' (no match) ⇒ challenge/decline regardless of a low score (see the sketch below).
Fairness policies: do not use prohibited attributes; audit features for indirect discrimination.
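A sketch of a safety-net override on top of the ML decision; the response-code check and action names are assumptions.

```python
def final_decision(ml_action, features):
    """Hard rules override the model regardless of a low score."""
    if features.get("cvv_result") == "N":          # CVV did not match
        return "decline" if ml_action == "decline" else "3ds_challenge"
    return ml_action
```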
9) Experiments and calibration
A/B tests: baseline rules vs ML; ML-on vs ML-off; different T1/T2.
Metrics: AR, CBR%, 3DS rate, challenge success %, cost per approved transaction.
Profit-weighted ROC: optimize the economics, not AUC in a vacuum (loss matrix: FP = lost turnover, FN = chargeback loss + fees); see the sketch below.
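A sketch of profit-weighted threshold selection using the loss matrix above; the margin and chargeback-loss values are assumptions.

```python
import numpy as np

def best_decline_threshold(scores, labels, margin=2.0, cb_loss=80.0):
    """Pick the threshold maximizing expected profit, not AUC.

    margin  - profit on an approved legit transaction (FP = lost turnover)
    cb_loss - chargeback loss + fees on an approved fraud (FN cost)
    """
    legit, fraud = labels == 0, labels == 1
    candidates = np.quantile(scores, np.linspace(0.01, 0.99, 99))

    def profit(t):
        approved = scores <= t
        return (margin * (approved & legit).sum()
                - cb_loss * (approved & fraud).sum())

    return max(candidates, key=profit)
```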
10) Monitoring and drift
Data drift (PSI/KL) on key features; target drift (chargebacks); see the PSI sketch after this list.
Alerts: growth of 'score > T2' in a BIN cluster/country; a surge of '05' declines after 3DS.
Regular retraining (weekly/monthly) with safe deploys (shadow → canary → full).
Calibration control (Brier score, reliability curves).
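A minimal PSI implementation for the data-drift check above; a common rule of thumb is that PSI > 0.2 signals material drift.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip live values into the baseline range so edge bins absorb outliers.
    actual = np.clip(actual, edges[0], edges[-1])
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))
```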
11) Relationship with routing and PSPs
Scoring feeds smart routing: for borderline scores, send traffic to the PSP with the best AR for that BIN/issuer.
If an ACS/issuer degrades (a spike in '91'/'96' responses), temporarily raise T1 (more frictionless traffic for low-risk users) or reroute to PSP-B, as sketched below.
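A sketch of that fallback: route borderline traffic to the healthiest PSP by historical AR. The table and PSP names are placeholders.

```python
# Historical approval rate per (PSP, BIN prefix) - placeholder numbers.
AR_TABLE = {("psp_a", "411111"): 0.92, ("psp_b", "411111"): 0.88}
DEGRADED = set()   # PSPs/ACS currently spiking '91'/'96' responses

def route(bin_prefix, psps=("psp_a", "psp_b")):
    """Send borderline-score traffic to the healthy PSP with the best AR."""
    healthy = [p for p in psps if p not in DEGRADED]
    return max(healthy, key=lambda p: AR_TABLE.get((p, bin_prefix), 0.0))
```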
12) Processes and governance
Model card: owner, version, release date, target KPIs, risks.
Change control: RFC for new rules/thresholds; record A/B results.
TRA documentation package for PSD2: description of the methodology, fraud-rate metrics, and the cadence of review procedures.
13) Anti-patterns
Mixing offline and online features without controlling for delays → leakage and false wins.
A blanket "decline all" during peak hours kills AR and LTV.
Relying on rules only or on ML only.
Ignoring soft-decline SCA signals and not initiating 3DS when required.
Logging PAN/PII unmasked is a PCI DSS/GDPR violation.
14) Implementation checklist
- Collect device/geo/BIN/behavioral/graph signals.
- Build a feature store with online/offline parity and label delays handled.
- Combine rules + ML (GBDT + graph features) with explainability (SHAP).
- Set T1/T2 thresholds tied to the SCA/TRA strategy and the economics.
- Wire the feedback loop (chargebacks/disputes → labels).
- Monitor drift, control calibration, retrain with safe deploys.
- Document governance: model cards, RFCs, the TRA package.
15) Summary
Strong anti-fraud in iGaming combines rich signals (device/geo/BIN/behavior/graph), a stable feature store, an ensemble of ML + rules, a clear threshold strategy for SCA/TRA, and operational discipline (A/B tests, drift monitoring, explainability). That is how you hold conversion, lower chargebacks, and make revenue predictable.