Tuning anti-fraud and rules
TL;DR
Anti-fraud is not about "catching intruders" but about profit optimization: we minimize Expected Loss (EL) from fraud and chargebacks while keeping Cost of Friction (CoF) in check and AR_net high. Basic pipeline: scoring (ML) → threshold / step-up ladder → rules (policy & velocity) → manual review. Success depends on clean labels, stable features, an economically calibrated threshold, canary releases, strict idempotency, and rule governance.
1) Economic framing
Expected Loss:
- `EL = P_fraud(tx) × Exposure(tx)`; usually `Exposure = captured_amount`.
- `CoF = (Abandon_on_Friction × LTV_new/ret) + Opex_review + Fees_stepup`.
- `Profit = GGR − Cost_payments − EL − CoF`.
Optimal threshold `τ`: choose the score cutoff where `d(Profit)/dτ = 0`, or via grid search of `min(EL + CoF)`. In practice, use cost-sensitive ROC/PR with weights `w_fraud = Exposure`, `w_fp = LTV_loss + opex`.
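The grid-search variant of this calibration can be sketched as follows. This is a minimal sketch on labeled history; modeling `fp_cost` as a flat per-decline loss (LTV loss + opex) is a simplifying assumption.

```python
import numpy as np

def optimal_threshold(scores, is_fraud, amounts, fp_cost, n_grid=101):
    """Grid-search the score cutoff tau that minimizes EL(tau) + CoF(tau).

    EL(tau)  = fraud amount approved below tau (fraud we let through)
    CoF(tau) = fp_cost * number of legit transactions declined at/above tau
    """
    scores = np.asarray(scores, dtype=float)
    is_fraud = np.asarray(is_fraud, dtype=bool)
    amounts = np.asarray(amounts, dtype=float)
    best_tau, best_cost = 0.0, float("inf")
    for tau in np.linspace(0.0, 1.0, n_grid):
        approved = scores < tau
        el = amounts[approved & is_fraud].sum()
        cof = fp_cost * np.count_nonzero(~approved & ~is_fraud)
        if el + cof < best_cost:
            best_tau, best_cost = tau, el + cof
    return best_tau, best_cost
```

In practice the same loop runs per segment (low-ticket vs high-ticket, new vs returning users) to produce the separate `τ` values described below.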
2) Authentication ladder (step-up ladder)
1. Auto-approve (low risk): instant pass, 3DS frictionless wherever possible.
2. Step-up A: 3DS challenge / SCA / device-challenge / reCAPTCHA.
3. Step-up B: lightweight KYC (doc selfie / face match, liveness).
4. Manual review: case goes to an analyst (SLA, reason codes).
5. Auto-decline: high risk/sanctions/mules/voucher anomalies.
The threshold/branch depends on the score, amount (`ticket_size`), country, BIN/issuer, behavioral features and context (bonus campaigns, night windows, velocity).
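The branching above can be sketched as a single function. Threshold values, branch names, and the crude high-ticket adjustment are illustrative assumptions, not the document's calibrated values.

```python
def ladder_branch(score, ticket_size, tau_low=0.2, tau_mid=0.6, tau_hi=0.9,
                  high_ticket=1000):
    """Map a risk score and ticket size to a step-up ladder branch (sketch)."""
    if ticket_size >= high_ticket:
        score = min(score + 0.1, 1.0)   # illustrative tightening for big tickets
    if score < tau_low:
        return "AUTO_APPROVE"           # instant pass, frictionless 3DS
    if score < tau_mid:
        return "STEPUP_A_3DS"           # 3DS challenge / device-challenge
    if score < tau_hi:
        return "STEPUP_B_KYC"           # lightweight KYC
    return "MANUAL_REVIEW_OR_DECLINE"   # analyst case or auto-decline
```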
3) Signals and features (minimum basis)
Payment: BIN/IIN, issuer_country, ECI/3DS flow, AVS/CVV match, soft-decline codes, returns/disputes in history.
Behavioral: event velocity (`cards/device/ip/email`), time of day, first-seen/last-seen, account "topology" (graph connections: shared devices/cards/wallets).
Device/network: device fingerprint, emulators/jail/root, proxy/VPN/TOR, ASN/hosting.
Anti-bonus: referral syndicates, bonus "pumping", abnormal deposit→withdrawal patterns without gameplay.
Payments/wallets/vouchers: PIN reuse, geo mismatch, high-velocity redemptions, muling cascades.
KYC/KYB: level, validations, SoF/SoW flags.
Sanctions/PEP/block lists: list matches, fuzzy matching on names/addresses.
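Velocity features such as `cards/device/ip/email` per window can be computed with a simple sliding-window counter. A minimal in-memory sketch; the key scheme and window length are assumptions, and a production system would use a shared store.

```python
from collections import defaultdict, deque

class VelocityCounter:
    """Counts events per key within a sliding time window (seconds)."""

    def __init__(self, window_s):
        self.window_s = window_s
        self.events = defaultdict(deque)

    def hit(self, key, ts):
        """Record an event at timestamp ts and return the count in the window."""
        q = self.events[key]
        q.append(ts)
        while q and ts - q[0] > self.window_s:
            q.popleft()            # drop events older than the window
        return len(q)
```

A rule like "more than 3 deposits per device in 10 minutes" then becomes a check on `hit(device_id, now) > 3` against a 600-second counter.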
4) Stack: ML + rules
5) Quality metrics (with clear bases)
AR_clean = `Auth_Approved / (Auth_Attempted − Fraud_preblocked − Abandon_3DS)`
Fraud Rate = `Fraud_captured_amount / Captured_amount`
Chargeback Rate = `Chargeback_count / Captured_Tx` (or by amount)
False Positive Rate (FP) = `Legit_declined / Legit_attempted`
Step-up Rate = `StepUp_tx / Auth_Attempted`, Abandon_on_StepUp
Auto-approve %, Manual review %, Review SLA/TtA
Net Profit uplift after tuning (AB difference EL + CoF vs control).
Benchmarks: FP for new users ≤ 1–2% (by volume); Fraud (by amount) within the target corridor of the license/card schemes.
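The metric bases above, written out as plain ratios; metric names follow the section, while the zero-denominator guards are an implementation detail of the sketch.

```python
def risk_metrics(attempted, approved, fraud_preblocked, abandon_3ds,
                 legit_declined, legit_attempted,
                 fraud_captured_amt, captured_amt):
    """Compute AR_clean, FP rate and the amount-based fraud rate from counters."""
    clean_base = max(attempted - fraud_preblocked - abandon_3ds, 1)
    return {
        "AR_clean": approved / clean_base,
        "FP_rate": legit_declined / max(legit_attempted, 1),
        "FraudRate_amt": fraud_captured_amt / max(captured_amt, 1e-9),
    }
```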
6) Thresholds and policy rules
6.1 Threshold calibration
Build a cost curve: for each `τ`, compute `EL(τ) + CoF(τ)`.
Choose the `τ` with the minimum; for high-ticket transactions use a separate `τ_hi`.
6.2 Typical rules (pseudocode)
```yaml
- name: SANCTIONS_HIT
  when: sanctions_match == true
  action: DECLINE
  reason: "Sanctions/PEP match"

- name: BIN_RISKY_3DS
  when: bin in RISKY_BINS and score in [τ_low, τ_mid)
  action: STEPUP_3DS

- name: DEVICE_VELOCITY_LOCK
  when: deposits(device_id, last_10min) > 3
  action: DECLINE_TEMPORARY
  ttl: 2h

- name: BONUS_ABUSE_GUARD
  when: (bonus_received and gameplay_turnover < X × deposit_amount) and payout_request
  action: HOLD_REVIEW
  reason: "Turnover not met"
```
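A first-match evaluator for rules of this shape might look like the sketch below; a Python predicate stands in for the DSL's `when` condition, and the context schema is hypothetical.

```python
def evaluate_rules(rules, ctx):
    """Return the first matching rule's verdict, or PASS if none match."""
    for rule in rules:
        if rule["when"](ctx):
            return {"name": rule["name"], "action": rule["action"],
                    "reason": rule.get("reason")}
    return {"name": None, "action": "PASS", "reason": None}

RULES = [
    {"name": "SANCTIONS_HIT",
     "when": lambda c: c.get("sanctions_match", False),
     "action": "DECLINE", "reason": "Sanctions/PEP match"},
    {"name": "DEVICE_VELOCITY_LOCK",
     "when": lambda c: c.get("deposits_10min", 0) > 3,
     "action": "DECLINE_TEMPORARY"},
]
```

First-match ordering makes rule priority explicit: hard policy rules (sanctions) sit above softer velocity rules.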
6.3 Dynamic limits
Amount and transaction-count limits by risk tier: `R1/R2/R3`.
Adaptive limits for new accounts, warming up as good history accumulates.
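A toy version of risk-tier limits with account warm-up; every number here is an illustrative assumption, not a recommended setting.

```python
# Hypothetical per-tier limits: (max amount per tx, max tx per day)
TIER_LIMITS = {"R1": (5000, 20), "R2": (1000, 10), "R3": (200, 3)}

def effective_limit(tier, account_age_days, good_history_days, warmup_days=30):
    """Scale a new account's limits up linearly as good history accumulates."""
    amt, cnt = TIER_LIMITS[tier]
    if account_age_days < warmup_days:
        warmup = min(max(good_history_days, 1) / warmup_days, 1.0)
        return amt * warmup, max(int(cnt * warmup), 1)
    return amt, cnt
```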
7) Rule life cycle (governance)
DSL/rule registry with versions, owner and effect description.
Shadow mode → canary (5–10%) → full rollout.
RACI: Owner (Payments Risk), Approver (Compliance/Legal), Consulted (Support/Treasury), Informed (Ops).
Audit log: who changed what and when, the metric/AB impact, rollback path.
Rule shelf life and periodic re-evaluation (e.g., every 30/60 days).
8) Model data and training
Time-based splits without leakage (features only from the preceding window).
Target label: confirmed fraud/chargeback; individual bonus abuse labels.
Class reweighting by amount (amount-weighted loss).
Drift monitoring: PSI for key features, KS for the score distribution, baseline stability.
Retrain triggers: PSI > 0.25, KS drop, traffic/jurisdiction shift.
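PSI on a key feature (or on the score itself) can be computed against a baseline window as below; decile binning on the baseline sample is one common choice, not the only one.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a current sample."""
    # Bin edges from baseline quantiles; open-ended outer bins catch shifts.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e = np.clip(e, 1e-6, None)   # avoid log(0) for empty bins
    a = np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))
```

With the trigger above, `psi(baseline, current) > 0.25` would raise a retrain alert.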
9) Explainability and support
For each decision we generate reason_codes (up to 5 reasons) with human-readable explanations.
Support macros for step-up/decline (3DS, KYC, turnover).
Disputes: feedback flows back into the labeling pipeline (close the loop).
10) Compliance and privacy
GDPR/DSAR: right to an explanation of the decision; PII minimization; salted hashing of identifiers (email/phone/PAN token).
PCI-DSS: PAN-safe streams, tokenization.
Sanctions/AML: separate screening with an MLRO escalation loop.
Retention: policies for storing signals and justifying decisions.
11) Monitoring and alerts (hourly/daily)
AR_clean, Fraud (amt %), FP (retention-weighted), Step-up/Abandon, Review SLA, Chargeback Rate (lagged).
Velocity spikes, linkage-cluster growth, TOR/proxy/hosting-ASN traffic growth, BIN degradation, voucher anomalies.
Alert when: FP > corridor, Fraud > target, Abandon > baseline + X p.p., PSI/KS drift.
12) SQL slices (example)
12.1 Baseline metrics
```sql
WITH base AS (
  SELECT
    DATE_TRUNC('day', attempt_ts) AS d, country, provider, method_code,
    COUNT(*) FILTER (WHERE auth_status = 'ATTEMPTED') AS attempted,
    COUNT(*) FILTER (WHERE auth_status = 'APPROVED') AS approved,
    COUNT(*) FILTER (WHERE decision = 'DECLINE' AND label = 'LEGIT') AS fp_cnt,
    SUM(captured_amount) AS cap_amt,
    SUM(CASE WHEN label = 'FRAUD' THEN captured_amount ELSE 0 END) AS fraud_amt
  FROM payments_flat
  GROUP BY 1, 2, 3, 4
)
SELECT d, country, provider, method_code,
       approved::decimal / NULLIF(attempted, 0) AS ar_clean,
       fraud_amt::decimal / NULLIF(cap_amt, 0)  AS fraud_rate_amt,
       fp_cnt::decimal / NULLIF(attempted, 0)   AS fp_rate
FROM base;
```
12.2 Step-up share and abandonment by score bucket
```sql
SELECT
  DATE_TRUNC('day', attempt_ts) AS d,
  WIDTH_BUCKET(score, 0, 1, 10) AS bucket,
  AVG(CASE WHEN decision = 'STEPUP' THEN 1 ELSE 0 END) AS stepup_share,
  AVG(CASE WHEN decision = 'DECLINE' THEN 1 ELSE 0 END) AS decline_share,
  AVG(CASE WHEN stepup_abandon THEN 1 ELSE 0 END) AS abandon_after_stepup
FROM risk_events
GROUP BY 1, 2
ORDER BY d, bucket;
```
13) Tuning playbooks
Fraud growth (amt %) with stable FP → raise `τ`, tighten device/ASN velocity limits, enable 3DS challenge on vulnerable BINs.
High FP among new users → relax `τ` for low-ticket, route part of the traffic to Step-up A instead of declining.
Abandon on 3DS ↑ → align 3DS2 parameters with the PSP, improve UX, narrow step-up on mobile for low-risk users.
Syndicated bonus networks → graph features, limit "parallel" payouts, turnover rules.
Voucher anomalies → redemption velocity by PIN/retailer/geo, device binding, hold before verification.
14) Implementation: checklist
- Economic threshold calibration (`EL + CoF`), individual `τ` per segment.
- Rule register (DSL), shadow→canary→rollout, auditing, and rollback.
- Reason-codes and communication templates.
- PSI/KS monitoring, fit/speed drift, regular retrain.
- Feedback channel (disputes → labels).
- KYC/step-up, SLA review, and TtA/TtR policies.
- Privacy: ID hashing, PII minimization.
15) Summary
Anti-fraud tuning is systematic profit optimization with controlled friction: ML scoring plus a well-designed step-up ladder, strictly governed rules and careful velocity limits. Economic threshold calibration, clean labels, canary rollouts and strict controllability deliver low fraud by amount, low FP among new users, and high AR_net, with no surprises for compliance or UX.