
Computer vision in iGaming

1) Why an iGaming platform needs CV pipelines

KYC/AML: document OCR, authentication, liveness/anti-spoofing.
Antifraud/risk: detecting bots/multi-accounts (behavioral + visual signals), identifying screen re-capture and proxy devices.
Marketing/ASO: moderation of creatives (text/symbols/18+ rating), brand safety, A/B testing of visual elements.
Operations/QA: automated UI regression tests, visual telemetry of lags/crashes.
Streams/social networks: extracting events, logos, games/providers, sentiment and violations.
Responsible Gaming: control of visual communications (no aggressive patterns aimed at vulnerable groups).


2) Key scenarios and solutions

2.1 KYC: document + person

OCR: extraction of name/date of birth/document number, format validation, comparison with the application.
Face match: comparing the selfie to the photo in the document.
Liveness: passive signals (micro-motion, moiré, blinks) and active checks (challenge prompts).
Document authenticity: watermarks/fonts/microprint, tampering detection.
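Face match over embeddings can be sketched as a cosine-similarity check. This is a minimal illustration, assuming ArcFace-style L2-normalizable vectors; the 0.35 threshold is a placeholder that a real system would calibrate on its own ROC curve.

```python
import numpy as np

def face_match(emb_selfie: np.ndarray, emb_doc: np.ndarray,
               threshold: float = 0.35) -> tuple[float, bool]:
    """Cosine similarity between L2-normalized face embeddings.
    Threshold is illustrative; calibrate against your FAR/FRR targets."""
    a = emb_selfie / np.linalg.norm(emb_selfie)
    b = emb_doc / np.linalg.norm(emb_doc)
    score = float(a @ b)
    return score, score >= threshold
```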

2.2 Antifraud and security

Device camera check (where allowed): signs of screen replay or masks.
Multi-accounting: combining CV signals (selfies/backgrounds) with behavioral and device graphs.
Content policies: blocking images of payment cards/passports in open channels.
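One common screen-replay cue is moiré interference, which shows up as extra high-frequency energy in the spectrum. A toy heuristic, assuming a grayscale frame as a NumPy array (real anti-spoof models learn this jointly with many other cues):

```python
import numpy as np

def moire_score(gray: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency core.
    Re-captured screens often show periodic moiré peaks there."""
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    mag = np.abs(f)
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - cy, xx - cx)            # distance from spectrum center
    high = mag[r > min(h, w) * 0.25].sum()    # high-frequency band energy
    return float(high / (mag.sum() + 1e-9))
```

A score well above the baseline for direct captures would route the frame to a heavier spoof model rather than auto-deny.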

2.3 Marketing/Creative/ASO

Moderation: detection of prohibited symbols/slogans, "18+" marks, QR codes/links, odds.
Brand safety: compliance with brand guidelines for logo, colors, placement.
A/B: automatic composition analysis (CTA, contrast, visual clutter), correlation with CTR/CR.

2.4 Streams and videos (games/eSports/influencers)

Logo/game detection: counting promo exposure for providers.
Highlight mining: clips by event (big win/bug/connection drop).
Video moderation: age rating, gambling content restricted by display hours and jurisdiction.

2.5 UI/QA

Visual regression: comparison of screenshots by page/version/device.
Optical telemetry: frame timings, dropped frames, flickering elements.
Accessibility: checking contrast/sizes/alt-text in creatives and pages.
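The contrast check above can be automated with the WCAG 2.x contrast-ratio formula. A self-contained sketch (sRGB, 8-bit channels):

```python
def _luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance of an 8-bit sRGB color."""
    def chan(c: int) -> float:
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (chan(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, from 1:1 up to 21:1 (black on white)."""
    l1, l2 = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

A CI gate would then assert, for example, `contrast_ratio(text, background) >= 4.5` for normal body text.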


3) Architectures and deployment

On-device (mobile SDK, WebAssembly): instant liveness/OCR without frames leaving the device (privacy by default).
Edge (PoP/region): low latency and geo-isolation of data/keys.
Cloud: heavy models (detection, segmentation, video analysis), asynchronous tasks.
Confidential inference: TEE/SGX for VIP/payout flows; protected pipelines.
Hybrid: lightweight on-device pre-validation → accurate edge/cloud validation.
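The hybrid tier can be reduced to a routing decision: cheap on-device gates reject bad frames immediately, and only risky cases escalate to heavy cloud models. All thresholds and tier names below are illustrative.

```python
def route_frame(sharpness: float, brightness: float, risk_score: float) -> str:
    """Hypothetical hybrid router; inputs are normalized to [0, 1]."""
    if sharpness < 0.3 or not 0.2 <= brightness <= 0.9:
        return "retake"            # fail fast on-device; frame never leaves the phone
    if risk_score < 0.1:
        return "edge_fast_path"    # low risk: light model at the regional edge
    return "cloud_heavy_model"     # escalate to heavy asynchronous analysis
```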


4) Data and augmentation

Collection: consent, PII masking, geo-retention policies.
Synthetics: generating documents/selfies with lighting/angle/noise variations; domain randomization.
Augmentation: blur, motion, glare, print-scan, screen re-capture, JPEG artifacts.
Balance: "spoof," "screen photo," "mask," "multi-exposure" classes represented at least as well as genuine samples.
Labeling: active learning; double QA review of disputed cases.
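A screen re-capture augmentation can be approximated by overlaying a periodic interference pattern plus sensor noise. A NumPy sketch with made-up parameter values, not a production augmentation:

```python
import numpy as np

def simulate_screen_recapture(img: np.ndarray, period: float = 4.0,
                              strength: float = 12.0, seed: int = 0) -> np.ndarray:
    """Overlay a synthetic moiré-like pattern and Gaussian noise on a uint8 image."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    yy, xx = np.mgrid[:h, :w]
    moire = strength * np.sin(2 * np.pi * xx / period) \
                     * np.sin(2 * np.pi * yy / (period * 1.3))
    out = img.astype(float)
    out += moire[..., None] if out.ndim == 3 else moire
    out += rng.normal(0.0, 2.0, out.shape)   # stand-in for sensor/compression noise
    return np.clip(out, 0, 255).astype(np.uint8)
```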


5) Models and patterns

Classification/detection: YOLOv8/YOLOv9, EfficientDet, ViT/DETR; specialized detectors for logos.
Segmentation: SegFormer/Mask2Former (background/masks, document contour).
OCR: TrOCR/ABINet/CRNN + rectification; multilingual support.
Face: ArcFace/FaceNet for embeddings; anti-spoof CNN/ViT; liveness via micro-movements.
Video: SlowFast/X3D/TimeSformer; for highlights, event classifiers + energy-based filters.
Multimodality: CLIP-like models for creatives (image + text).


6) Pipelines (end-to-end view)

6.1 KYC/Liveness (edge + cloud)

1. On-device: frame quality check (sharpness/lighting) → passive liveness.
2. Edge: document OCR, face-embedding comparison, spoof check; risk score.
3. Cloud: manual review of disputed cases (HITL), audit, DSAR log.
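The final allow/manual/deny routing in this pipeline is essentially a threshold policy over the edge scores. A sketch with illustrative thresholds (real values come from calibration on a golden set):

```python
def kyc_decision(face_match: float, spoof: float, ocr_conf: float) -> str:
    """Toy decision policy over [0, 1] scores; thresholds are placeholders."""
    if spoof > 0.5 or face_match < 0.2:
        return "deny"
    if face_match >= 0.8 and spoof <= 0.05 and ocr_conf >= 0.9:
        return "allow"
    return "manual"  # disputed case goes to the HITL review queue
```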

6.2 Moderation of creatives

1. Ingest creatives (from DAM/admin panel) →
2. Detect text/symbols/logos →
3. Classify "allow/flag/deny" by jurisdiction →
4. API to ad engine + reporting.
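The jurisdiction-dependent classification step can be sketched as a rule table keyed by market. Markets, rule names, and tags below are invented for illustration:

```python
# Hypothetical per-market rule table; extend per legal review.
RULES: dict[str, set[str]] = {
    "TR": {"age_rating_required", "no_bonus_amounts"},
    "EE": {"age_rating_required"},
}

def moderate(detected_tags: set[str], market: str) -> tuple[str, list[str]]:
    """Map detector output tags to a decision for one market."""
    violations: list[str] = []
    rules = RULES.get(market, set())
    if "age_rating_required" in rules and "age_rating" not in detected_tags:
        violations.append("age_rating_missing")
    if "no_bonus_amounts" in rules and "bonus_amount" in detected_tags:
        violations.append("prohibited_text")
    return ("deny" if violations else "allow"), violations
```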

6.3 Visual regression of the UI

1. Scripted screenshot generation per device/locale →
2. Per-pixel/per-object comparison with tolerances →
3. Alert in PR/CI; auto-capture of before/after.
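A minimal per-pixel comparison with tolerances, as used by the PR gate above. Tolerance values are examples only:

```python
import numpy as np

def visual_diff(base: np.ndarray, cand: np.ndarray,
                per_pixel_tol: int = 8, max_diff_pct: float = 0.5) -> tuple[float, bool]:
    """Return (percent of changed pixels, gate passed?) for two uint8 screenshots."""
    if base.shape != cand.shape:
        return 100.0, False                     # layout change: fail outright
    delta = np.abs(base.astype(int) - cand.astype(int))
    changed = delta.max(axis=-1) > per_pixel_tol if delta.ndim == 3 \
        else delta > per_pixel_tol              # tolerate small antialiasing noise
    pct = 100.0 * float(changed.mean())
    return pct, pct <= max_diff_pct
```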


7) Quality and SLO metrics

Direction | Model metrics | Operational SLO
KYC/OCR | CER/WER, F1 on document fields, face-match ROC, spoof AUC | p95 inference ≤ 300 ms (edge), success ≥ 99.5%
Liveness | APCER/BPCER, EER | false accepts ≤ target; incident MTTR ≤ 30 min
Moderation | Precision@deny, Recall@deny, FPR by region | p95 ≤ 500 ms, zero "dangerous" misses in production
Logo/Stream | mAP@50/75, hit rate, coverage by provider | detection lag ≤ 2 s; uptime ≥ 99.5%
UI regression | PSNR/SSIM Δ, pixel-diff % within tolerances | PR gate: fail when diff % > threshold

Optional: bias/fairness across skin tone/lighting/camera; privacy (no PII leaks in frames/logs).
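APCER and BPCER from the table can be computed directly from labeled outcomes (per ISO/IEC 30107-3: APCER is the share of attacks accepted as genuine, BPCER the share of genuine users rejected). A minimal sketch:

```python
def apcer_bpcer(labels: list[int], preds: list[int]) -> tuple[float, float]:
    """labels: 1 = presentation attack, 0 = bona fide.
    preds:  1 = flagged as attack,     0 = accepted as genuine."""
    attacks = [p for l, p in zip(labels, preds) if l == 1]
    bona = [p for l, p in zip(labels, preds) if l == 0]
    apcer = sum(1 for p in attacks if p == 0) / len(attacks)  # attacks let through
    bpcer = sum(1 for p in bona if p == 1) / len(bona)        # genuine users rejected
    return apcer, bpcer
```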


8) Security, privacy and compliance

Biometrics-by-design: minimization/locality (on-device), encryption, retention per policy.
Tokenization of face embeddings, irreversibility guarantees, separate keys.
DSAR/deletion: lookup by subject token, crypto-erase.
Legal hold: freezing video/footage for investigations.
Jurisdictions: geo-isolation of data/keys, differing 18+/advertising rules.
Audit: immutable inference/decision logs (WORM), explainability for borderline cases.
Attacker tactics: protection against re-capture, adversarial patterns; rate limiting.
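Embedding tokenization can be sketched as quantize-then-HMAC: the raw vector is never stored, lookups work by token equality, and destroying the per-tenant key crypto-erases every derived token. This is an illustration of the idea, not a vetted biometric template-protection scheme.

```python
import hashlib
import hmac
import numpy as np

def tokenize_embedding(emb: np.ndarray, key: bytes) -> str:
    """One-way subject token: coarse quantization, then keyed HMAC-SHA256."""
    q = np.round(emb * 100).astype(np.int16).tobytes()  # stable bytes for equal inputs
    return hmac.new(key, q, hashlib.sha256).hexdigest()
```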


9) Observability and alerts

Online metrics: latency p50/95/99, error rate, saturation (GPU/CPU/IO).
Quality: drift by lighting/cameras/countries; growth of APCER or FPR.
Ops: queue of disputed cases, SLA for manual review.
Alerts: surge in missed denies/false positives, drop in OCR accuracy.
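An APCER-surge alert can be implemented as a sliding window over labeled outcomes. Window size and limit below are illustrative, not recommendations:

```python
from collections import deque

class ApcerAlert:
    """Fire when the windowed miss rate exceeds a limit (hypothetical monitor)."""
    def __init__(self, window: int = 500, limit: float = 0.03, min_events: int = 50):
        self.events: deque[bool] = deque(maxlen=window)
        self.limit = limit
        self.min_events = min_events

    def observe(self, missed_attack: bool) -> bool:
        """Record one outcome; return True if the alert should fire."""
        self.events.append(missed_attack)
        if len(self.events) < self.min_events:
            return False                     # not enough data to judge
        return sum(self.events) / len(self.events) > self.limit
```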


10) Integrations (API/Contracts)

10.1 KYC service

```yaml
api: /v1/kyc/check
request:
  selfie: image_token
  document_front: image_token
  document_back: image_token
  country: "EE"
  purpose: "account_opening"
response:
  scores: {face_match: 0.93, spoof: 0.02}
  ocr: {name: "IVAN IVANOV", dob: "1994-02-14"}
  decision: "allow | manual | deny"
  trace_id: "..."
  privacy: {pii: true, tokenized: true}
```

10.2 Moderation of creatives

```yaml
api: /v1/creative/moderate
request: {image_token: "...", market: "TR", channel: "display"}
response:
  violations: ["age_rating_missing", "prohibited_text"]
  decision: "deny"
  trace_id: "..."
```

11) MLOps for CV

Registry: models/data/augmentations/versions; usage restrictions.
Releases: shadow/canary/blue-green, rollback on FPR/latency regressions.
Tests: golden set with hard cases (masks, glare on plastic, screen re-capture).
Monitoring: drift of lighting features (illumination, sharpness), bias reports.
Cost: INT8/FP16, sparsity, batch size, preprocessing cache, light/heavy model routing.


12) Templates (ready to use)

12.1 Inference policy (SLO/privacy)

```yaml
cv_service: vision.core
slo:
  p95_latency_ms: 300
  success_rate: 0.995
privacy:
  store_frames: false
  biometrics_tokenized: true
  retention: "P30D"
monitoring:
  spoof_apcer_max: 0.03
  ocr_cer_max: 0.06
  bias_gap_pp_max: 3
```

12.2 KYC module launch checklist

  • On-device pre-validation and passive liveness enabled
  • CER/WER ≤ threshold on the golden set
  • Bias report across cameras/lighting/document types
  • Shadow mode on 5-10% of applications, manual review of disputed cases
  • DSAR/deletion and legal hold verified
  • APCER/BPCER and latency alerts configured

12.3 Runbook: "APCER growth"

1. Check the dashboard by cameras/countries; identify hot segments.
2. Switch those segments to the "heavy" anti-spoof model on edge.
3. Tighten thresholds; enable active checks (blink/prompt).
4. Update augmentations and the golden set; write a post-mortem.


13) Implementation Roadmap

0-30 days (MVP)

1. KYC: OCR + basic face match, passive on-device liveness, manual review of disputed cases.
2. Moderation of creatives: rules + text/logo detector; deny list by jurisdiction.
3. UI regression: visual snapshots of top screens, PR gate by diff %.

30-90 days

1. Anti-spoof ViT, active prompts; document/selfie synthetics.
2. Video analytics of streams: logos/highlights; reports to providers.
3. Bias/fairness reports, drift monitoring; canary releases, SLO alerts.

3-6 months

1. Confidential inference (TEE) for VIP/payout flows.
2. Full brand-safety control and A/B testing of creatives correlated with CR/ARPPU.
3. Auto-generation of golden sets from disputed cases; champion-challenger configs.
4. External integrations with providers/CUS partners via signed webhooks.


14) Anti-patterns

No golden set and no bias audit → "good on average, bad at the edges."
Storing raw frames without need or time limits; logs containing PII.
Liveness that is only active (no passive) or vice versa.
Universal thresholds for all countries/cameras/scenes (ignoring seasonality/illumination).
Running heavy models without profiling and latency/cost budgets.
Moderating creatives only as the last step before release: expensive and too late.


15) Related Sections

KYC/AML and access control, DataOps practices, MLOps: model exploitation, analytics and metrics APIs, Feedback sentiment analysis, Data stream alerts, Data ethics and transparency, Data retention policies.


Result

Computer vision is not a "separate neural network" but part of the production data-and-risk pipeline: from on-device privacy and geo-isolation to MLOps and quality alerts. A sound CV architecture reduces fraud and manual checks, speeds up KYC, makes marketing safe and measurable, and makes the product more stable and accessible.
