GH GambleHub

Risk scoring and prioritization

1) Purpose and results

The objective is to make risk assessment and ranking reproducible and verifiable so that decisions on budgets/timing/resources are:
  • comparable (unified scales and formulas),
  • transparent (data sources and assumptions are documented),
  • measurable (metrics and KRI tied to controls and incidents),
  • executable (each risk corresponds to a CAPA/waiver plan with an expiration date).

Outputs: unified risk register, prioritized measure backlog, heat maps, residual risk reports, audit-ready artifacts.

2) Terms and risk levels

Inherent Risk - risk excluding controls.
Residual Risk - risk taking into account current controls (verified ToD/ToE/CCM).
Target Risk - target level after CAPA/compensatory measures.
Likelihood (L) - probability of scenario occurrence in the evaluation horizon.
Impact (I) - the largest of: finance, licenses/law, privacy/data, operations/SLO, reputation.
KRI - risk indicators affecting L/I (for example, dsar_response_p95, chargeback ratio).

3) Scales and basic models

3.1 Discrete matrix (5×5 or 4×4)

Score = L × I → range 1-25 (or 1-16).

Categories (Example 5 × 5):
  • 20–25 = Critical, 12–19 = High, 6–11 = Medium, 1–5 = Low.
  • Thresholds are published in the Scoring Policy and apply uniformly to all domains.
Likelihood scale (example, 5 levels):
  • 1 - less often than once in 3 years; 2 - once every 1-3 years; 3 - annually; 4 - quarterly; 5 - monthly or more often.
Impact scale (by max-criterion, example):
  • 1 — <€10k; 2 — €10–100k; 3 — €100–300k; 4 — €300k–€1m; 5 — >€1m; for legal/licensing risks the level is raised to at least 4–5.
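The example thresholds above can be sketched as a small scoring function. This is only an illustration of the sample 5×5 bands; in practice the thresholds come from the versioned Scoring Policy, not from code constants:

```python
def risk_category(likelihood: int, impact: int) -> str:
    """Map a 5x5 matrix score (L x I) to the example category bands:
    20-25 Critical, 12-19 High, 6-11 Medium, 1-5 Low."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in 1..5")
    score = likelihood * impact
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

print(risk_category(4, 4))  # score 16 -> High
```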

3.2 Quantitative models

ALE (Annualized Loss Expectancy): `ALE = SLE × ARO`, where `SLE` is the average loss per event and `ARO` is the expected number of events per year.
FAIR approach (simplified): simulate event frequency (Threat Event Frequency) and loss magnitude (Loss Magnitude), then use percentiles (p50/p95) for decisions.
Monte Carlo: distributions for frequency and severity (lognormal/gamma, etc.), 10-100k runs → loss exceedance curves. Apply to the most expensive and regulatory-critical risks.

Recommendation: 80% of cases - 5×5 matrix; 20% (top risks) - ALE/FAIR/Monte Carlo.
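A minimal Monte Carlo sketch of the loss-exceedance idea using only the standard library. The Poisson frequency and lognormal severity, and all parameter values, are illustrative placeholder assumptions, not calibrated inputs:

```python
import math
import random

def simulate_annual_loss(runs=10_000, freq_lambda=2.0,
                         loss_mu=10.0, loss_sigma=1.2, seed=42):
    """Monte Carlo sketch: Poisson event count x lognormal severity per year.
    All parameters are illustrative placeholders, not calibrated values."""
    rng = random.Random(seed)
    losses = []
    for _ in range(runs):
        # Poisson draw via Knuth's multiplication method
        n, p, threshold = 0, 1.0, math.exp(-freq_lambda)
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            n += 1
        # Sum n lognormal severity draws for this simulated year
        losses.append(sum(rng.lognormvariate(loss_mu, loss_sigma)
                          for _ in range(n)))
    losses.sort()
    return {"p50": losses[runs // 2],
            "p95": losses[int(runs * 0.95)],
            "ale": sum(losses) / runs}

stats = simulate_annual_loss()
```

Sorting the simulated annual losses gives the loss exceedance curve directly; `p50`/`p95` are the percentiles used for FAIR-style decisions, and the mean is the ALE.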

4) Residual and target risk

1. Calculate Inherent from "no controls" assumptions.
2. Consider the effectiveness of existing controls (tested ToD/ToE/CCM) → Residual.
3. Determine Target taking into account the planned CAPA/compensatory measures and the date of achievement.
4. If Target ≤ the tolerance threshold (risk appetite), accept; otherwise a waiver with an expiry date and compensating controls is required.
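Steps 2-4 above can be sketched as follows. The linear discount of inherent risk by control effectiveness is a common simplification used here only for illustration; it is an assumption, not a formula prescribed by this policy:

```python
def residual(inherent: float, control_effectiveness: float) -> float:
    """ASSUMPTION (illustrative only): residual = inherent * (1 - e),
    where e is the verified (ToD/ToE/CCM) effectiveness in [0, 1]."""
    if not 0.0 <= control_effectiveness <= 1.0:
        raise ValueError("effectiveness must be in [0, 1]")
    return inherent * (1.0 - control_effectiveness)

def disposition(target: float, tolerance: float) -> str:
    """Step 4: within risk appetite -> accept; otherwise a waiver with
    an expiry date and compensating controls is required."""
    return "accept" if target <= tolerance else "waiver_required"

print(residual(20, 0.4))   # inherent 20, controls 40% effective
print(disposition(6, 8))   # accept
```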

5) Data sources and evidence

Metrics and KRI (dashboards, logs, incident reports).
Control test results (CCM), audits (internal/external).
Provider reports: SLA/certificates/incidents/changes in data locations.
Financial analytics: fines, chargebacks, fraud loss %.
Each score is accompanied by evidence links with a timestamp and a hash receipt (WORM).

6) Prioritization of initiatives (transfer of risk → action)

6.1 RICE (risk adaptation)

`RICE = (Reach × Impact_adj × Confidence) / Effort`

Reach - how many customers/transactions/jurisdictions are affected.
Impact_adj - transformed I (or ALE/loss p95).
Confidence - reliability of estimates (0.5/0.75/1.0).
Effort - man-weeks/cost.
Sort by RICE descending → quick wins rise to the top.
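The formula and the sorting step can be sketched as follows; the backlog items and all numbers are hypothetical examples, not real initiatives:

```python
def rice(reach: float, impact_adj: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact_adj x Confidence) / Effort."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact_adj * confidence) / effort

# Hypothetical backlog items; numbers are illustrative only
backlog = [
    {"name": "DSAR queue automation", "reach": 10_000,
     "impact_adj": 4, "confidence": 0.75, "effort": 3},
    {"name": "Chargeback alerting", "reach": 2_000,
     "impact_adj": 5, "confidence": 1.0, "effort": 2},
]
for item in backlog:
    item["rice"] = rice(item["reach"], item["impact_adj"],
                        item["confidence"], item["effort"])
backlog.sort(key=lambda i: i["rice"], reverse=True)  # quick wins first
```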

6.2 Risk-adjusted WSJF

`WSJF = Cost of Delay / Job Size`, where

`Cost of Delay = Risk Reduction + Time Criticality + Business Value`.

Risk Reduction is the expected decline in Residual/ALE.
Time Criticality - deadlines of regulators/audits.
Business Value - income/savings, customer confidence.
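A one-line implementation of the formula above; the sample inputs 8/5/3/5 reproduce the wsjf value shown in the example card of section 12.3:

```python
def wsjf(risk_reduction: float, time_criticality: float,
         business_value: float, job_size: float) -> float:
    """WSJF = Cost of Delay / Job Size,
    with CoD = Risk Reduction + Time Criticality + Business Value."""
    if job_size <= 0:
        raise ValueError("job_size must be positive")
    return (risk_reduction + time_criticality + business_value) / job_size

print(wsjf(8, 5, 3, 5))  # 3.2
```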

6.3 Regulatory priority

If the risk is related to licenses/law and there is a hard deadline, it automatically falls into Critical/High, regardless of the "economic" scoring.

7) Threshold rules and escalations

Critical: immediate triage, CAPA ≤ 30 days, re-audit in 60-90 days; weekly committee review.
High: CAPA ≤ 60 days, follow-up 90 days.
Medium: Inclusion in quarterly plan.
Low: monitoring + possible inclusion in a "tech debt" slot.
KRI thresholds: Amber (warning) and Red (mandatory escalation and CAPA).
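The Amber/Red threshold logic can be sketched as below. The 18/20 thresholds are borrowed from the DSAR example in section 12.2, and the function assumes higher KRI values are worse (as for `dsar_response_p95_days`):

```python
def kri_status(value: float, warn: float, red: float) -> str:
    """Green below the warning threshold, Amber from warning up,
    Red at/above the escalation threshold (mandatory escalation + CAPA).
    Assumes higher KRI values are worse."""
    if value >= red:
        return "Red"
    if value >= warn:
        return "Amber"
    return "Green"

print(kri_status(19, warn=18, red=20))  # Amber
```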

8) Roles and RACI

| Activity | R | A | C | I |
|---|---|---|---|---|
| Scoring technique | Risk Office / Compliance Eng | Head of Risk | Legal/DPO, Finance | Internal Audit |
| Assessment of specific risks | Risk Owners | Head of Function | Control Owners, Data | Committee |
| Verification of controls | Compliance / Internal Audit | Head of Compliance | SecOps | Board |
| Prioritization of initiatives | Compliance Ops | Head of Compliance | Product/Finance | Exec |
| KRI monitoring/dashboards | Compliance Analytics | Head of Compliance | Data Platform | Exec/Board |

9) Dashboards

Risk Heatmap: 5×5 matrix, filters by domain/country/provider.
Risk Funnel: Inherent → Residual → Target.
Top-N by ALE/p95 Loss: quantitative risks.
KRI Watchlist: indicators and thresholds, Amber/Red alarms.
CAPA Impact: expected/actual reduction; progress on timelines.
Waivers: current exceptions, deadlines and compensatory measures.

10) Performance metrics

Risk Reduction Index: Δ of the weighted-average risk score (quarter over quarter).
On-time CAPA: % of measures completed on time (by severity).
Repeat Findings (12 months): share of repeated violations.
Evidence Completeness: % of risks with a full evidence package (target: 100% for High+).
Prediction Accuracy: deviation of estimated vs. actual losses/frequencies.
Time-to-Triage / Time-to-Plan / Time-to-Target.
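The Risk Reduction Index above can be computed as a quarter-over-quarter delta of the weighted-average risk score. The weighting scheme (e.g. by exposure or ALE) is an assumption for illustration, not fixed policy:

```python
def weighted_avg(scores, weights):
    """Weighted mean of per-risk scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def risk_reduction_index(prev_scores, curr_scores, weights):
    """Quarter-over-quarter delta of the weighted-average risk score;
    a negative value means overall risk went down. The weighting scheme
    is an assumption (e.g. by exposure or ALE), not fixed policy."""
    return weighted_avg(curr_scores, weights) - weighted_avg(prev_scores, weights)

# Two risks, the first weighted 2x; its score fell 16 -> 12 after CAPA
delta = risk_reduction_index([16, 9], [12, 9], weights=[2, 1])
print(delta < 0)  # True: risk decreased
```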

11) SOP (standard procedures)

SOP-1: Initialization and scales

Define L/I scales and category thresholds → approve in the Committee → record in the repository (versioning).

SOP-2: Quarterly revaluation

Collection of KRI/incidents → recalculation of L/I/ALE → review by owners → committee prioritization → publication of Roadmap.

SOP-3: Trigger Incident

In case of Critical/High incident - unscheduled recalculation, adjustment of CAPA and priorities.

SOP-4: Quantitative analysis (Top-risks)

Prepare input distributions → Monte Carlo (≥10k runs) → loss curves → Committee decision.

SOP-5: Archive and evidence

Export slices (CSV/PDF) + hash receipts → WORM archive → links in GRC cards.

12) Templates and "as-code"

12.1 Scoring policy (snippet)


```yaml
scales:
  likelihood:
    1: ">3y / p<5%"
    3: "annual"
    5: "monthly+"
  impact_finance:
    1: "<€10k"
    3: "€100k–€300k"
    5: ">€1m"
categories:
  critical: "score>=20 or regulatory_deadline<=60d"
  high: "score>=12"
tolerance:
  privacy_licensing: "min_impact=4"
```

12.2 Risk Card (YAML)

```yaml
id: RSK-PRIV-DSAR-001
title: "DSAR delinquency"
domain: "Privacy"
jurisdictions: ["EEA", "UK"]
likelihood: 4
impact: 4
score: 16
model: "matrix_5x5"
ale_eur: 280000  # if quantified
controls: ["CTRL-DSAR-SLA", "CTRL-RET-DEL"]
kri:
  - key: "dsar_response_p95_days"
    warn: 18
    red: 20
residual: 12
target: 6
capa:
  - id: CAPA-123
    action: "DSAR queue optimization, auto-prioritization, alerts"
    due: "2025-02-15"
    expected_delta: -6
waiver: null
evidence: ["hash://.../metrics.csv", "hash://.../ccm_report.pdf"]
```

12.3 Prioritization (WSJF example)

```yaml
initiative: "DSAR queue automation"
cod:
  risk_reduction: 8
  time_criticality: 5
  business_value: 3
job_size: 5
wsjf: 3.2
```

13) Compensatory measures and waivers

If a quick fix is not possible:
  • we introduce compensating controls (manual checks, limits, additional monitoring) with performance metrics;
  • we issue a waiver with an expiration date, owner, and replacement plan;
  • mandatory re-audit in 30-90 days.

14) Antipatterns

"Beautiful matrix" with no connection to KRI/controls/incidents.
Floating scales and "manual tuning" to the desired result.
Lack of versioning of calculations and assumptions.
Infrequent revisions → the map does not reflect reality.
Waivers with no expiry date and no compensatory measures.
Lack of quantitative analysis for top risks.

15) Maturity model (M0-M4)

M0 Ad-hoc: estimates "by eye"; no unified policy.
M1 Planned: matrix 5 × 5, quarterly updates, basic dashboards.
M2 Managed: communication with KRI/CCM, CAPA linking, WORM evidence.
M3 Integrated: ALE/FAIR/Monte Carlo for top-risks, WSJF/RICE in Roadmap, CI/CD gates.
M4 Continuous Assurance: predictive KRI, auto-recalculation, recommendation priorities and evidence-by-design.

16) Related wiki articles

Heat risk map

Risk-Based Audit (RBA)

KPIs and compliance metrics

Continuous Compliance Monitoring (CCM)

Remediation Plans (CAPAs)

Policy and compliance repository

Compliance Roadmap

External audits

Summary

Risk scoring and prioritization is an engineering discipline, not an art: stable scales and policies, provable data, quantitative methods for top risks, explicit thresholds and escalations, and a direct link to the CAPA and roadmap. This approach makes decisions predictable, accelerates approvals, and reduces the overall risk of the business.
