Key figure hierarchy

The hierarchy of indicators connects the company's strategic goal with the daily decisions of its teams. It is a managed "tree" that runs from the North Star Metric (NSM) and business outcomes down to drivers, operational KPIs and guardrails (safety constraints). Below is how to design, document and operate such a hierarchy.

1) Why hierarchy is needed

Single focus: all teams pull in one direction; metric conflicts are minimized.
Transparent causality: it is clear why the NSM moves (the driver tree shows it).
Speed of decisions: local metrics are directly tied to business impact.
Sustainability: guardrails prevent the side effects of over-optimization.

2) Layers and terms

North Star Metric (NSM): the main value metric (e.g., "Monthly Active Paying Users").
Business results: revenue, margin, GGR, market share, NPS; related to the NSM, but not always identical to it.
Driver tree: decomposition of the NSM into factors: volume × conversion × frequency × average check, etc.
Team KPIs: indicators the team actually controls (funnels, latency, PR-AUC, etc.).
Guardrails: constraints such as FPR ≤ x%, p95 latency ≤ y, complaint share ≤ z; the "brakes" on growth.
Process metrics: release velocity, data quality, SLO adherence; they support the system.

💡 Rule: every layer has an owner, a formula, a source, a refresh rate and a freshness SLO.

3) How to choose North Star

Criteria: the NSM reflects value for the user, correlates with long-term revenue, is sensitive to product changes, and is understandable and resistant to manipulation.

Examples:
  • B2C product: 'WAU of paying users' or 'Sessions with successful target activity per month'.
  • Marketplace: 'Completed successful deals/month'.
  • ML platform: 'Percentage of requests served by model with SLA ≤ X ms and calibration in tolerance'.

4) Driver Tree

1. Define the NSM formula.
2. Decompose it into multiplicative/additive factors.
3. Add a layer of manageable levers.
4. Link leaf metrics to teams.
5. Assign an owner and target levels to each node.

Example (universal template):

NSM: Monthly Active Paying Users (MAPU)
= Active users × Paying share × Average payment frequency × Average check
├─ Active users = Traffic × Activation × Retention
│  ├─ Traffic = Organic + Paid (campaign CR, CAC)
│  ├─ Activation = Onboarding conversion (funnel steps)
│  └─ Retention = D7/D30, cohorts
├─ Paying share = Propensity models + Offers (guardrail: RG/fraud)
├─ Frequency = Missions/quests, content rotation
└─ Average check = Pricing, bundles (guardrail: complaints/refunds)
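
To make the template concrete, here is a minimal sketch that rolls the multiplicative part of this tree up into a single NSM value. All node names and numbers are invented for illustration, and Traffic is simplified to a single figure even though the template splits it into organic + paid.

# Minimal sketch: rolling up the multiplicative driver tree above (Python).
# All names and numbers are invented; this follows the template literally.

tree = {
    "active_users": {              # = Traffic × Activation × Retention
        "traffic": 1_000_000,      # simplified: organic + paid combined
        "activation": 0.40,        # onboarding conversion
        "retention_d30": 0.25,
    },
    "paying_share": 0.045,
    "payment_frequency": 2.3,
    "average_check": 18.5,
}

def node_value(node):
    """A dict node is the product of its children; a leaf is its own value."""
    if isinstance(node, dict):
        value = 1.0
        for child in node.values():
            value *= node_value(child)
        return value
    return node

print(f"NSM per the template: {node_value(tree):,.0f}")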

5) Target cascading (OKR/KPI)

Company level: NSM + 3-5 drivers (year/quarter).
Features/products: driver subtrees (target Δ).
Teams: KPIs on the tree's leaf nodes; targets set in OKR (Commit/Stretch) format.
Individual level: task metrics (via contribution to team KPIs).

Tips:
  • One level: ≤ 7 metrics (overload kills focus).
  • Every target is bound to a unique tree node (no duplication).
  • For conflicting pairs, set an explicit priority and/or a compromise expressed through guardrails.
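
As a sketch, the two bookkeeping rules above (unique node per target, ≤ 7 metrics per level) can be validated automatically; the target records below are hypothetical.

# Sketch: validating cascade rules over (level, tree node, target) records.
# The records are hypothetical examples.
from collections import Counter

targets = [
    ("company", "MAPU", "MAPU +8% YoY"),
    ("product", "RETENTION_D30", "Retention D30 +2 p.p."),
    ("team", "ONBOARDING_CR", "Activated registrations +5 p.p."),
]

def check_cascade(records):
    issues = []
    nodes = Counter(node for _, node, _ in records)
    issues += [f"node {n} bound to {c} targets" for n, c in nodes.items() if c > 1]
    levels = Counter(level for level, _, _ in records)
    issues += [f"level {l} carries {c} metrics (> 7)" for l, c in levels.items() if c > 7]
    return issues or ["cascade OK"]

print(check_cascade(targets))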

6) Guardrails: how to set the "rails"

Assign guardrails by risk area:
  • Quality/experience: p95 latency, crash-free%, fault tolerance.
  • Ethics/compliance: anti-fraud FPR, share of complaints, RG indicators.
  • Data: Freshness, Completeness, PSI drift.
  • Finance: margin ≥ X, chargeback rate ≤ Y.

Guardrails are a mandatory line in every OKR/metric passport; when one is triggered, the rollout is blocked.
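
A minimal sketch of that rule, assuming guardrails are stored as simple threshold predicates; all metric names and thresholds are placeholders.

# Sketch: guardrails as predicates; any violation blocks the rollout.
# Metric names and thresholds are placeholders.

guardrails = {
    "fpr": lambda v: v <= 0.02,             # anti-fraud FPR <= 2%
    "p95_latency_ms": lambda v: v <= 300,   # p95 latency <= 300 ms
    "complaint_share": lambda v: v <= 0.005,
}

def rollout_allowed(current: dict) -> bool:
    violated = [name for name, ok in guardrails.items() if not ok(current[name])]
    if violated:
        print("Rollout blocked by:", ", ".join(violated))
        return False
    return True

rollout_allowed({"fpr": 0.031, "p95_latency_ms": 240, "complaint_share": 0.004})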

7) Formulas, sources and semantic layer

All tree metrics are defined in the semantic layer (single definitions, rollups, calendar, currencies/timezones).
Versioning: 'METRIC_NAME_vN'; any formula edit = new version + backfill/reconciliation.
Metric passport (short form): code, definition, formula/SQL, sources, granularity, default segments, units/currency, freshness and availability SLOs, owners, guardrails, change history.
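
A sketch of such a passport as a plain record in code; every field value below is a made-up example, not a real definition.

# Sketch: a metric passport as a plain record; all values are made up.
metric_passport = {
    "code": "RETENTION_D30_v2",                 # versioned name, METRIC_NAME_vN
    "definition": "Share of users active on day 30 after registration",
    "formula_ref": "semantic_layer/retention_d30.sql",
    "sources": ["events.sessions", "dim.users"],
    "granularity": "daily",
    "default_segments": ["country", "channel", "platform"],
    "units": "percent",
    "slo": {"freshness_minutes": 60, "availability": 0.999},
    "owners": ["Product Analytics"],
    "guardrails": ["complaints <= threshold", "data_freshness <= 1h"],
    "changelog": [("v2", "2025-Q4", "formula edit + backfill")],
}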

8) Measurement frequencies and SLO

Operational metrics: minute/hour granularity (lag SLO ≤ 10-15 min).
Product/marketing: day/week (moving averages and seasonal normalization are mandatory).
Financial: week/month (reconciliation with accounting).
ML production metrics: streaming + deferred offline evaluations (OOT).

For each metric on a dashboard, show "updated X min ago" and the planned SLO.
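
A sketch of the "updated X min ago" badge with an SLO check; the timestamps and the 15-minute SLO are invented.

# Sketch: freshness badge for a dashboard metric; timestamps are invented.
from datetime import datetime, timezone

def freshness_badge(last_updated, now, slo_minutes):
    age_min = (now - last_updated).total_seconds() / 60
    status = "OK" if age_min <= slo_minutes else "SLO VIOLATED"
    return f"updated {age_min:.0f} min ago (SLO <= {slo_minutes} min, {status})"

now = datetime(2025, 10, 1, 12, 12, tzinfo=timezone.utc)
last = datetime(2025, 10, 1, 12, 0, tzinfo=timezone.utc)
print(freshness_badge(last, now, slo_minutes=15))  # updated 12 min ago ... OK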

9) Dashboards and reviews

Executive layer: NSM + key drivers, guardrail alert widgets.
Domain layer: funnels, cohorts, factor contributions (waterfalls/decompositions; see the sketch below).
Ops/ML layer: SLA, errors, drift, calibration, latency p95/p99.
Rituals: daily (operations), weekly (tactics), monthly/QBR (strategy/OKR).
In reviews, walk the tree: show the node, the deviation, the hypotheses and the plan.
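
For the waterfall/decomposition mentioned above, a common trick for a multiplicative metric is to split the total change into per-factor log contributions, which are additive. A minimal sketch with invented numbers:

# Sketch: per-factor contributions to a multiplicative NSM change (log decomposition).
# Before/after values are invented.
import math

before = {"active_users": 100_000, "paying_share": 0.045, "frequency": 2.3, "check": 18.5}
after  = {"active_users": 104_000, "paying_share": 0.043, "frequency": 2.4, "check": 18.9}

total = sum(math.log(after[f] / before[f]) for f in before)
for f in before:
    contrib = math.log(after[f] / before[f])   # additive in log space
    print(f"{f}: {contrib:+.2%} log-points")
print(f"Total NSM change: {math.exp(total) - 1:+.1%}")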

10) Responsibility Mapping (RACI)

For each tree node:
  • Responsible: the team that owns the KPI.
  • Accountable: Domain/Product Leader.
  • Consulted: analytics/data/compliance.
  • Informed: related teams/top management.

Conflicts between metrics are resolved at the nearest shared tree node, with its Accountable making the call.
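
As a sketch, the RACI mapping can be stored alongside each tree node; the role assignments below are invented examples.

# Sketch: RACI stored per tree node; role assignments are invented.
raci = {
    "RETENTION_D30": {
        "responsible": "Growth team",                       # owns the KPI
        "accountable": "Product lead",                      # domain/product leader
        "consulted": ["Analytics", "Data", "Compliance"],
        "informed": ["Adjacent teams", "Top management"],
    },
}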

11) Examples of "mini-trees" by domain

Marketing:
  • 'Attracted quality users = Traffic × Share of qualified leads × CR in registration × Brand share'
  • Guardrails: CAC ≤ target, brand safety, spam rate.
Product/Engagement:
  • 'WAU = DAU × Stickiness × frequency'
  • Guardrails: crash-free ≥ 99.5%, complaints ≤ threshold.
Monetization:
  • 'Revenue = Active Paying × Payment Frequency × Average Check'
  • Guardrails: refund%, chargeback rate, RG indicators.
ML/scoring (e.g., anti-fraud or RG risk; see the sketch below):
  • 'Business effect = Prevented losses − False-block costs'
  • Key KPIs: PR-AUC, Recall@FPR≤x%, coverage; guardrails: p95 latency, share of appeals.
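
A minimal sketch of that business-effect formula; the case counts and unit costs are invented assumptions.

# Sketch: business effect of a fraud/RG model.
# Effect = prevented losses - false-block costs; all numbers are invented.

def business_effect(tp, fp, avg_loss_prevented, avg_false_block_cost):
    """tp: correctly blocked cases; fp: wrongly blocked legitimate users."""
    return tp * avg_loss_prevented - fp * avg_false_block_cost

effect = business_effect(tp=420, fp=65, avg_loss_prevented=150.0, avg_false_block_cost=40.0)
print(f"Monthly business effect: {effect:,.0f}")  # 420*150 - 65*40 = 60,400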

12) Anti-patterns

Inconsistent definitions: "Retention" is computed differently across teams; cured with a semantic layer and versioning.
Proxy optimization: click growth without NSM growth; protect via the NSM and guardrails.
Too many metrics: an "airplane cockpit" of dashboards; keep ≤ 7 per level.
No owner: an ownerless metric degrades quickly.
Silent formula edits: always a new version + changelog.

13) Artifact patterns

A. Tree Node Passport

Node: 'RETENTION_D30'

Role in tree: NSM driver

Formula (semantic object): reference/code

Segments: Country/Channel/Platform

Owner: Product Analytics

Goals (quarter): Commit/Stretch

Guardrails: churn/complaints ≤ X, data freshness ≤ 1 h

Risks/assumptions: seasonality, traffic mix

Date/Version: 2025-Q4, v2

B. Target cascade map (example)

NSM (company): MAPU +8% YoY

Product: Retention D30 +2 p.p.

Onboarding team: activated registrations +5 p.p.

Marketing: quality leads +12% at CAC ≤ target

Monetization: ARPPU +6% at refund% ≤ threshold

C. Deviation runbook

Event: DAU drawdown −10% DoD

Actions: check data freshness → release incidents → traffic channels → regions/platforms → RCA and plan (see the sketch below)
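
A minimal sketch of this runbook as an ordered chain of checks; each check function is a placeholder for a query against the corresponding system, and the outputs are invented.

# Sketch: the DAU-drawdown runbook as an ordered chain of checks.
# Each check is a placeholder for a real query/lookup; outputs are invented.

def check_data_freshness():    return "data is fresh"
def check_release_incidents(): return "no incidents in the last release"
def check_traffic_channels():  return "paid channel -18% DoD"
def check_regions_platforms(): return "drop concentrated on Android"

RUNBOOK_DAU_DRAWDOWN = [
    ("data freshness", check_data_freshness),
    ("release incidents", check_release_incidents),
    ("traffic channels", check_traffic_channels),
    ("regions/platforms", check_regions_platforms),
]

for step, check in RUNBOOK_DAU_DRAWDOWN:
    print(f"[{step}] {check()}")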

14) Hierarchy implementation checklist

  • NSM is defined, along with its link to user value/revenue
  • Driver tree is built and documented (owners, formulas, sources)
  • Each node has targets, guardrails, a refresh rate and an SLO
  • Metrics live in the semantic layer; versions and the glossary are consistent
  • Dashboards per layer and review rituals (daily/weekly/monthly/QBR)
  • RACI is fixed; KPI conflicts and priorities are described
  • Runbooks exist for typical deviations; alerts fire on NSM/guardrails
  • The tree is revised quarterly; outdated nodes are removed

Result

The hierarchy of indicators is the chain NSM → driver tree → team KPIs → guardrails, fixed in the semantic layer and wired into the management rhythms. This design makes goals transparent, responsibility explicit, and change measurable and controllable.
