Operational Roadmap
1) Why do you need it
The operational roadmap (Ops Roadmap) turns the disparate tasks of SRE, platform, support, and domain teams into a transparent plan: what effect on SLOs, cost, and incidents we expect each quarter, and at what price (people, time, budget). This reduces chaos, brings order to technical debt, and accelerates the delivery of value to the business.
Objectives:
- Combine initiatives around measurable outcomes (SLO, MTTR, Cost/RPS, Risk).
- Agree on priorities between the platform, domains and external providers.
- Budget resources and fix "what we are not doing" (explicit trade-offs).
- Keep a single truth about execution and risks.
2) Road map principles
1. Outcome-first: each initiative is tied to an outcome metric (not "implement X," but "reduce MTTR by 20%").
2. SLO-aware: Initiatives affecting SLOs of critical pathways (deposit/bet/games/CCL) are higher in priority.
3. Data-driven: based on incidents, post-mortems, alerts, Capacity/FinOps panels.
4. Time-boxed & reversible: small increments, hypothesis testing, quick rollback.
5. Single source of truth: a single artifact, regular reviews and public statuses.
6. No hidden work: anything off the map is limited to genuine "fires," handled under the incident process.
3) Roadmap frame: levels and artifacts
Vision (12-18 months): 3-5 operational topics (Reliability, Scale, Cost, Security, Automation).
Pillars (6-12 months): blocks of initiatives by topic (e.g. "SLO-coverage of 100% critical paths," "Active-Active in 2 regions").
Quarterly plan (Q): specific initiatives with metrics, owners, dependencies, budget.
Iterations (2-3 weeks): tasks/epics and actual progress.
4) Prioritization: How to compare the incomparable
4.1 RICE (Reach, Impact, Confidence, Effort)
Reach: affected users/transactions/geo.
Impact: expected contribution to SLO/MTTR/Cost.
Confidence: Confidence in estimates (data/pilots).
Effort: man-weeks/calendar window/dependencies.
4.2 WSJF (Weighted Shortest Job First)
WSJF = Cost of Delay / Job Size, where
Cost of Delay = SLO Risk + Revenue Impact + Compliance + Incident Rate,
Job Size = the duration and effort required.
Suitable for mixed initiatives (technical debt, security, platform features).
The rule: initiatives with a high SLO risk and a high cost of delay come first, even if their effect is "invisible" in the UI.
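For comparison, a minimal scoring sketch in Python; the field values and the equal weighting inside Cost of Delay are illustrative assumptions, not a prescribed tool:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    reach: float          # affected users/transactions per quarter
    impact: float         # expected contribution to SLO/MTTR/Cost (e.g. 0.25-3.0)
    confidence: float     # 0.0-1.0, backed by data/pilots
    effort: float         # person-weeks
    cost_of_delay: float  # SLO risk + revenue impact + compliance + incident rate
    job_size: float       # duration/effort estimate

    def rice(self) -> float:
        # RICE: higher is better; effort sits in the denominator
        return self.reach * self.impact * self.confidence / self.effort

    def wsjf(self) -> float:
        # WSJF: high cost of delay relative to job size wins
        return self.cost_of_delay / self.job_size

# Hypothetical initiatives for illustration only
backlog = [
    Initiative("SLO-burn on Payments", 5_000_000, 3.0, 0.7, 6, 13, 3),
    Initiative("FinOps Right-sizing", 2_000_000, 1.5, 0.8, 8, 7, 5),
]
for item in sorted(backlog, key=lambda i: i.wsjf(), reverse=True):
    print(f"{item.name}: RICE={item.rice():,.0f} WSJF={item.wsjf():.2f}")
```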
5) Relationship with OKR, SLO and incidents
Platform-level OKR:
KR1: "Reduce Change Failure Rate from 18% to 12% by the end of Q2."
KR2: "Increase Pre-Incident Detect Rate from 35% to 60%."
SLO matrix: for each domain, target p95/p99/Success Rate/Availability (see the example after this list).
Incident analytics: the top 3 reasons for the last quarter should have counteraction initiatives in the current one.
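As an illustration, such an SLO matrix can live as structured data next to the roadmap; all domains and targets below are example values, not actual production figures:

```python
# Illustrative per-domain SLO matrix; example targets only.
SLO_MATRIX = {
    "payments": {"p95_ms": 200, "p99_ms": 400, "success_rate": 0.999, "availability": 0.9995},
    "bets":     {"p95_ms": 150, "p99_ms": 300, "success_rate": 0.999, "availability": 0.9995},
    "games":    {"p95_ms": 250, "p99_ms": 500, "success_rate": 0.995, "availability": 0.999},
    "kyc":      {"p95_ms": 500, "p99_ms": 1000, "success_rate": 0.99,  "availability": 0.999},
}
```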
6) Resource and budget planning
FTE-matrix: by squads and competencies (SRE, Observability, Data, Integrations).
Provider calendar: maintenance/quota windows (impact on dates).
CapEx/OpEx: licenses/cluster expansion vs. team hours.
Buffer: ~15-20% for unplanned "fires" and regulatory tasks (a capacity sketch follows this list).
"Not doing" policy: a list of rescheduled/declined initiatives with reasons.
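Back-of-the-envelope capacity math under these assumptions; the FTE counts are made-up examples:

```python
# Illustrative capacity calculation: how much planned work fits in a quarter
# once the unplanned-work buffer is reserved. All numbers are examples.
fte = {"sre": 6, "observability": 3, "data": 2, "integrations": 4}
weeks_per_quarter = 13
buffer = 0.20  # 15-20% reserved for "fires" and regulatory work

total_person_weeks = sum(fte.values()) * weeks_per_quarter
plannable = total_person_weeks * (1 - buffer)
print(f"Capacity: {total_person_weeks} pw, plannable after buffer: {plannable:.0f} pw")
```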
7) Managing dependencies and risks
Dependency map: who blocks whom (service/provider/data/team); see the ordering sketch after this list.
Risk register: risk, probability/impact, owner, mitigation plan/plan B.
Change freeze: periods when major changes are prohibited (prime-time events/tournaments).
Feature flags/canaries: mandatory for initiatives affecting traffic.
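One way to make the dependency map executable is to topologically order the initiatives; a minimal sketch with hypothetical initiative IDs:

```python
from graphlib import TopologicalSorter, CycleError

# Illustrative dependency map: initiative -> the initiatives that block it.
deps = {
    "canary-guardrails": {"observability-baseline", "feature-flags-core"},
    "active-active-games": {"observability-baseline"},
    "observability-baseline": set(),
    "feature-flags-core": set(),
}

try:
    order = list(TopologicalSorter(deps).static_order())
    print("Feasible execution order:", order)
except CycleError as exc:
    # A cycle means the roadmap is not schedulable as written.
    print("Circular dependency:", exc.args[1])
```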
8) Quarterly cycle (rhythms)
Q-0 (preparation, 2 weeks): data collection (SLO, incidents, costs), revision of topics, preliminary prioritization.
Q planning: owners present and defend their initiatives; resources and risks are reconciled; the Q plan and the "not doing" list are fixed.
Weekly sync: status, blockers, adjustments; maximum 30 minutes.
Monthly review: checking effects on metrics, possible re-scope.
Q retro: compare plan/fact, update principles/patterns.
9) Roadmap view formats
Outcome View: grouped by purpose (SLO, Cost, Risk).
Domain View: Payments/Bets/Games/KYC/Platform.
Timeline View: by quarter, with dependency and change-freeze markers.
Budget View: FTE/CapEx/OpEx by Initiative and Topic.
Example of a quarterly slice (summary):
| Initiative | Outcome | Metrics | Term | Owner | Risk |
|---|---|---|---|---|---|
| Active-Active Games | RTO ≤ 5 min | Availability 99.95% | Q1–Q2 | platform-sre | High |
| SLO-burn on Payments | −30% late-detected incidents | Pre-Incident Detect Rate ↑, MTTR ↓ | Q1 | observability | Medium |
| Kafka Lag Guardrails | −50% lag storms | Lag p95 ↓, DLQ ↑ | Q1 | streaming | Medium |
| FinOps Right-sizing | −15% cost/RPS | Cost/RPS ↓ | Q2 | finops | Low |
10) Roadmap Success Metrics (KPIs)
Delivery Predictability: percentage of initiatives completed on time (target ≥ 80%); a computation sketch follows this list.
SLO Coverage: % of critical paths with active SLOs/alerts.
Incident Trend: −X% P1/P2 incidents QoQ.
Change Failure Rate: Target decline by quarter.
Cost Efficiency: Cost/RPS, Cost/transaction - downward trend.
Risk Burn-down: the number of "red" risks and their total weight.
Stakeholder NPS: satisfaction of domain teams with the quality of the Roadmap.
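A trivial but honest way to compute the first KPI from the Q-plan; the initiative IDs and statuses are made-up:

```python
# Illustrative KPI: Delivery Predictability = initiatives done on time / planned.
q_plan = [
    {"id": "OPS-23", "done_on_time": True},
    {"id": "OPS-31", "done_on_time": False},  # slipped to next quarter
    {"id": "OPS-42", "done_on_time": True},
]
predictability = sum(i["done_on_time"] for i in q_plan) / len(q_plan)
print(f"Delivery Predictability: {predictability:.0%} (target >= 80%)")
```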
11) Roadmap launch checklist
[ ] Themes/pillars and 3-5 target outcomes per year defined.
[ ] Catalog of initiatives linked to metrics and owners.
[ ] Prioritization methodology (RICE/WSJF) and scoring scales adopted.
[ ] Resources checked: FTE, provider windows, budgets.
[ ] Q-plan fixed, together with the "not doing" list.
[ ] Outcome/Domain/Budget panels set up, with alerts on metric shifts.
[ ] Review schedule set: weekly/monthly/quarterly.
12) Anti-patterns
List of tasks without outcomes: "make X" instead of "achieve Y by metric."
Hidden initiatives and private arrangements outside of a single artifact.
Eternal epics: no time-box, no verifiable milestones.
Priority "in terms of volume": resources are spent on the "loudest" request, and not on the most valuable one.
No "what not to do": expectations are unmanageable, trust is falling.
No link to incidents/SLOs: "cosmetic" improvements instead of real ones.
13) Templates (fragments)
Initiative Template (YAML):
```yaml
id: OPS-42
title: "Guardrails for release canaries"
theme: "Reliability"
quarter: "2025-Q1"
owner: "platform-release"
stakeholders: ["payments", "bets", "games"]
outcome: "Reduce regressions after releases by 40%"
metrics:
  - name: change_failure_rate
    target: "<= 12%"
  - name: post_deploy_regression_rate
    target: "-40% QoQ"
slo_impact: ["api_p99 <= 300ms@99.9", "availability >= 99.95%"]
reach: 5000000        # transactions per quarter
impact: 3.0
confidence: 0.7
effort: 6
dependencies: ["observability-baseline", "feature-flags-core"]
risks:
  - name: "false gates"
    mitigation: "baseline/tuning, pilot on 10% of traffic"
milestones:
  - name: design
    eta: "2025-01-20"
  - name: pilot-10%
    eta: "2025-02-10"
  - name: rollout-100%
    eta: "2025-03-05"
```
Quarterly report template (Markdown):
Q1 Ops Roadmap - Report
Outcome: SLO Coverage 92% (+7 pp), MTTR −18%, Cost/RPS −9%
Completed: 8/10 initiatives (80%)
Slips: OPS-31 → Q2 (dependency on PSP-X)
Incidents: P1 = 2 (−1 QoQ); main cause: retries on provider timeouts
Follow-ups: circuit-breaker tuning, reserve quotas for PSP-Y
14) Integration with processes
Incident Management: Each postmortem → an initiative/improvement ticket in the Roadmap.
Changes/releases: major initiatives ship only behind feature flags/canaries.
Capacity/FinOps: monthly sync on headroom and cost trends.
Security/compliance: quarterly checkpoints for requirements and audits.
15) 30/60/90 (fast start)
30 days: collect incident/metric base, form topics, describe 10-15 initiatives in YAML format, select RICE/WSJF, fix Q-plan.
60 days: launch the Outcome/Domain/Budget panels, conduct the first mid-quarter review, adjust priorities based on data.
90 days: summarize the Q results, update principles and scoring scales, re-baseline the annual pillars.
16) Communications and Transparency
Monthly review for stakeholders: 30 minutes, focus on outcomes and risks.
Asynchronous updates: short entries with before/after metrics.
Single Roadmap channel: statuses, changes, priority decisions.
Red card rule: Any team can initiate a priority review by attaching data (SLO/incident/cost).
17) FAQ
Q: What if everything is "on fire" and there is no time on the Roadmap?
A: Include a "fire buffer" of 15-20% and a minimal Q-plan of 3 initiatives that address the main causes of incidents. Any new "hot" work enters only through explicit re-prioritization.
Q: How to prove the value of "invisible" initiatives (observability, autogates)?
A: Track Change Failure Rate, MTTR, Pre-Incident Detect Rate, rollbacks, and night pages. Show the before/after dynamics.
Q: How to deal with technical debt?
A: Debt is also an initiative with an outcome: "−X% of class-N incidents," "−Y% cost/RPS," "+Z pp SLO Coverage." Without a measurable outcome, debt doesn't make it into the plan.