Design of analytical dashboards
A good dashboard is not a "set of charts" but a decision tool. It links business goals, reliable data, and understandable UX, and it lives by rules: update SLOs, quality control, versioning, and transparent KPI math.
1) Goals and audience
Dashboard types: exploratory (diagnostics), monitoring (operational control), explanatory (insight → decision), communication (meetings/presentations).
Audiences:
- Executive: NSM and 3-5 drivers, high-level trends, warnings.
- Product/marketing: funnels, cohorts, segments, ROMI.
- Operations/ML/Infra: SLA, errors, latency, model/data drift.
- Frame the questions: "How do we know what to do?" → a list of triggers/thresholds.
2) KPI and metrics dictionary
Select 5-7 key KPIs, for each: definition, formula, source, lag, segmentation.
Split them into North Star, drivers, and guardrails (limits: FPR ≤ x%, p95 latency ≤ y).
Create a glossary of terms (formula versions and last edited date) and display it on the dashboard.
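Glossary entries like these can be kept as structured records rather than free text, so the dashboard can render them and checks can validate them. A minimal sketch in Python; the metric name, source table, and field set are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MetricDef:
    """One entry of the KPI glossary shown on the dashboard."""
    name: str            # display name
    formula: str         # human-readable formula
    source: str          # table / data mart it is computed from
    lag_minutes: int     # expected data lag
    version: str         # formula version, e.g. "v2"
    segments: tuple = () # allowed segmentation dimensions


# Illustrative glossary keyed by a stable metric id.
GLOSSARY = {
    "retention_d30": MetricDef(
        name="Retention D30",
        formula="users active on day 30 / users in cohort",
        source="dwh.mart_retention",
        lag_minutes=60,
        version="v2",
        segments=("country", "platform"),
    ),
}
```

Keeping the glossary as data makes it easy to show the formula version and last-edited date directly on the dashboard.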
3) Data sources and model
Source of truth (SoT): a single data mart/model (star/snowflake schema) underneath the dashboard.
Freshness and lag: display "updated N min ago" and the expected SLO (for example, "every 10 min, tolerance ±5 min").
Quality: completeness, consistency, deduplication, timezone-uniformity.
No leaks: point-in-time correctness for retrospectives and ML metrics.
4) Information architecture
Page layout: "Z"/"F" reading patterns, 3-6 cards on the first screen.
Hierarchy: NSM and status on top; drivers below; detail/diagnostics further down.
Drill-down: from KPI tile → trend → segmentation → detailed tables/events.
Navigation: tabs by domains (Product, Marketing, Operations), "bread crumbs," unified filters.
5) Select visualizations
Trends: lines/areas; for shares of a whole, stacked/100% area.
Category comparisons: horizontal bars (room for long labels).
Distributions: histogram/box/violin.
Funnels: step bars with delta labels between stages.
Correlations: scatter/heatmap.
Cohorts: heatmap highlighting D7/D30 retention.
Anomalies: lines with a confidence band, event/release markers.
Anti-patterns: 3D, overloaded legends, dual axes without need.
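One way to keep chart choices consistent across a team is a small task-to-chart mapping that mirrors the list above. The task and chart names here are illustrative, not a library API:

```python
# Illustrative mapping from analytical task to the recommended chart type.
CHART_BY_TASK = {
    "trend": "line",
    "share_over_time": "stacked_area_100",
    "category_comparison": "horizontal_bar",
    "distribution": "histogram",
    "funnel": "step_bar",
    "correlation": "scatter",
    "cohort": "heatmap",
    "anomaly": "line_with_band",
}


def pick_chart(task: str) -> str:
    """Return the agreed chart type for a task, failing loudly on unknowns."""
    try:
        return CHART_BY_TASK[task]
    except KeyError:
        raise ValueError(f"unknown analytical task: {task!r}") from None
```

Failing loudly on unknown tasks forces new chart decisions through review instead of ad-hoc choices.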
6) UX and interactivity
Filters: period, country/channel/platform, experiments; show active filters with an explicit badge.
States: "loading," "empty," "error," "partially updated."
Annotations: events (releases, campaigns, incidents) → clickable notes.
Export: PNG/PDF/CSV; saved views and email "subscriptions."
Microcopy: title = the insight, subtitle = how to read the chart.
7) Performance and SLO
Response time: p95 <2-3 s for interactive filters.
Optimization: preaggregations in the DWH, incremental updates, a caching layer, downsampling of long series.
Restrictions: limit categories (≤12 per chart), paginate tables, lazy-load.
Observability: rendering/error metrics, query logs, degradation alerts.
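Downsampling a long series can be as simple as bucketed averaging before the data reaches the chart. A minimal sketch; production dashboards often use shape-preserving algorithms such as LTTB instead, since plain averaging smooths away spikes:

```python
def downsample(series: list[float], target_points: int) -> list[float]:
    """Reduce a series to at most target_points values by bucketed averaging."""
    n = len(series)
    if n <= target_points:
        return list(series)
    out = []
    for i in range(target_points):
        # Integer bucket boundaries that cover the whole series exactly once.
        lo = i * n // target_points
        hi = (i + 1) * n // target_points
        bucket = series[lo:hi]
        out.append(sum(bucket) / len(bucket))
    return out
```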
8) Accessibility and localization
Text contrast ≥ 4.5:1, color-blind-safe palettes; duplicate color with shape/stroke.
Alt text, screen-reader labels, keyboard (tab) navigation.
Localization of numbers/dates/currencies, 24-hour format, thousands separators.
Mask PII, aggregate to a safe level.
9) Security and access rights
Roles and segments: row-level security (country, brand, partner).
Masking: e-mail/phone → partial visibility; audit data exports.
Activity log: who opened/exported/changed filters (for audit).
Secrets and tokens: storage outside the client part, key rotation.
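Partial masking of e-mail and phone can be applied before data ever reaches the client. A sketch with illustrative masking rules (keep two leading characters of the e-mail local part, show only the last four phone digits):

```python
import re


def mask_email(email: str) -> str:
    """Keep the first 2 chars of the local part and the full domain."""
    local, _, domain = email.partition("@")
    stars = "*" * max(len(local) - 2, 1)
    return f"{local[:2]}{stars}@{domain}"


def mask_phone(phone: str) -> str:
    """Show only the last 4 digits, regardless of input formatting."""
    digits = re.sub(r"\D", "", phone)
    return "*" * (len(digits) - 4) + digits[-4:]
```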
10) Governance and versions
Dashboard versioning: "dash_product_v7", changelog, release date.
Metrics: formula versions (v1→v2) with automatic recalculation/remapping of history.
Review: visual review (correct chart type, units, zero baseline) and data review (SQL/logic).
Owners: product owner, data steward, platform engineer.
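Formula versioning with recalculation of history can be modeled as a registry keyed by (metric, version); the metric name and formulas below are hypothetical:

```python
# Hypothetical registry of versioned metric formulas.
FORMULAS = {
    ("arpu", "v1"): lambda revenue, users: revenue / users,
    # v2 guards against division by zero for empty segments.
    ("arpu", "v2"): lambda revenue, users: revenue / max(users, 1),
}


def recompute_history(metric: str, version: str,
                      history: list[dict]) -> list[float]:
    """Re-run a metric's formula over stored inputs after a version bump."""
    fn = FORMULAS[(metric, version)]
    return [fn(**row) for row in history]
```

Because raw inputs (revenue, users) are stored rather than the computed values, a v1→v2 change only needs one pass over history.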
11) Release and operation processes
1. Design brief: goal, audience, top issues, KPI, sources, restrictions.
2. Prototype (low-fi → hi-fi): wireframe → click layout with pseudo-data.
3. Data: data mart/preaggregations, quality tests (freshness, completeness).
4. Assembly: a single design system (colors, grids, fonts, legends).
5. Review/Pilot: with 5-10 target users; UX/performance edits.
6. Release: version tag, instruction, training video/notes.
7. Monitoring: use (clicks/view/exports), SLO alerts, feedback collection.
8. Revision: quarterly audit of KPIs, removal of "dead" cards.
12) Card templates
KPI tile
Title: Retention D30
Value + trend (YoY/DoD), sparkline, color indication vs target.
Footer: source / updated X min ago / formula version.
Driver Diagnostics
Stack bar by segment (country/channel) + top contribution table.
"Show RCA" button: decomposition by factors (volume, price, mix).
Anomalies/Incidents
Line with confidence intervals, event markers, filter by incident type.
Quick actions: "create a ticket" / "add a comment."
13) Frequent mistakes and how to avoid them
Too many charts: keep the essentials; move the rest to drill-down.
Inconsistent formulas: introduce a dictionary and KPI versioning.
Two Y-axes without explanation: use separate panels or normalize scales.
No data status: always show freshness and SLOs.
Color chaos: one accent color plus 1-2 auxiliary colors, a single palette.
14) Pre-publication checklist
- Dashboard goals and audience documented
- KPIs have formulas, sources, and versions; displays freshness
- Hierarchy: NSM from top, then drivers and diagnostics
- The correct chart type is selected; zero baseline/units specified
- Filters and event annotations work; saved views configured
- p95 response time ≤ SLO; preaggregation/cache enabled
- Accessibility/localization verified; PII masked
- Roles/rights and RLS are configured; access logs are enabled
- Version/changelog and owners specified; an incident runbook exists
Summary
A strong dashboard is goals → metrics → data → UX → operations. Focus on the NSM and its drivers, keep the metrics dictionary in order, ensure performance and accessibility, pin versions and SLOs. Then the visualizations become a management tool, not a museum of charts.