GH GambleHub

Chain Performance Comparison

(Section: Ecosystem and Network)

1) Why and what we compare

The goal is to create a reproducible and neutral way to compare the performance of different chains (L1, L2, app-chain, validium/rollup), taking into account:
  • Speed and latency: inclusion, finality, variability.
  • Economics: transaction and data costs, fee stability.
  • Stability: reorgs, forks, degradation under load.
  • Data availability: DA throughput and cost per byte.
  • Operations: node requirements, state size, client diversity.

The result is a set of consolidated KPIs for selecting chains/domains for specific scenarios (payments, games/micro-events, bridges, DA/publications).

2) Taxonomy of metrics (core)

2.1 Throughput and latency

Sustained TPS/QPS

Peak TPS (short burst without errors/drops)

Time-to-Inclusion (TTI) p50/p95/p99

Time-to-Finality (TTF) p50/p95/p99

Block Utilization %

Latency variance/jitter (σ, CV)
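As an illustration of how the latency metrics above fit together, the sketch below computes p50/p95/p99 and jitter (σ, CV) from a list of observed latencies; the function name and return shape are our own, not part of any benchmark tooling:

```python
import statistics

def latency_stats(samples_s: list[float]) -> dict:
    """p50/p95/p99 plus jitter (sigma) and coefficient of variation (CV)
    for a list of latency samples in seconds."""
    qs = statistics.quantiles(sorted(samples_s), n=100)  # 99 percentile cut points
    mean = statistics.mean(samples_s)
    sigma = statistics.pstdev(samples_s)  # absolute jitter
    return {
        "p50": qs[49],
        "p95": qs[94],
        "p99": qs[98],
        "sigma": sigma,
        "cv": sigma / mean if mean else 0.0,  # relative jitter
    }
```

The same routine applies to TTI and TTF samples alike; only the input series differs.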

2.2 Quality and stability

Success Rate (% of successful tx/events)

Reorg/Orphan Rate (frequency and depth)

Liveness SLO Hit

Degradation Grace (graceful degradation instead of outright failure)

2.3 Economics and DA

Fee p50/p95/p99 (in native currency and in USD)

Cost-per-kB (DA) - the price of publishing 1 kB of data

Cost-per-Tx Class - price per "transaction class": simple transfer, contract call, large calldata

Fee Volatility Index
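The document names a Fee Volatility Index without fixing a formula. One plausible definition (our assumption, consistent with the "median and IQR" guidance in section 4) is the interquartile range of fees divided by the median over an observation window:

```python
import statistics

def fee_volatility_index(fees_usd: list[float]) -> float:
    """Illustrative Fee Volatility Index: IQR / median over a window.
    Robust to single-block fee spikes, unlike stdev/mean."""
    q1, q2, q3 = statistics.quantiles(fees_usd, n=4)  # quartiles
    return (q3 - q1) / q2 if q2 else float("inf")
```

A value near 0 means flat fees over the window; values above 1 indicate that the typical fee spread exceeds the typical fee itself.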

2.4 Nodes and state

Hardware Footprint (CPU/RAM/SSD/network for validator/archive node)

State Growth

Client Diversity Index

Sync Time

2.5 L2 specifics

Batch TPS (at the sequencer), Batch Size (kB)

Time-to-Batch-Inclusion and Time-to-Prove (ZK) / Challenge Window (optimistic)

DA Throughput (MB/s) and DA Failure Rate

Settlement Latency (L2→L1 finalization)

3) Measurement procedure (neutral and reproducible)

1. Test Use Profiles (TUP):

TUP-Pay: small transfers (mix: 70% simple, 30% token).
TUP-Game: short events with calldata (2-8 kB).
TUP-DEX: contract calls with medium gas usage and load surges.
TUP-DA: large publications (batches of 50-250 kB).

2. Load layers: background at 60-80% of the target SLO plus pulses at 120-160% for 5-10 minutes every 30-60 minutes.

3. Geography and network: at least 3 regions, an RTT matrix, jitter/loss injection (0.5-2%).

4. Client diversity: at least 2 node clients per chain (where available), identical versions.

5. Telemetry collection: correct correlation (trace-ID), time synchronization (NTP/PTP), pinned configs.

6. Finalization windows: explicitly set the confirmation depth K / dispute window; read TTF according to each chain's rules.

7. Error semantics: a failure taxonomy (gas/nonce/limit/DA-fail/overload); exclude "expected" errors from Success Rate or report them separately.
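The load layering in step 2 can be sketched as a per-minute utilization schedule; the function below is illustrative, with default values (70% background, 1.4x spikes, 10 of every 45 minutes) taken from the configuration example in section 12:

```python
def load_profile(minutes: int, background_pct: float = 70.0,
                 spike_multiplier: float = 1.4, spike_duration_min: int = 10,
                 spike_period_min: int = 45) -> list[float]:
    """Target utilization (percent of SLO capacity) per minute:
    constant background plus periodic spikes, per the procedure above."""
    profile = []
    for t in range(minutes):
        in_spike = (t % spike_period_min) < spike_duration_min
        profile.append(background_pct * spike_multiplier if in_spike else background_pct)
    return profile
```

A load generator would consume this schedule to pace transaction submission, recording each event into the telemetry pipeline described in step 5.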

4) Normalization and anti-bias

Cost Normalization: USD at the exchange rate at `observed_at`; `fee_usd = fee_native × price_usd_at_t`.

Gas/Weight Equivalence: compare by "operation classes" rather than raw gas.

Hardware-Adjusted TPS: `TPS_per_$ = Sustained_TPS / (Monthly_Node_Cost_USD)`

Fair DA Compare: price per 1kB and p95 publication delay.

Volatility Windows: weekly/monthly windows; median and IQR instead of one-off records.

Cold vs Warm: warm up caches; measure after stabilization.
MEV/peak fees: exclude market anomalies or report them as a separate metric.
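The two formula-style rules above (cost normalization and Hardware-Adjusted TPS) translate directly into code; these helpers are a minimal sketch, with names of our own choosing:

```python
def fee_usd(fee_native: float, price_usd_at_t: float) -> float:
    """Cost normalization: fee_usd = fee_native * price_usd_at_t,
    where price_usd_at_t is the exchange rate at `observed_at`."""
    return fee_native * price_usd_at_t

def tps_per_dollar(sustained_tps: float, monthly_node_cost_usd: float) -> float:
    """Hardware-Adjusted TPS: TPS_per_$ = Sustained_TPS / Monthly_Node_Cost_USD."""
    return sustained_tps / monthly_node_cost_usd
```

For example, a fee of 0.002 native units at a $2,500 rate normalizes to $5.00, and a chain sustaining 1,000 TPS on a $500/month node scores 2.0 TPS per dollar.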

5) Summary KPIs (totals)

Core Performance Score (CPS) - 0..100, a weighted sum:
  • Throughput (30%), Finality (25%), Cost (20%), Stability (15%), Uptime/Liveness (10%).
  • Weights are tuned per scenario (e.g., payments weight ↑Finality/Cost; games weight ↑Throughput/Stability/DA).

Effective Throughput @ SLO - sustained TPS subject to `TTF_p95 ≤ X`, `Success ≥ Y%`, `Fee_p95 ≤ Z`.
Cost-to-Serve per 1k Ops - the total cost of processing 1,000 operations of a class (including DA/settlement).
Finality SLA Hit % - the share of operations finalized within the target window.
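Effective Throughput @ SLO can be computed as a filter-then-rate pass over benchmark events, mirroring the pseudo-SQL in section 8; the sketch below assumes events as dicts with numeric (epoch-seconds) timestamps and field names following the `bench_events` schema:

```python
def effective_tps(events: list[dict], ttf_p95_slo_s: float,
                  fee_p95_slo_usd: float) -> float:
    """Effective Throughput @ SLO: successful events meeting the TTF and fee
    constraints, divided by the wall-clock span of the run in seconds."""
    ok = [e for e in events
          if e["status"] == "success"
          and e["finalized_at"] - e["sent_at"] <= ttf_p95_slo_s
          and e["fee_usd"] <= fee_p95_slo_usd]
    if not ok:
        return 0.0
    span = max(e["sent_at"] for e in events) - min(e["sent_at"] for e in events)
    return len(ok) / span if span else 0.0
```

Note that the denominator uses the full run span, not only the span of passing events, so a chain that violates its SLO half the time is penalized accordingly.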

6) SLI/SLO for comparison

Example SLOs (per scenario):
  • Payments: `TTF_p95 ≤ 10s`, `Success ≥ 99.7%`, `Fee_p95 ≤ $0.01`.
  • Games/Events: `TTI_p95 ≤ 500ms`, `TTF_p95 ≤ 3s`, `Success ≥ 99.5%`, `DA_p95 ≤ 1s`.
  • DA/Publishing: `Cost_per_kB ≤ $0.0005`, `Publish_p95 ≤ 2s`, `Finality_p95 ≤ 60s`.
  • L2 Settlement: `Settle_p95 ≤ 10m` (ZK) / the challenge window for optimistic.
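These scripted SLOs can be kept as plain dicts and checked against measured SLIs. The thresholds below are the payments figures from the list; the checker itself, and the convention that `success_pct` is a floor while every other key is a ceiling, are illustrative:

```python
PAYMENTS_SLO = {"ttf_p95_s": 10.0, "success_pct": 99.7, "fee_p95_usd": 0.01}

def slo_hit(measured: dict, slo: dict) -> bool:
    """True when every SLI meets its SLO target.
    'success_pct' is higher-is-better; all other keys are upper bounds."""
    for key, target in slo.items():
        value = measured[key]
        ok = value >= target if key == "success_pct" else value <= target
        if not ok:
            return False
    return True
```

Applied per measurement window, the boolean stream feeds directly into the Finality SLA Hit % and Liveness SLO Hit metrics from sections 2 and 5.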

7) Dashboards (reference layouts)

Perf Lens (real time/hour): TTI/TTF p50/p95/p99, Block Utilization, Success Rate, Fee p95, Error taxonomy.
Cost & DA: Cost/kB, fee volatility, DA throughput/latency, DA failures.
Stability: Reorg Rate, Liveness SLO Hit, error burn-rate, sequencer uptime (L2).
Capacity Planning: Sustained vs Peak TPS, Hardware-Adjusted TPS, State Growth.

8) Data schema and logic (pseudo-SQL)

Raw benchmark events

```sql
CREATE TABLE bench_events (
  id TEXT PRIMARY KEY,
  chain_id TEXT,
  layer TEXT,               -- L1 | L2 | app
  scenario TEXT,            -- payments | game | dex | da
  sent_at TIMESTAMPTZ,
  included_at TIMESTAMPTZ,
  finalized_at TIMESTAMPTZ,
  size_bytes INT,
  status TEXT,              -- success | fail_gas | fail_da | fail_overload ...
  fee_native NUMERIC,
  fee_usd NUMERIC,
  region TEXT,
  client TEXT,
  node_profile TEXT
);
```

Metric Kernel Aggregation

```sql
WITH base AS (
  SELECT chain_id, scenario, status, fee_usd,
         EXTRACT(EPOCH FROM (included_at - sent_at))  AS tti_s,
         EXTRACT(EPOCH FROM (finalized_at - sent_at)) AS ttf_s
  FROM bench_events
)
SELECT chain_id, scenario,
       PERCENTILE_CONT(0.5)  WITHIN GROUP (ORDER BY tti_s)
         FILTER (WHERE status LIKE 'success%') AS tti_p50,
       PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY tti_s)
         FILTER (WHERE status LIKE 'success%') AS tti_p95,
       PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY ttf_s)
         FILTER (WHERE status LIKE 'success%') AS ttf_p95,
       AVG(fee_usd) AS fee_avg_usd,
       -- success rate is computed over ALL events, not only successes
       100.0 * SUM(CASE WHEN status LIKE 'success%' THEN 1 ELSE 0 END)
             / COUNT(*) AS success_rate
FROM base
GROUP BY chain_id, scenario;
```

Effective Throughput @ SLO Score

```sql
SELECT chain_id, scenario,
       COUNT(*) / NULLIF(EXTRACT(EPOCH FROM (MAX(sent_at) - MIN(sent_at))), 0) AS tps_effective
FROM bench_events
WHERE status = 'success'
  AND EXTRACT(EPOCH FROM (finalized_at - sent_at)) <= :ttf_p95_slo
  AND fee_usd <= :fee_p95_slo
GROUP BY chain_id, scenario;
```

9) Composite index (calculation example)

```yaml
weights:
  throughput: 0.30
  finality:   0.25
  cost:       0.20
  stability:  0.15
  liveness:   0.10

scoring:
  throughput: normalize(Sustained_TPS, p10, p90)
  finality:   invert(normalize(TTF_p95, p10, p90))
  cost:       invert(normalize(Fee_p95_usd, p10, p90))
  stability:  invert((normalize(Var_TTF, p10, p90) + normalize(ReorgRate, p10, p90)) / 2)
  liveness:   SLO_hit_pct
```

💡 `normalize(x, p10, p90)` - a linear transformation to [0, 1] by percentiles; `invert(y) = 1 − y`.
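The scoring pseudo-YAML above can be made concrete; this sketch implements `normalize`, `invert`, and the weighted CPS sum, with clipping to [0, 1] added as our own assumption for values outside the p10..p90 range:

```python
WEIGHTS = {"throughput": 0.30, "finality": 0.25, "cost": 0.20,
           "stability": 0.15, "liveness": 0.10}

def normalize(x: float, p10: float, p90: float) -> float:
    """Linear map of x onto [0, 1] using the p10..p90 range across
    the compared chains, clipped at the ends (clipping is our assumption)."""
    if p90 == p10:
        return 0.5
    return min(1.0, max(0.0, (x - p10) / (p90 - p10)))

def invert(y: float) -> float:
    """Flip a lower-is-better score: invert(y) = 1 - y."""
    return 1.0 - y

def cps(scores: dict) -> float:
    """Core Performance Score: weighted sum of per-dimension scores
    in [0, 1], scaled to the 0..100 range."""
    return 100.0 * sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
```

A chain at the p90 edge on every dimension scores 100; using per-cohort percentiles rather than absolute bounds keeps the index stable when a single outlier chain joins the comparison.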

10) L2 and inter-chain features

Optimistic L2: report a "double" TTF - to L2 inclusion and to the end of the challenge window.
ZK L2: split out the L1 publication time and the proof generation/verification time; account for prover fault tolerance.
Validium/DA outsourcing: DA metrics (throughput/cost/failures) are mandatory, otherwise the comparison is invalid.
Cross-chain operations: measure TTF end-to-end for bridge scenarios (source→target), accounting for K/DA/challenge windows.
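An end-to-end bridge TTF can be modeled as a sum of stage latencies. This decomposition (source-chain finality, relay latency, target-chain inclusion, plus the challenge window when the source is an optimistic rollup) is our illustrative assumption, not a protocol specification:

```python
def bridge_e2e_ttf_s(source_finality_s: float, relay_s: float,
                     target_inclusion_s: float,
                     challenge_window_s: float = 0.0) -> float:
    """End-to-end TTF for a cross-chain transfer as a sum of stage latencies.
    challenge_window_s is non-zero only for optimistic-rollup sources."""
    return source_finality_s + relay_s + target_inclusion_s + challenge_window_s
```

The model makes the anti-comparison point from section 11 concrete: for an optimistic source, a 7-day challenge window dominates every other term, so quoting only L2 inclusion time misrepresents the bridge scenario.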

11) Anti-comparison patterns (what to avoid)

Comparing the "record peak" of one chain with the "average" of another.
Ignoring data costs and fee volatility.
Ignoring finalization (treating "inclusion" as "finality").
Capturing metrics on a "warm" node and extrapolating them to a cold one.
Mixing different operation classes without normalization.
Failing to pin client versions/configs, which destroys reproducibility.

12) Test configurations and parameters (pseudo-YAML)

```yaml
benchmark:
  scenarios:
    - name: payments
      mix: { simple_transfer: 0.7, token_transfer: 0.3 }
      slo: { ttf_p95_s: 10, success_pct: 99.7, fee_p95_usd: 0.01 }
    - name: game
      mix: { small_event_2kb: 0.6, medium_event_8kb: 0.4 }
      slo: { tti_p95_ms: 500, ttf_p95_s: 3 }
    - name: da
      mix: { batch_50kb: 0.5, batch_250kb: 0.5 }
      slo: { publish_p95_s: 2, cost_kb_usd: 0.0005 }
  load:
    background_utilization_pct: 70
    spikes: { multiplier: 1.4, duration_min: 10, period_min: 45 }
    regions: [eu-central, us-east, ap-south]
    network_faults: { loss_pct: 1.0, jitter_ms: 50 }
  node_profiles:
    validator: { cpu: "16c", ram_gb: 64, ssd_nvme_tb: 2, bw_gbps: 1 }
    archive:   { cpu: "32c", ram_gb: 128, ssd_nvme_tb: 8, bw_gbps: 2 }
```

13) Reporting and visualization

Summary table by scenario: Effective TPS, TTI/TTF p95, Fee p95, Cost/kB, Success%.
Radar chart (per scenario): Throughput/Finality/Cost/Stability/Liveness.
Time series: Fee-volatility, DA latency, Reorg spikes.
Chain × operation-class matrix for Cost-to-Serve and TTF.

14) Processes and roles

Benchmark Owner: methodology/tools, version control.
Infra Owner: nodes, clients, configs, regions.
Data/BI: aggregations, validation, SLO dashboards.
Security/Compliance: control of privacy and correctness of logs.
Governance: publishing results, changing index weights.

15) Benchmark incident playbook

Config/version drift: stop the series immediately, snapshot the state, restart with correct parameters.
Network anomalies (beyond the planned injections): mark the window as "contaminated" and repeat the series.
DA/prover failure: file a separate incident and repeat the DA/ZK sub-series.
Unexpected price volatility: fix the USD rate to the window median and attach the range.

16) Implementation checklist

1. Approve scenarios (TUP) and summary index weights.
2. Record host/client configs, regions, and network conditions.
3. Implement telemetry collection with correlation and time synchronization.
4. Set up normalization of fee/DA/operation classes.
5. Agree on SLI/SLO and dashboard layouts.
6. Conduct a pilot run, verify reproducibility, calibrate loads.
7. Publish reports with the full set of configs, versions, and dates attached.

17) Glossary

TTI/TTF - time to inclusion / time to finality.
DA - Data Availability layer.
Sustained/Peak TPS - sustained/peak throughput.
Liveness - the network's ability to confirm blocks/batches.
Challenge Window - a challenge window in optimistic rollups.
State Growth - an increase in the size of the network state.
Hardware-Adjusted TPS - throughput, taking into account the cost of the node.

Bottom line: correct chain performance comparison is not a "who has more TPS" race but a discipline: uniform scenarios, honest normalization of costs and data, accounting for finality and stability, transparent configs, and reproducible tests. With this framework, the ecosystem gets comparable, decision-ready metrics - from choosing a platform for a product to planning cross-chain architectures.
