Partner Performance
1) Why measure performance
Partners (studios, affiliates, node/bridge/DA operators, inference providers, payment providers, etc.) create value unevenly. A formalized performance model:
- links quality → volume/quota → disbursement;
- reduces costs and disputes, accelerates onboarding/scaling;
- improves sustainability (early detection of degradation and risks).
2) Roles and contracts
Creators/Studios: Content, RTP/Math, Retention.
Affiliates/Aggregators: traffic, funnel, attribution.
Infrastructure: nodes/validators, bridges, DA, POP/edge, GPU inference.
Operators/Platforms: billing, CCM/compliance, support.
Orchestrators/Routers: utility routing, QoS, quotas.
Audit/Regulator/Governance: methodology, appeals, sunset edits.
Each role is formalized by an RNFT contract: rights/quotas, KPI/SLA, windows of finality and chargebacks, S-pledge and slashing rules, reporting, dispute procedures.
3) Performance metrics framework
Metrics are grouped into six baskets, each weighted per unit:
1. Quality of service (Q): success rate, p95/p99, TailAmplification (p99/p50), DLQ depth, out-of-order/dup%.
2. Revenue and value (V): GR/NR/CM/NP, ARPPU/LTV/NRR, share of repeat revenue.
3. Efficiency/Cost (C): Cost/Req, Cost/GB (DA/egress), GPU-min/req, margin/event.
4. Risk/Incidents (R): incident rate, finality lag/reorg (for bridges), dispute/1k, chargeback%.
5. Compliance/Privacy (P): geo/age/sanctions pass, FPR/FNR moderation, reaction timelines.
6. Reliability and support (S): uptime, MTTR, flap-rate, support response time.
For affiliates additionally: traffic quality (D1/D7/D30, churn, fraud-score), attribution (dedup, micro-contribution).
4) Composite scores
4.1 Quality Factor (QF)
[
\text{QF} = f\big(\text{success},\, p95,\, \text{DLQ},\, \text{finality},\, \text{retention},\, \text{ARPPU}\big) \in [q_{\min}, q_{\max}]
]
4.2 Trust Index (ID)
[
\text{ID} = \sum_k W_k \cdot S_k, \quad k \in \{\text{quality}, \text{safety}, \text{compliance}, \text{behavior}, \text{social signals}\}
]
4.3 Final Partner Score (PS)
[
\text{PS} = \alpha\,\text{QF} + \beta\,\text{ID} + \gamma\,\widehat{\text{Margin}} - \rho\,\widehat{\text{Risk}}
]
The parameters (\alpha, \beta, \gamma, \rho) are set by governance and depend on role/QoS/jurisdiction.
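As a minimal sketch, the PS composite can be computed as follows; the default weights and the example inputs are illustrative assumptions, not governance-approved values:

```python
# Sketch of PS = alpha*QF + beta*ID + gamma*Margin_hat - rho*Risk_hat.
# The default weights below are illustrative; in practice governance sets
# them per role/QoS/jurisdiction.

def partner_score(qf: float, trust: float, margin_norm: float, risk_norm: float,
                  alpha: float = 0.4, beta: float = 0.3,
                  gamma: float = 0.2, rho: float = 0.1) -> float:
    """Composite Partner Score; margin_norm and risk_norm are the
    normalized Margin-hat and Risk-hat terms from the formula above."""
    return alpha * qf + beta * trust + gamma * margin_norm - rho * risk_norm

ps = partner_score(qf=1.05, trust=0.9, margin_norm=0.7, risk_norm=0.2)
```

Note that risk enters with a negative sign, so a partner can have a high QF yet a low PS if its risk term dominates.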
5) Normalization and noise resistance
Valuation windows: 7/30/90 days (by role and risk), EWMA smoothing.
Robustness: winsorization [P1, P99], robust z-score or min-max on [P5, P95].
Confidence weights: (\omega = \frac{n}{n + \kappa}) to discount partners with short histories (small n).
Seasonality/peaks: STL decomposition, individual "hot" windows (tournaments/holidays).
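The robustness steps above can be sketched as follows; the nearest-rank percentile method and the constant kappa = 30 are illustrative assumptions:

```python
# Sketch of section 5's robustness pipeline: winsorize to [P1, P99],
# min-max scale against [P5, P95], and shrink short-history partners with
# a confidence weight omega = n / (n + kappa). kappa = 30 is an assumed
# smoothing constant.

def percentile(xs, p):
    """Nearest-rank percentile on a sorted copy (simplified, no interpolation)."""
    s = sorted(xs)
    idx = min(int(p / 100 * len(s)), len(s) - 1)
    return s[idx]

def winsorize(xs, lo=1, hi=99):
    """Clamp outliers into the [P1, P99] band."""
    lo_v, hi_v = percentile(xs, lo), percentile(xs, hi)
    return [min(max(x, lo_v), hi_v) for x in xs]

def minmax_p5_p95(x, population):
    """Min-max normalize x against the population's [P5, P95] band."""
    lo_v, hi_v = percentile(population, 5), percentile(population, 95)
    if hi_v == lo_v:
        return 0.5
    return min(max((x - lo_v) / (hi_v - lo_v), 0.0), 1.0)

def confidence_weight(n, kappa=30):
    """omega = n / (n + kappa): discounts scores built on few observations."""
    return n / (n + kappa)
```

A partner with only 30 observations would thus have its score shrunk by a factor of 0.5 toward the prior, while a long-history partner approaches full weight.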
6) Reference to quotas and volume
Volume allocation policy (example, per QoS/region):
[
\text{Share}_i = \frac{\max(\text{PS}_i, 0)}{\sum_j \max(\text{PS}_j, 0)}
]
Limiters: day/week caps, fairness (Jain ≥ thresholds), anti-noisy neighbor via WFQ/DRR and token buckets.
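A minimal sketch of the share formula; the caps, Jain fairness checks, and WFQ/DRR limiters listed above would be applied on top of these raw shares and are omitted here:

```python
# Sketch of Share_i = max(PS_i, 0) / sum_j max(PS_j, 0).
# Partners with non-positive PS receive zero volume; downstream limiters
# (caps, fairness, anti-noisy-neighbor) are out of scope for this sketch.

def allocate_shares(ps_scores):
    pos = [max(p, 0.0) for p in ps_scores]
    total = sum(pos)
    if total == 0.0:
        return [0.0 for _ in pos]
    return [p / total for p in pos]
```

Clipping at zero means a negative-PS partner is excluded from allocation rather than subtracting volume from others.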
7) Payment model and cost allocation
7.1 Payout
RevShare/NPS/Hybrid: Base share × QF × RiskAdj.
[
\text{Payout}_i=\omega_i\cdot \text{Pool}\cdot \text{QF}_i\cdot \text{RiskAdj}_i
]
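A sketch of the payout formula, with QF and RiskAdj clamped to the corridors cited in section 13 ([0.8; 1.2] and [0.9; 1.1]); applying the corridors by clamping before multiplying is an assumption about where governance enforces them:

```python
# Sketch of Payout_i = omega_i * Pool * QF_i * RiskAdj_i.
# QF and RiskAdj are clamped to the governance corridors from section 13;
# the clamp-then-multiply order is an assumption.

def payout(weight: float, pool: float, qf: float, risk_adj: float) -> float:
    qf = min(max(qf, 0.8), 1.2)
    risk_adj = min(max(risk_adj, 0.9), 1.1)
    return weight * pool * qf * risk_adj
```

The corridors keep incentives stable: even an exceptional quarter cannot more than 1.2× a partner's base share, and risk adjustments stay within ±10% absent S-incidents.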
7.2 Costs
Usage/ABC: (\text{Cost}_i = \sum_r u_{i,r} \cdot \text{Rate}_r + \text{RiskSurcharge}_i)
Waterfall: taxes/returns → costs → insurance S-fund → pools → bonuses.
Clawback: adjustment after finality/chargeback windows.
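The usage/ABC cost formula can be sketched directly; the resource names and rates below are hypothetical examples, not published tariffs:

```python
# Sketch of Cost_i = sum_r u_{i,r} * Rate_r + RiskSurcharge_i
# (activity-based costing over metered resources).

def partner_cost(usage: dict, rates: dict, risk_surcharge: float = 0.0) -> float:
    """usage maps resource -> units consumed; rates maps resource -> unit price."""
    return sum(units * rates[resource] for resource, units in usage.items()) + risk_surcharge

# Hypothetical resources/rates for illustration only:
cost = partner_cost({"cpu_s": 100, "gb_egress": 2},
                    {"cpu_s": 0.01, "gb_egress": 0.05},
                    risk_surcharge=0.5)
```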
8) Invariants and SLO/SLA
Order/Idempotency: delivery guarantees, outbox/inbox, idempotency_key.
Finality: challenge windows by bridge/chain.
Compliance: fail-closed, ZK-gaps, data export/retention.
- Infrastructure/bridges: success ≥ 99.99%, p95 ≤ 200 ms, finality ≤ 3×T_block.
- Affiliates: dispute ≤ 3%, chargeback ≤ 2%, Payback ≤ 90 days, D7 retention ≥ target.
- Studios: D30 retention ≥ target, content defects = 0 (critical).
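The order/idempotency invariant above can be sketched as an inbox that stores each result under its idempotency_key and replays it on duplicate delivery; the in-memory dict is a stand-in for the durable inbox table a real outbox/inbox setup would use:

```python
# Sketch of idempotent event intake (the outbox/inbox + idempotency_key
# invariant from section 8). An in-memory dict stands in for a durable
# inbox table; concurrency control is omitted.

class InboxStore:
    def __init__(self):
        self._seen = {}

    def process(self, idempotency_key, handler, payload):
        """Run handler exactly once per key; duplicates replay the stored result."""
        if idempotency_key in self._seen:
            return self._seen[idempotency_key]
        result = handler(payload)
        self._seen[idempotency_key] = result
        return result
```

With this shape, at-least-once delivery upstream degrades gracefully: a redelivered event changes nothing and returns the same response.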
9) Dashboards and observability
Partner Overview: PS, QF, ID, trends, confidence intervals.
Quality & Tail: p50/p95/p99, TA, DLQ/replay, out-of-order/dup%.
Economy/P&L: GR/NR/CM/NP, Cost/Req, QF contribution, payout vs forecast.
Risk & Compliance: Incidents, finality lag, dispute/chargeback, FPR/FNR.
Quotas & Fairness: Jain, caps, share-actual vs plan.
Support/MTTR: uptime, response, flap-rate.
10) Anti-gaming and collusion protection
Dedup/Server-side attribution, event signatures, one-time tokens.
Golden-set quality checks and hidden control tasks.
Device/Graph analysis to identify affiliate/bot farm rings.
Blind-run parts of the weights/windows to prevent partners from overfitting to the metric.
Sybil Gate: Minimum S-pledge/badge for meaningful volumes.
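One of the controls above, server-side event signatures, might look like the following sketch using HMAC-SHA256; key distribution and rotation are out of scope:

```python
import hashlib
import hmac

# Sketch of server-side attribution-event signatures (anti-gaming list,
# section 10): the server signs each event with a shared secret, and a
# postback is only counted if its signature verifies.

def sign_event(secret: bytes, event_id: str, payload: bytes) -> str:
    return hmac.new(secret, event_id.encode() + payload, hashlib.sha256).hexdigest()

def verify_event(secret: bytes, event_id: str, payload: bytes, signature: str) -> bool:
    # compare_digest avoids leaking the match position via timing
    return hmac.compare_digest(sign_event(secret, event_id, payload), signature)
```

Binding the event_id into the signature also supports the dedup requirement: a replayed or altered event fails verification rather than silently inflating volume.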
11) Governance and public reporting
Registry of parameters: weights (\alpha ,\beta ,\gamma ,\rho), QF/RiskAdj corridors, SLO thresholds, method versions.
Proposals and sunset edits: temporary changes with auto-rollback.
Appeals/Disputes: SLA for review, independent data audit.
Public reports: aggregates by partner groups, without personal data; badge proofs (e.g. SLA ≥ 99.9%/90d).
12) Implementation playbook (in steps)
1. Mapping roles and value streams. Where is GR/NR, what are the cost/risk drivers.
2. Data schema and tracing. DID/VC, ULID/trace-id, server postbacks, dedup.
3. Metrics methodology. Define Q/V/C/R/P/S baskets, windows, normalization.
4. Scoring QF/ID/PS. Initialize weights, set corridors and smoothing.
5. SLO/SLA and RNFT. Fix quotas/caps, S-pledges, finality/chargeback windows.
6. Pilot 1-2 quarters. A/B profiles of weights, holdout-cohorts, retro-calculation.
7. Integration with routing. PS → share/quotas/prices; fairness and anti-noise.
8. Dashboards and alerts. Create panels, error budgets, alerts by deviations.
9. Governance and reporting. Publish the methodology, perform sunset corrections.
10. Scaling. New regions/chains/partners, revised resource rates.
13) Formulas and landmarks
SuccessRate = 1 − (timeouts+errors)/requests
TailAmplification = p99/p50 (Target: ↓)
Headroom = (cap − current)/cap
Cost/Req = Σ(resource_usage × rate) / successful_requests
Fairness (Jain) = (Σx)²/(n·Σx²)
Payback (days) = CAC / Avg Daily Gross Margin per user
Dispute Rate/1k, Chargeback% (benchmarks: ≤3% and ≤2%)
QF corridor: [0.8; 1.2] for stable incentives
RiskAdj corridor: [0.9; 1.1] if there are no S-incidents
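Two of the reference formulas above, Jain fairness and tail amplification, are small enough to state directly in code:

```python
# Jain fairness index and tail amplification, as defined in section 13.

def jain_index(xs):
    """(sum x)^2 / (n * sum x^2); 1.0 = perfectly even allocation,
    1/n = all volume concentrated on a single partner."""
    n = len(xs)
    s = sum(xs)
    s2 = sum(x * x for x in xs)
    return (s * s) / (n * s2) if s2 else 0.0

def tail_amplification(p99: float, p50: float) -> float:
    """p99/p50: how much heavier the latency tail is than the median."""
    return p99 / p50
```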
14) Partner Management Program KPI
Economics: margin/event ↑, Cost/Req ↓, payout forecast accuracy ↑.
Quality: median QF ↑, proportion of partners in the green zone ↑, TA ↓.
Risk: incident/dispute/chargeback ↓, incident MTTR ↓.
Compliance: 100% pass geo/age/sanctions, 0 critical violations.
Operations: uptime/MTTR improved, flap-rate ↓, SLA stability ↑.
Fairness: Jain ≥ the threshold, lack of concentration of the "super partner."
Growth: reduced onboarding time, scale without degradation.
15) Delivery checklist
- Single data model (DID/ULID, signatures, dedup)
- Defined metrics baskets and evaluation windows; smoothing is enabled
- QF/ID/PS scores launched, with corridors and sunset procedures
- RNFT contracts: quotas, S-pledge, finality/chargeback windows
- PS → Quota/Routing/Pricing and QF → Payout Integration
- Quality/economic/risk dashboards and alerts in cadence
- Appeals Dispute/Audit and SLA Procedures
- Pilot conducted, weights recalibrated, and methodology published
- Anti-gaming signature, blind-run, graph analysis
- Scaling plan by region/chain/role
16) Glossary
RNFT: Relationship/Rights/Limits Contract and KPIs.
QF: quality multiplier in payments.
ID (Trust Index): composite of trust/compliance/behavior.
PS (Partner Score): the final composite partner performance score.
DLQ/Replay: quarantine and reprocessing messages.
Tail Amplification: p99/p50, the "heaviness" of the latency tail.
WFQ/DRR: fair queue discipline.
Sunset-edit: temporarily changing parameters with auto-rollback.
17) The bottom line
Partner performance is a controlled cycle: measure → score → distribute volume and payments → monitor and improve. By connecting RNFT contracts, QF/ID/PS scores, SLO/SLA, and a transparent economy, the ecosystem gains honest incentives, predictable results, and sustainable growth, without compromising order, finality, or compliance.