GH GambleHub

Reverse proxy and routing

1) Reverse proxy role

The reverse proxy is the "front line" of the platform: it accepts TLS, distributes traffic across upstreams, and applies security and performance policies. The goal is minimal latency, predictable routing, and quick isolation of degraded instances/zones.

2) Layers and protocols

L4: TCP/UDP proxying (SNI-based TLS passthrough, QUIC). Cheap, but with no understanding of HTTP.
L7: HTTP/1.1, HTTP/2, HTTP/3, gRPC, WebSocket. Rich routing (host, path, headers, cookies), transformations and caching.

TLS model: terminate at the perimeter (NGINX/Envoy); inside, use mTLS/service mesh. SNI allows multiple virtual hosts on the same IP.
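
A minimal sketch of L4 SNI-based TLS passthrough using the NGINX `stream` module (requires `ngx_stream_ssl_preread_module`); hostnames, pool names and addresses are illustrative placeholders:

```nginx
# L4 passthrough: route by SNI without terminating TLS
stream {
    map $ssl_preread_server_name $upstream_pool {
        api.example.com      api_pool;
        static.example.com   static_pool;
        default              api_pool;
    }

    upstream api_pool    { server 10.0.0.1:443; }   # illustrative addresses
    upstream static_pool { server 10.0.0.3:443; }

    server {
        listen 443;
        ssl_preread on;              # read SNI from the ClientHello, do not decrypt
        proxy_pass $upstream_pool;
    }
}
```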

3) Routing strategies (L7)

1. Host-based: by domain (`api.brand.com` → cluster `brand-api`).
2. Path-based: `/v1/payments` → `payments-svc`, `/v1/wallets` → `wallets-svc`.
3. Header-based: `X-Region: eu-central`, `X-Tenant: 42`, `User-Agent`/`Accept`.
4. Cookie-based: A/B tests, sticky sessions.
5. Weighted/Canary: percentage of traffic to the new version (1-5% → 100%).
6. Geo/ASN: route by country/ASN to the nearest POP/region.
7. Consistent hashing: binding a key (user_id/tenant_id) to an instance → cache locality/stickiness.
8. Shadow/Mirroring: copy traffic to the "shadow" upstream without affecting the response (for regression tests).

4) Balancing and fault tolerance

Algorithms: round-robin, least-request, random, ring-hash (consistent).
Health-checks: active (HTTP/TCP) + passive (by codes/timeouts).
Outlier ejection: temporarily eject a host with elevated error rate or latency.
Retries: limited, with a per-try timeout and jitter; never retry unsafe methods without idempotency.
Connection pooling: keep warm pools to upstreams and cap the maximum number of connections (see the sketch after this list).
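
A minimal sketch of these policies in open-source NGINX: passive health checks via `max_fails`/`fail_timeout`, a warm keepalive pool and bounded retries (active HTTP health checks would need NGINX Plus or an external checker); names and addresses are illustrative:

```nginx
upstream wallets_svc {
    least_conn;
    server 10.0.1.1:8080 max_fails=3 fail_timeout=10s;   # passive: eject after 3 failures for 10s
    server 10.0.1.2:8080 max_fails=3 fail_timeout=10s;
    keepalive 64;                                         # warm pool of idle upstream connections
}

server {
    listen 443 ssl;
    server_name wallets.example.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";                   # required for upstream keepalive
        proxy_connect_timeout 100ms;
        proxy_read_timeout 300ms;                         # per-try budget
        proxy_next_upstream error timeout;                # retry only safe failure classes
        proxy_next_upstream_tries 2;                      # original attempt + one retry
        proxy_next_upstream_timeout 600ms;                # overall retry deadline
        proxy_pass http://wallets_svc;
    }
}
```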

5) Perimeter performance

Caching: by key (method + host + path + `Vary`), conditional revalidation with `ETag`/`If-None-Match`, TTLs and stale-while-revalidate.
Compression: brotli/gzip for text responses.
HTTP/2/3: multiplexing, header compression; verify WAF/IDS compatibility.
Request coalescing: collapse concurrent requests for the same cache key into a single upstream fetch (see the sketch after this list).
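
A minimal sketch of the caching side in NGINX: an explicit cache key, conditional revalidation, a stale-while-revalidate analogue and request coalescing via `proxy_cache_lock`; the path and upstream name are illustrative, and brotli would require the third-party `ngx_brotli` module (gzip shown here):

```nginx
# http context
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:256m max_size=10g inactive=10m;

server {
    listen 443 ssl;
    server_name api.example.com;

    gzip on;
    gzip_types application/json text/plain text/css application/javascript;

    location /v1/catalog/ {                                  # illustrative cacheable path
        proxy_cache edge;
        proxy_cache_key "$request_method$host$uri$is_args$args";
        proxy_cache_valid 200 301 10m;
        proxy_cache_revalidate on;                           # revalidate with If-None-Match/If-Modified-Since
        proxy_cache_use_stale updating error timeout;        # serve stale while revalidating
        proxy_cache_background_update on;
        proxy_cache_lock on;                                 # request coalescing: one fetch per key
        proxy_cache_lock_timeout 2s;
        proxy_pass http://app_v1;
    }
}
```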

6) Security on the proxy

TLS: 1.2+ (preferably 1.3), OCSP stapling, HSTS.
WAF/bot filters: before routing to app.
CORS/CSP/Fetch-Metadata: as per policies.
Header hygiene: `X-Forwarded-For/Proto`, `Forwarded`, `traceparent`; protect against header injection and oversized headers.
Body/header limits: reject oversized requests early with 413/431 to blunt DoS patterns (see the hardening sketch after this list).
mTLS for partner integrations and internal APIs.
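
A minimal sketch of perimeter hardening in NGINX (TLS versions, OCSP stapling, HSTS, size and time limits, basic per-IP rate limiting); certificate directives are omitted and the limit values are placeholders to be tuned:

```nginx
# http context
limit_req_zone $binary_remote_addr zone=perip:10m rate=50r/s;

server {
    listen 443 ssl http2;
    server_name api.example.com;
    # ssl_certificate / ssl_certificate_key omitted

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_stapling on;                       # OCSP stapling (resolver + trusted chain required)
    ssl_stapling_verify on;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    client_max_body_size 1m;               # oversized bodies rejected early (413)
    large_client_header_buffers 4 8k;      # oversized request lines/headers rejected early
    client_body_timeout 10s;               # mitigates slow-POST
    client_header_timeout 10s;

    location /v1/ {
        limit_req zone=perip burst=100 nodelay;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app_v1;
    }
}
```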

7) Deploy schemes: canary/blue-green/versions

Weighted routing at L7 (1%, 5%, 25%, 50%, 100%); see the `split_clients` sketch after this list.
Header-gate: enable the feature by flag/header (internal/testing).
Blue-green: full DNS/route switching, fast rollback.
Shadow: run the new version in parallel while capturing its metrics/logs.
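
A minimal sketch of percentage-based canary routing in open-source NGINX with `split_clients`; it assumes upstream groups named `app_v1`/`app_v2` (as in the configuration examples below) and a hypothetical `X-Tenant` header used only to keep a client in a stable bucket:

```nginx
# http context. Hash a stable key into buckets: 5% of clients go to the canary.
split_clients "${remote_addr}${http_x_tenant}" $app_backend {
    5%      app_v2;   # canary share
    *       app_v1;   # everyone else
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    location /v1/ {
        proxy_pass http://$app_backend;
    }
}
```

Raising the share later (5% → 25% → 100%) is a one-line change and only expands the canary bucket; clients already assigned to it stay there.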

8) Sticky sessions and hash routing

Cookie stickiness (`Set-Cookie: SRV=shard-a; Path=/; HttpOnly`) for stateful workloads.
Ring-hash/consistent hashing on `user_id`/`tenant_id` reduces cross-instance cache misses (see the sketch after this list).
Caution: avoid "eternal" stickiness for write-heavy loads → hot spots; apply per-tenant quotas.
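
A minimal sketch of consistent (ring) hashing in NGINX keyed on a tenant identifier; the `X-Tenant` header, pool name and addresses are assumptions, and the key falls back to the client address when the header is absent:

```nginx
# http context
map $http_x_tenant $stick_key {
    ""        $binary_remote_addr;   # fallback: no tenant header
    default   $http_x_tenant;
}

upstream game_state {
    hash $stick_key consistent;      # ketama-style ring hash
    server 10.0.2.1:8080;
    server 10.0.2.2:8080;
    server 10.0.2.3:8080;
}

server {
    listen 443 ssl;
    server_name play.example.com;

    location /v1/state/ {
        proxy_pass http://game_state;
    }
}
```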

9) Regional and geo-routing

Anycast + geo-DNS to select the nearest POP.
Header override (for example, `X-Region`) for tests and debugging; see the sketch after this list.
Coordinate with legally required data localization (route by region/jurisdiction).
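
The POP/region choice itself is made by Anycast/geo-DNS (or a geoip module); the sketch below only illustrates the `X-Region` header override for tests and debugging, with hypothetical pool names and addresses:

```nginx
# http context
map $http_x_region $region_pool {
    "eu"      eu_pool;
    "latam"   latam_pool;
    default   eu_pool;               # fallback region
}

upstream eu_pool    { server 10.1.0.1:8080; }   # illustrative addresses
upstream latam_pool { server 10.2.0.1:8080; }

server {
    listen 443 ssl;
    server_name api.example.com;

    location /v1/ {
        proxy_pass http://$region_pool;
    }
}
```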

10) Observability and control

RED metrics: RPS, error-rate (by class), latency p95/p99 per-route/cluster.
Outlier/health: ejection/re-admission counts, slow-call rate.
Logs: structured, without PII; correlated via `trace_id`/`span_id` (see the log format sketch below).
Tracing (OTel): spans ingress→router→upstream; exemplars on p99 graphs.
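
A minimal sketch of a structured JSON access log in NGINX (http context) carrying correlation identifiers; the field set is illustrative and assumes the client or edge sets a W3C `traceparent` header:

```nginx
log_format edge_json escape=json
    '{"ts":"$time_iso8601",'
    '"request_id":"$request_id",'
    '"traceparent":"$http_traceparent",'
    '"host":"$host","method":"$request_method","uri":"$uri",'
    '"status":$status,'
    '"upstream":"$upstream_addr",'
    '"upstream_time":"$upstream_response_time",'
    '"request_time":$request_time}';

access_log /var/log/nginx/edge.json.log edge_json;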

11) Configuration examples

11.1 NGINX: host/path/weighted + cache

```nginx
map $http_x_canary $canary { default 0; "1" 1; }

upstream app_v1 { least_conn; server 10.0.0.1:8080 max_fails=3 fail_timeout=10s; }
upstream app_v2 { least_conn; server 10.0.0.2:8080; }

# Cache zone (http context)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=apicache:256m max_size=10g inactive=10m use_temp_path=off;

server {
    listen 443 ssl http2;
    server_name api.example.com;
    # ssl_certificate / ssl_certificate_key omitted

    location /v1/ {
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Request-ID $request_id;
        proxy_read_timeout 300ms;
        proxy_connect_timeout 100ms;

        # Canary gate: requests with X-Canary: 1 go to v2, everything else to v1
        set $backend app_v1;
        if ($canary) { set $backend app_v2; }
        proxy_pass http://$backend;
    }

    # Static content with cache
    location /assets/ {
        proxy_cache apicache;
        proxy_cache_valid 200 10m;
        add_header Cache-Control "public, max-age=600";
        proxy_pass http://static_cluster;
    }
}
```

11.2 Envoy: header routing, canary, outlier ejection, mirroring

```yaml
static_resources:
  clusters:
    - name: svc_v1
      type: STRICT_DNS
      lb_policy: LEAST_REQUEST
      outlier_detection:
        consecutive_5xx: 5
        interval: 5s
        base_ejection_time: 30s
        max_ejection_percent: 50
    - name: svc_v2
      type: STRICT_DNS
      lb_policy: LEAST_REQUEST
    - name: mirror_svc
      type: STRICT_DNS

  listeners:
    - name: https
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_https
                route_config:
                  virtual_hosts:
                    - name: api
                      domains: ["api.example.com"]
                      routes:
                        - match:
                            prefix: "/v1"
                            headers:
                              - name: "X-Region"
                                exact_match: "eu"
                          route:
                            cluster: svc_v1
                            timeout: 0.35s
                            retry_policy:
                              retry_on: connect-failure,reset,5xx
                              num_retries: 1
                              per_try_timeout: 0.2s
                            request_mirror_policies:
                              - cluster: mirror_svc
                                runtime_fraction:
                                  default_value: { numerator: 100 }
                                  runtime_key: mirror.enabled
                        - match: { prefix: "/v1" }
                          route:
                            weighted_clusters:
                              clusters:
                                - name: svc_v1
                                  weight: 95
                                - name: svc_v2
                                  weight: 5
```

11.3 Traefik: rules + middleware

```yaml
http:
  routers:
    api:
      rule: "Host(`api.example.com`) && PathPrefix(`/v1`)"
      service: svc
      middlewares: [hsts, compress]
  middlewares:
    hsts:
      headers:
        stsSeconds: 31536000
        stsIncludeSubdomains: true
    compress:
      compress: {}
  services:
    svc:
      weighted:
        services:
          - name: v1
            weight: 95
          - name: v2
            weight: 5
```

11.4 Kubernetes: Ingress manifest for a canary (NGINX Ingress)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "5"
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: svc-v2   # the canary Ingress points at the new version's Service
                port:
                  number: 8080
```

12) Transformations and compatibility

Normalization of headers/paths, rewriting of `Location`, control of `Cache-Control`.
gRPC ↔ HTTP/JSON via translators (grpc-json-transcoder).
WebSocket/HTTP2 upgrades: make sure the proxy passes the `Upgrade`/`Connection` headers through (see the sketch after this list).
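
A minimal sketch of passing WebSocket upgrades through NGINX; the hostname, path and `app_v1` upstream are illustrative:

```nginx
# http context
map $http_upgrade $connection_upgrade {
    default   upgrade;
    ""        close;
}

server {
    listen 443 ssl;
    server_name ws.example.com;

    location /v1/live/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 1h;            # long-lived connections
        proxy_pass http://app_v1;
    }
}
```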

13) Testing and chaos scenarios

Load tests: bursts, long plateaus, slow/oversized bodies (slow-POST).
Delay/loss injection toward upstreams → verify retries, timeouts and outlier ejection.
Canary metrics: p95/p99, error rate of the new version vs the old; automatic rollback by SLO.
Shadow: sampled comparison of responses side by side with the current version.

14) Antipatterns

Global retries that ignore idempotency and deadlines → duplicates and retry storms.
Sticky sessions without hot shards control → load skew.
Lack of health checks/outlier ejection → "rotten" instances stay in the pool.
Unlimited headers/bodies → the simplest DoS vector.
Mixing transformations and security policies without schema versioning → unexpected regressions.
A single global cache key without `Vary` → incorrect responses.

15) Specifics of iGaming/Finance

Regionality: routing by player/brand jurisdiction; isolation of payment zones.
Critical routes (deposits/withdrawals): short timeouts, a single retry, idempotency; dedicated clusters (see the sketch after this list).
PSP/KYC: dedicated upstream pools, strict retry/timeout policies, circuit-breaker, geo-pins.
A/B channels: run safe experiments with payments/limits only on the read path; for writes, use feature flags and small traffic percentages.
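
A minimal sketch of a dedicated, strictly policed payment route in NGINX; the pool name, addresses and timeout values are assumptions to be tuned against the PSP's SLA:

```nginx
upstream payments_svc {
    least_conn;
    server 10.0.3.1:8443 max_fails=2 fail_timeout=30s;
    server 10.0.3.2:8443 max_fails=2 fail_timeout=30s;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    location /v1/payments/ {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_connect_timeout 100ms;
        proxy_read_timeout 2s;             # short budget for the payment call path
        proxy_next_upstream off;           # never replay a write that may already have been applied
        # the client supplies an idempotency key; the proxy only forwards it, never invents one
        proxy_pass https://payments_svc;
    }
}
```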

16) Prod Readiness Checklist

  • TLS 1.2+/1.3, OCSP stapling, HSTS; correct `X-Forwarded-*` headers.
  • Clear routing rules: host/path/header/cookie; documentation.
  • Health checks, outlier ejection, per-try timeouts, limited retries.
  • Weighted/canary + shadow; auto-rollback by SLO/alert.
  • Cache/compression/ETag; body/headers limits; request coalescing.
  • Logs/traces with `trace_id`; RED + outlier/health metrics; per-route/cluster dashboards.
  • WAF/bot filters/CORS; oversize and slow-POST protection.
  • Sticky/consistent hashing where needed; hot-shard control.
  • Configs are versioned, migrations are secure, secrets in KMS/Vault.

17) TL;DR

Terminate TLS at the perimeter and route at L7 via host/path/header/cookie. For releases, use weighted canary and shadow traffic; for stability, use health checks, outlier ejection, and limited retries with per-try timeouts. Use caching, compression, and consistent hashing where they improve p95. Measure RED signals and cluster health, and keep WAF and size limits in place. For critical payment routes, use separate clusters, short SLAs, and strict retry/idempotency management.
