
Pagination and cursors

1) Why pagination is needed

Pagination limits the amount of data transmitted and rendered by the client, reduces load on storage and the network, and defines a deterministic way to "walk" a collection. In real systems, pagination is not just 'page=1&limit=50' but a set of protocol contracts and consistency invariants.

Typical objectives:
  • Latency and memory control per request.
  • Stable navigation when changing a dataset (insert/delete).
  • The ability to resume from a saved position (resumability).
  • Caching and preload (prefetch).
  • Protection against abuse (rate limiting, backpressure).

2) Pagination models

2.1 OFFSET/LIMIT (paged)

The idea: "skip N lines, return M."

Pros: simplicity, compatible with almost any SQL/NoSQL.

Cons:
  • Linear degradation: large OFFSETs force the engine to scan and skip all preceding rows.
  • Instability under inserts/deletes between requests (offsets "float").
  • Hard to guarantee exact resumability.
SQL example:
sql
SELECT *
FROM orders
ORDER BY created_at DESC, id DESC
OFFSET 1000 LIMIT 50;

2.2 Cursor/Keyset/Seek pagination

The idea: "continue after key K." The cursor is a position in the sorted set.

Pros:
  • Continuation is an index seek to the position (no skip cost), regardless of depth.
  • Stability under collection changes.
  • Low latency even on deep pages.
Cons:
  • Requires a strictly defined, unique, monotonic sort key.
  • Harder to implement and debug.
SQL example (seek):
sql
-- Resume after the pair (created_at, id) = (:last_ts, :last_id)
SELECT *
FROM orders
WHERE (created_at, id) < (:last_ts, :last_id)
ORDER BY created_at DESC, id DESC
LIMIT 50;
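The same seek predicate can be sketched in application code. A minimal in-memory model (the 'seek_page' helper and the row dicts are illustrative, not from the article) shows how the page boundary is carried between calls:

```python
def seek_page(rows, last_key=None, limit=50):
    """One page sorted by (created_at, id) DESC, resuming strictly
    after last_key = (last_ts, last_id), mirroring the SQL above."""
    ordered = sorted(rows, key=lambda r: (r["created_at"], r["id"]), reverse=True)
    if last_key is not None:
        ordered = [r for r in ordered if (r["created_at"], r["id"]) < last_key]
    return ordered[:limit]

rows = [{"created_at": t, "id": i} for t in (1, 2) for i in (1, 2)]
page1 = seek_page(rows, limit=2)                   # the two newest rows
last = (page1[-1]["created_at"], page1[-1]["id"])  # cursor position
page2 = seek_page(rows, last_key=last, limit=2)    # strictly older rows
```

Inserting a row newer than `last` between the two calls does not shift `page2`, which is exactly the stability property the seek model buys.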

2.3 Continuation tokens

The idea: the server returns an opaque token that encodes the "position" (and possibly shard/filter state). The client does not interpret the internals and simply sends the token back to get the next page.
Pros: flexibility; the scheme can change without breaking the API.
Cons: token lifetime management, and harder inspection/debugging since the token is opaque.

2.4 Time-based and logical cursors

Time-based: "all records up to T"; the cursor is a timestamp (suits append-only feeds).
Log-sequence/offset-based: the cursor is an offset in a log (Kafka offset, journal seq).
Global monotonic IDs: Snowflake/UUIDv7 as sortable keys for stable seek.

3) Designing cursors and tokens

3.1 Properties of a good cursor

Opaque: the client does not depend on the format.
Authenticity/integrity: an HMAC signature prevents spoofing/manipulation.
Context: includes the sort, filters, schema version, tenant/shard.
Lifetime: TTL and non-replayability when indexes or access rights change.
Size: compact (<= 1-2 KB), fits in a URL.

3.2 Token format

The recommended stack: JSON → compression (zstd/deflate) → Base64URL → HMAC.

Payload structure (example):
json
{
  "v": 3, // token version
  "sort": ["created_at:desc", "id:desc"],
  "pos": {"created_at": "2025-10-30T12:34:56.789Z", "id": "987654321"},
  "filters": {"user_id": "42", "status": ["paid", "shipped"]},
  "tenant": "eu-west-1",
  "shards": [{"s": "a", "hi": "..."}], // optional: cross-shard context
  "issued_at": "2025-10-31T14:00:00Z",
  "ttl_sec": 3600
}

'mac = HMAC(secret, payload)' is prepended to the payload, and the whole thing is encoded into a single string token.

3.3 Security

Sign (HMAC-SHA-256).
Optionally encrypt (AES-GCM) when the payload contains sensitive values (PII).
Validate server-side: version, TTL, user authority (RBAC/ABAC).

4) Consistency and invariants

4.1 Stable sorting

Use full determinism: 'ORDER BY ts DESC, id DESC'.
The sort key must be unique (add 'id' as a tie-breaker).
The index must match this order (ideally a covering index).
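A tiny sketch of why the tie-breaker matters: with equal 'ts' values the relative order is otherwise unspecified, so page boundaries could shift between two identical requests.

```python
# Rows sharing one timestamp; without a tie-breaker their relative
# order is unspecified and may differ between two identical queries.
rows = [{"ts": 100, "id": 3}, {"ts": 100, "id": 1}, {"ts": 100, "id": 2}]

# Deterministic order: ts DESC, id DESC ('id' is the tie-breaker).
ordered = sorted(rows, key=lambda r: (r["ts"], r["id"]), reverse=True)
ids = [r["id"] for r in ordered]

# Resuming strictly after position (100, 2) is now unambiguous.
after = [r["id"] for r in ordered if (r["ts"], r["id"]) < (100, 2)]
```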

4.2 Snapshots and isolation

For short pagination sessions, use a read-consistent snapshot (MVCC/txid).
If a snapshot is impractical (expensive, too much data), state the contract explicitly: "the cursor returns elements strictly before its position." This is natural for news feeds.

4.3 Inserts/deletes between pages

The seek model minimizes duplicates and omissions.

Document the behavior on deletion/modification: rare "holes" between pages are acceptable, but never "going back in time."

5) Indexing and ID schemes

Composite indexes strictly in the sort order: '(created_at DESC, id DESC)'.
Monotonic IDs: Snowflake/UUIDv7 are time-ordered → they speed up seek.
Hot keys: distribute by shard key (for example, 'tenant_id', 'region') and sort within each shard.
ID generators: avoid collisions and clock skew (synchronize clocks; guard against time regression during NTP jumps).

6) Cross-shard pagination

6.1 Schemes

Scatter-Gather: parallel requests to all shards with local seek cursors, then a k-way merge on the aggregator.
Per-Shard Cursors: the token carries a position for each shard.
Bounded fan-out: limit the number of shards queried per step (rate limiting, timeout budget).

6.2 Tokens for multi-shard

Store an array of '{shard_id, last_pos}' entries. On the next step, resume each active shard and merge again, producing the globally sorted page.
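The merge step can be sketched under these assumptions: each shard has already returned a page sorted by a composite key descending, and we produce one global page plus the new per-shard positions ('merge_page' and the data shape are illustrative, not from the article):

```python
import heapq

def merge_page(shard_pages, limit):
    """k-way merge of per-shard pages (each sorted by key DESC) into one
    globally sorted page; also returns the new per-shard positions."""
    heap = []  # heapq is a min-heap, so negate keys to get DESC order
    for sid, items in shard_pages.items():
        if items:
            heap.append((tuple(-k for k in items[0]["key"]), sid, 0))
    heapq.heapify(heap)
    page, positions = [], {}
    while heap and len(page) < limit:
        _, sid, idx = heapq.heappop(heap)
        item = shard_pages[sid][idx]
        page.append(item)
        positions[sid] = item["key"]  # last consumed position on this shard
        if idx + 1 < len(shard_pages[sid]):
            nxt = shard_pages[sid][idx + 1]
            heapq.heappush(heap, (tuple(-k for k in nxt["key"]), sid, idx + 1))
    return page, positions

# Two shards; keys are (ts, id) pairs, already sorted DESC per shard.
shards = {
    "a": [{"key": (5, 1)}, {"key": (2, 1)}],
    "b": [{"key": (4, 7)}, {"key": (3, 2)}],
}
page, positions = merge_page(shards, limit=3)
```

'positions' is exactly what a multi-shard continuation token stores: the '{shard_id, last_pos}' array from which the next step resumes.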

7) Protocol contracts

7.1 REST

Request:

GET /v1/orders?limit=50&cursor=eyJ2IjoiMyIsInNvcnQiOiJjcmVh... (opaque)
Response:
json
{
"items": [ /* ... */ ],
"page": {
"limit": 50,
"next_cursor": "eyJ2IjozLCJwb3MiOiJjcmVh...==",
"has_more": true
}
}
Recommendations:
  • 'limit' with an upper bound (for example, max = 200).
  • 'next_cursor' is omitted when 'has_more = false'.
  • GET idempotence; responses without a 'next_cursor' are cacheable (first page with fixed filters and snapshot).
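On the client side, this contract reduces to a simple loop. A sketch, where 'fetch_page' is a hypothetical transport call returning the JSON body as a dict (a real client would perform the HTTP GET shown above):

```python
def fetch_all(fetch_page, limit=50):
    """Walk the whole collection: follow next_cursor until has_more is false."""
    items, cursor = [], None
    while True:
        body = fetch_page(limit=limit, cursor=cursor)
        items.extend(body["items"])
        if not body["page"]["has_more"]:
            return items
        cursor = body["page"]["next_cursor"]

# A fake in-memory "server" for illustration only.
data = ["o1", "o2", "o3", "o4", "o5"]

def fake_fetch(limit, cursor):
    start = cursor or 0
    more = start + limit < len(data)
    return {"items": data[start:start + limit],
            "page": {"limit": limit, "has_more": more,
                     "next_cursor": start + limit if more else None}}

all_items = fetch_all(fake_fetch, limit=2)
```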

7.2 GraphQL (Relay-style)

The typical 'connection' contract:
graphql
type Query {
orders(first: Int, after: String, filter: OrderFilter): OrderConnection!
}

type OrderConnection {
edges: [OrderEdge!]!
pageInfo: PageInfo!
}

type OrderEdge {
node: Order!
cursor: String! # opaque
}

type PageInfo {
hasNextPage: Boolean!
endCursor: String
}

'cursor' must be opaque and signed; do not use raw 'Base64(id)' without an HMAC.

7.3 gRPC

Use 'page_size' and 'page_token':
proto
message ListOrdersRequest {
string filter = 1;
int32 page_size = 2;
string page_token = 3; // opaque
}

message ListOrdersResponse {
repeated Order items = 1;
string next_page_token = 2; // opaque
bool has_more = 3;
}

7.4 Streams and WebSockets

For continuous feeds: the cursor is the "last seen offset/ts."

Support 'resume_from' on reconnection:
json
{ "action":"subscribe", "topic":"orders", "resume_from":"2025-10-31T12:00:00Z#987654321" }

8) Caching, Preload, CDN

ETag/If-None-Match for the first page with stable filters.
Cache-Control with a short TTL (for example, 5-30 s) for public lists.
Prefetch: return 'next_cursor' and hints ('Link: rel="next"') so the client can preload the next page.
Vary: treat 'filter/sort/locale/tenant' as part of the cache key.

9) Load management and limiting

An upper bound on 'limit', e.g. 200.
Server-side backpressure: if request time exceeds the budget, reduce the 'limit' in the response (and explicitly report the actual page size to the client).
Rate limits per user/token/tenant.
Timeouts/retries: exponential backoff, idempotent repeated requests.
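The timeout/retry point can be sketched as exponential backoff with full jitter around an idempotent page request ('retry' and its parameters are illustrative, not a library API):

```python
import random
import time

def retry(call, attempts=4, base=0.1, cap=2.0, sleep=time.sleep):
    """Retry an idempotent request with exponential backoff + full jitter.
    Safe only because GET-style page fetches are idempotent."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the error
            # Full jitter: sleep a random amount up to the capped backoff.
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

# Example: a flaky fetch that times out twice, then succeeds.
calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated timeout")
    return {"items": [], "page": {"has_more": False}}

result = retry(flaky_fetch, sleep=lambda s: None)  # no real sleeping in the demo
```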

10) UX aspects

Scrolling vs. page numbers: infinite scrolling → cursors; numbered pages → offset (but explain the inaccuracy when data changes).
"Back to where I was" button: keep a client-side stack of cursors.
Empty pages: if 'has_more = false', do not show the "More" button.
Stable boundaries: show an exact 'total' only when it is cheap to compute (otherwise an approximate 'approx_total').

11) Testing and edge cases

Checklists:
  • Stable sorting: items with the same 'ts' do not "flicker" between pages.
  • Inserts/deletes: no duplicates appear at page boundaries.
  • Changing filters between pages: the token must be rejected as stale/incompatible.
  • Token TTL: a proper error after expiration.
  • Deep pages: latency does not grow linearly with depth.
  • Multi-shard: correct merge order; slow shards are not starved.
Property-based test example (pseudocode):
python
# Generate N entries, with random inserts between calls.
# Verify that the concatenation of all pages == one whole ordered fetch.
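A runnable version of that sketch, assuming the in-memory keyset model: pages are read through a cursor while rows are inserted concurrently, and the merged result must stay strictly ordered with no duplicates ('check_pagination_invariant' is an illustrative name).

```python
import random

def check_pagination_invariant(seed=42, n=100, limit=10, rounds=30):
    """Pages concatenated across concurrent inserts must be strictly
    descending: no duplicates and no 'going back in time'."""
    rng = random.Random(seed)
    rows = {(rng.randrange(10**6), i) for i in range(n)}  # (ts, id) keys
    seen, cursor = [], None
    for _ in range(rounds):
        ordered = sorted(rows, reverse=True)
        page = [k for k in ordered if cursor is None or k < cursor][:limit]
        if not page:
            break
        seen.extend(page)
        cursor = page[-1]
        # Simulate a concurrent insert between page requests.
        rows.add((rng.randrange(10**6), n + rng.randrange(10**6)))
    assert seen == sorted(set(seen), reverse=True)  # strictly descending
    return len(seen)
```

A full property-based setup (e.g. Hypothesis) would randomize limits and insert positions; this fixed-seed sketch checks the same invariant.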

12) Observability and SLO

Metrics:
  • 'list_request_latency_ms' (P50/P95/P99) by page length.
  • 'search_index_hit_ratio' (share of requests served by the covering index).
  • 'next_cursor_invalid_rate' (validation/TTL/signature errors).
  • 'merge_fanout' (number of shards involved per page).
  • 'duplicates_on_boundary' and 'gaps_on_boundary' (detected from client telemetry).
Logs/tracing:
  • Correlate a 'cursor_id' in logs; mask the payload.
  • Tag spans: 'page_size', 'source_shards', 'db_index_used'.
SLO example:
  • Availability: 99.9% on 'List' methods.
  • Latency: P95 < 200 ms for 'page_size <= 50' within a local region.
  • Token errors: < 0.1% of all calls.

13) Migrations and interoperability

Include 'v' in the token and support older versions for N weeks.
When changing sort keys, return a "soft" '409 Conflict' error prompting the client to restart the listing without a cursor.
Catastrophic case (revoking all tokens): rotate 'signing_key_id' and reject the old ones.

14) Implementation examples

14.1 Token generation (pseudocode)

python
import base64, hashlib, hmac, json, zlib

payload = json.dumps({...}).encode()  # the {...} payload from section 3.2
compressed = zlib.compress(payload)
mac = hmac.new(signing_key, compressed, hashlib.sha256).digest()
token = base64.urlsafe_b64encode(mac + compressed).decode()

14.2 Token validation

python
raw = base64.urlsafe_b64decode(token)
mac, compressed = raw[:32], raw[32:]
expected = hmac.new(signing_key, compressed, hashlib.sha256).digest()
assert hmac.compare_digest(mac, expected)  # constant-time comparison
payload = json.loads(zlib.decompress(compressed))
# now()/parse_iso()/req are left abstract, as in the generation sketch.
assert now() - parse_iso(payload["issued_at"]) < payload["ttl_sec"]
assert payload["filters"] == req.filters

14.3 Seek query with a composite key

sql
-- Page #1
SELECT * FROM feed
WHERE tenant_id = :t
ORDER BY ts DESC, id DESC
LIMIT :limit;

-- Next pages: resume after (ts0, id0)
SELECT * FROM feed
WHERE tenant_id = :t
  AND (ts < :ts0 OR (ts = :ts0 AND id < :id0))
ORDER BY ts DESC, id DESC
LIMIT :limit;

15) Safety and compliance

Do not include raw fields in tokens from which PII can be derived.
Sign and limit TTL.
Make tokens non-transferable between users (write 'sub/tenant/roles' into the payload and check them during validation).
Log only token hashes.

16) Frequent errors and anti-patterns

'Base64(id)' as a cursor: easy to forge and enumerate, and it breaks the contract when the sort changes.
No tie-breaker: 'ORDER BY ts DESC' without 'id' → duplicates/jumps.
Changing filters between pages without invalidating the token.
Deep OFFSET: slow and unpredictable.
Tokens without version and TTL.

17) Implementation mini-checklist

1. Define the sort order and add a unique tie-breaker.
2. Create a covering index for this order.
3. Choose the model: seek + opaque token.
4. Implement token signing (and, if necessary, encryption).
5. Build in TTL and versioning.
6. Formulate and document the 'has_more' and 'next_cursor' contracts.
7. Design the cross-shard scheme (if needed) and the k-way merge.
8. Add metrics, alerts, and SLOs.
9. Cover page boundaries with property-based tests.
10. Describe the token migration strategy.

18) Brief recommendations for choosing an approach

Catalogs/search where a "page number" and an approximate total matter: 'OFFSET/LIMIT' + cache is acceptable; state that the total is approximate.
Feeds, analytics, deep lists, high RPS: cursor/seek only.
Sharded/distributed collections: per-shard cursors + a merge token.
Streams/CDC: cursors as offsets/timestamps with resume support.

19) Example of API agreement (summary)

`GET /v1/items?limit=50&cursor=...`

The response always includes `page.limit` and `page.has_more`, and optionally `page.next_cursor`.
The cursor is opaque, signed, and has a TTL.
The sort is deterministic: 'ORDER BY created_at DESC, id DESC'.
Set-change behavior: items do not "go back" relative to the cursor.
Metrics and errors are standardized: 'invalid_cursor', 'expired_cursor', 'mismatch_filters'.

This article lays out architectural principles and ready-made patterns for designing pagination that stays fast, predictable, and secure even on large, sharded, actively changing datasets.
