Telemetry Batching / Flush
The metagame analytics pipeline queues events in memory and flushes them to the Supabase player_events table in batches. The design exists so a busy session — dozens of UI interactions per minute — doesn’t fire one network round-trip per event, and so events captured during page teardown still survive the unload.
Queue
Events live in a module-local eventQueue: Event[] inside src/metagame/services/analytics.ts. trackEvent(name, properties) pushes an { event_type, properties } row onto the queue. Nothing else writes to the queue — every call site goes through trackEvent.
An event is just a typed name plus an optional properties bag. No timestamp is attached client-side; the row’s created_at is set by Postgres on insert.
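The queued row shape can be sketched as follows. This is a hypothetical reconstruction from the description above (the real module presumably has a typed union of event names); the `shop_open` name and `tab` property are invented for illustration.

```typescript
// Hypothetical sketch of a queued analytics row. Note there is no
// created_at field: Postgres fills that in at insert time.
type AnalyticsRow = {
  event_type: string;                   // the typed event name
  properties?: Record<string, unknown>; // optional properties bag
};

// Example row as trackEvent would enqueue it (names are illustrative).
const row: AnalyticsRow = { event_type: 'shop_open', properties: { tab: 'mods' } };
```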
Flush triggers
Four conditions force a flush:
- Size cap. When the queue reaches MAX_BATCH_SIZE = 50 events, trackEvent calls flush() synchronously instead of scheduling the timer. This is the safety valve for high-frequency bursts (e.g. mod-grid drag events firing every frame): the queue can't grow without bound between timer ticks.
- Timer. Every trackEvent call (below the 50-event cap) schedules a setTimeout for FLUSH_INTERVAL = 10000 ms. The timer is debounced: scheduleFlush() clears any pending timer before installing a new one, so a steady trickle of events doesn't accumulate stale timers. When the timer fires, flush() runs.
- Visibility change. A module-level document.addEventListener('visibilitychange', ...) listener fires flushAnalytics() whenever document.hidden becomes true (tab backgrounded, app switched, screen locked on mobile). The page isn't unloading, so async fetch is fine here.
- Page unload. The beforeunload/pagehide path calls flushAnalyticsSync(), which routes through flushSync() and uses navigator.sendBeacon instead of fetch.
Async flush path
flush() drains up to MAX_BATCH_SIZE events from the queue via eventQueue.splice(0, MAX_BATCH_SIZE) and inserts them through the Supabase JS client (supabase.from('player_events').insert(...)). Errors are logged but not retried — analytics is non-critical, and a retry queue would conflict with the run-level telemetry queue in engine/telemetry/sender.ts (a separate pipeline with its own batching and localStorage retry).
If the queue still has events after the splice (the cap was hit and there’s overflow), flush() reschedules itself via scheduleFlush() to drain the rest on the next tick.
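The drain-and-reschedule logic can be sketched as below. The Supabase call is abstracted behind an injected `insertRows` callback (an assumption for this sketch) so the splice/overflow behavior is visible on its own; in the real module this would be supabase.from('player_events').insert(batch), and rescheduling would go through scheduleFlush().

```typescript
// Sketch of the async flush path: drain up to MAX_BATCH_SIZE, insert,
// log-and-drop on error, reschedule if overflow remains.
const MAX_BATCH_SIZE = 50;

type Row = { event_type: string; properties?: unknown };
const eventQueue: Row[] = [];
let rescheduled = false; // stands in for scheduleFlush() in this sketch

async function flush(insertRows: (rows: Row[]) => Promise<void>): Promise<void> {
  // splice runs synchronously, so a concurrent trackEvent can't double-send.
  const batch = eventQueue.splice(0, MAX_BATCH_SIZE);
  if (batch.length === 0) return;
  try {
    await insertRows(batch);
  } catch (err) {
    // Analytics is non-critical: log and drop, no retry queue.
    console.error('analytics flush failed', err);
  }
  // Cap was hit and there's overflow: drain the rest on the next tick.
  if (eventQueue.length > 0) rescheduled = true; // real code: scheduleFlush()
}
```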
Unload path — why sendBeacon
fetch() during page teardown is unreliable: the browser may cancel the request before it completes, especially on mobile when the user swipes away the tab. navigator.sendBeacon is designed for exactly this case: the browser takes ownership of the request, queues it, and transmits it asynchronously on a best-effort basis even after the page is gone.
flushSync() builds the same JSON body fetch would have sent (an array of { event_type, properties } rows), wraps it in a Blob with type: 'application/json', and posts to:
${SUPABASE_URL}/rest/v1/player_events?apikey=${SUPABASE_KEY}
The apikey goes in the query string, not a header. This is a sendBeacon constraint: it can set Content-Type (via the Blob’s type) but cannot set arbitrary headers. Supabase PostgREST accepts the API key as a query param as well as a header, so the query-param form is what the unload path uses.
If navigator.sendBeacon returns false (the payload was too large for the browser’s beacon quota, typically 64 KB), flushSync falls back to fetch with keepalive: true, which is the next-most-reliable option during unload. Very old browsers without sendBeacon at all fall through to the same keepalive fetch path.
Why this is separate from engine/telemetry/sender.ts
The engine’s sender.ts ships run-level telemetry (the giant telemetry_runs row with FPS histograms, kill logs, weapon stats) on a faster 2-second cadence and uses fetch with keepalive: true end-to-end. It can’t use sendBeacon because PostgREST run inserts use Prefer: return=representation or other custom headers that sendBeacon can’t set, and the failure path queues to localStorage for the next session to retry. The metagame analytics pipeline here is the smaller, higher-frequency UI-event surface — sendBeacon fits because the headers needed are minimal and the events are individually disposable.
Related
- telemetry-events.md: full event-name catalog and call sites
- telemetry-sampler.md: the engine's heartbeat/crash/ring-buffer sampler (separate batching pipeline)