
Real-Time Crypto Payment Status: WebSockets vs SSE for Checkout

How to stream live payment confirmations to checkout UIs using WebSockets and Server-Sent Events, with reconnection handling, missed-event recovery, and BchainPay API examples.

By Cipher · Founding engineer, BchainPay · 9 min read

A customer scans a QR code, sends USDC on Polygon, and stares at a checkout page that says "Waiting for payment." Ten seconds pass. Twenty. They open their wallet again, wondering if it went through. They send a second transfer. You now have an overpayment, a confused customer, and a support ticket.

The fix is not faster block times. The fix is streaming payment status to the browser in real time so the UI updates the instant your backend detects the on-chain event. This post walks through the two viable transport protocols for that stream, the tradeoffs between them, and the exact implementation patterns BchainPay uses in production.

Why polling fails at checkout

The naive approach is to have the checkout page poll GET /v1/payment-intents/:id every two seconds. It works in demos and breaks in production for three reasons:

  1. Latency floor. A two-second interval means average detection delay of one second on top of your backend's block confirmation latency. For a user watching a spinner, that's an eternity.
  2. Thundering herd. If you have 500 concurrent checkout sessions, that's 250 requests per second to a single endpoint. Scale that to Black Friday and your API budget evaporates.
  3. Battery and bandwidth. Mobile browsers on 4G connections burn through data and battery on repeated round-trips, each carrying full HTTP headers for a response body that says "status": "pending" 98% of the time.

Server-push protocols solve all three. The server sends a message only when something changes, so latency drops to network transit time, load is proportional to events (not sessions), and the client holds one long-lived connection instead of hammering short-lived ones.
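As a sanity check on those numbers, the cost of polling is easy to model (illustrative helper, not part of any SDK):

```typescript
// Back-of-the-envelope cost of client polling. Illustrative only.
function pollingCost(sessions: number, intervalMs: number) {
  return {
    // Every session fires one request per interval.
    requestsPerSecond: sessions / (intervalMs / 1000),
    // On average a status change lands half an interval before the next poll.
    avgAddedLatencyMs: intervalMs / 2,
  };
}
```

Plugging in the numbers above: 500 sessions polling every 2 s is 250 req/s with a full second of average added latency, before any backend work happens.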

SSE vs WebSocket: which one for payment status

Two protocols dominate server-push on the web: Server-Sent Events (SSE) and WebSockets. For payment status streaming, SSE wins almost every time. Here's why.

Server-Sent Events

SSE is a one-directional protocol: the server pushes; the client listens. It runs over plain HTTP/1.1 or HTTP/2, which means it traverses corporate proxies, CDNs, and load balancers without special configuration. The browser's EventSource API handles reconnection automatically with a Last-Event-ID header, so missed events during a network blip are recoverable.

WebSockets

WebSockets are bidirectional. They upgrade from HTTP to a persistent TCP frame protocol. That bidirectionality is essential for chat apps, collaborative editors, and multiplayer games. For payment status streaming, the client never sends data after the initial subscribe. You're paying the complexity tax of a full-duplex protocol for a half-duplex use case.

WebSocket connections also require sticky sessions or a pub/sub backplane (Redis, NATS) to fan out across multiple server instances. SSE connections can be served by any stateless node that subscribes to the same event bus, because the HTTP semantics (including Last-Event-ID replay) ride on the standard request/response model.

When to pick WebSocket anyway

If your checkout flow requires the client to send messages back to the server during the payment lifecycle (e.g., the user can cancel, switch tokens, or update the tip amount while the payment is pending), a WebSocket connection avoids opening a parallel REST call alongside your SSE stream. But for the common case of "display confirmation progress," SSE is simpler, more resilient, and easier to operate.

BchainPay's SSE streaming endpoint

BchainPay exposes an SSE endpoint on every payment intent:

GET /v1/payment-intents/{id}/stream
Accept: text/event-stream
Authorization: Bearer sk_live_...

The server responds with Content-Type: text/event-stream and holds the connection open. As the payment progresses, it pushes events:

id: evt_01HZR3…
event: payment_intent.pending
data: {"status":"pending","txHash":"0xabc…","chain":"polygon","confirmations":0}

id: evt_01HZR4…
event: payment_intent.confirming
data: {"status":"confirming","txHash":"0xabc…","chain":"polygon","confirmations":3}

id: evt_01HZR5…
event: payment_intent.succeeded
data: {"status":"succeeded","txHash":"0xabc…","chain":"polygon","confirmations":12,"settledAt":"2026-04-27T17:42:00Z"}

Each message carries a monotonic id. If the connection drops and the client reconnects, the browser sends Last-Event-ID: evt_01HZR4… and the server replays everything after that event.
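For non-browser clients that cannot use EventSource, a frame like the ones above decomposes with a few lines of parsing. A minimal sketch, simplified relative to the full SSE spec (it ignores `retry:` fields and assumes each frame arrives whole):

```typescript
interface SSEFrame {
  id?: string;
  event?: string;
  data: string;
}

// Parse a single SSE frame (the text between two blank lines) into its fields.
// Comment lines (leading ':') such as heartbeats are skipped.
function parseFrame(raw: string): SSEFrame {
  const frame: SSEFrame = { data: '' };
  for (const line of raw.split('\n')) {
    if (line.startsWith(':')) continue; // comment / heartbeat
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    const field = line.slice(0, idx);
    const value = line.slice(idx + 1).trimStart();
    if (field === 'id') frame.id = value;
    else if (field === 'event') frame.event = value;
    else if (field === 'data') frame.data += value;
  }
  return frame;
}
```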

Server-side implementation

The streaming endpoint is thin. The heavy lifting happens in the event bus that your block-confirmation workers already publish to.

import { Router, type Request, type Response } from 'express';
import { redis } from '../lib/redis';
// getPaymentIntent was used below but never imported; path is illustrative.
import { getPaymentIntent } from '../lib/payment-intents';
 
const router = Router();
 
router.get(
  '/v1/payment-intents/:id/stream',
  async (req: Request, res: Response) => {
    const intentId = req.params.id;
 
    res.writeHead(200, {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
      'X-Accel-Buffering': 'no', // disable nginx buffering
    });
 
    const heartbeat = setInterval(() => {
      res.write(':heartbeat\n\n'); // SSE comment line; clients ignore it
    }, 15_000);
 
    let closed = false;
    function cleanup() {
      if (closed) return; // terminal event and req 'close' can both fire
      closed = true;
      clearInterval(heartbeat);
      sub.unsubscribe(channel);
      sub.quit();
      res.end();
    }
 
    // Subscribe to future updates *before* reading the snapshot,
    // so no event can land in the gap between the two.
    const channel = `pi:${intentId}`;
    const sub = redis.duplicate();
    sub.on('message', (_ch: string, raw: string) => {
      const evt = JSON.parse(raw);
      sendEvent(res, evt.id, evt.type, evt.data);
      if (evt.type === 'payment_intent.succeeded' ||
          evt.type === 'payment_intent.failed') {
        cleanup();
      }
    });
    await sub.subscribe(channel);
 
    // Send current state immediately so the UI never starts blank
    const current = await getPaymentIntent(intentId);
    if (!current) {
      res.write('event: error\ndata: {"code":"not_found"}\n\n');
      cleanup();
      return;
    }
    sendEvent(res, current.lastEventId, `payment_intent.${current.status}`, current);
 
    req.on('close', cleanup);
  },
);
 
function sendEvent(res: Response, id: string, event: string, data: unknown) {
  res.write(`id: ${id}\nevent: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
}

Key details:

  • X-Accel-Buffering: no disables nginx's response buffering, which otherwise holds SSE frames until the buffer fills.
  • Heartbeat every 15 seconds keeps the TCP connection alive through proxies that drop idle connections after 30-60 seconds.
  • Immediate state push ensures a client that connects after the first confirmation already sees the current status. Without this, a page refresh mid-checkout shows a blank "waiting" state even though three confirmations have landed.
  • Terminal event cleanup closes the subscription after succeeded or failed so you don't leak Redis subscribers.
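The publishing side of that contract is not shown above. A sketch of what a block-confirmation worker might publish — the channel name and payload shape must mirror what the streaming handler expects, and `publish` stands in for your pub/sub client's method:

```typescript
type Publish = (channel: string, message: string) => void;

// Called by a confirmation worker whenever a payment intent changes state.
// `pi:<intentId>` must match the channel the SSE handler subscribes to.
function publishPaymentEvent(
  publish: Publish,
  intentId: string,
  evt: { id: string; type: string; data: unknown },
): void {
  publish(`pi:${intentId}`, JSON.stringify(evt));
}
```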

Client-side integration

On the frontend, the browser's native EventSource handles reconnection:

function streamPaymentStatus(intentId: string, onUpdate: (evt: PaymentEvent) => void) {
  const url = `https://api.bchainpay.com/v1/payment-intents/${intentId}/stream`;
  const source = new EventSource(url, { withCredentials: true });
 
  const EVENTS = [
    'payment_intent.pending',
    'payment_intent.confirming',
    'payment_intent.succeeded',
    'payment_intent.failed',
  ];
 
  for (const type of EVENTS) {
    source.addEventListener(type, (e) => {
      // Custom event names fall outside EventSourceEventMap, so cast
      // to MessageEvent to reach `.data` under strict TypeScript.
      const data = JSON.parse((e as MessageEvent).data);
      onUpdate({ type, ...data });
      if (type === 'payment_intent.succeeded' || type === 'payment_intent.failed') {
        source.close();
      }
    });
  }
 
  source.onerror = () => {
    // EventSource reconnects automatically with Last-Event-ID.
    // Log for observability but don't close.
    console.warn('[payment-stream] connection lost, reconnecting...');
  };
 
  return () => source.close();
}

EventSource retries on its own: the default reconnection delay is a few seconds, and the server can tune it with a retry: field in the stream. Because each event carries an id, the server can replay missed events by reading from a short-lived buffer (BchainPay keeps the last 50 events per intent in Redis with a 30-minute TTL).
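Replay is conservative: depending on buffer boundaries, a reconnecting client may occasionally see an event twice. Because every event carries an id, a one-closure dedupe guard on the client makes that harmless (illustrative, not part of the snippet above):

```typescript
// Wraps an update handler so events whose id was already seen are dropped.
function dedupe<T extends { id: string }>(onUpdate: (evt: T) => void) {
  const seen = new Set<string>();
  return (evt: T): void => {
    if (seen.has(evt.id)) return;
    seen.add(evt.id);
    onUpdate(evt);
  };
}
```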

Handling authentication for SSE

EventSource does not support custom headers. You cannot pass a Bearer token the way you would with fetch. Three patterns work:

  1. Cookie-based auth. Set an HttpOnly session cookie on your checkout domain. EventSource sends cookies automatically. This is what BchainPay uses for the hosted checkout page.
  2. Token in query string. GET /stream?token=sk_live_… works but leaks the token into access logs and browser history. Acceptable only for short-lived, single-use tokens scoped to one intent.
  3. Use fetch with ReadableStream instead of EventSource. This lets you set Authorization headers but you lose automatic reconnection. You'll need to implement your own retry loop:
async function fetchStream(intentId: string, token: string, onUpdate: (evt: PaymentEvent) => void) {
  let lastEventId = '';
 
  async function connect() {
    const headers: Record<string, string> = {
      Accept: 'text/event-stream',
      Authorization: `Bearer ${token}`,
    };
    if (lastEventId) headers['Last-Event-ID'] = lastEventId;
 
    const res = await fetch(
      `https://api.bchainpay.com/v1/payment-intents/${intentId}/stream`,
      { headers },
    );
    if (!res.ok || !res.body) throw new Error(`stream failed: ${res.status}`);
    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';
 
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
 
      // Frames are separated by a blank line; the tail may be a partial frame.
      const parts = buffer.split('\n\n');
      buffer = parts.pop()!;
      for (const part of parts) {
        const event = parseSSE(part);
        if (event) {
          lastEventId = event.id;
          onUpdate(event);
        }
      }
    }
  }
 
  // Minimal frame parser: pulls id/event/data fields out of one frame.
  function parseSSE(frame: string): PaymentEvent | null {
    const fields: Record<string, string> = {};
    for (const line of frame.split('\n')) {
      const idx = line.indexOf(':');
      if (idx <= 0) continue; // skip comments/heartbeats and blank lines
      fields[line.slice(0, idx)] = line.slice(idx + 1).trimStart();
    }
    if (!fields.data) return null;
    return { id: fields.id, type: fields.event, ...JSON.parse(fields.data) };
  }
 
  // Reconnect with backoff on failure
  let attempt = 0;
  while (attempt < 10) {
    try {
      await connect();
      break; // clean close (terminal event)
    } catch {
      attempt++;
      await new Promise((r) => setTimeout(r, Math.min(1000 * 2 ** attempt, 30_000)));
    }
  }
}
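
For option 2, the "short-lived, single-use token scoped to one intent" can be as simple as an HMAC over the intent id and an expiry timestamp. A sketch using Node's crypto module (function names and token format are illustrative, not a BchainPay API):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Mint a token bound to one intent with an absolute expiry (ms since epoch).
// Assumes intent ids contain no '.' characters.
function mintStreamToken(secret: string, intentId: string, ttlMs: number, now = Date.now()): string {
  const payload = `${intentId}.${now + ttlMs}`;
  const sig = createHmac('sha256', secret).update(payload).digest('base64url');
  return `${payload}.${sig}`;
}

function verifyStreamToken(secret: string, token: string, intentId: string, now = Date.now()): boolean {
  const [id, expStr, sig] = token.split('.');
  if (id !== intentId || sig === undefined) return false;
  const exp = Number(expStr);
  if (!Number.isFinite(exp) || exp < now) return false;
  const expected = createHmac('sha256', secret).update(`${id}.${exp}`).digest('base64url');
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}
```

The server validates the token from the query string, checks that it names the intent in the URL, and rejects anything expired; because the token is scoped and short-lived, the log-leak exposure is bounded.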

For most merchant integrations, option 1 (cookie auth) or option 2 (scoped short-lived token) is simpler. Reserve the fetch stream approach for cases where you control both client and server and need header-based auth.

Scaling SSE connections

A single Node.js process can hold tens of thousands of open SSE connections because each is just a dormant TCP socket consuming a file descriptor and a few KB of memory. The bottleneck is not connection count; it's fan-out latency.

If a payment intent update needs to reach 10,000 connected clients (unlikely for checkout, but possible for a public payment tracker), iterating and calling res.write() serially adds measurable delay. The solution is a pub/sub backplane:

  1. Block-confirmation workers publish to Redis pub/sub (or NATS, Kafka).
  2. Each SSE server node subscribes to channels for its connected clients.
  3. When a message arrives, the node writes to only the local sockets that care about that intent.

BchainPay uses Redis pub/sub with channels keyed by payment intent ID. Each SSE server subscribes to exactly the channels for its active connections and unsubscribes on disconnect. This keeps Redis subscriber count proportional to open connections, not to total intents.

Missed-event recovery with Last-Event-ID

The SSE spec's Last-Event-ID mechanism is underrated. When the client reconnects, it sends the last event ID it received. The server replays everything since then. To support this:

  1. Store recent events in a time-bounded, ordered structure. BchainPay uses a Redis sorted set per intent with the event's monotonic sequence as the score:
ZADD pi:evt:pi_01HZR3 1 '{"id":"evt_01HZR3","type":"payment_intent.pending","data":{...}}'
ZADD pi:evt:pi_01HZR3 2 '{"id":"evt_01HZR4","type":"payment_intent.confirming","data":{...}}'
EXPIRE pi:evt:pi_01HZR3 1800
  2. On reconnect, parse the Last-Event-ID header, look up its sequence number, and ZRANGEBYSCORE everything above it.
  3. Replay those events before switching to live pub/sub.
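The replay step reduces to a filter over the buffered, sequence-ordered events. Over plain arrays it looks like this (in production this is the ZRANGEBYSCORE lookup):

```typescript
interface BufferedEvent {
  seq: number; // monotonic sequence, the sorted-set score
  id: string;
  type: string;
  data: unknown;
}

// Return every buffered event strictly after the client's Last-Event-ID.
// If that id is no longer in the buffer (TTL expired), replay what remains.
function eventsAfter(buffer: BufferedEvent[], lastEventId: string): BufferedEvent[] {
  const last = buffer.find((e) => e.id === lastEventId);
  if (!last) return buffer;
  return buffer.filter((e) => e.seq > last.seq);
}
```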

This guarantees gap-free, in-order delivery to the UI even across network interruptions, without the client needing any local persistence. (Strictly speaking, delivery is at-least-once; the monotonic event ids make any duplicates trivial to drop.)

Observability

Every SSE connection is a long-lived request, which means your standard request-duration histograms will look absurd. Exclude SSE endpoints from p99 latency dashboards and instead track:

  • Active connection count per node (gauge).
  • Events pushed per second (counter).
  • Reconnection rate (counter, tagged by intent). A high reconnect rate signals proxy timeout misconfiguration.
  • Time-to-first-event per connection (histogram). This measures how fast the initial state push reaches the client.

BchainPay exports these as Prometheus metrics on the /metrics endpoint of each SSE server.

Key takeaways

  • Use SSE over WebSocket for payment status. It's simpler, natively handles reconnection, and works through proxies without special configuration.
  • Push current state on connect. A client that joins mid-checkout should never see a stale "waiting" screen.
  • Heartbeat every 15 seconds to prevent proxy idle disconnects.
  • Buffer recent events in Redis and replay on reconnect using Last-Event-ID for gap-free delivery.
  • Authenticate with cookies or scoped tokens since EventSource does not support custom headers. Fall back to fetch streaming only when header auth is mandatory.
  • Track connection count and reconnect rate, not request duration, for SSE observability.

Try it yourself

Spin up a sandbox merchant in under 60 seconds.

One REST endpoint, signed webhooks, five chains. No credit card required.