76 changes: 74 additions & 2 deletions pages/lazer/how-lazer-works.mdx
@@ -1,5 +1,77 @@
import { Callout } from "nextra/components";

# How Pyth Lazer works

Pyth Lazer is a permissioned service that provides ultra-low-latency market data to consumers.
It aggregates data from multiple publishers and distributes it to consumers through a multi-tier architecture.

## System Services

The architecture consists of five main services that work together to provide ultra-low-latency data to consumers.

> **Contributor:** Let's say "types of services" instead of just "services". Each of them has multiple nodes for redundancy (we can mention that here too).


### INSERT DIAGRAM HERE

> **Contributor:** I think you want the "components" below to actually be "services" for clarity. This will also allow you to line up the elements in the diagram with the text blocks below. Given that, the subsections below should be: publishers, relayers, message queue, routers, history service.
>
> Aggregation logic can be rolled into routers, and you can drop the message transport layer part.
>
> Also, please include who operates the components and the stuff around no reordering etc. that we discussed.


> **Member Author:** Should we mention the operator info for every service?


### Publishers

Publishers are the entities that provide market data to Lazer. They submit updates via authenticated WebSocket connections.
Each publisher is configured with specific permissions defining which feeds they can update.
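
The publisher wire format is not spelled out in this section, so the following is only a minimal sketch of an authenticated update submission, assuming a JSON payload over WebSocket; the endpoint URL, auth header, and field names are illustrative assumptions rather than the actual protocol.

```ts
import WebSocket from "ws";

// Hypothetical publisher endpoint and per-publisher access token (illustrative only).
const ws = new WebSocket("wss://relayer.example.com/v1/publish", {
  headers: { Authorization: `Bearer ${process.env.PUBLISHER_TOKEN}` },
});

ws.on("open", () => {
  // Illustrative update payload; the real field names and encoding may differ.
  ws.send(
    JSON.stringify({
      feedId: 1, // a feed this publisher is permissioned to update
      price: "65000.125",
      bestBid: "64999.5",
      bestAsk: "65000.75",
      publisherTimestampUs: Date.now() * 1000,
    }),
  );
});
```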

### Relayers

The Relayer service is the ingestion layer that receives and validates all incoming updates from publishers.

**Key responsibilities:**

- **Authentication**: Validates publisher access tokens and optional Ed25519 signatures.
- **Rate limiting**: Enforces configurable limits on publisher updates.
- **Message forwarding**: Publishes validated updates to an internal message queue (a NATS cluster).

> **Contributor:** Let's also add something like "validation" or "sanity check". The relayer validates the update format and checks that updates are well-formed by examining IDs, timestamps, and prices.


**Douro Labs** operates the relayer service for the Pyth Lazer network.

<Callout type="info">

> **Member Author:** @jayantk Am I missing something here?

The Douro Labs-operated relayer follows a strict, deterministic processing model:
- **No price dropping**: All validated updates are forwarded to the message queue without dropping any prices.

> **Contributor:** It's not entirely true right now. We don't drop updates entirely, but the relayer can mark any update as rejected. Its logic is not fully deterministic because we still have a circuit breaker. (We plan to remove it.)

- **FCFS processing**: Updates are processed on a first-come-first-served basis without prioritization.
- **Deterministic operation**: The relayer operates according to its configured logic and does not deviate from it.

This ensures reliable, predictable data flow from publishers to consumers.

</Callout>
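
As a rough illustration of the relayer responsibilities listed above (validation, permission checks, FCFS forwarding), here is a simplified TypeScript model. The type shape, thresholds, and helper names are assumptions for illustration, not the production implementation.

```ts
// Simplified model of relayer-side checks; not the production implementation.
interface PublisherUpdate {
  publisherId: number;
  feedId: number;
  price: bigint; // integer price with an implied exponent
  publisherTimestampUs: number;
}

const MAX_CLOCK_SKEW_US = 5_000_000; // assumed 5-second sanity window

function validate(update: PublisherUpdate, permissionedFeeds: Set<number>): boolean {
  // Well-formedness and sanity checks on IDs, prices, and timestamps.
  if (!Number.isInteger(update.feedId) || !permissionedFeeds.has(update.feedId)) return false;
  if (update.price <= 0n) return false;
  const nowUs = Date.now() * 1000;
  if (Math.abs(nowUs - update.publisherTimestampUs) > MAX_CLOCK_SKEW_US) return false;
  return true;
}

// Validated updates are forwarded to the message queue in arrival (FCFS) order;
// rejected updates are marked as such rather than silently reordered.
function ingest(update: PublisherUpdate, permissionedFeeds: Set<number>, queue: PublisherUpdate[]) {
  if (validate(update, permissionedFeeds)) queue.push(update);
}
```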

### Message Queue

The system uses NATS JetStream as the primary message queue for pub/sub messaging with stream persistence.
This allows the system to be deployed in a multi-datacenter environment and ensures reliable message delivery between services.

**Message ordering**: NATS JetStream guarantees message ordering within a single stream, ensuring that updates from publishers are processed in the order they were received by the Relayer.

> **Member Author:** @Riateche Should we mention this here?

> **Contributor:** Yeah, it's fine. Note: we have multiple relayers, so the final message order of the combined update stream is ultimately determined by NATS.

This ordering guarantee is critical for maintaining consistent feed state across all aggregators.
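
To make the ordering guarantee concrete, here is a minimal sketch using the `nats` JavaScript client (v2 API): messages published to a single JetStream stream are assigned monotonically increasing sequence numbers and are replayed to consumers in that order. The stream and subject names are illustrative, not Lazer's actual configuration.

```ts
import { connect } from "nats";

const nc = await connect({ servers: "nats://localhost:4222" });

// Create (or reuse) a stream capturing publisher updates; names are illustrative.
const jsm = await nc.jetstreamManager();
await jsm.streams.add({ name: "UPDATES", subjects: ["updates.>"] });

const js = nc.jetstream();

// Publishes to the same stream are sequenced by JetStream; each ack carries a
// monotonically increasing stream sequence, so consumers replay them in order.
const ack1 = await js.publish("updates.feed.1", new TextEncoder().encode("p1"));
const ack2 = await js.publish("updates.feed.1", new TextEncoder().encode("p2"));
console.log(ack1.seq < ack2.seq); // true: p1 is delivered before p2

await nc.drain();
```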

### Routers

The Router is the real-time distribution layer that serves data to consumers.
It embeds aggregation logic to compute median prices, confidence intervals (using interquartile range), and best bid/ask prices from multiple publisher inputs.

> **Contributor:** and funding rates.


**Key features:**

- **WebSocket streaming**: Provides the `/v1/stream` endpoint for real-time price updates (see the consumer sketch below).
- **HTTP REST API**: Offers `/v1/latest_price` for on-demand price queries.
- **Channel types**: Supports real-time and fixed-rate channels (1ms, 50ms, 200ms).
- **Multi-chain support**: Generates on-chain payloads for Solana, EVM, and other chains.

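A minimal consumer-side sketch of the two access paths listed above, assuming a token-authenticated endpoint; the host name, subscription fields, and query parameters are illustrative assumptions, so consult the Lazer API reference for the exact request format.

```ts
import WebSocket from "ws";

const HOST = "lazer.example.com"; // placeholder; use the endpoint you were issued
const TOKEN = process.env.LAZER_ACCESS_TOKEN;

// Real-time updates over WebSocket (/v1/stream).
const ws = new WebSocket(`wss://${HOST}/v1/stream`, {
  headers: { Authorization: `Bearer ${TOKEN}` },
});
ws.on("open", () => {
  // Illustrative subscription message; field names are assumptions.
  ws.send(
    JSON.stringify({
      type: "subscribe",
      subscriptionId: 1,
      priceFeedIds: [1, 2],
      channel: "fixed_rate@200ms", // or a real-time channel
    }),
  );
});
ws.on("message", (data) => console.log(data.toString()));

// On-demand query over HTTP (/v1/latest_price); query parameters are assumptions.
const res = await fetch(`https://${HOST}/v1/latest_price?priceFeedIds=1,2`, {
  headers: { Authorization: `Bearer ${TOKEN}` },
});
console.log(await res.json());
```
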
#### Aggregation logic

Each Router embeds an aggregator component that consumes publisher updates from NATS and computes aggregated data feeds (a simplified sketch follows the list below). The aggregator:

- Computes median values resistant to outlier data from individual publishers.
- Calculates confidence intervals using interquartile range to measure data spread.
- Determines best bid/ask values filtered to ensure market consistency.
- Automatically removes stale publisher data based on configurable timeouts.
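
The sketch below illustrates, in simplified form, the median and interquartile-range computations described in the list above; the exact aggregation rules in production may differ.

```ts
// Simplified illustration of median / interquartile-range aggregation.
function quantile(sorted: number[], q: number): number {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos);
  const hi = Math.ceil(pos);
  // Linear interpolation between the two nearest ranks.
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

function aggregate(prices: number[]): { price: number; confidence: number } {
  const sorted = [...prices].sort((a, b) => a - b);
  const median = quantile(sorted, 0.5);                        // robust to outliers
  const iqr = quantile(sorted, 0.75) - quantile(sorted, 0.25); // spread of publisher inputs
  return { price: median, confidence: iqr };
}

// Example: a single outlier barely moves the median.
console.log(aggregate([100.0, 100.2, 99.9, 100.1, 150.0]));
// -> { price: 100.1, confidence: ~0.2 }
```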

### History Service

The History Service provides persistence and historical data queries.

> **@aditya520 (Member Author), Oct 15, 2025:** I don't have more info regarding the history service, and can't find any in the Notion Lazer arch doc. Can you fill this section in if needed?


**Key responsibilities:**

- **Data persistence**: Stores all publisher updates, aggregated data, and transactions in ClickHouse.
- **Historical queries**: Provides a REST API for querying historical data.
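
No schema for the history REST API is given in this section, so the following is only a hypothetical sketch of a historical-data query over HTTP; the endpoint path and query parameters are assumptions for illustration.

```ts
// Hypothetical historical-data query; the endpoint and parameters are illustrative.
const HOST = "history.lazer.example.com"; // placeholder host

const params = new URLSearchParams({
  priceFeedId: "1",
  from: "2025-01-01T00:00:00Z",
  to: "2025-01-02T00:00:00Z",
});

const res = await fetch(`https://${HOST}/v1/history?${params}`, {
  headers: { Authorization: `Bearer ${process.env.LAZER_ACCESS_TOKEN}` },
});
console.log(await res.json());
```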