Local backtester and paper-trading lab for Drift + Hyperliquid perp strategies.
Reproducible fills · funding-aware margin · CI-gated parity · MIT
⚠️ Drift Status (2026-04+): Drift Protocol is offline post-hack. Flint backtester + paper trader run on Hyperliquid + Pyth live data. Drift devnet remains available for code paths; mainnet support is suspended pending recovery. The fallback chain (HL → Drift → DuckDB cache) auto-recovers when Drift returns.
Flint is a local power-user lab for Drift + Hyperliquid perp strategy work. Three workflows, in order of trust:
- Backtest a strategy against real on-chain candles + funding rates with venue-accurate fills and per-venue margin.
- Paper trade the same strategy against live WebSocket data.
- Reconcile paper fills against actual venue executions (CSV upload) to catch model drift before mainnet.
It is not a hosted SaaS, a general crypto trading bot, an AI agent platform, or a MEV scanner. Single-machine, single-DuckDB-writer, by design.
Every claim below is backed by a checked-in artifact you can reproduce on a clean clone.
| Claim | How to verify |
|---|---|
| Backtest ↔ paper parity within 2% PnL, 80% timing match | `python scripts/run_parity_report.py momentum_breakout SOL-PERP 30` → `artifacts/parity/{strat}-{date}.md`. CI fails on breach (`.github/workflows/parity.yml`). |
| Same-market opposite-leg multi-venue arb works correctly | `python notebooks/multi_venue_funding_arb.py` (D-2.1.d structural correctness, per-leg attribution to 1e-6) |
| Cross-venue funding capture | `python notebooks/funding_arb.py` |
| Perp-vs-spot basis | `python notebooks/basis_trade.py` |
| Rust ↔ Python parity at 1e-9 | `cd rust && cargo test --release` → 30 cargo tests + 13 Python parity tests |
| Live fills reconcile to engine fills <10 bps p95 | `python scripts/reconcile_fills.py --session <id> --fills-csv my.csv` (also as REST + UI panel) |
| Deterministic seeds | `pytest tests/test_determinism.py` |
| Point-in-time correctness on every provider | `python scripts/audit_pit.py` |
Status board: TRUST_ARTIFACTS.md. Wave-by-wave delivery log: WAVE_STATUS.md.
- 23 built-in strategies · 26 data providers · 81 REST endpoints · 20 MCP tools · 2122 tests
Counts auto-generated by scripts/update_readme_counts.py.
```shell
pip install flint-trading
flint init    # download sample data + run demo backtest
flint serve   # API + UI at localhost:8000
```

For development:
```shell
git clone https://github.com/sohan-shingade/flint.git
cd flint
pip install -e .
pytest tests/ -v    # ~10 min, all mocked
```

Optional Rust acceleration (2–3.5× speedup on tight loops):
```shell
pip install maturin
cd rust && maturin develop --release
pytest tests/test_rust_python_parity.py    # verify 1e-9 parity
```

Strategies are Python classes with one method. Return a Signal, or place orders via ctx.
```python
from flint.strategy.base import Strategy
from flint.models import Signal
from flint.indicators import sma


class GoldenCross(Strategy):
    @property
    def name(self):
        return "golden-cross"

    def on_candle(self, candle, history, ctx=None):
        if len(history) < 50:
            return Signal.HOLD
        return Signal.BUY if sma(history, 20) > sma(history, 50) else Signal.SELL

    def reset(self):
        pass
```

For full execution control — limit orders, stops, take-profits, multi-market data, funding queries — use ctx:
```python
def on_candle(self, candle, history, ctx=None):
    if ctx is None:
        return Signal.HOLD
    from flint.indicators import rsi, atr
    if rsi(history, 14) < 30 and not ctx.positions:
        ctx.market_order(candle.market, "long", size=1.0)
        ctx.stop_order(candle.market, "short", size=1.0,
                       trigger_price=candle.close - 2 * atr(history, 14))
```

Strategies become Optuna-optimizable by adding a parameters() classmethod. Full API: docs/guides/strategy-authoring.md.
Every backtest produces a full tearsheet: equity curve, drawdown, trade markers, PnL distribution, monthly returns heatmap, complete trade log.
Optimization (Optuna), walk-forward analysis, regime-conditioned reports, and Monte Carlo bootstrap (trade-frequency Sharpe) are all built in. See docs/guides/architecture.md for the execution stack.
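The trade-frequency Sharpe bootstrap can be sketched in a few lines. This is illustrative only — function names and the exact resampling scheme are assumptions, not Flint's implementation:

```python
import random
import statistics


def bootstrap_sharpe(trade_returns, trades_per_year, n_boot=1000, seed=42):
    """Median of a bootstrap distribution of annualized Sharpe ratios.

    Annualizes by trade frequency (sqrt(trades_per_year)) rather than by
    calendar bars, so sparse strategies aren't flattered by idle periods.
    Sketch only — Flint's built-in Monte Carlo bootstrap may differ.
    """
    rng = random.Random(seed)  # seeded for reproducible resamples
    sharpes = []
    for _ in range(n_boot):
        # resample trades with replacement, same sample size as observed
        sample = [rng.choice(trade_returns) for _ in trade_returns]
        mu, sd = statistics.mean(sample), statistics.pstdev(sample)
        if sd > 0:
            sharpes.append(mu / sd * trades_per_year ** 0.5)
    sharpes.sort()
    return sharpes[len(sharpes) // 2]  # median bootstrap Sharpe
```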
Deploy any strategy to paper mode. The same code path that backtests now consumes live WebSocket candles from Hyperliquid (Drift WS resumes when the venue is back online). Full risk guards (max drawdown, daily loss, position size, simulated liquidation), position persistence, resume-on-crash, and live equity stream over WebSocket.
Multi-venue: positions are keyed by (venue, market) so a Drift-long + HL-short on the same market track as two distinct legs with their own funding ledgers (the v1.5.0 D-2.1.d correctness fix).
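A minimal sketch of why (venue, market) keying matters — dict-based, with hypothetical field names rather than Flint's actual schema:

```python
# Sketch of (venue, market) position keying; field names are illustrative.
positions = {}


def apply_fill(venue, market, side, size, price):
    key = (venue, market)  # keying by market alone would merge the legs
    pos = positions.setdefault(
        key, {"size": 0.0, "entry_price": price, "funding_paid": 0.0}
    )
    pos["size"] += size if side == "long" else -size
    return pos


apply_fill("drift", "SOL-PERP", "long", 1.0, 150.0)
apply_fill("hyperliquid", "SOL-PERP", "short", 1.0, 150.2)
# Two distinct legs, each with its own funding ledger, even though the
# market symbol is identical.
```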
Core data is free — no API keys, no signup.
| Source | What you get |
|---|---|
| Hyperliquid | OHLCV + funding history (back to 2023-06-08 hourly) — primary live source |
| Pyth Network | Real-time oracle prices for 20 pairs |
| Drift Data API | OHLCV + funding + L2/L3 orderbook (offline post-hack; resumes when Drift returns) |
| CoinGecko / GeckoTerminal | Spot candles + DEX pool candles for non-perp comparisons |
| Jupiter / Raydium / Orca | Swap quotes + AMM pool data |
Optional providers (free API key, no credit card): Birdeye, Helius. Full list with rate limits + PIT declarations: docs/guides/data-providers.md.
Everything caches to a local DuckDB file (./data/flint.duckdb). Nothing leaves your machine unless a provider needs it to. No telemetry, no cloud sync.
Fills go through a 4-tier pipeline: deterministic close → slippage layer → orderbook walk (when depth data is present) → latency + impact + partial-fill simulation. Drift uses the vAMM model with peg multiplier and oracle anchoring. Hyperliquid walks the CLOB with HLP backstop. Both have parity tests against historical data.
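The orderbook-walk tier reduces to a simple level-consuming loop. This sketch shows only the core price averaging — latency, impact, and partial-fill simulation sit on top in Flint's pipeline, and the function name is illustrative:

```python
def walk_book(levels, qty):
    """Average fill price for a market buy of `qty`, walking ask levels
    given as (price, size) tuples, best price first. Returns
    (avg_price, filled_qty); filled_qty < qty means a partial fill.
    """
    filled = 0.0
    cost = 0.0
    for price, size in levels:
        take = min(size, qty - filled)  # consume only what this level holds
        filled += take
        cost += take * price
        if filled >= qty:
            break
    if filled == 0:
        raise ValueError("empty book")
    return cost / filled, filled


avg, filled = walk_book([(100.0, 2.0), (100.5, 3.0)], 4.0)
# 2 @ 100.0 + 2 @ 100.5 -> avg 100.25, fully filled
```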
Slippage parameters are calibrated from real fills you upload via flint calibrate. The reconciliation tool (scripts/reconcile_fills.py) compares engine fills against venue executions and emits a markdown report with p50/p95/p99 price + timestamp deltas. CI gates this at 10 bps p95.
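The reconciliation metric itself is straightforward: per-fill price deltas in basis points, then percentiles. A stdlib sketch, assuming fills are matched by an order id — the column names and percentile method are illustrative, not what scripts/reconcile_fills.py actually uses:

```python
def price_deltas_bps(engine_fills, venue_fills):
    """Absolute price deltas in basis points between engine-simulated and
    actual venue fills, matched by order_id. Unmatched fills are skipped.
    """
    by_id = {f["order_id"]: f for f in venue_fills}
    deltas = []
    for f in engine_fills:
        v = by_id.get(f["order_id"])
        if v is not None:
            deltas.append(abs(f["price"] - v["price"]) / v["price"] * 1e4)
    return deltas


def percentile(xs, p):
    # nearest-rank percentile on a sorted copy (p in [0, 100])
    xs = sorted(xs)
    k = max(0, min(len(xs) - 1, round(p / 100 * (len(xs) - 1))))
    return xs[k]


d = price_deltas_bps(
    [{"order_id": 1, "price": 100.05}, {"order_id": 2, "price": 99.90}],
    [{"order_id": 1, "price": 100.00}, {"order_id": 2, "price": 100.00}],
)
# deltas ≈ [5.0, 10.0] bps; a CI gate would assert percentile(d, 95) <= 10
```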
Details: docs/guides/slippage-models.md.
```shell
flint init          # one-shot setup + demo backtest
flint serve         # web UI + REST API at :8000
flint serve --dev   # API only; run `cd ui && npm run dev` for hot-reload
flint data download --market SOL-PERP --days 90
flint backtest run --strategy ma_crossover --market SOL-PERP --days 90
flint deploy --strategy my_strategy --market SOL-PERP   # paper trade
flint calibrate --fills my.csv --market SOL-PERP        # tune slippage
```

Full reference: docs/reference/cli.md (or flint --help).
- Single-machine. DuckDB is single-writer; one Flint instance per database file. By design.
- Drift mainnet suspended. Pending hack recovery; devnet works for code paths.
- Fills are approximations of venue execution. The reconciliation tool quantifies the gap; calibration shrinks it. Treat backtest PnL as a strategy-comparison signal, not a forward forecast.
- Optional providers need keys. Birdeye, Helius. Free tiers exist but Flint does not bundle them.
- No CEX live trading. Backtest-only against CCXT-sourced candles for spot reference; live execution is out of scope.
Doc tree follows the Diátaxis split — full index at docs/README.md. Quick links:
| Topic | Where |
|---|---|
| First backtest in 5 min | docs/tutorials/01-install-first-backtest.md |
| Quickstart guide | docs/guides/quickstart.md |
| Architecture (BacktestContext + 7 managers, PaperContext, Rust engine, replay) | docs/guides/architecture.md |
| Strategy authoring (v1/v2 APIs, indicators, optimization) | docs/guides/strategy-authoring.md |
| Data providers + PIT declarations | docs/guides/data-providers.md |
| Live + paper deployment + risk guards | docs/guides/live-deployment.md |
| Web UI tour | docs/guides/web-ui.md |
| MCP integration (AI tooling — experimental) | docs/guides/mcp-integration.md |
| Slippage models + calibration | docs/guides/slippage-models.md |
Project state: ROADMAP.md · WAVE_STATUS.md · DEFERRED.md · TRUST_ARTIFACTS.md · CHANGELOG.md. AI-dev guide: CLAUDE.md.
Python 3.10+, Rust (PyO3 via maturin) for the hot path, DuckDB for storage, FastAPI + React + Vite for the UI, Optuna for optimization, jupytext for proof notebooks. MIT licensed.

