diff --git a/CHANGELOG.md b/CHANGELOG.md index 96ef07f7..cd182731 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,19 +7,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ## [Unreleased] -### Added -- GenAI OTel tracing: collect and visualize AI conversations from OpenTelemetry `gen_ai.*` spans with support for Vercel AI SDK `ai.*` attribute fallbacks; includes conversation view, token usage aggregation, and tool call detail -- Deployment promotion: promote deployments between environments with environment protection settings (required reviewers, branch restrictions) -- On-demand scale-to-zero environments: environments sleep after configurable idle timeout and wake automatically on incoming HTTP requests via proxy integration -- AI usage analytics: per-model token tracking with agent/session context, BYOK vs platform key breakdown -- Vercel AI SDK tracing examples (Node.js) and Python GenAI tracing examples -- AI tracing documentation page -- Funnel card step pipeline: funnel list cards now show a horizontal pipeline of steps with completions count and conversion rate per step (e.g., `page_view 1,234 → signup 890 (72%)`) alongside the existing summary metrics - -### Fixed -- GenAI trace token counts showed as zero: PostgreSQL `SUM(bigint)` returns `numeric` type, causing Sea-ORM `try_get::<Option<i64>>` to silently fail; added `::bigint` cast to all SUM expressions -- Funnel edit page always showed "Funnel Not Found": `EditFunnel` used `useParams()` to read `funnelId`, but no matching React Router `<Route>` with a `:funnelId` parameter was defined; `funnelId` was always `undefined`, so the funnel lookup always failed; now parsed from the URL and passed as a numeric prop -- Funnel card metrics never loaded (empty cards): `formatDateForAPI` produced `yyyy-MM-dd HH:mm:ss` format but the backend query string parser expects ISO 8601 (`YYYY-MM-DDTHH:MM:SSZ`); deserialization failed on every metrics request; changed to `date.toISOString()` +## [0.0.6] 
- 2026-03-14 ### Added - Multi-node cluster support: distribute deployments across a control plane and multiple worker nodes connected via WireGuard private networking @@ -41,49 +29,26 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Database migrations: `nodes` table, `node_id` columns on `deployment_containers` and `deployment_config`, `alarms` table - Alarm and monitoring system: `AlarmService` with support for container restarts, OOM kills, high CPU/memory, outages, and deployment failures; alarm cooldown mechanism to prevent duplicate rapid-fire alerts; integration with notification and job queue systems via `AlarmFiredJob` and `AlarmResolvedJob` - `ContainerHealthMonitor`: periodic health checks on all active containers detecting restart count increases, OOM state changes, and resource threshold breaches -- Encryption at rest for environment variables: all values are now encrypted with AES-256-GCM (via `EncryptionService`) before being stored in the database; existing unencrypted rows are transparently decrypted at read time via an `is_encrypted` compatibility flag added in migration `m20260305_000003`; the `WorkflowPlanner` decrypts values before injecting them into deployment containers +- Encryption at rest for environment variables: all values are now encrypted with AES-256-GCM (via `EncryptionService`) before being stored in the database; existing unencrypted rows are transparently decrypted at read time via an `is_encrypted` compatibility flag; the `WorkflowPlanner` decrypts values before injecting them into deployment containers - Container restart count tracking: container detail API now returns `restart_count` from Docker, surfacing container instability in the UI - Downstream connection keepalive limit (Pingora 0.8.0): connections are closed after 1024 requests to prevent slow memory leaks from long-lived keep-alive connections - Upstream write pending time diagnostics (Pingora 0.8.0): `X-Upstream-Write-Pending` 
response header exposes how long the upstream took to accept the request body; captured in proxy context for observability - Preview environment flag support in environment variable settings UI - -### Changed -- `EnvVarService` (in `temps-environments` and `temps-projects`) now requires `Arc<EncryptionService>` in its constructor; plugin registration injects it from the service registry -- Upgraded Pingora from 0.7.0 to 0.8.0; proxy service now uses `ProxyServiceBuilder` instead of `http_proxy_service()` for explicit `HttpServerOptions` configuration -- Security headers are now disabled by default for new installations; existing installations with saved settings are unaffected -- External service containers (Postgres, Redis, MongoDB, S3/MinIO, RustFS) now bind to `0.0.0.0` instead of `127.0.0.1`, making them reachable from worker nodes via the private network; only affects newly created containers - -### Fixed -- **Duplicate live visitors**: proxy double-decrypted the visitor cookie — `ensure_visitor_session` decrypted the cookie and passed the plaintext UUID to `get_or_create_visitor`, which tried to decrypt it again; the second decryption always failed silently, causing a new visitor record on every returning page load; now passes the raw encrypted cookie directly -- Static deployment visitor duplication: `ensure_visitor_session` was called for every static file request (JS, CSS, images); concurrent first-visit requests without cookies each created separate visitors; now skips visitor creation for static asset paths -- Proxy returned incorrect `Content-Length` for HEAD responses over HTTP/2, causing clients to wait for a body that never arrives; the header is now stripped for HEAD responses -- Upstream connections could silently fail when reusing stale pooled connections (TCP RST); added explicit connection/read/write/idle timeouts and single automatic retry on connection failure -- Deployment lock contention: replaced PostgreSQL advisory lock with a process-level `tokio::Mutex`, 
eliminating cross-process lock conflicts and moving container teardown outside the lock scope -- Docker container names are now used instead of Docker network aliases for cross-node environment variable rewriting, fixing service connectivity on remote worker nodes -- Deployment "marking complete" step could hang for the full 60-second timeout when the job queue was busy: the DB poll fallback (which confirms the route table update via database query) was only checked when the queue receiver timed out, but a steady stream of unrelated queue events prevented the timeout from ever firing; the poll now runs on every loop iteration regardless of queue activity -- Remote environment variables are no longer built when no active worker nodes exist, avoiding unnecessary work in single-node deployments -- **Phantom deployments on node drain/failover**: drain and failover previously called `trigger_pipeline` with no branch/tag/commit, creating broken "preview" deployments with empty git context; now uses smart drain logic that retires containers on the draining node when healthy replicas exist on other nodes, and only triggers a full redeploy (with correct git context from the latest successful deployment) when all replicas are on the affected node - -### Added +- GenAI OTel tracing: collect and visualize AI conversations from OpenTelemetry `gen_ai.*` spans with support for Vercel AI SDK `ai.*` attribute fallbacks; includes conversation view, token usage aggregation, and tool call detail +- Deployment promotion: promote deployments between environments with environment protection settings (required reviewers, branch restrictions) +- On-demand scale-to-zero environments: environments sleep after configurable idle timeout and wake automatically on incoming HTTP requests via proxy integration +- AI usage analytics: per-model token tracking with agent/session context, BYOK vs platform key breakdown +- Vercel AI SDK tracing examples (Node.js) and Python GenAI tracing examples +- AI 
tracing documentation page +- Environment password protection: cookie-based password wall for environments with HMAC-signed cookies, argon2 password hashing, and HTML password form served by the proxy; set via environment settings API with automatic cookie invalidation on password change +- Funnel card step pipeline: funnel list cards now show a horizontal pipeline of steps with completions count and conversion rate per step (e.g., `page_view 1,234 → signup 890 (72%)`) alongside the existing summary metrics - Automatic `CRON_SECRET` injection into deployed containers: the deployment token is now set as `CRON_SECRET` in the container environment on every deployment, and the cron scheduler sends `Authorization: Bearer <token>` when invoking endpoints — no manual configuration needed - Analytics overview drill-down filters for property breakdowns: `filter_country`, `filter_region`, `filter_browser`, and `filter_os` query parameters on the `/events/properties/breakdown` endpoint enable hierarchical navigation (country → region → city, browser → version, OS → version) - Analytics overview charts: Channels, Devices, Languages, Operating Systems, and UTM Campaigns — each with bar visualization and visitor counts - Drill-down navigation in Browsers, Locations, and Operating Systems charts: click a row to see versions (browsers/OS) or regions/cities (locations) with breadcrumb navigation and back button - OpenAPI schema propagation for external plugins: plugins can return an OpenAPI schema during handshake, which Temps merges into the unified API docs with `/x/{plugin_name}/` path prefixing - `utoipa` OpenAPI annotations on all example plugin handlers (SEO Analyzer, Google Indexing, IndexNow, Lighthouse) with typed request/response schemas -- `AGENTS.md` with codebase guidance, critical rules, and a "Feature Development Workflow" checklist requiring documentation updates alongside code changes - `PropertyBreakdownFilters` struct in `temps-analytics-events` for type-safe drill-down 
filter propagation through the service layer - -### Changed -- Cron scheduler now sends `Authorization: Bearer <token>` header alongside `X-Cron-Job: true` when invoking cron endpoints; previously only `X-Cron-Job: true` was sent -- `DatabaseCronConfigService` constructor now requires a `DeploymentTokenService` dependency for retrieving cron secrets -- Locations chart replaced static Country/Region/City tab selector with interactive drill-down: clicking a country shows its regions, clicking a region shows its cities -- Browsers chart now supports click-to-drill into browser versions with back navigation -- `PluginReady` handshake message extended with optional `openapi` field for plugin OpenAPI schemas -- `ExternalPluginProcess` struct extended with `openapi_schema` field -- `ExternalPluginsPlugin` caches OpenAPI schemas at startup for synchronous access during schema merging -- Fixed `clippy::map_flatten` lint in `temps-plugin-sdk` runtime (`map().flatten()` → `and_then()`) - - Server-side domain pagination with search: `list_domains` endpoint now accepts `page`, `page_size`, and `search` query parameters, returning `total` count alongside results; default page size is 20, max 100 - Reusable `DomainSelector` combobox component for searching and selecting domains across the app; uses server-side search with debounce, displays domain status badges, and shows "X of Y" overflow hints - `ProxyLogBatchWriter` for proxy request logging: bounded `mpsc::channel(8192)` with batch INSERT (up to 200 rows per flush, 500ms interval) running on a dedicated OS thread; includes backpressure for HTML responses and graceful shutdown with drain @@ -93,7 +58,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - OpenTelemetry (OTel) ingest and query system (`temps-otel` crate) with OTLP/protobuf support for traces, metrics, and logs; header-based and path-based ingest routes; `tk_` API key and `dt_` deployment token authentication; `OtelRead`/`OtelWrite` 
permissions; TimescaleDB storage with hypertables; OpenAPI-documented query endpoints for traces, spans, metrics, and logs; web UI with filterable trace list, waterfall span visualization, and setup instructions - `deployment_id` field on deployment tokens, allowing OTel ingest to associate telemetry with specific deployments - `protobuf-compiler` installation in CI workflow for `temps-otel` proto compilation -- External plugin system: standalone binaries in `~/.temps/plugins/` are auto-discovered, spawned, and integrated at boot via stdout JSON handshake (manifest + ready) over Unix domain sockets; Temps reverse-proxies `/api/x/{plugin_name}/*` to each plugin and serves `/api/x/plugins` for manifest listing (#19) +- External plugin system: standalone binaries in `~/.temps/plugins/` are auto-discovered, spawned, and integrated at boot via stdout JSON handshake (manifest + ready) over Unix domain sockets; Temps reverse-proxies `/api/x/{plugin_name}/*` to each plugin and serves `/api/x/plugins` for manifest listing - `temps-plugin-sdk` crate for plugin authors: `ExternalPlugin` trait, `main!()` macro, `PluginContext` (direct Postgres access, data dir), `TempsAuth` extractor, and hyper-over-Unix-socket runtime - `temps-external-plugins` crate following the standard `TempsPlugin` pattern with service layer, utoipa-annotated handler, and OpenAPI schema registration - Frontend dynamic plugin integration: sidebar nav entries (platform, settings, project-level), command palette search, and generic `PluginPage` component at `/plugins/:pluginName/*` — all driven by plugin manifests @@ -110,45 +75,65 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 - Auto-generate `secret_key` for MinIO service creation - Analytics seed data utilities (`scripts/seed-data/`) - Web UI build integration via `build.rs` -- Placeholder dist directory for debug builds -- GitHub Actions release workflow for Linux AMD64 +- GitHub Actions release workflow for Linux 
AMD64, macOS AMD64, macOS ARM64, and Docker - Release automation script (`scripts/release.sh`) -- Comprehensive development and release documentation - Resource monitoring tab in project sidebar and monitoring settings page with per-environment CPU, memory, and disk metrics - Browse Data button on linked service cards in the project storage page - `status_code_class` query parameter (1xx/2xx/3xx/4xx/5xx) for proxy log stats endpoints - TimescaleDB compression (7-day) and retention (30-day) policies for `proxy_logs` hypertable - `cargo clippy` pre-commit hook enabled to catch lint issues before CI +- Service clusters: HA PostgreSQL via pg_auto_failover (monitor + primary + N replicas), multi-host connection strings with `target_session_attrs=read-write`, cluster member tracking in `service_members` table +- Remote managed service creation on worker nodes via agent API with auto-assigned ports and Docker volume management +- DNS-based email validation service with SMTP verification +- CLI `temps project create` enhanced with `--repo`, `--branch`, `--directory`, `--preset`, `--connection`, and `--yes` flags for non-interactive CI/scripting usage ### Changed +- Embedded userspace WireGuard via defguard/boringtun: replaced shell-out to `wg` and `ip` CLI with pure Rust implementations (`defguard_wireguard_rs` + `x25519-dalek`); eliminates `wireguard-tools` system package dependency entirely +- `EnvVarService` (in `temps-environments` and `temps-projects`) now requires `Arc<EncryptionService>` in its constructor; plugin registration injects it from the service registry +- Upgraded Pingora from 0.7.0 to 0.8.0; proxy service now uses `ProxyServiceBuilder` instead of `http_proxy_service()` for explicit `HttpServerOptions` configuration +- Security headers are now disabled by default for new installations; existing installations with saved settings are unaffected +- External service containers (Postgres, Redis, MongoDB, S3/MinIO, RustFS) now bind to `0.0.0.0` instead of `127.0.0.1`, making them 
reachable from worker nodes via the private network; only affects newly created containers +- Cron scheduler now sends `Authorization: Bearer <token>` header alongside `X-Cron-Job: true` when invoking cron endpoints; previously only `X-Cron-Job: true` was sent +- `DatabaseCronConfigService` constructor now requires a `DeploymentTokenService` dependency for retrieving cron secrets +- Locations chart replaced static Country/Region/City tab selector with interactive drill-down: clicking a country shows its regions, clicking a region shows its cities +- Browsers chart now supports click-to-drill into browser versions with back navigation +- `PluginReady` handshake message extended with optional `openapi` field for plugin OpenAPI schemas +- `ExternalPluginProcess` struct extended with `openapi_schema` field +- `ExternalPluginsPlugin` caches OpenAPI schemas at startup for synchronous access during schema merging - Domain selection throughout the app now uses the `DomainSelector` combobox instead of plain `<select>` elements
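For context on the funnel card step pipeline entries above, the per-step arithmetic reduces to a conversion rate against the previous step's count. Below is a minimal standalone sketch; the actual Temps UI is TypeScript, so `format_funnel_pipeline` and `with_thousands` are hypothetical helper names, and integer-percent rounding is an assumption.

```rust
/// Hypothetical sketch of the funnel step pipeline described in the changelog:
/// each step shows its completion count and the conversion rate from the
/// previous step, e.g. `page_view 1,234 → signup 890 (72%)`.
fn format_funnel_pipeline(steps: &[(&str, u64)]) -> String {
    let mut parts = Vec::new();
    let mut prev: Option<u64> = None;
    for (name, count) in steps {
        let mut part = format!("{} {}", name, with_thousands(*count));
        if let Some(p) = prev {
            if p > 0 {
                // Conversion rate relative to the previous step (integer percent)
                part.push_str(&format!(" ({}%)", count * 100 / p));
            }
        }
        parts.push(part);
        prev = Some(*count);
    }
    parts.join(" → ")
}

/// Insert thousands separators: 1234 becomes "1,234".
fn with_thousands(n: u64) -> String {
    let digits = n.to_string();
    let mut out = String::new();
    for (i, c) in digits.chars().enumerate() {
        if i > 0 && (digits.len() - i) % 3 == 0 {
            out.push(',');
        }
        out.push(c);
    }
    out
}
```

Called with `&[("page_view", 1234), ("signup", 890)]`, this reproduces the example string from the changelog entry.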
diff --git a/crates/temps-proxy/src/handler/mod.rs b/crates/temps-proxy/src/handler/mod.rs index 7e920c44..f0352207 100644 --- a/crates/temps-proxy/src/handler/mod.rs +++ b/crates/temps-proxy/src/handler/mod.rs @@ -3,5 +3,6 @@ pub mod captcha; #[allow(clippy::module_inception)] pub mod handler; pub mod ip_access_control; +pub mod password_wall; pub mod proxy_logs; pub mod types; diff --git a/crates/temps-proxy/src/handler/password_wall.rs b/crates/temps-proxy/src/handler/password_wall.rs new file mode 100644 index 00000000..4b685ac6 --- /dev/null +++ b/crates/temps-proxy/src/handler/password_wall.rs @@ -0,0 +1,187 @@ +//! Password wall handler for environment password protection. +//! +//! When an environment has password protection enabled, the proxy intercepts +//! requests and shows an HTML password form. After the user enters the correct +//! password, an HMAC-signed cookie is set so subsequent requests pass through. + +use hmac::{Hmac, Mac}; +use sha2::Sha256; + +/// Cookie name for password-protected environments +pub const PASSWORD_COOKIE_NAME: &str = "_temps_pw"; + +/// Cookie max age (7 days) +const COOKIE_MAX_AGE_SECS: u64 = 7 * 24 * 60 * 60; + +/// HTML template for the password form +const PASSWORD_FORM_HTML: &str = include_str!("../../password_wall/password_form.html"); + +type HmacSha256 = Hmac<Sha256>; + +/// Generate the password form HTML for a given redirect path. 
+pub fn generate_password_form_html( + redirect_path: &str, + show_error: bool, + project_name: &str, + environment_name: &str, +) -> String { + PASSWORD_FORM_HTML + .replace("{{REDIRECT_PATH}}", redirect_path) + .replace("{{PROJECT_NAME}}", &html_escape(project_name)) + .replace("{{ENVIRONMENT_NAME}}", &html_escape(environment_name)) + .replace( + "{{ERROR_DISPLAY}}", + if show_error { "flex" } else { "none" }, + ) + .replace( + "{{ERROR_INPUT_CLASS}}", + if show_error { "input-error" } else { "" }, + ) +} + +fn html_escape(s: &str) -> String { + s.replace('&', "&amp;") + .replace('<', "&lt;") + .replace('>', "&gt;") + .replace('"', "&quot;") +} + +/// Create an HMAC-signed cookie value for a given environment ID. +/// +/// The cookie value is `env_id:signature` where signature = HMAC-SHA256(env_id, secret). +/// The secret is derived from the password hash itself, so changing the password +/// invalidates all existing cookies. +pub fn create_cookie_value(environment_id: i32, password_hash: &str) -> String { + let payload = environment_id.to_string(); + let signature = compute_hmac(&payload, password_hash); + format!("{}:{}", payload, signature) +} + +/// Validate an HMAC-signed cookie value for a given environment ID. +pub fn validate_cookie(cookie_value: &str, environment_id: i32, password_hash: &str) -> bool { + let parts: Vec<&str> = cookie_value.splitn(2, ':').collect(); + if parts.len() != 2 { + return false; + } + + let payload = parts[0]; + let provided_signature = parts[1]; + + // Verify the environment ID matches + if payload != environment_id.to_string() { + return false; + } + + // Verify HMAC signature + let expected_signature = compute_hmac(payload, password_hash); + constant_time_eq(provided_signature.as_bytes(), expected_signature.as_bytes()) +} + +/// Verify a plaintext password against an argon2 hash. 
+pub fn verify_password(password: &str, hash: &str) -> bool { + use argon2::{Argon2, PasswordHash, PasswordVerifier}; + let Ok(parsed_hash) = PasswordHash::new(hash) else { + return false; + }; + Argon2::default() + .verify_password(password.as_bytes(), &parsed_hash) + .is_ok() +} + +/// Build the Set-Cookie header value for a password protection cookie. +pub fn build_set_cookie_header(environment_id: i32, password_hash: &str, host: &str) -> String { + let value = create_cookie_value(environment_id, password_hash); + // Strip port from host for the domain + let domain = host.split(':').next().unwrap_or(host); + format!( + "{}={}; Path=/; Max-Age={}; HttpOnly; SameSite=Lax; Domain={}", + PASSWORD_COOKIE_NAME, value, COOKIE_MAX_AGE_SECS, domain + ) +} + +fn compute_hmac(data: &str, key: &str) -> String { + let mut mac = + HmacSha256::new_from_slice(key.as_bytes()).expect("HMAC can take key of any size"); + mac.update(data.as_bytes()); + hex::encode(mac.finalize().into_bytes()) +} + +/// Constant-time comparison to prevent timing attacks. 
+fn constant_time_eq(a: &[u8], b: &[u8]) -> bool { + if a.len() != b.len() { + return false; + } + a.iter() + .zip(b.iter()) + .fold(0u8, |acc, (x, y)| acc | (x ^ y)) + == 0 +} + +#[cfg(test)] +mod tests { + use super::*; + + const TEST_ENV_ID: i32 = 42; + const TEST_HASH: &str = "$argon2id$v=19$m=19456,t=2,p=1$test_salt$test_hash_value"; + + #[test] + fn test_create_and_validate_cookie() { + let value = create_cookie_value(TEST_ENV_ID, TEST_HASH); + assert!(validate_cookie(&value, TEST_ENV_ID, TEST_HASH)); + } + + #[test] + fn test_validate_cookie_wrong_env_id() { + let value = create_cookie_value(TEST_ENV_ID, TEST_HASH); + assert!(!validate_cookie(&value, 99, TEST_HASH)); + } + + #[test] + fn test_validate_cookie_wrong_hash() { + let value = create_cookie_value(TEST_ENV_ID, TEST_HASH); + assert!(!validate_cookie(&value, TEST_ENV_ID, "different_hash")); + } + + #[test] + fn test_validate_cookie_tampered() { + assert!(!validate_cookie("42:bad_signature", TEST_ENV_ID, TEST_HASH)); + } + + #[test] + fn test_validate_cookie_malformed() { + assert!(!validate_cookie("garbage", TEST_ENV_ID, TEST_HASH)); + assert!(!validate_cookie("", TEST_ENV_ID, TEST_HASH)); + } + + #[test] + fn test_generate_password_form_html() { + let html = generate_password_form_html("/some/path", false, "My Project", "staging"); + assert!(html.contains("/_temps/password-verify")); + assert!(html.contains("/some/path")); + assert!(html.contains("My Project")); + assert!(html.contains("staging")); + assert!(html.contains("display: none")); + } + + #[test] + fn test_generate_password_form_html_with_error() { + let html = generate_password_form_html("/", true, "App", "production"); + assert!(html.contains("display: flex")); + assert!(html.contains("input-error")); + } + + #[test] + fn test_generate_password_form_html_escapes_html() { + let html = generate_password_form_html("/", false, "<script>", "test"); + assert!(!html.contains("<script>")); + assert!(html.contains("&lt;script&gt;")); + } + + #[test] + fn 
test_build_set_cookie_header() { + let header = build_set_cookie_header(TEST_ENV_ID, TEST_HASH, "example.com:443"); + assert!(header.starts_with("_temps_pw=")); + assert!(header.contains("Domain=example.com")); + assert!(header.contains("HttpOnly")); + } +} diff --git a/crates/temps-proxy/src/handler/proxy_logs.rs b/crates/temps-proxy/src/handler/proxy_logs.rs index bc69f9d1..0c4b3e35 100644 --- a/crates/temps-proxy/src/handler/proxy_logs.rs +++ b/crates/temps-proxy/src/handler/proxy_logs.rs @@ -10,7 +10,8 @@ use temps_core::{DateTime, UtcDateTime}; use utoipa::{IntoParams, ToSchema}; use crate::service::proxy_log_service::{ - ProxyLogResponse, ProxyLogService, StatsFilters, TimeBucketStats, TodayStatsResponse, + ProjectHealthSummary, ProxyLogResponse, ProxyLogService, StatsFilters, TimeBucketStats, + TodayStatsResponse, }; /// Query parameters for listing proxy logs @@ -428,6 +429,72 @@ async fn get_time_bucket_stats( })) } +/// Query parameters for batch project health summary +#[derive(Debug, Deserialize, IntoParams)] +pub struct ProjectsHealthQuery { + /// Comma-separated list of project IDs + pub project_ids: String, +} + +/// Batch health summary response +#[derive(Debug, Serialize, ToSchema)] +pub struct ProjectsHealthResponse { + /// Health summaries keyed by project ID + pub projects: std::collections::HashMap<String, ProjectHealthSummary>, +} + +/// Get health summaries for multiple projects (last 1 hour) +#[utoipa::path( + get, + path = "/proxy-logs/stats/projects-health", + params(ProjectsHealthQuery), + responses( + (status = 200, description = "Health summaries per project", body = ProjectsHealthResponse), + (status = 400, description = "Invalid parameters"), + (status = 500, description = "Internal server error") + ), + tag = "Proxy Logs" +)] +async fn get_projects_health( + State(service): State<Arc<ProxyLogService>>, + Query(query): Query<ProjectsHealthQuery>, +) -> Result<impl IntoResponse, (StatusCode, String)> { + let project_ids: Vec<i32> = query + .project_ids + .split(',') + .filter_map(|s| s.trim().parse().ok()) + .collect(); + + if project_ids.is_empty() { 
+ return Err(( + StatusCode::BAD_REQUEST, + "project_ids must contain at least one valid ID".to_string(), + )); + } + + if project_ids.len() > 100 { + return Err(( + StatusCode::BAD_REQUEST, + "Maximum 100 project IDs allowed".to_string(), + )); + } + + let end_time = chrono::Utc::now(); + let start_time = end_time - chrono::Duration::hours(1); + + let summaries = service + .get_projects_health_summary(&project_ids, start_time, end_time) + .await + .map_err(|e| (StatusCode::INTERNAL_SERVER_ERROR, e.to_string()))?; + + let projects: std::collections::HashMap<String, ProjectHealthSummary> = summaries + .into_iter() + .map(|s| (s.project_id.to_string(), s)) + .collect(); + + Ok(Json(ProjectsHealthResponse { projects })) +} + /// Create router for proxy log handlers pub fn create_routes() -> axum::Router<Arc<ProxyLogService>> { use axum::routing::get; @@ -441,6 +508,10 @@ pub fn create_routes() -> axum::Router<Arc<ProxyLogService>> { ) .route("/proxy-logs/stats/today", get(get_today_stats)) .route("/proxy-logs/stats/time-buckets", get(get_time_bucket_stats)) + .route( + "/proxy-logs/stats/projects-health", + get(get_projects_health), + ) } /// Get OpenAPI documentation for proxy logs handlers @@ -455,6 +526,7 @@ pub fn openapi() -> utoipa::openapi::OpenApi { get_proxy_log_by_request_id, get_today_stats, get_time_bucket_stats, + get_projects_health, ), components(schemas( ProxyLogResponse, @@ -463,6 +535,8 @@ pub fn openapi() -> utoipa::openapi::OpenApi { TimeBucketStatsResponse, TimeBucketStats, StatsFilters, + ProjectHealthSummary, + ProjectsHealthResponse, )) )] struct ApiDoc; diff --git a/crates/temps-proxy/src/on_demand.rs b/crates/temps-proxy/src/on_demand.rs index 29aef449..7a01f9cf 100644 --- a/crates/temps-proxy/src/on_demand.rs +++ b/crates/temps-proxy/src/on_demand.rs @@ -16,6 +16,7 @@ use sea_orm::{ use std::sync::atomic::{AtomicBool, AtomicU64, Ordering}; use std::sync::Arc; use std::time::{Duration, Instant}; +use temps_core::OnDemandWaker; use temps_entities::{deployment_containers, environments}; use thiserror::Error; use 
tokio::sync::Notify; @@ -98,6 +99,10 @@ pub struct OnDemandManager { /// Container lifecycle operations (injected). container_lifecycle: Arc, + + /// Notified whenever the route table finishes a reload. + /// Used by wake-on-request to know when routes are available after waking. + route_reloaded: Notify, } #[derive(Clone, Debug)] @@ -120,6 +125,7 @@ impl OnDemandManager { sleeping_by_domain: DashMap::new(), db, container_lifecycle, + route_reloaded: Notify::new(), } } @@ -155,6 +161,24 @@ impl OnDemandManager { self.sleeping_by_domain.clear(); } + /// Signal that the route table has been reloaded. + /// Called from the route-reload callback in server.rs. + pub fn notify_route_reloaded(&self) { + self.route_reloaded.notify_waiters(); + } + + /// Wait for the next route table reload, with a timeout. + /// Returns Ok(()) if a reload happened, Err on timeout. + pub async fn wait_for_route_reload(&self, timeout: Duration) -> Result<(), OnDemandError> { + match tokio::time::timeout(timeout, self.route_reloaded.notified()).await { + Ok(_) => Ok(()), + Err(_) => { + warn!("Timed out waiting for route reload after wake"); + Ok(()) // Don't fail — the route will appear on the next reload + } + } + } + /// Update on-demand configs from loaded environments. /// Called after route table reload. #[allow(dead_code)] // will be called when wired to route table reload @@ -199,17 +223,23 @@ impl OnDemandManager { // ── Sleep (Idle -> Sleeping) ── /// Run one sweep iteration. Checks all on-demand environments for idle timeout. + /// Also persists last_activity_at to the database for UI display. /// Returns IDs of environments that were put to sleep. 
pub async fn sweep_idle_environments(&self) -> Vec<i32> { let now = self.current_epoch_secs(); let mut slept = Vec::new(); + // Collect activity updates to persist to DB + let mut activity_updates: Vec<(i32, u64)> = Vec::new(); + for entry in self.configs.iter() { let config = entry.value(); let env_id = config.environment_id; if let Some(last) = self.last_activity.get(&env_id) { let last_secs = last.value().load(Ordering::Relaxed); + activity_updates.push((env_id, last_secs)); + let idle_secs = now.saturating_sub(last_secs); if idle_secs >= config.idle_timeout_seconds as u64 { @@ -240,9 +270,41 @@ impl OnDemandManager { } } + // Persist last_activity_at to DB (best-effort, doesn't block sweep) + self.persist_activity_timestamps(&activity_updates).await; + slept } + /// Persist in-memory activity timestamps to the database for UI display. + /// Issues one best-effort UPDATE per environment; failures are logged but don't affect sweep. + async fn persist_activity_timestamps(&self, updates: &[(i32, u64)]) { + if updates.is_empty() { + return; + } + + for (env_id, epoch_secs) in updates { + let timestamp = chrono::DateTime::from_timestamp(*epoch_secs as i64, 0); + if let Some(ts) = timestamp { + if let Err(e) = self + .db + .execute(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Postgres, + "UPDATE environments SET last_activity_at = $1 WHERE id = $2 AND sleeping = false", + [ts.into(), (*env_id).into()], + )) + .await + { + debug!( + environment_id = env_id, + error = %e, + "Failed to persist last_activity_at" + ); + } + } + } + } + /// Put an environment to sleep: stop all containers, set sleeping=true. + /// Returns Ok(true) if this call won the race and performed the sleep. + /// Returns Ok(false) if the environment was already sleeping or has no deployment. 
@@ -278,25 +340,58 @@ impl OnDemandManager { .all(self.db.as_ref()) .await?; - // Stop all containers in parallel + // Stop all containers in parallel, tracking failures let stop_futures: Vec<_> = containers .iter() .map(|c| { let container_id = c.container_id.clone(); let lifecycle = Arc::clone(&self.container_lifecycle); async move { - if let Err(e) = lifecycle.stop_container(&container_id).await { - warn!( - container_id = %container_id, - error = %e, - "Failed to stop container during sleep" - ); + match lifecycle.stop_container(&container_id).await { + Ok(()) => Ok(container_id), + Err(e) => { + warn!( + container_id = %container_id, + error = %e, + "Failed to stop container during sleep" + ); + Err((container_id, e)) + } } } }) .collect(); - futures::future::join_all(stop_futures).await; + let results = futures::future::join_all(stop_futures).await; + + let failed: Vec<_> = results.iter().filter(|r| r.is_err()).collect(); + if !failed.is_empty() { + // Some containers failed to stop — revert sleeping state to avoid + // inconsistency where DB says sleeping but containers are still running + error!( + environment_id = environment_id, + failed_count = failed.len(), + total = containers.len(), + "Failed to stop some containers during sleep, reverting sleeping state" + ); + let _ = self + .db + .execute(Statement::from_sql_and_values( + sea_orm::DatabaseBackend::Postgres, + "UPDATE environments SET sleeping = false WHERE id = $1", + [environment_id.into()], + )) + .await; + self.notify_route_change().await; + return Err(OnDemandError::ContainerOperation { + container_id: "multiple".to_string(), + reason: format!( + "Failed to stop {}/{} containers during sleep", + failed.len(), + containers.len() + ), + }); + } info!( environment_id = environment_id, @@ -547,6 +642,17 @@ impl OnDemandManager { // Record activity so we don't immediately sleep again self.record_activity(environment_id); + // Persist last_activity_at to DB so the UI shows it immediately after wake + 
let now_ts = chrono::Utc::now();
+        let _ = self
+            .db
+            .execute(Statement::from_sql_and_values(
+                sea_orm::DatabaseBackend::Postgres,
+                "UPDATE environments SET last_activity_at = $1 WHERE id = $2",
+                [now_ts.into(), environment_id.into()],
+            ))
+            .await;
+
         info!(
             environment_id = environment_id,
             containers_started = started.len(),
@@ -612,6 +718,30 @@ impl OnDemandManager {
     }
 }
 
+/// Bridge implementation so the proxy's OnDemandManager can be injected into
+/// the environments handler AppState via the plugin system.
+#[async_trait]
+impl OnDemandWaker for OnDemandManager {
+    async fn wake_environment(
+        &self,
+        environment_id: i32,
+        wake_timeout_seconds: i32,
+    ) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
+        self.wake_environment(environment_id, wake_timeout_seconds)
+            .await
+            .map_err(|e| Box::new(e) as Box<dyn std::error::Error + Send + Sync>)
+    }
+
+    async fn sleep_environment(
+        &self,
+        environment_id: i32,
+    ) -> Result<bool, Box<dyn std::error::Error + Send + Sync>> {
+        self.sleep_environment(environment_id)
+            .await
+            .map_err(|e| Box::new(e) as Box<dyn std::error::Error + Send + Sync>)
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -627,6 +757,16 @@ mod tests {
         healthy: bool,
         fail_start: bool,
         fail_health: bool,
+        /// Containers that should fail on start (selective failure)
+        fail_start_ids: Vec<String>,
+        /// Containers that should fail on stop
+        fail_stop_ids: Vec<String>,
+        /// Number of health checks before becoming healthy (0 = immediate)
+        health_check_delay: u32,
+        /// Current health check count per container
+        health_check_counts: std::collections::HashMap<String, u32>,
+        /// If true, health checks always return false (never healthy)
+        never_healthy: bool,
     }
 
     struct MockLifecycle {
@@ -652,6 +792,46 @@ mod tests {
         }
     }
 
+    fn with_selective_start_failures(fail_ids: Vec<String>) -> Self {
+        Self {
+            state: Mutex::new(MockContainerState {
+                healthy: true,
+                fail_start_ids: fail_ids,
+                ..Default::default()
+            }),
+        }
+    }
+
+    fn with_selective_stop_failures(fail_ids: Vec<String>) -> Self {
+        Self {
+            state: Mutex::new(MockContainerState {
+                healthy: true,
+                fail_stop_ids: fail_ids,
+                ..Default::default()
+            }),
+        }
+    }
+
+    fn 
with_never_healthy() -> Self {
+        Self {
+            state: Mutex::new(MockContainerState {
+                healthy: false,
+                never_healthy: true,
+                ..Default::default()
+            }),
+        }
+    }
+
+    fn with_delayed_health(delay_checks: u32) -> Self {
+        Self {
+            state: Mutex::new(MockContainerState {
+                healthy: false,
+                health_check_delay: delay_checks,
+                ..Default::default()
+            }),
+        }
+    }
+
     #[allow(dead_code)]
     fn with_unhealthy() -> Self {
         Self {
@@ -682,11 +862,25 @@ mod tests {
                     reason: "Mock start failure".to_string(),
                 });
             }
+            if state.fail_start_ids.contains(&container_id.to_string()) {
+                return Err(OnDemandError::ContainerOperation {
+                    container_id: container_id.to_string(),
+                    reason: format!("Mock selective start failure for {}", container_id),
+                });
+            }
             state.started.push(container_id.to_string());
             Ok(())
         }
 
         async fn stop_container(&self, container_id: &str) -> Result<(), OnDemandError> {
+            let state = self.state.lock().unwrap();
+            if state.fail_stop_ids.contains(&container_id.to_string()) {
+                return Err(OnDemandError::ContainerOperation {
+                    container_id: container_id.to_string(),
+                    reason: format!("Mock selective stop failure for {}", container_id),
+                });
+            }
+            drop(state);
             self.state
                 .lock()
                 .unwrap()
@@ -696,13 +890,27 @@ mod tests {
         }
 
         async fn is_container_healthy(&self, _container_id: &str) -> Result<bool, OnDemandError> {
-            let state = self.state.lock().unwrap();
+            let mut state = self.state.lock().unwrap();
             if state.fail_health {
                 return Err(OnDemandError::ContainerOperation {
                     container_id: _container_id.to_string(),
                     reason: "Mock health check failure".to_string(),
                 });
             }
+            if state.never_healthy {
+                return Ok(false);
+            }
+            if state.health_check_delay > 0 {
+                let count = state
+                    .health_check_counts
+                    .entry(_container_id.to_string())
+                    .or_insert(0);
+                *count += 1;
+                if *count >= state.health_check_delay {
+                    return Ok(true);
+                }
+                return Ok(false);
+            }
             Ok(state.healthy)
         }
     }
@@ -999,6 +1207,7 @@ mod tests {
             is_preview: false,
             protected: false,
             sleeping: false,
+            last_activity_at: None,
         }]])
         // containers query
.append_query_results(vec![vec![deployment_containers::Model {
@@ -1064,6 +1273,7 @@ mod tests {
             is_preview: false,
             protected: false,
             sleeping: false,
+            last_activity_at: None,
         }]])
         // containers
         .append_query_results(vec![vec![deployment_containers::Model {
@@ -1126,6 +1336,7 @@ mod tests {
             is_preview: false,
             protected: false,
             sleeping: false,
+            last_activity_at: None,
         }]])
         .into_connection();
@@ -1200,6 +1411,7 @@ mod tests {
             is_preview: false,
             protected: false,
             sleeping: false,
+            last_activity_at: None,
         }]])
         // containers (3 replicas)
         .append_query_results(vec![vec![
@@ -1555,6 +1767,7 @@ mod tests {
             is_preview: false,
             protected: false,
             sleeping: true,
+            last_activity_at: None,
         };
 
         let container = deployment_containers::Model {
@@ -1650,6 +1863,7 @@ mod tests {
             is_preview: false,
             protected: false,
             sleeping: false,
+            last_activity_at: None,
         };
 
         let container1 = deployment_containers::Model {
@@ -1749,6 +1963,7 @@ mod tests {
             is_preview: false,
             protected: false,
             sleeping: false,
+            last_activity_at: None,
         };
 
         let container = deployment_containers::Model {
@@ -1807,4 +2022,1418 @@ mod tests {
         // Container should be stopped
         assert_eq!(lifecycle.stopped_containers(), vec!["idle-container"]);
     }
+
+    // ── Helper: build a standard env model ──
+
+    fn make_env_model(
+        id: i32,
+        project_id: i32,
+        deployment_id: Option<i32>,
+        sleeping: bool,
+    ) -> environments::Model {
+        environments::Model {
+            id,
+            name: format!("env-{}", id),
+            slug: format!("env-{}", id),
+            subdomain: format!("env-{}", id),
+            last_deployment: None,
+            host: "".to_string(),
+            upstreams: temps_entities::upstream_config::UpstreamList::new(),
+            created_at: chrono::Utc::now(),
+            updated_at: chrono::Utc::now(),
+            project_id,
+            current_deployment_id: deployment_id,
+            branch: None,
+            deleted_at: None,
+            deployment_config: Some(temps_entities::deployment_config::DeploymentConfig {
+                on_demand: true,
+                idle_timeout_seconds: 300,
+                wake_timeout_seconds: 30,
+                ..Default::default()
+            }),
+            is_preview: false,
+
protected: false,
+            sleeping,
+            last_activity_at: None,
+        }
+    }
+
+    fn make_container(
+        id: i32,
+        deployment_id: i32,
+        container_id: &str,
+        node_id: Option<i32>,
+    ) -> deployment_containers::Model {
+        deployment_containers::Model {
+            id,
+            deployment_id,
+            container_id: container_id.to_string(),
+            container_name: format!("app-{}", container_id),
+            container_port: 3000,
+            host_port: Some(32000 + id),
+            image_name: None,
+            status: Some("running".to_string()),
+            created_at: chrono::Utc::now(),
+            deployed_at: chrono::Utc::now(),
+            ready_at: None,
+            deleted_at: None,
+            node_id,
+        }
+    }
+
+    // ══════════════════════════════════════════════════════════════════════════
+    // PARTIAL CONTAINER START FAILURE + ROLLBACK
+    // ══════════════════════════════════════════════════════════════════════════
+
+    #[tokio::test]
+    async fn test_wake_partial_start_failure_stops_started_containers() {
+        // 3 containers: c1 and c3 start OK, c2 fails
+        // Verify: c1 and c3 are stopped (rolled back), DB reverted to sleeping
+        let db = MockDatabase::new(DatabaseBackend::Postgres)
+            // CAS UPDATE sleeping=false -> 1 row
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 1,
+            }])
+            // find env
+            .append_query_results(vec![vec![make_env_model(1, 10, Some(100), true)]])
+            // containers
+            .append_query_results(vec![vec![
+                make_container(1, 100, "c1", None),
+                make_container(2, 100, "c2", None),
+                make_container(3, 100, "c3", None),
+            ]])
+            // Revert UPDATE sleeping=true
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 1,
+            }])
+            .into_connection();
+
+        let lifecycle = Arc::new(MockLifecycle::with_selective_start_failures(vec![
+            "c2".to_string()
+        ]));
+        let manager = OnDemandManager::new(
+            Arc::new(db),
+            Arc::clone(&lifecycle) as Arc,
+        );
+
+        let result = manager.wake_environment(1, 30).await;
+        assert!(result.is_err());
+        assert!(matches!(
+            result.unwrap_err(),
+            OnDemandError::ContainerOperation { .. 
} + )); + + // Successfully started containers should have been stopped as rollback + let stopped = lifecycle.stopped_containers(); + let started = lifecycle.started_containers(); + // c1 and c3 started (c2 failed), so c1 and c3 should be in stopped + for c in &started { + assert!( + stopped.contains(c), + "Started container {} should have been stopped in rollback", + c + ); + } + // c2 should NOT be in started + assert!( + !started.contains(&"c2".to_string()), + "c2 should not have been started" + ); + } + + // ══════════════════════════════════════════════════════════════════════════ + // HEALTH CHECK TIMEOUT + ROLLBACK + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_wake_health_timeout_stops_containers_and_reverts() { + // Container starts but never becomes healthy → timeout → rollback + let db = MockDatabase::new(DatabaseBackend::Postgres) + // CAS UPDATE sleeping=false -> 1 row + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // find env + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), true)]]) + // containers + .append_query_results(vec![vec![make_container(1, 100, "slow-container", None)]]) + // Revert UPDATE sleeping=true + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::with_never_healthy()); + let manager = OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + ); + + // Use very short timeout (1s) so test doesn't hang + let result = manager.wake_environment(1, 1).await; + assert!(result.is_err()); + assert!( + matches!( + result.unwrap_err(), + OnDemandError::WakeTimeout { + environment_id: 1, + timeout_secs: 1 + } + ), + "Should be WakeTimeout error" + ); + + // Container was started then stopped on rollback + assert_eq!(lifecycle.started_containers(), vec!["slow-container"]); + assert!( + lifecycle + 
.stopped_containers() + .contains(&"slow-container".to_string()), + "Container should be stopped after health timeout rollback" + ); + } + + // ══════════════════════════════════════════════════════════════════════════ + // DELAYED HEALTH CHECK SUCCESS (becomes healthy after N polls) + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_wake_delayed_health_succeeds() { + // Container takes 2 health checks to become healthy, but within timeout + let db = MockDatabase::new(DatabaseBackend::Postgres) + // CAS UPDATE sleeping=false -> 1 row + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // find env + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), true)]]) + // containers + .append_query_results(vec![vec![make_container(1, 100, "delayed-c", None)]]) + // NOTIFY + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + // Healthy after 2 checks (2 * 500ms = 1s, well within 30s timeout) + let lifecycle = Arc::new(MockLifecycle::with_delayed_health(2)); + let manager = OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + ); + + let result = manager.wake_environment(1, 30).await; + assert!( + result.is_ok(), + "Wake should succeed after delayed health: {:?}", + result.err() + ); + assert_eq!(lifecycle.started_containers(), vec!["delayed-c"]); + // No containers stopped (no rollback) + assert!(lifecycle.stopped_containers().is_empty()); + } + + // ══════════════════════════════════════════════════════════════════════════ + // SLEEP WITH CONTAINER STOP FAILURE → REVERT + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_sleep_stop_failure_reverts_db_state() { + // 2 containers, second fails to stop → revert sleeping=false + let db = MockDatabase::new(DatabaseBackend::Postgres) + // CAS UPDATE sleeping=true -> 1 
row + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // find env + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + // containers + .append_query_results(vec![vec![ + make_container(1, 100, "ok-stop", None), + make_container(2, 100, "fail-stop", None), + ]]) + // Revert UPDATE sleeping=false + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // NOTIFY after revert + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::with_selective_stop_failures(vec![ + "fail-stop".to_string(), + ])); + let manager = OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + ); + + let result = manager.sleep_environment(1).await; + assert!(result.is_err()); + assert!(matches!( + result.unwrap_err(), + OnDemandError::ContainerOperation { .. } + )); + } + + // ══════════════════════════════════════════════════════════════════════════ + // CONCURRENT WAKE: TRUE CONCURRENCY TEST + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_wake_concurrent_only_one_wakes() { + // Simulate 5 concurrent wake requests. Only the first should perform + // the DB CAS and start containers. Others should wait and return Ok. 
+ let db = MockDatabase::new(DatabaseBackend::Postgres) + // CAS UPDATE sleeping=false -> 1 row (first waker wins) + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // find env + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + // containers + .append_query_results(vec![vec![make_container(1, 100, "concurrent-c", None)]]) + // NOTIFY + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + // find_by_id for waiters checking env status (up to 4 waiters) + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + let mut handles = Vec::new(); + for _ in 0..5 { + let m = Arc::clone(&manager); + handles.push(tokio::spawn(async move { m.wake_environment(1, 30).await })); + } + + let results: Vec<_> = futures::future::join_all(handles).await; + let ok_count = results + .iter() + .filter(|r| r.as_ref().map(|r| r.is_ok()).unwrap_or(false)) + .count(); + + // All should succeed (first wakes, rest wait and find env awake) + assert!( + ok_count >= 1, + "At least one concurrent wake should succeed, got {} successes out of {}", + ok_count, + results.len() + ); + + // Container should be started exactly once + assert_eq!( + lifecycle.started_containers().len(), + 1, + "Container should only be started once despite concurrent requests" + ); + } + + // ══════════════════════════════════════════════════════════════════════════ + // SWEEP: MIXED IDLE AND ACTIVE ENVIRONMENTS + // ══════════════════════════════════════════════════════════════════════════ + + 
#[tokio::test] + async fn test_sweep_only_sleeps_idle_not_active() { + // env 1: idle → should sleep + // env 2: active (just now) → should NOT sleep + // Uses only 1 idle env to avoid DashMap iteration order issues with MockDatabase + let db = MockDatabase::new(DatabaseBackend::Postgres) + // env 1 sleep: + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + .append_query_results(vec![vec![make_container(1, 100, "c1", None)]]) + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + manager.register_on_demand_environment(1, 60, 30); + manager.register_on_demand_environment(2, 60, 30); + + let old_time = manager.current_epoch_secs() - 120; + // env 1: idle for 120s (past 60s timeout) + manager.last_activity.insert(1, AtomicU64::new(old_time)); + // env 2: active just now + manager.record_activity(2); + + let slept = manager.sweep_idle_environments().await; + assert!(slept.contains(&1), "Idle env 1 should sleep"); + assert!(!slept.contains(&2), "Active env 2 should NOT sleep"); + assert_eq!(slept.len(), 1); + + assert_eq!(lifecycle.stopped_containers(), vec!["c1"]); + } + + // ══════════════════════════════════════════════════════════════════════════ + // SWEEP CONTINUES AFTER ONE ENVIRONMENT FAILS + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_sweep_does_not_crash_on_sleep_failure() { + // Single env whose container stop fails → sweep should return empty + // (not panic or crash), proving it handles errors gracefully + let db = MockDatabase::new(DatabaseBackend::Postgres) + // CAS succeeds + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 
1,
+            }])
+            .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]])
+            .append_query_results(vec![vec![make_container(1, 100, "fail-c", None)]])
+            // revert sleeping=false
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 1,
+            }])
+            // NOTIFY after revert
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 0,
+            }])
+            .into_connection();
+
+        let lifecycle = Arc::new(MockLifecycle::with_selective_stop_failures(vec![
+            "fail-c".to_string()
+        ]));
+        let manager = Arc::new(OnDemandManager::new(
+            Arc::new(db),
+            Arc::clone(&lifecycle) as Arc,
+        ));
+
+        manager.register_on_demand_environment(1, 60, 30);
+        let old_time = manager.current_epoch_secs() - 120;
+        manager.last_activity.insert(1, AtomicU64::new(old_time));
+
+        let slept = manager.sweep_idle_environments().await;
+        // Failed to sleep → not in slept list, but no panic
+        assert!(!slept.contains(&1), "env 1 sleep should have failed");
+        assert!(slept.is_empty());
+    }
+
+    // ══════════════════════════════════════════════════════════════════════════
+    // WAKE: ENVIRONMENT NOT FOUND AFTER CAS
+    // ══════════════════════════════════════════════════════════════════════════
+
+    #[tokio::test]
+    async fn test_wake_env_deleted_between_cas_and_load() {
+        // CAS succeeds (1 row), but find_by_id returns nothing (deleted concurrently)
+        let db = MockDatabase::new(DatabaseBackend::Postgres)
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 1,
+            }])
+            // Empty result for find_by_id
+            .append_query_results(vec![Vec::<environments::Model>::new()])
+            .into_connection();
+
+        let lifecycle = Arc::new(MockLifecycle::new());
+        let manager = OnDemandManager::new(Arc::new(db), lifecycle);
+
+        let result = manager.wake_environment(1, 30).await;
+        assert!(result.is_err());
+        assert!(matches!(
+            result.unwrap_err(),
+            OnDemandError::NotFound { environment_id: 1 }
+        ));
+    }
+
+    // 
══════════════════════════════════════════════════════════════════════════
+    // SLEEP: ENV NOT FOUND AFTER CAS
+    // ══════════════════════════════════════════════════════════════════════════
+
+    #[tokio::test]
+    async fn test_sleep_env_deleted_between_cas_and_load() {
+        let db = MockDatabase::new(DatabaseBackend::Postgres)
+            // CAS succeeds
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 1,
+            }])
+            // Empty result for find_by_id
+            .append_query_results(vec![Vec::<environments::Model>::new()])
+            .into_connection();
+
+        let lifecycle = Arc::new(MockLifecycle::new());
+        let manager = OnDemandManager::new(Arc::new(db), lifecycle);
+
+        let result = manager.sleep_environment(1).await;
+        assert!(result.is_err());
+        assert!(matches!(
+            result.unwrap_err(),
+            OnDemandError::NotFound { environment_id: 1 }
+        ));
+    }
+
+    // ══════════════════════════════════════════════════════════════════════════
+    // SLEEP: ENV WITH NO CURRENT DEPLOYMENT
+    // ══════════════════════════════════════════════════════════════════════════
+
+    #[tokio::test]
+    async fn test_sleep_env_no_deployment_returns_false() {
+        let db = MockDatabase::new(DatabaseBackend::Postgres)
+            // CAS succeeds
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 1,
+            }])
+            // env with no deployment
+            .append_query_results(vec![vec![make_env_model(1, 10, None, false)]])
+            .into_connection();
+
+        let lifecycle = Arc::new(MockLifecycle::new());
+        let manager = OnDemandManager::new(Arc::new(db), lifecycle);
+
+        let result = manager.sleep_environment(1).await.unwrap();
+        assert!(!result, "Should return false when env has no deployment");
+    }
+
+    // ══════════════════════════════════════════════════════════════════════════
+    // WAKE: NO CONTAINERS FOUND
+    // ══════════════════════════════════════════════════════════════════════════
+
+    #[tokio::test]
+    async fn test_wake_no_containers_succeeds_gracefully() {
+        let db = MockDatabase::new(DatabaseBackend::Postgres)
+            // CAS UPDATE -> 1 
row
+            .append_exec_results(vec![MockExecResult {
+                last_insert_id: 0,
+                rows_affected: 1,
+            }])
+            // find env
+            .append_query_results(vec![vec![make_env_model(1, 10, Some(100), true)]])
+            // empty containers
+            .append_query_results(vec![Vec::<deployment_containers::Model>::new()])
+            .into_connection();
+
+        let lifecycle = Arc::new(MockLifecycle::new());
+        let manager = OnDemandManager::new(Arc::new(db), lifecycle);
+
+        let result = manager.wake_environment(1, 30).await;
+        assert!(
+            result.is_ok(),
+            "Wake with no containers should succeed gracefully"
+        );
+    }
+
+    // ══════════════════════════════════════════════════════════════════════════
+    // UPDATE_CONFIGS REPLACES OLD AND INITIALIZES ACTIVITY
+    // ══════════════════════════════════════════════════════════════════════════
+
+    #[test]
+    fn test_update_configs_replaces_and_initializes() {
+        let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection();
+        let lifecycle = Arc::new(MockLifecycle::new());
+        let manager = OnDemandManager::new(Arc::new(db), lifecycle);
+
+        // First batch
+        manager.update_configs(vec![
+            OnDemandConfig {
+                environment_id: 1,
+                idle_timeout_seconds: 300,
+                wake_timeout_seconds: 30,
+            },
+            OnDemandConfig {
+                environment_id: 2,
+                idle_timeout_seconds: 600,
+                wake_timeout_seconds: 60,
+            },
+        ]);
+        assert!(manager.configs.contains_key(&1));
+        assert!(manager.configs.contains_key(&2));
+        assert!(manager.last_activity.contains_key(&1));
+        assert!(manager.last_activity.contains_key(&2));
+
+        // Second batch replaces first
+        manager.update_configs(vec![OnDemandConfig {
+            environment_id: 3,
+            idle_timeout_seconds: 120,
+            wake_timeout_seconds: 15,
+        }]);
+        assert!(
+            !manager.configs.contains_key(&1),
+            "Old config 1 should be removed"
+        );
+        assert!(
+            !manager.configs.contains_key(&2),
+            "Old config 2 should be removed"
+        );
+        assert!(
+            manager.configs.contains_key(&3),
+            "New config 3 should be present"
+        );
+        // Activity for old envs persists (not cleared by update_configs)
+        
assert!(manager.last_activity.contains_key(&1)); + } + + // ══════════════════════════════════════════════════════════════════════════ + // REMOVE_ENVIRONMENT CLEANS UP ALL STATE + // ══════════════════════════════════════════════════════════════════════════ + + #[test] + fn test_remove_environment_cleans_all_state() { + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = OnDemandManager::new(Arc::new(db), lifecycle); + + manager.register_on_demand_environment(42, 300, 30); + manager.record_activity(42); + // Simulate wake state creation + manager.wake_states.insert( + 42, + Arc::new(WakeState { + waking: AtomicBool::new(false), + notify: Notify::new(), + }), + ); + + assert!(manager.configs.contains_key(&42)); + assert!(manager.last_activity.contains_key(&42)); + assert!(manager.wake_states.contains_key(&42)); + + manager.remove_environment(42); + + assert!(!manager.configs.contains_key(&42)); + assert!(!manager.last_activity.contains_key(&42)); + assert!(!manager.wake_states.contains_key(&42)); + } + + // ══════════════════════════════════════════════════════════════════════════ + // ACTIVITY TRACKING: RAPID CONCURRENT UPDATES + // ══════════════════════════════════════════════════════════════════════════ + + #[test] + fn test_rapid_activity_updates_no_panic() { + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new(Arc::new(db), lifecycle)); + + // Simulate rapid concurrent activity recording from multiple threads + let mut handles = Vec::new(); + for _ in 0..10 { + let m = Arc::clone(&manager); + handles.push(std::thread::spawn(move || { + for _ in 0..1000 { + m.record_activity(1); + m.record_activity(2); + m.record_activity(3); + } + })); + } + + for h in handles { + h.join().unwrap(); + } + + // All 3 environments should have activity tracked + 
assert!(manager.last_activity.contains_key(&1)); + assert!(manager.last_activity.contains_key(&2)); + assert!(manager.last_activity.contains_key(&3)); + } + + // ══════════════════════════════════════════════════════════════════════════ + // SLEEPING DOMAIN: OVERWRITE SAME DOMAIN + // ══════════════════════════════════════════════════════════════════════════ + + #[test] + fn test_sleeping_domain_overwrite() { + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = OnDemandManager::new(Arc::new(db), lifecycle); + + manager.register_sleeping_domain( + "app.example.com".to_string(), + SleepingEnvironmentInfo { + environment_id: 1, + project_id: 10, + deployment_id: 100, + wake_timeout_seconds: 30, + }, + ); + + // Overwrite with different env for same domain (e.g. after redeployment) + manager.register_sleeping_domain( + "app.example.com".to_string(), + SleepingEnvironmentInfo { + environment_id: 2, + project_id: 20, + deployment_id: 200, + wake_timeout_seconds: 60, + }, + ); + + let info = manager.get_sleeping_environment("app.example.com").unwrap(); + assert_eq!(info.environment_id, 2); + assert_eq!(info.project_id, 20); + assert_eq!(info.deployment_id, 200); + assert_eq!(info.wake_timeout_seconds, 60); + } + + // ══════════════════════════════════════════════════════════════════════════ + // WAKE: MULTIPLE CONTAINERS WITH MULTI-NODE + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_wake_multiple_containers_multi_node() { + // 3 containers across different nodes, all should be started + let db = MockDatabase::new(DatabaseBackend::Postgres) + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), true)]]) + .append_query_results(vec![vec![ + make_container(1, 100, "local-c", None), + make_container(2, 100, "remote-c1", 
Some(2)), + make_container(3, 100, "remote-c2", Some(3)), + ]]) + // NOTIFY + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + ); + + let result = manager.wake_environment(1, 30).await; + assert!(result.is_ok()); + + let mut started = lifecycle.started_containers(); + started.sort(); + assert_eq!(started, vec!["local-c", "remote-c1", "remote-c2"]); + } + + // ══════════════════════════════════════════════════════════════════════════ + // VALIDATION: BOUNDARY VALUES + // ══════════════════════════════════════════════════════════════════════════ + + #[test] + fn test_on_demand_validation_boundary_idle_min() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + idle_timeout_seconds: 60, // exact minimum + ..Default::default() + }; + assert!(config.validate().is_ok()); + } + + #[test] + fn test_on_demand_validation_boundary_idle_max() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + idle_timeout_seconds: 86400, // exact maximum + ..Default::default() + }; + assert!(config.validate().is_ok()); + } + + #[test] + fn test_on_demand_validation_boundary_wake_min() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + wake_timeout_seconds: 5, // exact minimum + ..Default::default() + }; + assert!(config.validate().is_ok()); + } + + #[test] + fn test_on_demand_validation_boundary_wake_max() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + wake_timeout_seconds: 120, // exact maximum + ..Default::default() + }; + assert!(config.validate().is_ok()); + } + + #[test] + fn test_on_demand_validation_boundary_idle_just_below_min() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + 
idle_timeout_seconds: 59, // just below 60 + ..Default::default() + }; + assert!(config.validate().is_err()); + } + + #[test] + fn test_on_demand_validation_boundary_idle_just_above_max() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + idle_timeout_seconds: 86401, // just above 86400 + ..Default::default() + }; + assert!(config.validate().is_err()); + } + + #[test] + fn test_on_demand_validation_boundary_wake_just_below_min() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + wake_timeout_seconds: 4, // just below 5 + ..Default::default() + }; + assert!(config.validate().is_err()); + } + + #[test] + fn test_on_demand_validation_boundary_wake_just_above_max() { + let config = temps_entities::deployment_config::DeploymentConfig { + on_demand: true, + wake_timeout_seconds: 121, // just above 120 + ..Default::default() + }; + assert!(config.validate().is_err()); + } + + // ══════════════════════════════════════════════════════════════════════════ + // OnDemandWaker TRAIT BRIDGE + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_on_demand_waker_bridge_wake() { + let db = MockDatabase::new(DatabaseBackend::Postgres) + // CAS -> 0 rows (already awake) + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = OnDemandManager::new(Arc::new(db), lifecycle); + + // Call through the OnDemandWaker trait + let waker: &dyn OnDemandWaker = &manager; + let result = waker.wake_environment(1, 30).await; + assert!(result.is_ok()); + } + + #[tokio::test] + async fn test_on_demand_waker_bridge_sleep() { + let db = MockDatabase::new(DatabaseBackend::Postgres) + // CAS -> 0 rows (already sleeping) + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let 
lifecycle = Arc::new(MockLifecycle::new()); + let manager = OnDemandManager::new(Arc::new(db), lifecycle); + + let waker: &dyn OnDemandWaker = &manager; + let result = waker.sleep_environment(1).await; + assert!(result.is_ok()); + assert!(!result.unwrap()); + } + + // ══════════════════════════════════════════════════════════════════════════ + // ERROR TYPE COVERAGE + // ══════════════════════════════════════════════════════════════════════════ + + #[test] + fn test_on_demand_error_container_operation_display() { + let err = OnDemandError::ContainerOperation { + container_id: "abc123".to_string(), + reason: "connection refused".to_string(), + }; + let msg = err.to_string(); + assert!(msg.contains("abc123")); + assert!(msg.contains("connection refused")); + } + + #[test] + fn test_on_demand_error_not_found_display() { + let err = OnDemandError::NotFound { environment_id: 42 }; + assert!(err.to_string().contains("42")); + } + + #[test] + fn test_on_demand_error_from_db_err() { + let db_err = sea_orm::DbErr::Custom("test db error".to_string()); + let err: OnDemandError = db_err.into(); + assert!(matches!(err, OnDemandError::Database(_))); + assert!(err.to_string().contains("test db error")); + } + + // ══════════════════════════════════════════════════════════════════════════ + // REGISTER ON-DEMAND PRESERVES EXISTING ACTIVITY + // ══════════════════════════════════════════════════════════════════════════ + + #[test] + fn test_register_on_demand_preserves_existing_activity() { + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = OnDemandManager::new(Arc::new(db), lifecycle); + + // Record activity first + manager.record_activity(1); + let original_ts = manager + .last_activity + .get(&1) + .unwrap() + .value() + .load(Ordering::Relaxed); + + // Register should not overwrite existing activity + std::thread::sleep(Duration::from_millis(10)); + 
manager.register_on_demand_environment(1, 300, 30); + + let ts_after = manager + .last_activity + .get(&1) + .unwrap() + .value() + .load(Ordering::Relaxed); + + assert_eq!( + original_ts, ts_after, + "register_on_demand_environment should not overwrite existing activity" + ); + } + + // ══════════════════════════════════════════════════════════════════════════ + // SWEEP: NO ACTIVITY RECORDED (EDGE CASE) + // ══════════════════════════════════════════════════════════════════════════ + + #[tokio::test] + async fn test_sweep_env_without_activity_entry() { + // Config exists but no last_activity entry — should not panic + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = OnDemandManager::new(Arc::new(db), lifecycle); + + // Manually insert config without activity + manager.configs.insert( + 99, + OnDemandConfig { + environment_id: 99, + idle_timeout_seconds: 60, + wake_timeout_seconds: 30, + }, + ); + // No last_activity entry for env 99 + + let slept = manager.sweep_idle_environments().await; + // Should not panic, should not try to sleep (no activity data) + assert!(slept.is_empty()); + } + + // ══════════════════════════════════════════════════════════════════════════ + // SLEEPING ENVIRONMENT INFO CLONE + // ══════════════════════════════════════════════════════════════════════════ + + // ══════════════════════════════════════════════════════════════════════════ + // INTEGRATION: FULL LIFECYCLE — ENABLE → IDLE → KILL → WAKE + // ══════════════════════════════════════════════════════════════════════════ + + /// Simulates the real callback wiring from server.rs. + /// Returns the manager with sleeping/on-demand state populated. 
+ fn simulate_route_reload_callback( + manager: &Arc<OnDemandManager>, + sleeping: Vec<temps_routes::SleepingEnvironmentEntry>, + on_demand_configs: Vec<temps_routes::OnDemandConfigEntry>, + ) { + manager.clear_sleeping_domains(); + for entry in sleeping { + manager.register_sleeping_domain( + entry.domain.clone(), + SleepingEnvironmentInfo { + environment_id: entry.environment_id, + project_id: entry.project_id, + deployment_id: entry.deployment_id, + wake_timeout_seconds: entry.wake_timeout_seconds, + }, + ); + } + for config in on_demand_configs { + manager.register_on_demand_environment( + config.environment_id, + config.idle_timeout_seconds, + config.wake_timeout_seconds, + ); + } + } + + #[tokio::test] + async fn test_full_lifecycle_enable_idle_kill() { + // Scenario: User enables on-demand for env 1 with 60s idle timeout. + // Route reload fires callback → env registered for idle tracking. + // No requests come in → idle timeout exceeded → sweep kills containers. + + let db = MockDatabase::new(DatabaseBackend::Postgres) + // sleep_environment CAS: UPDATE sleeping=true WHERE sleeping=false → 1 row + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // find env for sleep + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + // containers for sleep + .append_query_results(vec![vec![ + make_container(1, 100, "app-c1", None), + make_container(2, 100, "app-c2", Some(2)), + ]]) + // NOTIFY route_table_changes + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + // Step 1: Simulate route reload callback (what happens after user enables on-demand) + // The route table found env 1 is awake with on_demand=true, so it's in on_demand_configs + simulate_route_reload_callback( + &manager, + vec![], // no sleeping envs yet + vec![temps_routes::OnDemandConfigEntry { + environment_id:
1, + idle_timeout_seconds: 60, + wake_timeout_seconds: 30, + }], + ); + + // Verify env is registered for idle tracking + assert!(manager.configs.contains_key(&1)); + assert!(manager.last_activity.contains_key(&1)); + + // Step 2: Simulate time passing — set last activity to 120s ago (past 60s timeout) + let old_time = manager.current_epoch_secs() - 120; + manager.last_activity.insert(1, AtomicU64::new(old_time)); + + // Step 3: Run sweep — should detect idle and kill containers + let slept = manager.sweep_idle_environments().await; + assert_eq!(slept, vec![1], "Sweep should put env 1 to sleep"); + + // Step 4: Verify containers were actually stopped + let mut stopped = lifecycle.stopped_containers(); + stopped.sort(); + assert_eq!(stopped, vec!["app-c1", "app-c2"]); + } + + #[tokio::test] + async fn test_full_lifecycle_active_env_not_killed() { + // Scenario: User enables on-demand, but keeps sending requests. + // Container should NOT be killed because activity keeps resetting. + + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + // Route reload: env 1 with on_demand=true, 300s idle timeout + simulate_route_reload_callback( + &manager, + vec![], + vec![temps_routes::OnDemandConfigEntry { + environment_id: 1, + idle_timeout_seconds: 300, + wake_timeout_seconds: 30, + }], + ); + + // Simulate proxy recording activity (happens on every request) + manager.record_activity(1); + + // Run sweep — env just had activity, so it's NOT idle + let slept = manager.sweep_idle_environments().await; + assert!(slept.is_empty(), "Active env should not be put to sleep"); + assert!( + lifecycle.stopped_containers().is_empty(), + "No containers should be stopped" + ); + } + + #[tokio::test] + async fn test_full_lifecycle_kill_then_wake_on_request() { + // Full round-trip: + // 1. Enable on-demand + // 2. 
Env goes idle → containers killed, marked sleeping + // 3. After sleep, route reload fires again → env is now in sleeping list + // 4. Request comes in → domain found in sleeping map → wake triggered + + // DB mock for the entire lifecycle: + let db = MockDatabase::new(DatabaseBackend::Postgres) + // === PHASE 1: SLEEP (via sweep_idle_environments) === + // CAS UPDATE sleeping=true → 1 row + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // find env for sleep + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + // containers for sleep + .append_query_results(vec![vec![make_container(1, 100, "myapp", None)]]) + // NOTIFY after sleep + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + // persist_activity_timestamps: UPDATE last_activity_at for env 1 + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // === PHASE 2: WAKE === + // CAS UPDATE sleeping=false → 1 row + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // find env for wake + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), true)]]) + // containers for wake + .append_query_results(vec![vec![make_container(1, 100, "myapp", None)]]) + // UPDATE last_activity_at after wake + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + // NOTIFY after wake + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + // ── PHASE 1: Enable on-demand, go idle, get killed ── + + simulate_route_reload_callback( + &manager, + vec![], + vec![temps_routes::OnDemandConfigEntry { + environment_id: 1, + idle_timeout_seconds: 60, + wake_timeout_seconds: 30, + }], + ); + + // 
Set activity to 120s ago → past 60s timeout + let old_time = manager.current_epoch_secs() - 120; + manager.last_activity.insert(1, AtomicU64::new(old_time)); + + let slept = manager.sweep_idle_environments().await; + assert_eq!(slept, vec![1]); + assert_eq!(lifecycle.stopped_containers(), vec!["myapp"]); + + // ── PHASE 2: Route reload fires (triggered by NOTIFY) ── + // Now env 1 is sleeping, so route table puts it in sleeping_environments + + simulate_route_reload_callback( + &manager, + vec![temps_routes::SleepingEnvironmentEntry { + domain: "myapp.preview.example.com".to_string(), + environment_id: 1, + project_id: 10, + deployment_id: 100, + wake_timeout_seconds: 30, + }], + vec![], // env 1 is sleeping, so NOT in on_demand_configs + ); + + // Verify domain is now in sleeping map + let sleeping_info = manager.get_sleeping_environment("myapp.preview.example.com"); + assert!( + sleeping_info.is_some(), + "Domain should be in sleeping map after route reload" + ); + let info = sleeping_info.unwrap(); + assert_eq!(info.environment_id, 1); + + // ── PHASE 3: Request comes in → wake ── + // This is what proxy.rs does when it finds the domain in sleeping map + let wake_result = manager + .wake_environment(info.environment_id, info.wake_timeout_seconds) + .await; + assert!( + wake_result.is_ok(), + "Wake should succeed: {:?}", + wake_result.err() + ); + + // Container should have been started + assert_eq!(lifecycle.started_containers(), vec!["myapp"]); + } + + #[tokio::test] + async fn test_full_lifecycle_request_resets_idle_timer() { + // Scenario: Env idle for 50s (timeout 60s), then a request comes in, + // then 50s more passes. Total 100s but timer was reset → should NOT sleep. 
+ + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + simulate_route_reload_callback( + &manager, + vec![], + vec![temps_routes::OnDemandConfigEntry { + environment_id: 1, + idle_timeout_seconds: 60, + wake_timeout_seconds: 30, + }], + ); + + // Activity was 50s ago (under 60s timeout) + let t = manager.current_epoch_secs() - 50; + manager.last_activity.insert(1, AtomicU64::new(t)); + + // Sweep: should NOT sleep (50s < 60s) + let slept = manager.sweep_idle_environments().await; + assert!(slept.is_empty(), "Should not sleep before timeout"); + + // Simulate a request that resets the timer + manager.record_activity(1); + + // Now even after 50s more, the timer was reset, so still not idle + let slept = manager.sweep_idle_environments().await; + assert!(slept.is_empty(), "Should not sleep after activity reset"); + assert!(lifecycle.stopped_containers().is_empty()); + } + + #[tokio::test] + async fn test_full_lifecycle_disable_on_demand_stops_tracking() { + // Scenario: on-demand was enabled, then user disables it. + // Route reload fires without the env in on_demand_configs. + // Sweep should no longer track or sleep this env. + + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + // Initially enabled + simulate_route_reload_callback( + &manager, + vec![], + vec![temps_routes::OnDemandConfigEntry { + environment_id: 1, + idle_timeout_seconds: 60, + wake_timeout_seconds: 30, + }], + ); + assert!(manager.configs.contains_key(&1)); + + // User disables on-demand → route reload fires without env 1 in configs + // Note: the callback currently only adds, doesn't remove old configs. 
+ // This tests the real behavior — the env stays in configs until removed. + // For proper cleanup, remove_environment would need to be called. + manager.remove_environment(1); + assert!(!manager.configs.contains_key(&1)); + + // Set old activity to make it look idle + manager.last_activity.insert(1, AtomicU64::new(0)); + + // Sweep should NOT touch env 1 (not in configs) + let slept = manager.sweep_idle_environments().await; + assert!(slept.is_empty()); + } + + #[tokio::test] + async fn test_full_lifecycle_env_not_idle_enough_stays_awake() { + // env has 300s timeout, idle for 120s → should NOT sleep + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + simulate_route_reload_callback( + &manager, + vec![], + vec![temps_routes::OnDemandConfigEntry { + environment_id: 1, + idle_timeout_seconds: 300, + wake_timeout_seconds: 30, + }], + ); + + // Idle for 120s, but timeout is 300s + let old_time = manager.current_epoch_secs() - 120; + manager.last_activity.insert(1, AtomicU64::new(old_time)); + + let slept = manager.sweep_idle_environments().await; + assert!( + slept.is_empty(), + "env with 300s timeout should NOT sleep after only 120s idle" + ); + assert!(lifecycle.stopped_containers().is_empty()); + } + + #[tokio::test] + async fn test_full_lifecycle_exact_boundary_idle_sleeps() { + // env has 120s timeout, idle for exactly 120s → should sleep (>= check) + let db = MockDatabase::new(DatabaseBackend::Postgres) + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 1, + }]) + .append_query_results(vec![vec![make_env_model(1, 10, Some(100), false)]]) + .append_query_results(vec![vec![make_container(1, 100, "c1", None)]]) + .append_exec_results(vec![MockExecResult { + last_insert_id: 0, + rows_affected: 0, + }]) + .into_connection(); + + let lifecycle = 
Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + simulate_route_reload_callback( + &manager, + vec![], + vec![temps_routes::OnDemandConfigEntry { + environment_id: 1, + idle_timeout_seconds: 120, + wake_timeout_seconds: 30, + }], + ); + + let old_time = manager.current_epoch_secs() - 120; + manager.last_activity.insert(1, AtomicU64::new(old_time)); + + let slept = manager.sweep_idle_environments().await; + assert_eq!( + slept, + vec![1], + "env should sleep at exact boundary (idle_secs >= timeout)" + ); + assert_eq!(lifecycle.stopped_containers(), vec!["c1"]); + } + + #[tokio::test] + async fn test_full_lifecycle_route_reload_transitions_sleeping_to_awake() { + // When an env wakes up, the next route reload should: + // 1. Remove it from sleeping_by_domain (clear_sleeping_domains) + // 2. Add it to on_demand_configs (for idle tracking again) + + let db = MockDatabase::new(DatabaseBackend::Postgres).into_connection(); + let lifecycle = Arc::new(MockLifecycle::new()); + let manager = Arc::new(OnDemandManager::new( + Arc::new(db), + Arc::clone(&lifecycle) as Arc, + )); + + // Initial state: env 1 sleeping + simulate_route_reload_callback( + &manager, + vec![temps_routes::SleepingEnvironmentEntry { + domain: "app.preview.example.com".to_string(), + environment_id: 1, + project_id: 10, + deployment_id: 100, + wake_timeout_seconds: 30, + }], + vec![], + ); + + assert!(manager + .get_sleeping_environment("app.preview.example.com") + .is_some()); + + // After wake, route reload fires again: env is now awake + simulate_route_reload_callback( + &manager, + vec![], // no longer sleeping + vec![temps_routes::OnDemandConfigEntry { + environment_id: 1, + idle_timeout_seconds: 300, + wake_timeout_seconds: 30, + }], + ); + + // Domain should no longer be in sleeping map + assert!( + manager + .get_sleeping_environment("app.preview.example.com") + .is_none(), + "Domain should be removed from 
sleeping map after env wakes up" + ); + // But env should be tracked for idle timeout again + assert!( + manager.configs.contains_key(&1), + "Env should be in configs for idle tracking" + ); + } + + #[test] + fn test_sleeping_environment_info_debug_clone() { + let info = SleepingEnvironmentInfo { + environment_id: 1, + project_id: 10, + deployment_id: 100, + wake_timeout_seconds: 30, + }; + let cloned = info.clone(); + assert_eq!(cloned.environment_id, info.environment_id); + assert_eq!(cloned.project_id, info.project_id); + assert_eq!(cloned.deployment_id, info.deployment_id); + assert_eq!(cloned.wake_timeout_seconds, info.wake_timeout_seconds); + + // Debug trait + let debug_str = format!("{:?}", info); + assert!(debug_str.contains("SleepingEnvironmentInfo")); + } }
diff --git a/crates/temps-proxy/src/proxy.rs b/crates/temps-proxy/src/proxy.rs index 70cc2ce8..3e74067f 100644 --- a/crates/temps-proxy/src/proxy.rs +++ b/crates/temps-proxy/src/proxy.rs @@ -555,36 +555,6 @@ impl LoadBalancer { } } - /// Generate HTML for the "waking up" interstitial page. - /// Displayed when a request hits a sleeping on-demand environment. - /// Auto-refreshes every 3 seconds until the environment is awake. - fn generate_waking_html() -> String { - r#"<!DOCTYPE html> - <html> - <head> - <meta charset="utf-8"> - <meta http-equiv="refresh" content="3"> - <title>Waking Up</title> - </head> - <body> - <h1>Waking Up</h1> - <p>This environment is starting up. The page will refresh automatically.</p> - </body> - </html>"#.to_string() - }
- /// Generate HTML for CAPTCHA challenge page fn generate_challenge_html( project_name: &str, @@ -1968,6 +1938,10 @@ impl ProxyHttp for LoadBalancer { ctx.ip_address = Some(client_ip.to_string()); } + // SECURITY: Strip client-supplied X-Temps-Demo-Mode header to prevent + // bypass of authentication. Only the proxy should set this header. + let _ = session.req_header_mut().remove_header("X-Temps-Demo-Mode"); + // Detect demo subdomain and add demo mode header // This allows the auth middleware to auto-authenticate as demo user // Demo mode must be explicitly enabled in settings @@ -2015,53 +1989,64 @@ // On-demand: check if this host maps to a sleeping environment. // Sleeping environments are excluded from the route table, so we must - // check before project context resolution. Serve a "waking up" page and - // trigger an async wake. + // check before project context resolution. Wake the environment inline + // and hold the request until the container is ready and routes are reloaded.
if let Some(ref on_demand) = self.on_demand_manager { let host_without_port = ctx.host.split(':').next().unwrap_or(&ctx.host); if let Some(sleeping_info) = on_demand.get_sleeping_environment(host_without_port) { info!( environment_id = sleeping_info.environment_id, host = %ctx.host, - "Request hit sleeping environment, serving wake page" + "Request hit sleeping environment, waking inline" ); - // Trigger async wake (doesn't block the response) - let on_demand_clone = Arc::clone(on_demand); let env_id = sleeping_info.environment_id; let wake_timeout = sleeping_info.wake_timeout_seconds; - tokio::spawn(async move { - match on_demand_clone.wake_environment(env_id, wake_timeout).await { - Ok(()) => { - info!(environment_id = env_id, "Environment woke up successfully"); - } - Err(e) => { - error!( - environment_id = env_id, - error = %e, - "Failed to wake environment" - ); - } - } - }); - // Serve a "waking up" HTML page with auto-refresh - let html = Self::generate_waking_html(); - let html_bytes = Bytes::from(html); + // Block until the environment is fully awake (containers healthy) + match on_demand.wake_environment(env_id, wake_timeout).await { + Ok(()) => { + info!( + environment_id = env_id, + "Environment woke up, waiting for route reload" + ); + + // Wait for the route table to reload so resolve_context works + let reload_timeout = std::time::Duration::from_secs(10); + let _ = on_demand.wait_for_route_reload(reload_timeout).await; + + // Fall through to normal request handling — don't return Ok(true) + } + Err(e) => { + error!( + environment_id = env_id, + error = %e, + "Failed to wake environment" + ); - let mut response = ResponseHeader::build(StatusCode::SERVICE_UNAVAILABLE, None)?; - response.insert_header("Content-Type", "text/html; charset=utf-8")?; - response.insert_header("Retry-After", "3")?; - response.insert_header("Cache-Control", "no-store")?; - response.insert_header("X-Request-ID", &ctx.request_id)?; + // Wake failed — return 503 with Retry-After 
+ let mut response = + ResponseHeader::build(StatusCode::SERVICE_UNAVAILABLE, None)?; + response.insert_header("Retry-After", "5")?; + response.insert_header("Cache-Control", "no-store")?; + response.insert_header("X-Request-ID", &ctx.request_id)?; + response.insert_header("Content-Type", "application/json")?; + + let body_bytes = Bytes::from(format!( + r#"{{"status":"wake_failed","environment_id":{},"message":"Failed to start environment: {}"}}"#, + env_id, + e.to_string().replace('"', "\\\"") + )); - session - .write_response_header(Box::new(response), false) - .await?; - session.write_response_body(Some(html_bytes), true).await?; + session + .write_response_header(Box::new(response), false) + .await?; + session.write_response_body(Some(body_bytes), true).await?; - ctx.routing_status = "sleeping".to_string(); - return Ok(true); + ctx.routing_status = "wake_failed".to_string(); + return Ok(true); + } + } } } @@ -2199,6 +2184,142 @@ impl ProxyHttp for LoadBalancer { )); } } + + // Password wall: check if environment has password protection enabled + let password_protection = project_ctx + .environment + .deployment_config + .as_ref() + .and_then(|dc| dc.security.as_ref()) + .and_then(|s| s.password_protection.as_ref()) + .filter(|pp| pp.enabled); + + if let Some(pp) = password_protection { + let password_hash = pp.password_hash.clone(); + let env_id = project_ctx.environment.id; + let project_name = &project_ctx.project.name; + let environment_name = &project_ctx.environment.name; + + // Check if this is the password verify POST endpoint + if ctx.path == "/_temps/password-verify" && ctx.method == "POST" { + // Read the POST body to get the password + let body = session.read_request_body().await.map_err(|e| { + error!("Failed to read password verify body: {}", e); + e + })?; + + let body_str = body + .as_ref() + .map(|b| String::from_utf8_lossy(b).to_string()) + .unwrap_or_default(); + + // Parse form data (application/x-www-form-urlencoded) + let params: 
Vec<(String, String)> = + url::form_urlencoded::parse(body_str.as_bytes()) + .into_owned() + .collect(); + + let password = params + .iter() + .find(|(k, _)| k == "password") + .map(|(_, v)| v.as_str()) + .unwrap_or(""); + + let redirect = params + .iter() + .find(|(k, _)| k == "redirect") + .map(|(_, v)| v.as_str()) + .unwrap_or("/"); + + if crate::handler::password_wall::verify_password(password, &password_hash) { + // Password correct — set cookie and redirect + let host = ctx.host.clone(); + let set_cookie = crate::handler::password_wall::build_set_cookie_header( + env_id, + &password_hash, + &host, + ); + + let mut resp = ResponseHeader::build(303, None)?; + resp.insert_header("Location", redirect)?; + resp.insert_header("Set-Cookie", &set_cookie)?; + resp.insert_header("Cache-Control", "no-store")?; + resp.insert_header("X-Request-ID", &ctx.request_id)?; + + session.write_response_header(Box::new(resp), true).await?; + ctx.routing_status = "password_verified".to_string(); + return Ok(true); + } else { + // Wrong password — show form again with error + let html = crate::handler::password_wall::generate_password_form_html( + redirect, + true, + project_name, + environment_name, + ); + let html_bytes = Bytes::from(html); + + let mut resp = ResponseHeader::build(StatusCode::OK, None)?; + resp.insert_header("Content-Type", "text/html; charset=utf-8")?; + resp.insert_header("Cache-Control", "no-store")?; + resp.insert_header("X-Request-ID", &ctx.request_id)?; + + session.write_response_header(Box::new(resp), false).await?; + session.write_response_body(Some(html_bytes), true).await?; + ctx.routing_status = "password_wrong".to_string(); + return Ok(true); + } + } + + // Check for valid password cookie + let has_valid_cookie = session + .req_header() + .headers + .get_all("Cookie") + .iter() + .filter_map(|h| h.to_str().ok()) + .flat_map(|s| Cookie::split_parse(s).filter_map(Result::ok)) + .find(|c| c.name() == crate::handler::password_wall::PASSWORD_COOKIE_NAME) + 
.map(|c| { + crate::handler::password_wall::validate_cookie( + c.value(), + env_id, + &password_hash, + ) + }) + .unwrap_or(false); + + if !has_valid_cookie { + // No valid cookie — show password form + let current_path = if let Some(ref qs) = ctx.query_string { + if qs.is_empty() { + ctx.path.clone() + } else { + format!("{}?{}", ctx.path, qs) + } + } else { + ctx.path.clone() + }; + + let html = crate::handler::password_wall::generate_password_form_html( + ¤t_path, + false, + project_name, + environment_name, + ); + let html_bytes = Bytes::from(html); + + let mut resp = ResponseHeader::build(StatusCode::OK, None)?; + resp.insert_header("Content-Type", "text/html; charset=utf-8")?; + resp.insert_header("Cache-Control", "no-store")?; + resp.insert_header("X-Request-ID", &ctx.request_id)?; + + session.write_response_header(Box::new(resp), false).await?; + session.write_response_body(Some(html_bytes), true).await?; + ctx.routing_status = "password_wall".to_string(); + return Ok(true); + } + } } else { ctx.routing_status = "no_project".to_string(); } @@ -3003,6 +3124,9 @@ impl ProxyHttp for LoadBalancer { if let Err(e) = header.insert_header(header::CACHE_CONTROL, "private, no-store") { error!("Failed to insert CACHE_CONTROL header: {:?}", e); } + if let Err(e) = header.insert_header("content-type", "text/html; charset=utf-8") { + error!("Failed to insert content-type header: {:?}", e); + } if let Err(e) = session.write_response_header(Box::new(header), false).await { error!("Failed to write response header: {:?}", e); @@ -3012,8 +3136,20 @@ impl ProxyHttp for LoadBalancer { }; } + const SERVICE_UNAVAILABLE_BODY: &str = concat!( + "Service Unavailable", + "", + "
<h1>Service Unavailable</h1>", + "<p>This application is temporarily unable to handle requests.</p>", + "<p>If you are the site owner, check that your deployment is running.</p>", + "</body></html>" );
+ if let Err(e) = session - .write_response_body(Some(Bytes::from("Service Unavailable")), true) + .write_response_body(Some(Bytes::from(SERVICE_UNAVAILABLE_BODY)), true) .await { error!("Failed to write response body: {:?}", e); @@ -3031,7 +3167,7 @@ impl ProxyHttp for LoadBalancer { .and_then(|v| v.parse::().ok()); // For failed requests, response size is the error message size - let response_size = Some("Service Unavailable".len() as i64); + let response_size = Some(SERVICE_UNAVAILABLE_BODY.len() as i64); let proxy_log_request = CreateProxyLogRequest { method: ctx.method.clone(),
diff --git a/crates/temps-proxy/src/server.rs b/crates/temps-proxy/src/server.rs index 8ea3bd9f..a2c7ba1b 100644 --- a/crates/temps-proxy/src/server.rs +++ b/crates/temps-proxy/src/server.rs @@ -22,7 +22,7 @@ use temps_config::ServerConfig; use temps_core::plugin::{ServiceRegistrationContext, TempsPlugin}; use temps_database::DbConnection; use temps_routes::CachedPeerTable; -use tracing::{debug, info}; +use tracing::{debug, error, info}; use async_trait::async_trait; use std::future::Future; @@ -207,7 +207,7 @@ pub fn setup_proxy_server( route_table: Arc<CachedPeerTable>, shutdown_signal: Box, config: Arc<ServerConfig>, - container_lifecycle: Option>, + on_demand_manager: Option<Arc<OnDemandManager>>, ) -> Result<()> { // Setup plugin system (async operation in sync context) let context = tokio::runtime::Runtime::new()?
@@ -276,16 +276,11 @@ pub fn setup_proxy_server( proxy_config.disable_https_redirect, ); - // Wire up on-demand scale-to-zero if container lifecycle is provided - if let Some(lifecycle) = container_lifecycle { - let on_demand_manager = Arc::new(crate::on_demand::OnDemandManager::new( - db.clone(), - lifecycle, - )); - - // Register callback so sleeping domains are populated on every route reload - let on_demand_for_callback = Arc::clone(&on_demand_manager); - route_table.set_on_sleeping_callback(Arc::new(move |entries| { + // Wire up on-demand scale-to-zero if OnDemandManager was created + if let Some(ref on_demand_manager) = on_demand_manager { + // Register callback so sleeping domains and on-demand configs are populated on every route reload + let on_demand_for_callback = Arc::clone(on_demand_manager); + route_table.set_on_sleeping_callback(Arc::new(move |entries, on_demand_configs| { on_demand_for_callback.clear_sleeping_domains(); for entry in entries { on_demand_for_callback.register_sleeping_domain( @@ -298,12 +293,36 @@ pub fn setup_proxy_server( }, ); } + // Register on-demand configs so the idle sweep can track awake environments + for config in on_demand_configs { + on_demand_for_callback.register_on_demand_environment( + config.environment_id, + config.idle_timeout_seconds, + config.wake_timeout_seconds, + ); + } + // Signal any requests waiting for routes after a wake + on_demand_for_callback.notify_route_reloaded(); })); + // Reload routes so the callback populates on-demand configs. + // The initial load_routes() in start_listening() runs before this callback + // is registered, so without this reload the configs DashMap stays empty + // until the next PG NOTIFY event, and the idle sweep has nothing to check. 
+ { + let rt = tokio::runtime::Runtime::new()?; + if let Err(e) = rt.block_on(route_table.load_routes()) { + error!( + "Failed to reload routes for on-demand config population: {}", + e + ); + } + } + // Start background idle sweep (checks every 60 seconds) on_demand_manager.start_sweep_task(std::time::Duration::from_secs(60)); - lb = lb.with_on_demand_manager(on_demand_manager); + lb = lb.with_on_demand_manager(Arc::clone(on_demand_manager)); info!("On-demand scale-to-zero enabled"); } diff --git a/crates/temps-proxy/src/service/proxy_log_service.rs b/crates/temps-proxy/src/service/proxy_log_service.rs index d3cf5b32..9142a5ed 100644 --- a/crates/temps-proxy/src/service/proxy_log_service.rs +++ b/crates/temps-proxy/src/service/proxy_log_service.rs @@ -533,7 +533,9 @@ impl ProxyLogService { format!("WHERE {}", where_clauses.join(" AND ")) }; - // Build the TimescaleDB query with time_bucket_gapfill + // Pass bucket_interval as a parameterized value to prevent SQL injection + let bucket_param_index = param_index; + let sql = format!( r#" SELECT @@ -545,7 +547,7 @@ impl ProxyLogService { COALESCE(total_response_bytes, 0) as total_response_bytes FROM ( SELECT - time_bucket_gapfill('{}', timestamp) AS bucket, + time_bucket_gapfill(${}::interval, timestamp) AS bucket, COUNT(*) as count, AVG(response_time_ms) as avg_response_time, SUM(CASE WHEN status_code >= 400 THEN 1 ELSE 0 END) as error_count, @@ -557,7 +559,7 @@ impl ProxyLogService { ) sub ORDER BY bucket ASC "#, - bucket_interval, where_clause + bucket_param_index, where_clause ); // Execute raw SQL query @@ -571,6 +573,9 @@ impl ProxyLogService { Self::add_filter_values(&mut values, f); } + // Add bucket_interval as parameterized value + values.push(bucket_interval.into()); + let stmt = sea_orm::Statement::from_sql_and_values(db_backend, &sql, values); let results = self.db.query_all(stmt).await?; @@ -602,6 +607,107 @@ impl ProxyLogService { Ok(stats) } + /// Get health summaries for multiple projects in a single 
query + pub async fn get_projects_health_summary( + &self, + project_ids: &[i32], + start_time: UtcDateTime, + end_time: UtcDateTime, + ) -> Result<Vec<ProjectHealthSummary>, ProxyLogServiceError> { + if project_ids.is_empty() { + return Ok(vec![]); + } + + // Build placeholders for project IDs ($3, $4, $5, ...) + let placeholders: Vec<String> = project_ids + .iter() + .enumerate() + .map(|(i, _)| format!("${}", i + 3)) + .collect(); + let placeholders_str = placeholders.join(", "); + + let sql = format!( + r#" + SELECT + project_id, + COALESCE(COUNT(*), 0) as total_requests, + COALESCE(SUM(CASE WHEN status_code >= 400 THEN 1 ELSE 0 END), 0) as total_errors, + COALESCE(AVG(response_time_ms)::float8, 0) as avg_response_time_ms + FROM proxy_logs + WHERE timestamp >= $1 + AND timestamp < $2 + AND project_id IN ({}) + GROUP BY project_id + "#, + placeholders_str + ); + + let db_backend = sea_orm::DatabaseBackend::Postgres; + let mut values: Vec<sea_orm::Value> = vec![start_time.into(), end_time.into()]; + for &id in project_ids { + values.push(id.into()); + } + + let stmt = sea_orm::Statement::from_sql_and_values(db_backend, &sql, values); + let results = self.db.query_all(stmt).await?; + + // Build a map from query results + let mut summaries: std::collections::HashMap<i32, ProjectHealthSummary> = + std::collections::HashMap::new(); + + for row in &results { + let project_id: i32 = row.try_get("", "project_id").unwrap_or(0); + let total_requests: i64 = row.try_get("", "total_requests").unwrap_or(0); + let total_errors: i64 = row.try_get("", "total_errors").unwrap_or(0); + let avg_response_time_ms: f64 = row.try_get("", "avg_response_time_ms").unwrap_or(0.0); + + let error_rate = if total_requests > 0 { + (total_errors as f64 / total_requests as f64) * 100.0 + } else { + 0.0 + }; + + let status = if total_requests == 0 { + "unknown".to_string() + } else if error_rate > 50.0 { + "down".to_string() + } else if error_rate > 10.0 { + "degraded".to_string() + } else { + "healthy".to_string() + }; + + summaries.insert( + project_id, + ProjectHealthSummary { + project_id, + total_requests, + total_errors, + avg_response_time_ms: (avg_response_time_ms * 10.0).round() / 10.0, + error_rate: (error_rate * 10.0).round() / 10.0, + status, + }, + ); + } + + // Include projects with no data as "unknown" + let result: Vec<ProjectHealthSummary> = project_ids + .iter() + .map(|&id| { + summaries.remove(&id).unwrap_or(ProjectHealthSummary { + project_id: id, + total_requests: 0, + total_errors: 0, + avg_response_time_ms: 0.0, + error_rate: 0.0, + status: "unknown".to_string(), + }) + }) + .collect(); + + Ok(result) + } + // Helper methods for filtering fn apply_stats_filters( mut query: Select, @@ -852,6 +958,22 @@ pub struct TodayStatsResponse { pub date: String, } +/// Health summary for a single project (last 1 hour) +#[derive(Debug, Clone, Serialize, Deserialize, ToSchema)] +pub struct ProjectHealthSummary { + pub project_id: i32, + /// Total requests in the period + pub total_requests: i64, + /// Total errors (status >= 400) in the period + pub total_errors: i64, + /// Average response time in ms + pub avg_response_time_ms: f64, + /// Error rate as a percentage (0-100) + pub error_rate: f64, + /// Health status: "healthy", "degraded", "down", "unknown" + pub status: String, +} + #[cfg(test)] mod tests { use super::*; diff --git a/crates/temps-proxy/src/services.rs b/crates/temps-proxy/src/services.rs index f13a2438..76a1d6db 100644 --- a/crates/temps-proxy/src/services.rs +++ b/crates/temps-proxy/src/services.rs @@ -144,9 +144,10 @@ impl UpstreamResolver for UpstreamResolverImpl { } // No route found - route to console address as default - debug!( - "No route found in table for host: {}, routing to console", - host + warn!( - "No route found in table for host: {}, routing to console (route_count={})", + host, + self.route_table.len() ); let peer = Box::new(HttpPeer::new( self.server_config.console_address.clone(), diff --git a/crates/temps-query-postgres/Cargo.toml b/crates/temps-query-postgres/Cargo.toml index
85de1e90..1968c24c 100644 --- a/crates/temps-query-postgres/Cargo.toml +++ b/crates/temps-query-postgres/Cargo.toml @@ -10,6 +10,8 @@ repository.workspace = true temps-query = { path = "../temps-query" } tokio = { version = "1.40", features = ["full"] } tokio-postgres = { version = "0.7", features = ["with-serde_json-1", "with-chrono-0_4", "with-uuid-1"] } +tokio-postgres-rustls = "0.13" +rustls = { workspace = true } async-trait = "0.1" serde_json = "1.0" tracing = "0.1" diff --git a/crates/temps-query-postgres/src/lib.rs b/crates/temps-query-postgres/src/lib.rs index 6016aba0..6df86f8d 100644 --- a/crates/temps-query-postgres/src/lib.rs +++ b/crates/temps-query-postgres/src/lib.rs @@ -10,13 +10,61 @@ use temps_query::{ DataRow, DataSource, DatasetSchema, EntityCountHint, EntityInfo, FieldDef, FieldType, Introspect, QueryOptions, QueryResult, QueryStats, Queryable, Result, }; -use tokio::sync::RwLock; use tokio_postgres::{Client, NoTls, Row}; -use tracing::{debug, error}; +use tokio_postgres_rustls::MakeRustlsConnect; +use tracing::{debug, error, warn}; + +/// Escape a SQL identifier by doubling any internal double-quote characters. +/// Prevents identifier injection when used inside `"..."` quoting. +fn escape_ident(name: &str) -> String { + name.replace('"', "\"\"") +} + +/// A certificate verifier that accepts all server certificates (including self-signed). +/// Used for connecting to PostgreSQL clusters with `--ssl-self-signed` certificates. 
+#[derive(Debug)] +struct AcceptAllVerifier; + +impl rustls::client::danger::ServerCertVerifier for AcceptAllVerifier { + fn verify_server_cert( + &self, + _end_entity: &rustls::pki_types::CertificateDer<'_>, + _intermediates: &[rustls::pki_types::CertificateDer<'_>], + _server_name: &rustls::pki_types::ServerName<'_>, + _ocsp_response: &[u8], + _now: rustls::pki_types::UnixTime, + ) -> std::result::Result { + Ok(rustls::client::danger::ServerCertVerified::assertion()) + } + + fn verify_tls12_signature( + &self, + _message: &[u8], + _cert: &rustls::pki_types::CertificateDer<'_>, + _dss: &rustls::DigitallySignedStruct, + ) -> std::result::Result { + Ok(rustls::client::danger::HandshakeSignatureValid::assertion()) + } + + fn verify_tls13_signature( + &self, + _message: &[u8], + _cert: &rustls::pki_types::CertificateDer<'_>, + _dss: &rustls::DigitallySignedStruct, + ) -> std::result::Result { + Ok(rustls::client::danger::HandshakeSignatureValid::assertion()) + } + + fn supported_verify_schemes(&self) -> Vec { + rustls::crypto::ring::default_provider() + .signature_verification_algorithms + .supported_schemes() + } +} /// PostgreSQL data source implementation pub struct PostgresSource { - client: Arc>, + client: Arc, database_name: String, } @@ -39,16 +87,33 @@ impl PostgresSource { username, host, port, database ); - let (client, connection) = tokio_postgres::connect(&config, NoTls).await.map_err(|e| { - DataError::ConnectionFailed(format!("PostgreSQL connection failed: {}", e)) - })?; + let client = match Self::connect_with_tls(&config).await { + Ok(client) => { + debug!("Connected to PostgreSQL with TLS"); + client + } + Err(tls_err) => { + warn!( + "TLS connection failed, falling back to plain connection: {}", + tls_err + ); + let (client, connection) = + tokio_postgres::connect(&config, NoTls).await.map_err(|e| { + DataError::ConnectionFailed(format!( + "PostgreSQL connection failed (TLS error: {}, plain error: {})", + tls_err, e + )) + })?; - // Spawn connection 
handler - tokio::spawn(async move { - if let Err(e) = connection.await { - error!("PostgreSQL connection error: {}", e); + tokio::spawn(async move { + if let Err(e) = connection.await { + error!("PostgreSQL connection error: {}", e); + } + }); + + client } - }); + }; debug!( "Successfully connected to PostgreSQL database: {}", @@ -56,11 +121,41 @@ impl PostgresSource { ); Ok(Self { - client: Arc::new(RwLock::new(client)), + client: Arc::new(client), database_name: database.to_string(), }) } + /// Execute a raw SQL statement (no result rows expected). + /// Used for DDL/admin operations like creating roles and granting privileges. + pub async fn execute_raw(&self, sql: &str) -> Result<()> { + self.client + .batch_execute(sql) + .await + .map_err(|e| DataError::QueryFailed(format!("Execute failed: {}", e)))?; + Ok(()) + } + + /// Attempt a TLS connection using rustls configured to accept self-signed certificates. + /// Returns the Client after spawning the connection task. + async fn connect_with_tls(config: &str) -> std::result::Result { + let rustls_config = rustls::ClientConfig::builder() + .dangerous() + .with_custom_certificate_verifier(Arc::new(AcceptAllVerifier)) + .with_no_client_auth(); + + let tls = MakeRustlsConnect::new(rustls_config); + let (client, connection) = tokio_postgres::connect(config, tls).await?; + + tokio::spawn(async move { + if let Err(e) = connection.await { + error!("PostgreSQL TLS connection error: {}", e); + } + }); + + Ok(client) + } + /// Strip SQL string literals to avoid false positives when scanning for dangerous patterns. /// Replaces content inside single-quoted strings with empty strings. 
fn strip_sql_string_literals(sql: &str) -> String { @@ -403,7 +498,7 @@ impl DataSource for PostgresSource { } async fn list_containers(&self, path: &ContainerPath) -> Result> { - let client = self.client.read().await; + let client = &self.client; match path.depth() { // Depth 0: List databases @@ -536,7 +631,7 @@ impl DataSource for PostgresSource { } async fn get_container_info(&self, path: &ContainerPath) -> Result { - let client = self.client.read().await; + let client = &self.client; match path.depth() { // Depth 1: Get database info @@ -660,7 +755,7 @@ impl DataSource for PostgresSource { ))); } - let client = self.client.read().await; + let client = &self.client; debug!("Listing tables in schema: {}", schema_name); @@ -731,7 +826,7 @@ impl DataSource for PostgresSource { ))); } - let client = self.client.read().await; + let client = &self.client; let query = r#" SELECT table_type @@ -754,7 +849,8 @@ impl DataSource for PostgresSource { // Get row count let count_query = format!( "SELECT COUNT(*) FROM \"{}\".\"{}\"", - schema_name, entity_name + escape_ident(schema_name), + escape_ident(entity_name) ); let row_count = client @@ -797,7 +893,7 @@ impl DataSource for PostgresSource { ))); } - let client = self.client.read().await; + let client = &self.client; debug!("Getting schema for table: {}.{}", schema_name, entity_name); @@ -882,7 +978,7 @@ impl Introspect for PostgresSource { } let schema_name = &container_path.segments[1]; - let client = self.client.read().await; + let client = &self.client; let query = r#" SELECT COUNT(*) @@ -912,7 +1008,7 @@ impl Introspect for PostgresSource { } let schema_name = &container_path.segments[1]; - let client = self.client.read().await; + let client = &self.client; let query = r#" SELECT data_type @@ -951,12 +1047,15 @@ impl Queryable for PostgresSource { } let schema_name = &container_path.segments[1]; - let client = self.client.read().await; let start = std::time::Instant::now(); // Build SQL query - let mut sql = 
format!("SELECT * FROM \"{}\".\"{}\"", schema_name, entity_name); + let mut sql = format!( + "SELECT * FROM \"{}\".\"{}\"", + escape_ident(schema_name), + escape_ident(entity_name) + ); // Add WHERE clause if filters provided if let Some(filter_json) = filters { @@ -985,6 +1084,11 @@ impl Queryable for PostgresSource { debug!("Executing query: {}", sql); + // Safety: SQL injection is prevented by validate_sql() for WHERE clauses + // and escape_ident() for identifiers. The database user should be read-only + // as defense-in-depth. + let client = &self.client; + let rows = client.query(&sql, &[]).await.map_err(|e| { error!("PostgreSQL query failed: {}", e); error!("Failed SQL: {}", sql); @@ -1057,11 +1161,11 @@ impl Queryable for PostgresSource { } let schema_name = &container_path.segments[1]; - let client = self.client.read().await; let mut sql = format!( "SELECT COUNT(*) FROM \"{}\".\"{}\"", - schema_name, entity_name + escape_ident(schema_name), + escape_ident(entity_name) ); // Add WHERE clause if filters provided @@ -1074,6 +1178,8 @@ impl Queryable for PostgresSource { } } + let client = &self.client; + let row = client .query_one(&sql, &[]) .await @@ -1093,7 +1199,7 @@ impl Queryable for PostgresSource { } let schema_name = &container_path.segments[1]; - let client = self.client.read().await; + let client = &self.client; let query = r#" SELECT COUNT(*) diff --git a/crates/temps-routes/src/project_change_listener.rs b/crates/temps-routes/src/project_change_listener.rs index b9593007..a4fa7aca 100644 --- a/crates/temps-routes/src/project_change_listener.rs +++ b/crates/temps-routes/src/project_change_listener.rs @@ -10,7 +10,7 @@ use crate::route_table::CachedPeerTable; use anyhow::Result; use std::sync::Arc; -use tracing::{error, info}; +use tracing::{debug, error, info}; /// Listens for project route changes and updates the route cache pub struct ProjectChangeListener { @@ -51,36 +51,71 @@ impl ProjectChangeListener { let queue = self.queue.clone(); let 
handle = tokio::spawn(async move { + let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(10)); + // Don't pile up missed ticks if a reload takes longer than 10s + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + // Skip the first immediate tick (routes already loaded by RouteTableListener) + interval.tick().await; + loop { - match pg_listener.recv().await { - Ok(notification) => { - Self::handle_project_change_static( - &peer_table, - &queue, - notification.payload(), - ) - .await; - } - Err(e) => { - error!("Error receiving project change notification: {}", e); - - // Attempt to reconnect after error - tokio::time::sleep(tokio::time::Duration::from_secs(5)).await; - - match PgListener::connect_with(&pool).await { - Ok(mut new_listener) => { - if let Err(e) = new_listener.listen("project_route_change").await { - error!("Failed to re-subscribe to project_route_change: {}", e); - } else { - pg_listener = new_listener; - info!("Reconnected to project_route_change listener"); - } + tokio::select! 
{ + notification = pg_listener.recv() => { + match notification { + Ok(n) => { + // Handle the specific change payload for logging + Self::handle_project_change_static( + &peer_table, + &queue, + n.payload(), + ) + .await; + // handle_project_change_static already calls load_routes + sends event + continue; } Err(e) => { - error!("Failed to reconnect project_route_change listener: {}", e); + error!("Error receiving project change notification: {}", e); + + // Attempt to reconnect after error + tokio::time::sleep(tokio::time::Duration::from_secs(5)).await; + + match PgListener::connect_with(&pool).await { + Ok(mut new_listener) => { + if let Err(e) = new_listener.listen("project_route_change").await { + error!("Failed to re-subscribe to project_route_change: {}", e); + } else { + pg_listener = new_listener; + info!("Reconnected to project_route_change listener"); + } + } + Err(e) => { + error!("Failed to reconnect project_route_change listener: {}", e); + } + } + // Fall through to periodic reload below } } } + _ = interval.tick() => { + debug!("Periodic project route sync triggered"); + } + } + + // Periodic self-healing reload (timer tick or error recovery) + if let Err(e) = peer_table.load_routes().await { + error!("Failed to reload routes during periodic sync: {}", e); + } else { + let route_count = peer_table.len(); + debug!("Project route table synchronized ({} entries)", route_count); + + let event = + temps_core::Job::RouteTableUpdated(temps_core::RouteTableUpdatedJob { + environment_id: None, + deployment_id: None, + route_count, + }); + if let Err(e) = queue.send(event).await { + error!("Failed to send RouteTableUpdated event: {}", e); + } } } }); @@ -175,20 +210,33 @@ impl Drop for ProjectChangeListener { } /// Unified payload structure for route changes (project or environment) +/// +/// IMPORTANT: `Environment` must be listed before `Project` because with +/// `#[serde(untagged)]`, serde tries variants in order. 
`EnvironmentChangePayload` +/// has a required `environment_id` field that acts as a discriminator. +/// `ProjectChangePayload` uses `#[serde(default)]` on most fields, so it would +/// greedily match environment payloads too if listed first. #[derive(Debug, serde::Deserialize)] #[serde(untagged)] enum RouteChangePayload { - Project(ProjectChangePayload), Environment(EnvironmentChangePayload), + Project(ProjectChangePayload), } /// Payload from project triggers +/// +/// Fields are optional with defaults because the INSERT/DELETE trigger sends a +/// minimal payload (`action`, `project_id`, `field`) that lacks `is_deleted`, +/// `slug`, and `timestamp`. Only UPDATEEs send the full payload. #[derive(Debug, serde::Deserialize)] struct ProjectChangePayload { action: String, // INSERT, UPDATE, or DELETE project_id: i32, + #[serde(default)] is_deleted: bool, + #[serde(default)] slug: String, + #[serde(default)] #[allow(dead_code)] timestamp: String, // Included for debugging/auditing } @@ -263,6 +311,37 @@ mod tests { } } + #[test] + fn test_parse_project_insert_payload_minimal() { + // INSERT/DELETE triggers send minimal payloads without is_deleted, slug, timestamp. + // These must parse successfully with defaults — this was a bug that caused + // route reloads to be silently skipped for new project deployments. 
+ let payload = r#"{"action":"INSERT","project_id":42,"field":"project"}"#; + let change: RouteChangePayload = serde_json::from_str(payload).unwrap(); + match change { + RouteChangePayload::Project(project) => { + assert_eq!(project.project_id, 42); + assert_eq!(project.action, "INSERT"); + assert!(!project.is_deleted); // default + assert_eq!(project.slug, ""); // default + } + _ => panic!("Expected Project payload, got {:?}", change), + } + } + + #[test] + fn test_parse_project_delete_payload_minimal() { + let payload = r#"{"action":"DELETE","project_id":7,"field":"project"}"#; + let change: RouteChangePayload = serde_json::from_str(payload).unwrap(); + match change { + RouteChangePayload::Project(project) => { + assert_eq!(project.project_id, 7); + assert_eq!(project.action, "DELETE"); + } + _ => panic!("Expected Project payload, got {:?}", change), + } + } + // ======================================================================== // ProjectChangeListener lifecycle tests // ======================================================================== diff --git a/crates/temps-routes/src/route_table.rs b/crates/temps-routes/src/route_table.rs index 3a0071ee..b5d2033e 100644 --- a/crates/temps-routes/src/route_table.rs +++ b/crates/temps-routes/src/route_table.rs @@ -112,6 +112,14 @@ pub struct SleepingEnvironmentEntry { pub wake_timeout_seconds: i32, } +/// On-demand config for an awake environment that should be tracked for idle timeout. +#[derive(Clone, Debug)] +pub struct OnDemandConfigEntry { + pub environment_id: i32, + pub idle_timeout_seconds: i32, + pub wake_timeout_seconds: i32, +} + /// A single backend entry: network address plus container metadata for tracking. 
#[derive(Clone, Debug)] pub struct BackendEntry { @@ -261,8 +269,9 @@ impl RouteInfo { /// - `http_wildcards`: Wildcard patterns for HTTP Host header routing /// - `tls_wildcards`: Wildcard patterns for TLS SNI routing /// -/// Callback invoked after each route table reload with the list of sleeping environments. -pub type OnSleepingCallback = Arc) + Send + Sync>; +/// Callback invoked after each route table reload with sleeping environments and on-demand configs. +pub type OnSleepingCallback = + Arc, Vec) + Send + Sync>; pub struct CachedPeerTable { /// Exact hostname -> RouteInfo for HTTP routes (route_type = 'http') @@ -799,15 +808,17 @@ impl CachedPeerTable { .entry(env.id) .or_insert_with(|| Arc::new(env.clone())); - // Fetch deployment if not cached + // Fetch deployment if not cached. + // Accept any state — if current_deployment_id points here, it should be routable. + // The previous "completed" filter caused a race: mark_deployment_complete sets + // current_deployment_id (fires PG NOTIFY) BEFORE setting state="completed", + // so the route table reload would skip the deployment and never confirm. 
if !deployments_cache.contains_key(&deployment_id) { if let Ok(Some(dep)) = deployments::Entity::find_by_id(deployment_id) .one(self.db.as_ref()) .await { - if dep.state == "completed" { - deployments_cache.insert(dep.id, Arc::new(dep)); - } + deployments_cache.insert(dep.id, Arc::new(dep)); } } @@ -957,15 +968,13 @@ impl CachedPeerTable { .or_insert_with(|| Arc::new(env.clone())); if let Some(deployment_id) = env.current_deployment_id { - // Fetch deployment if not cached + // Fetch deployment if not cached (accept any state — same rationale as section 4) if !deployments_cache.contains_key(&deployment_id) { if let Ok(Some(dep)) = deployments::Entity::find_by_id(deployment_id) .one(self.db.as_ref()) .await { - if dep.state == "completed" { - deployments_cache.insert(dep.id, Arc::new(dep)); - } + deployments_cache.insert(dep.id, Arc::new(dep)); } } @@ -1084,13 +1093,36 @@ impl CachedPeerTable { ); } - debug!( + info!( "Route table loaded with {} total entries ({} HTTP exact, {} TLS exact, {} HTTP wildcards, {} TLS wildcards)", route_count, http_routes_count, tls_routes_count, http_wildcards_count, tls_wildcards_count ); - // Notify callback with sleeping environments (for on-demand wake-on-request) + // Collect on-demand configs for awake environments so the idle sweep can track them. 
+ let on_demand_configs: Vec = environments_cache + .values() + .filter(|env| !env.sleeping) + .filter_map(|env| { + let dc = env.deployment_config.as_ref()?; + if dc.on_demand { + Some(OnDemandConfigEntry { + environment_id: env.id, + idle_timeout_seconds: dc.idle_timeout_seconds, + wake_timeout_seconds: dc.wake_timeout_seconds, + }) + } else { + None + } + }) + .collect(); + + debug!( + "Found {} on-demand configs for idle tracking", + on_demand_configs.len() + ); + + // Notify callback with sleeping environments and on-demand configs if let Some(callback) = self.on_sleeping_callback.lock().as_ref() { - callback(sleeping_environments.clone()); + callback(sleeping_environments.clone(), on_demand_configs); } Ok(sleeping_environments) @@ -1110,6 +1142,30 @@ impl CachedPeerTable { pub fn is_empty(&self) -> bool { self.routes.read().is_empty() } + + /// Check if any route in the table points to a specific deployment. + /// + /// Used by `mark_deployment_complete` to verify the proxy's in-memory route + /// table has actually loaded the new deployment — not just that the DB row + /// was written (which would always be true since we just wrote it). 
+ pub fn has_route_for_deployment(&self, deployment_id: i32) -> bool { + let routes = self.routes.read(); + routes.values().any(|route| { + route + .deployment + .as_ref() + .is_some_and(|d| d.id == deployment_id) + }) + } +} + +#[temps_core::async_trait::async_trait] +impl temps_core::route_table::RouteTableRefresher for CachedPeerTable { + async fn refresh_routes(&self) -> Result> { + self.load_routes().await?; + let count = self.len() + self.http_routes.read().len() + self.tls_routes.read().len(); + Ok(count) + } } /// Listens for PostgreSQL notifications and automatically reloads the route table @@ -1154,63 +1210,76 @@ impl RouteTableListener { "Started listening for route table changes on PostgreSQL channel 'route_table_changes'" ); - // Spawn background task to handle notifications + // Spawn background task that combines: + // 1. PG NOTIFY events for immediate reloads (low latency) + // 2. Periodic sync every 10 seconds as self-healing fallback + // + // This ensures the route table stays in sync even if NOTIFY events + // are lost (connection drops, reconnect windows, parse failures). let peer_table = self.peer_table.clone(); let queue = self.queue.clone(); let handle = tokio::spawn(async move { - loop { - match listener.recv().await { - Ok(notification) => { - debug!( - "Received route table change notification: {}", - notification.payload() - ); - - debug!("Route table synchronizing..."); + let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(10)); + // Don't pile up missed ticks if a reload takes longer than 10s + interval.set_missed_tick_behavior(tokio::time::MissedTickBehavior::Delay); + // Skip the first immediate tick (we already loaded above) + interval.tick().await; - if let Err(e) = peer_table.load_routes().await { - error!("Failed to reload routes: {}", e); - } else { - let route_count = peer_table.len(); - debug!("Route table synchronized ({} entries)", route_count); - - // Notify via queue that routes have been reloaded. 
- // This channel handles generic route_table_changes - // (domains, custom_routes, etc.) so we don't have - // environment/deployment context — those fields are None. - let event = temps_core::Job::RouteTableUpdated( - temps_core::RouteTableUpdatedJob { - environment_id: None, - deployment_id: None, - route_count, - }, - ); - if let Err(e) = queue.send(event).await { - error!("Failed to send RouteTableUpdated event: {}", e); - } - } - } - Err(e) => { - error!("Listener error: {}", e); - - // Attempt to reconnect after error - tokio::time::sleep(tokio::time::Duration::from_secs(5)).await; - - match PgListener::connect_with(&pool).await { - Ok(mut new_listener) => { - if let Err(e) = new_listener.listen("route_table_changes").await { - error!("Failed to re-subscribe to notifications: {}", e); - } else { - listener = new_listener; - info!("Reconnected to route table notification listener"); - } + loop { + tokio::select! { + notification = listener.recv() => { + match notification { + Ok(n) => { + debug!( + "Received route table change notification: {}", + n.payload() + ); } Err(e) => { - error!("Failed to reconnect listener: {}", e); - warn!("Route table updates will not be received until reconnection succeeds"); + error!("Listener error: {}", e); + + // Attempt to reconnect after error + tokio::time::sleep(tokio::time::Duration::from_secs(5)).await; + + match PgListener::connect_with(&pool).await { + Ok(mut new_listener) => { + if let Err(e) = new_listener.listen("route_table_changes").await { + error!("Failed to re-subscribe to notifications: {}", e); + } else { + listener = new_listener; + info!("Reconnected to route table notification listener"); + } + } + Err(e) => { + error!("Failed to reconnect listener: {}", e); + warn!("Route table updates will not be received until reconnection succeeds"); + } + } + // Still fall through to reload routes below } } } + _ = interval.tick() => { + debug!("Periodic route table sync triggered"); + } + } + + // Reload routes 
regardless of whether triggered by NOTIFY or timer + if let Err(e) = peer_table.load_routes().await { + error!("Failed to reload routes: {}", e); + } else { + let route_count = peer_table.len(); + debug!("Route table synchronized ({} entries)", route_count); + + let event = + temps_core::Job::RouteTableUpdated(temps_core::RouteTableUpdatedJob { + environment_id: None, + deployment_id: None, + route_count, + }); + if let Err(e) = queue.send(event).await { + error!("Failed to send RouteTableUpdated event: {}", e); + } } } }); diff --git a/crates/temps-static-files/src/handler.rs b/crates/temps-static-files/src/handler.rs index ac694202..fc05e2b6 100644 --- a/crates/temps-static-files/src/handler.rs +++ b/crates/temps-static-files/src/handler.rs @@ -65,15 +65,15 @@ async fn get_file( ) } Err(e) if e.kind() == std::io::ErrorKind::NotFound => { - error!("File not found: {}", file_path); + debug!("File not found: {}", file_path); ( StatusCode::NOT_FOUND, [(header::CONTENT_TYPE, "text/plain")], - format!("File not found: {}", file_path).into_bytes(), + b"Not found".to_vec(), ) } Err(e) if e.kind() == std::io::ErrorKind::PermissionDenied => { - error!("Access denied: {}", file_path); + error!("Access denied for file: {}", file_path); ( StatusCode::FORBIDDEN, [(header::CONTENT_TYPE, "text/plain")], @@ -85,7 +85,7 @@ async fn get_file( ( StatusCode::INTERNAL_SERVER_ERROR, [(header::CONTENT_TYPE, "text/plain")], - format!("Error reading file: {}", e).into_bytes(), + b"Internal server error".to_vec(), ) } } diff --git a/crates/temps-status-page/src/routes/status_page.rs b/crates/temps-status-page/src/routes/status_page.rs index 91cabde3..3e433a0e 100644 --- a/crates/temps-status-page/src/routes/status_page.rs +++ b/crates/temps-status-page/src/routes/status_page.rs @@ -16,9 +16,9 @@ use utoipa::OpenApi; use crate::services::{ CreateIncidentRequest, CreateMonitorRequest, CurrentStatusResponse, IncidentBucketedResponse, - IncidentResponse, IncidentUpdateResponse, 
MonitorResponse, StatusBucketedResponse, - StatusPageError, StatusPageOverview, StatusPageService, UpdateIncidentStatusRequest, - UptimeHistoryResponse, + IncidentResponse, IncidentUpdateResponse, MonitorResponse, ProjectMonitorHealth, + StatusBucketedResponse, StatusPageError, StatusPageOverview, StatusPageService, + UpdateIncidentStatusRequest, UptimeHistoryResponse, }; /// Application state trait for status page routes @@ -44,6 +44,7 @@ pub trait StatusPageAppState: Send + Sync + 'static { update_incident_status, get_incident_updates, get_bucketed_incidents, + get_projects_monitor_health, ), components( schemas( @@ -58,6 +59,8 @@ pub trait StatusPageAppState: Send + Sync + 'static { UpdateIncidentStatusRequest, IncidentUpdateResponse, IncidentBucketedResponse, + ProjectMonitorHealth, + ProjectsMonitorHealthResponse, ) ), tags( @@ -656,6 +659,76 @@ where .map_err(map_error) } +/// Query parameters for batch project health +#[derive(Deserialize, utoipa::IntoParams)] +pub struct ProjectsHealthQuery { + /// Comma-separated list of project IDs + pub project_ids: String, +} + +/// Batch response for projects health +#[derive(serde::Serialize, utoipa::ToSchema)] +pub struct ProjectsMonitorHealthResponse { + pub projects: std::collections::HashMap, +} + +/// Get monitor-based health summaries for multiple projects in a single query +#[utoipa::path( + get, + path = "/monitors-health/projects", + params(ProjectsHealthQuery), + responses( + (status = 200, description = "Health summaries per project", body = ProjectsMonitorHealthResponse), + (status = 400, description = "Invalid parameters"), + (status = 401, description = "Unauthorized"), + (status = 500, description = "Internal server error"), + ), + tag = "Status Page", + security(("bearer_auth" = [])) +)] +pub async fn get_projects_monitor_health( + RequireAuth(auth): RequireAuth, + State(app_state): State>, + Query(query): Query, +) -> Result +where + T: StatusPageAppState, +{ + permission_guard!(auth, StatusPageRead); 
+ + let project_ids: Vec = query + .project_ids + .split(',') + .filter_map(|s| s.trim().parse().ok()) + .collect(); + + if project_ids.is_empty() { + return Err(bad_request() + .detail("project_ids must contain at least one valid ID") + .build()); + } + + if project_ids.len() > 100 { + return Err(bad_request() + .detail("Maximum 100 project IDs allowed") + .build()); + } + + let summaries = app_state + .status_page_service() + .monitor_service() + .get_projects_monitor_health(&project_ids) + .await + .map_err(map_error)?; + + let projects: std::collections::HashMap = summaries + .into_iter() + .map(|s| (s.project_id.to_string(), s)) + .collect(); + + Ok(Json(ProjectsMonitorHealthResponse { projects })) +} + /// Create router for status page endpoints pub fn create_router() -> Router> where @@ -665,6 +738,10 @@ where .route("/projects/{project_id}/status", get(get_status_overview)) .route("/projects/{project_id}/monitors", post(create_monitor)) .route("/projects/{project_id}/monitors", get(list_monitors)) + .route( + "/monitors-health/projects", + get(get_projects_monitor_health), + ) .route("/monitors/{monitor_id}", get(get_monitor)) .route("/monitors/{monitor_id}", delete(delete_monitor)) .route( diff --git a/crates/temps-status-page/src/services/health_check_service.rs b/crates/temps-status-page/src/services/health_check_service.rs index eaa99d59..043ffc02 100644 --- a/crates/temps-status-page/src/services/health_check_service.rs +++ b/crates/temps-status-page/src/services/health_check_service.rs @@ -45,19 +45,31 @@ impl HealthCheckService { pub async fn run_all_checks(&self) -> Result<(), StatusPageError> { debug!("Starting health check cycle"); - // Get all active monitors - let monitors = status_monitors::Entity::find() + // Single query: join monitors with environments to skip on-demand ones. + // Health checks go through the proxy, which resets the idle timer and + // would prevent scale-to-zero from ever triggering. 
+ let monitors_with_envs = status_monitors::Entity::find() .filter(status_monitors::Column::IsActive.eq(true)) + .find_also_related(environments::Entity) .all(self.db.as_ref()) .await?; - debug!("Found {} active monitors to check", monitors.len()); + let total_monitors = monitors_with_envs.len(); + debug!("Found {} active monitors to check", total_monitors); + + let filtered_monitors: Vec<_> = Self::filter_on_demand_monitors(monitors_with_envs); + + debug!( + "Running checks for {} monitors ({} skipped as on-demand)", + filtered_monitors.len(), + total_monitors - filtered_monitors.len() + ); // Run checks concurrently with a limit let semaphore = Arc::new(tokio::sync::Semaphore::new(10)); // Limit concurrent checks let mut tasks = Vec::new(); - for monitor in monitors { + for monitor in filtered_monitors { let db = self.db.clone(); let http_client = self.http_client.clone(); let config_service = self.config_service.clone(); @@ -548,6 +560,35 @@ impl HealthCheckService { } } + /// Filter out monitors whose environment has on_demand enabled. + /// Health checks go through the proxy and reset the idle timer, + /// which would prevent scale-to-zero from ever triggering. 
+ fn filter_on_demand_monitors( + monitors_with_envs: Vec<(status_monitors::Model, Option)>, + ) -> Vec { + monitors_with_envs + .into_iter() + .filter(|(monitor, env)| { + if let Some(env) = env { + let is_on_demand = env + .deployment_config + .as_ref() + .map(|dc| dc.on_demand) + .unwrap_or(false); + if is_on_demand { + debug!( + "Skipping monitor {} for on-demand environment {} ({})", + monitor.id, env.id, env.name + ); + return false; + } + } + true + }) + .map(|(monitor, _)| monitor) + .collect() + } + /// Check a specific environment using its deployment URL pub async fn check_environment( &self, @@ -616,3 +657,135 @@ impl HealthCheckService { } } } + +#[cfg(test)] +mod tests { + use super::*; + use temps_entities::deployment_config::DeploymentConfig; + use temps_entities::upstream_config::UpstreamList; + + fn make_monitor(id: i32, env_id: Option) -> status_monitors::Model { + status_monitors::Model { + id, + project_id: 1, + environment_id: env_id, + name: format!("monitor-{}", id), + monitor_type: "web".to_string(), + check_interval_seconds: 60, + is_active: true, + created_at: Utc::now(), + updated_at: Utc::now(), + } + } + + fn make_env(id: i32, on_demand: bool) -> environments::Model { + let deployment_config = if on_demand { + Some(DeploymentConfig { + on_demand: true, + idle_timeout_seconds: 60, + ..Default::default() + }) + } else { + Some(DeploymentConfig::default()) + }; + + environments::Model { + id, + name: format!("env-{}", id), + slug: format!("env-{}", id), + subdomain: format!("proj-env-{}", id), + last_deployment: None, + host: String::new(), + upstreams: UpstreamList::default(), + created_at: Utc::now(), + updated_at: Utc::now(), + project_id: 1, + current_deployment_id: Some(1), + branch: None, + deleted_at: None, + deployment_config, + is_preview: false, + protected: false, + sleeping: false, + last_activity_at: None, + } + } + + #[test] + fn test_filter_skips_on_demand_monitors() { + let input = vec![ + (make_monitor(1, Some(10)), 
Some(make_env(10, true))), + (make_monitor(2, Some(20)), Some(make_env(20, false))), + ]; + + let result = HealthCheckService::filter_on_demand_monitors(input); + + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, 2); + } + + #[test] + fn test_filter_keeps_monitors_without_environment() { + let input = vec![(make_monitor(1, None), None)]; + + let result = HealthCheckService::filter_on_demand_monitors(input); + + assert_eq!(result.len(), 1); + assert_eq!(result[0].id, 1); + } + + #[test] + fn test_filter_keeps_normal_monitors() { + let input = vec![ + (make_monitor(1, Some(10)), Some(make_env(10, false))), + (make_monitor(2, Some(20)), Some(make_env(20, false))), + (make_monitor(3, Some(30)), Some(make_env(30, false))), + ]; + + let result = HealthCheckService::filter_on_demand_monitors(input); + + assert_eq!(result.len(), 3); + } + + #[test] + fn test_filter_skips_all_on_demand() { + let input = vec![ + (make_monitor(1, Some(10)), Some(make_env(10, true))), + (make_monitor(2, Some(20)), Some(make_env(20, true))), + ]; + + let result = HealthCheckService::filter_on_demand_monitors(input); + + assert!(result.is_empty()); + } + + #[test] + fn test_filter_keeps_monitor_with_no_deployment_config() { + let mut env = make_env(10, false); + env.deployment_config = None; + + let input = vec![(make_monitor(1, Some(10)), Some(env))]; + + let result = HealthCheckService::filter_on_demand_monitors(input); + + assert_eq!(result.len(), 1); + } + + #[test] + fn test_filter_mixed_on_demand_and_normal() { + let input = vec![ + (make_monitor(1, Some(10)), Some(make_env(10, true))), // on-demand -> skip + (make_monitor(2, Some(20)), Some(make_env(20, false))), // normal -> keep + (make_monitor(3, None), None), // no env -> keep + (make_monitor(4, Some(40)), Some(make_env(40, true))), // on-demand -> skip + (make_monitor(5, Some(50)), Some(make_env(50, false))), // normal -> keep + ]; + + let result = HealthCheckService::filter_on_demand_monitors(input); + + 
assert_eq!(result.len(), 3); + assert_eq!(result[0].id, 2); + assert_eq!(result[1].id, 3); + assert_eq!(result[2].id, 5); + } +} diff --git a/crates/temps-status-page/src/services/monitor_service.rs b/crates/temps-status-page/src/services/monitor_service.rs index d96e3409..fe008f6e 100644 --- a/crates/temps-status-page/src/services/monitor_service.rs +++ b/crates/temps-status-page/src/services/monitor_service.rs @@ -299,6 +299,108 @@ impl MonitorService { Ok(responses) } + /// Get production health for multiple projects in one query. + /// Checks only monitors linked to the "production" environment. + /// Returns a simple status per project: operational, degraded, down, or no_monitors. + pub async fn get_projects_monitor_health( + &self, + project_ids: &[i32], + ) -> Result<Vec<super::types::ProjectMonitorHealth>, StatusPageError> { + use sea_orm::FromQueryResult; + + if project_ids.is_empty() { + return Ok(vec![]); + } + + let placeholders: Vec<String> = project_ids + .iter() + .enumerate() + .map(|(i, _)| format!("${}", i + 1)) + .collect(); + let placeholders_str = placeholders.join(", "); + + #[derive(FromQueryResult)] + struct ProjectHealth { + project_id: i32, + monitor_count: i64, + operational_count: i64, + } + + // Single query: for each project, get latest check status of production monitors. + // Inlined subquery (no CTE) so Postgres can push predicates down. + // LATERAL + LIMIT 1 uses idx_status_checks_monitor_time for index-only lookups.
+ let sql = format!( + r#" + SELECT + sm.project_id, + COUNT(sm.id) as monitor_count, + COUNT(lc.status) FILTER (WHERE lc.status = 'operational') as operational_count + FROM status_monitors sm + JOIN environments e ON e.id = sm.environment_id + AND e.name = 'production' + AND e.deleted_at IS NULL + LEFT JOIN LATERAL ( + SELECT sc.status + FROM status_checks sc + WHERE sc.monitor_id = sm.id + ORDER BY sc.checked_at DESC + LIMIT 1 + ) lc ON true + WHERE sm.project_id IN ({placeholders}) + AND sm.is_active = true + GROUP BY sm.project_id + "#, + placeholders = placeholders_str + ); + + let db_backend = sea_orm::DatabaseBackend::Postgres; + let values: Vec<sea_orm::Value> = project_ids.iter().map(|&id| id.into()).collect(); + let stmt = sea_orm::Statement::from_sql_and_values(db_backend, &sql, values); + + let results = status_monitors::Entity::find() + .from_raw_sql(stmt) + .into_model::<ProjectHealth>() + .all(self.db.as_ref()) + .await?; + + let mut health_map: std::collections::HashMap<i32, super::types::ProjectMonitorHealth> = + std::collections::HashMap::new(); + + for r in results { + let status = if r.monitor_count == 0 { + "no_monitors".to_string() + } else if r.operational_count == r.monitor_count { + "operational".to_string() + } else if r.operational_count == 0 { + "down".to_string() + } else { + "degraded".to_string() + }; + + health_map.insert( + r.project_id, + super::types::ProjectMonitorHealth { + project_id: r.project_id, + status, + }, + ); + } + + let result: Vec<super::types::ProjectMonitorHealth> = project_ids + .iter() + .map(|&id| { + health_map + .remove(&id) + .unwrap_or(super::types::ProjectMonitorHealth { + project_id: id, + status: "no_monitors".to_string(), + }) + }) + .collect(); + + Ok(result) + } + /// Update monitor active status pub async fn update_monitor_status( &self, diff --git a/crates/temps-status-page/src/services/types.rs b/crates/temps-status-page/src/services/types.rs index 9d4f404d..9882a0f8 100644 --- a/crates/temps-status-page/src/services/types.rs +++ b/crates/temps-status-page/src/services/types.rs @@ -141,6 +141,14 @@ pub
struct CurrentStatusResponse { pub last_check_at: Option, } +/// Health summary for a single project based on its production monitors +#[derive(Debug, Serialize, Deserialize, ToSchema)] +pub struct ProjectMonitorHealth { + pub project_id: i32, + /// Overall status: "operational", "degraded", "down", or "no_monitors" + pub status: String, +} + // Time-bucketed aggregated data #[derive(Debug, Serialize, Deserialize, ToSchema)] pub struct StatusBucketedResponse { diff --git a/crates/temps-wireguard/Cargo.toml b/crates/temps-wireguard/Cargo.toml index ae0b57f0..59b4b65b 100644 --- a/crates/temps-wireguard/Cargo.toml +++ b/crates/temps-wireguard/Cargo.toml @@ -16,3 +16,7 @@ serde_json = { workspace = true } rand = { workspace = true } base64 = { workspace = true } temps-core = { path = "../temps-core" } + +# Embedded WireGuard (no external wireguard-tools dependency) +defguard_wireguard_rs = "0.9" +x25519-dalek = { version = "2.0", features = ["static_secrets"] } diff --git a/crates/temps-wireguard/src/lib.rs b/crates/temps-wireguard/src/lib.rs index ca48d7e2..f2e21023 100644 --- a/crates/temps-wireguard/src/lib.rs +++ b/crates/temps-wireguard/src/lib.rs @@ -1,20 +1,21 @@ //! WireGuard mesh networking for Temps multi-node deployments. //! -//! Wraps the `wg` and `ip` CLI commands to manage WireGuard interfaces -//! and peer connections. WireGuard is in-kernel on Linux 5.6+, so no -//! additional installation is required on modern Linux systems. +//! Uses `defguard_wireguard_rs` for embedded userspace WireGuard — no external +//! `wireguard-tools` package or kernel module required. The WireGuard protocol +//! runs in-process via boringtun (Cloudflare's Rust implementation). 
+use base64::{engine::general_purpose::STANDARD as BASE64, Engine}; use serde::{Deserialize, Serialize}; use std::net::Ipv4Addr; use thiserror::Error; #[derive(Error, Debug)] pub enum WireGuardError { - #[error("WireGuard command failed: {command} — {reason}")] - CommandFailed { command: String, reason: String }, + #[error("WireGuard operation failed: {operation} — {reason}")] + OperationFailed { operation: String, reason: String }, - #[error("WireGuard not available on this system: {0}")] - NotAvailable(String), + #[error("WireGuard interface error: {0}")] + InterfaceError(String), #[error("No available IP addresses in subnet {subnet}")] SubnetExhausted { subnet: String }, @@ -22,7 +23,7 @@ pub enum WireGuardError { #[error("Invalid configuration: {0}")] InvalidConfig(String), - #[error("IO error running WireGuard command: {0}")] + #[error("IO error: {0}")] Io(#[from] std::io::Error), #[error("Interface {interface} already exists")] @@ -51,6 +52,9 @@ pub struct WireGuardKeypair { } /// Manages a WireGuard interface for the Temps mesh network. +/// +/// Uses embedded userspace WireGuard via defguard/boringtun — no external +/// `wg` or `ip` CLI tools required. #[derive(Debug)] pub struct WireGuardManager { /// Interface name, e.g. "wg0" @@ -108,56 +112,22 @@ impl WireGuardManager { Self::new("wg0", &subnet, port) } - /// Check if WireGuard CLI tools are available on this system. + /// Check if WireGuard is available. + /// + /// With embedded userspace WireGuard this always succeeds — no external tools needed. 
pub async fn check_available(&self) -> Result<(), WireGuardError> { - let output = tokio::process::Command::new("wg") - .arg("--version") - .output() - .await - .map_err(|e| WireGuardError::NotAvailable(format!("Failed to run 'wg': {}", e)))?; - - if !output.status.success() { - return Err(WireGuardError::NotAvailable( - "wg command returned non-zero exit code".into(), - )); - } - Ok(()) } - /// Generate a new WireGuard keypair using `wg genkey` and `wg pubkey`. + /// Generate a new WireGuard keypair using pure Rust cryptography. + /// + /// Uses x25519-dalek for Curve25519 key generation — no `wg genkey` needed. pub async fn generate_keypair(&self) -> Result<WireGuardKeypair, WireGuardError> { - let genkey_output = tokio::process::Command::new("wg") - .arg("genkey") - .output() - .await?; - - if !genkey_output.status.success() { - return Err(WireGuardError::CommandFailed { - command: "wg genkey".into(), - reason: String::from_utf8_lossy(&genkey_output.stderr).to_string(), - }); - } - - let private_key = String::from_utf8_lossy(&genkey_output.stdout) - .trim() - .to_string(); - - // Pipe private key through wg pubkey - let child = tokio::process::Command::new("sh") - .arg("-c") - .arg(format!("echo '{}' | wg pubkey", private_key)) - .output() - .await?; - - if !child.status.success() { - return Err(WireGuardError::CommandFailed { - command: "wg pubkey".into(), - reason: String::from_utf8_lossy(&child.stderr).to_string(), - }); - } + let secret = x25519_dalek::StaticSecret::random_from_rng(rand::rngs::OsRng); + let public = x25519_dalek::PublicKey::from(&secret); - let public_key = String::from_utf8_lossy(&child.stdout).trim().to_string(); + let private_key = BASE64.encode(secret.as_bytes()); + let public_key = BASE64.encode(public.as_bytes()); Ok(WireGuardKeypair { private_key, @@ -167,58 +137,61 @@ /// Initialize the WireGuard interface with the given IP address and private key. /// - /// Creates the interface, assigns the IP, sets the private key, and brings it up.
+ /// Creates a userspace WireGuard interface via defguard/boringtun. + /// No external `wg` or `ip` CLI tools are needed. pub async fn init_interface( &self, ip: Ipv4Addr, private_key: &str, ) -> Result<(), WireGuardError> { - // Create the WireGuard interface - self.run_command( - "ip", - &["link", "add", "dev", &self.interface, "type", "wireguard"], - ) - .await?; - - // Assign IP address - let addr = format!("{}/{}", ip, self.subnet_mask); - self.run_command("ip", &["address", "add", "dev", &self.interface, &addr]) - .await?; - - // Write private key to a temporary file and configure - let key_path = format!("/tmp/temps-wg-{}.key", self.interface); - tokio::fs::write(&key_path, private_key).await?; - - // Set permissions - self.run_command("chmod", &["600", &key_path]).await?; - - // Configure WireGuard with private key and listen port - let port_str = self.listen_port.to_string(); - self.run_command( - "wg", - &[ - "set", - &self.interface, - "listen-port", - &port_str, - "private-key", - &key_path, - ], - ) - .await?; - - // Clean up key file - let _ = tokio::fs::remove_file(&key_path).await; - - // Bring the interface up - self.run_command("ip", &["link", "set", "up", "dev", &self.interface]) - .await?; + use defguard_wireguard_rs::{ + InterfaceConfiguration, Userspace, WGApi, WireguardInterfaceApi, + }; + use std::str::FromStr; + + let mut wgapi = WGApi::<Userspace>::new(self.interface.clone()).map_err(|e| { + WireGuardError::InterfaceError(format!( + "Failed to create WireGuard API for {}: {}", + self.interface, e + )) + })?; + + // Create the userspace WireGuard interface (TUN device via boringtun) + wgapi.create_interface().map_err(|e| { + WireGuardError::InterfaceError(format!( + "Failed to create interface {}: {}", + self.interface, e + )) + })?; + + // Configure the interface with private key, port, and address + let addr_str = format!("{}/{}", ip, self.subnet_mask); + let address = defguard_wireguard_rs::net::IpAddrMask::from_str(&addr_str).map_err(|e| { + 
WireGuardError::InvalidConfig(format!("Invalid address {}: {}", addr_str, e)) + })?; + + let config = InterfaceConfiguration { + name: self.interface.clone(), + prvkey: private_key.to_string(), + addresses: vec![address], + port: self.listen_port, + peers: Vec::new(), + mtu: None, + fwmark: None, + }; + + wgapi.configure_interface(&config).map_err(|e| { + WireGuardError::InterfaceError(format!( + "Failed to configure interface {}: {}", + self.interface, e + )) + })?; tracing::info!( interface = %self.interface, ip = %ip, port = %self.listen_port, - "WireGuard interface initialized" + "WireGuard interface initialized (embedded userspace)" ); Ok(()) @@ -226,26 +199,53 @@ impl WireGuardManager { /// Add a peer to the WireGuard interface. pub async fn add_peer(&self, peer: &WireGuardPeer) -> Result<(), WireGuardError> { - let mut args = vec![ - "set", - &self.interface, - "peer", - &peer.public_key, - "allowed-ips", - &peer.allowed_ips, - ]; - - // Only set endpoint if provided (peer may be behind NAT) + use defguard_wireguard_rs::{Userspace, WGApi, WireguardInterfaceApi}; + + let wgapi = WGApi::<Userspace>::new(self.interface.clone()).map_err(|e| { + WireGuardError::InterfaceError(format!( + "Failed to create WireGuard API for {}: {}", + self.interface, e + )) + })?; + + // Parse the base64 public key into a Key + let key: defguard_wireguard_rs::key::Key = + peer.public_key.as_str().try_into().map_err(|e| { + WireGuardError::InvalidConfig(format!( + "Invalid peer public key '{}': {:?}", + peer.public_key, e + )) + })?; + + let mut wg_peer = defguard_wireguard_rs::peer::Peer::new(key); + + // Parse endpoint if !peer.endpoint.is_empty() { - args.push("endpoint"); - args.push(&peer.endpoint); + wg_peer.set_endpoint(&peer.endpoint).map_err(|e| { + WireGuardError::InvalidConfig(format!( + "Invalid peer endpoint '{}': {}", + peer.endpoint, e + )) + })?; + } + + // Parse allowed IPs + if let Ok(addr_mask) = peer + .allowed_ips + .parse::<defguard_wireguard_rs::net::IpAddrMask>() + { + wg_peer.allowed_ips.push(addr_mask); }
// Enable persistent keepalive for NAT traversal - args.push("persistent-keepalive"); - args.push("25"); + wg_peer.persistent_keepalive_interval = Some(25); - self.run_command("wg", &args).await?; + wgapi + .configure_peer(&wg_peer) + .map_err(|e| WireGuardError::OperationFailed { + operation: format!("add peer {}", peer.public_key), + reason: format!("{}", e), + })?; tracing::info!( interface = %self.interface, @@ -260,11 +260,25 @@ impl WireGuardManager { /// Remove a peer from the WireGuard interface. pub async fn remove_peer(&self, public_key: &str) -> Result<(), WireGuardError> { - self.run_command( - "wg", - &["set", &self.interface, "peer", public_key, "remove"], - ) - .await?; + use defguard_wireguard_rs::{Userspace, WGApi, WireguardInterfaceApi}; + + let wgapi = WGApi::<Userspace>::new(self.interface.clone()).map_err(|e| { + WireGuardError::InterfaceError(format!( + "Failed to create WireGuard API for {}: {}", + self.interface, e + )) + })?; + + let key: defguard_wireguard_rs::key::Key = public_key.try_into().map_err(|e| { + WireGuardError::InvalidConfig(format!("Invalid public key '{}': {:?}", public_key, e)) + })?; + + wgapi + .remove_peer(&key) + .map_err(|e| WireGuardError::OperationFailed { + operation: format!("remove peer {}", public_key), + reason: format!("{}", e), + })?; tracing::info!( interface = %self.interface, @@ -277,8 +291,21 @@ impl WireGuardManager { /// Tear down the WireGuard interface.
pub async fn destroy_interface(&self) -> Result<(), WireGuardError> { - self.run_command("ip", &["link", "del", "dev", &self.interface]) - .await?; + use defguard_wireguard_rs::{Userspace, WGApi, WireguardInterfaceApi}; + + let wgapi = WGApi::<Userspace>::new(self.interface.clone()).map_err(|e| { + WireGuardError::InterfaceError(format!( + "Failed to create WireGuard API for {}: {}", + self.interface, e + )) + })?; + + wgapi.remove_interface().map_err(|e| { + WireGuardError::InterfaceError(format!( + "Failed to remove interface {}: {}", + self.interface, e + )) + })?; tracing::info!( interface = %self.interface, @@ -321,24 +348,6 @@ impl WireGuardManager { pub fn listen_port(&self) -> u16 { self.listen_port } - - /// Run a system command and return an error if it fails. - async fn run_command(&self, program: &str, args: &[&str]) -> Result<(), WireGuardError> { - let output = tokio::process::Command::new(program) - .args(args) - .output() - .await?; - - if !output.status.success() { - let stderr = String::from_utf8_lossy(&output.stderr).to_string(); - return Err(WireGuardError::CommandFailed { - command: format!("{} {}", program, args.join(" ")), - reason: stderr, - }); - } - - Ok(()) - } } #[cfg(test)] @@ -404,4 +413,43 @@ mod tests { assert_eq!(deserialized.endpoint, peer.endpoint); assert_eq!(deserialized.allowed_ips, peer.allowed_ips); } + + #[tokio::test] + async fn test_generate_keypair_produces_valid_keys() { + let manager = WireGuardManager::new("wg0", "10.100.0.0/24", 51820).unwrap(); + let keypair = manager.generate_keypair().await.unwrap(); + + // Keys should be valid base64 + let private_bytes = BASE64.decode(&keypair.private_key).unwrap(); + let public_bytes = BASE64.decode(&keypair.public_key).unwrap(); + + // Keys should be 32 bytes (Curve25519) + assert_eq!(private_bytes.len(), 32); + assert_eq!(public_bytes.len(), 32); + + // Public key should derive from private key + let secret = x25519_dalek::StaticSecret::from( + <[u8;
32]>::try_from(private_bytes.as_slice()).unwrap(), + ); + let expected_public = x25519_dalek::PublicKey::from(&secret); + assert_eq!(public_bytes, expected_public.as_bytes()); + } + + #[tokio::test] + async fn test_generate_keypair_unique() { + let manager = WireGuardManager::new("wg0", "10.100.0.0/24", 51820).unwrap(); + let kp1 = manager.generate_keypair().await.unwrap(); + let kp2 = manager.generate_keypair().await.unwrap(); + + // Two keypairs should be different + assert_ne!(kp1.private_key, kp2.private_key); + assert_ne!(kp1.public_key, kp2.public_key); + } + + #[tokio::test] + async fn test_check_available_always_succeeds() { + let manager = WireGuardManager::new("wg0", "10.100.0.0/24", 51820).unwrap(); + // Embedded WireGuard is always available + assert!(manager.check_available().await.is_ok()); + } } diff --git a/examples/node/vercel-ai-tracing/bun.lock b/examples/node/vercel-ai-tracing/bun.lock index 8b036d17..8b1f0577 100644 --- a/examples/node/vercel-ai-tracing/bun.lock +++ b/examples/node/vercel-ai-tracing/bun.lock @@ -8,29 +8,28 @@ "@ai-sdk/anthropic": "^1.2.0", "@ai-sdk/openai": "^1.3.0", "@opentelemetry/api": "^1.9.0", - "@opentelemetry/exporter-trace-otlp-http": "^0.57.0", + "@opentelemetry/exporter-trace-otlp-proto": "^0.57.0", "@opentelemetry/resources": "^1.30.0", "@opentelemetry/sdk-node": "^0.57.0", "@opentelemetry/sdk-trace-node": "^1.30.0", "@opentelemetry/semantic-conventions": "^1.30.0", - "ai": "^4.3.0", + "ai": "^5.0.52", "tsx": "^4.19.0", + "zod": "^3.25.76", }, }, }, "packages": { "@ai-sdk/anthropic": ["@ai-sdk/anthropic@1.2.12", "", { "dependencies": { "@ai-sdk/provider": "1.1.3", "@ai-sdk/provider-utils": "2.2.8" }, "peerDependencies": { "zod": "^3.0.0" } }, "sha512-YSzjlko7JvuiyQFmI9RN1tNZdEiZxc+6xld/0tq/VkJaHpEzGAb1yiNxxvmYVcjvfu/PcvCxAAYXmTYQQ63IHQ=="], + "@ai-sdk/gateway": ["@ai-sdk/gateway@2.0.58", "", { "dependencies": { "@ai-sdk/provider": "2.0.1", "@ai-sdk/provider-utils": "3.0.22", "@vercel/oidc": "3.1.0" }, 
"peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-HLPfF5k/rUre8fBMSJnJP6UL7ONx+W/BksTGdand0/Jkq5nCVlwmoUC4URc8PtZ+UtnMP6waQlD4b9/rPOHk7A=="], + "@ai-sdk/openai": ["@ai-sdk/openai@1.3.24", "", { "dependencies": { "@ai-sdk/provider": "1.1.3", "@ai-sdk/provider-utils": "2.2.8" }, "peerDependencies": { "zod": "^3.0.0" } }, "sha512-GYXnGJTHRTZc4gJMSmFRgEQudjqd4PUN0ZjQhPwOAYH1yOAvQoG/Ikqs+HyISRbLPCrhbZnPKCNHuRU4OfpW0Q=="], "@ai-sdk/provider": ["@ai-sdk/provider@1.1.3", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-qZMxYJ0qqX/RfnuIaab+zp8UAeJn/ygXXAffR5I4N0n1IrvA6qBsjc8hXLmBiMV2zoXlifkacF7sEFnYnjBcqg=="], "@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@2.2.8", "", { "dependencies": { "@ai-sdk/provider": "1.1.3", "nanoid": "^3.3.8", "secure-json-parse": "^2.7.0" }, "peerDependencies": { "zod": "^3.23.8" } }, "sha512-fqhG+4sCVv8x7nFzYnFo19ryhAa3w096Kmc3hWxMQfW/TubPOmt3A6tYZhl4mUfQWWQMsuSkLrtjlWuXBVSGQA=="], - "@ai-sdk/react": ["@ai-sdk/react@1.2.12", "", { "dependencies": { "@ai-sdk/provider-utils": "2.2.8", "@ai-sdk/ui-utils": "1.2.11", "swr": "^2.2.5", "throttleit": "2.1.0" }, "peerDependencies": { "react": "^18 || ^19 || ^19.0.0-rc", "zod": "^3.23.8" }, "optionalPeers": ["zod"] }, "sha512-jK1IZZ22evPZoQW3vlkZ7wvjYGYF+tRBKXtrcolduIkQ/m/sOAVcVeVDUDvh1T91xCnWCdUGCPZg2avZ90mv3g=="], - - "@ai-sdk/ui-utils": ["@ai-sdk/ui-utils@1.2.11", "", { "dependencies": { "@ai-sdk/provider": "1.1.3", "@ai-sdk/provider-utils": "2.2.8", "zod-to-json-schema": "^3.24.1" }, "peerDependencies": { "zod": "^3.23.8" } }, "sha512-3zcwCc8ezzFlwp3ZD15wAPjf2Au4s3vAbKsXQVyhxODHcmu0iyPO2Eua6D/vicq/AUm/BAo60r97O6HU+EI0+w=="], - "@esbuild/aix-ppc64": ["@esbuild/aix-ppc64@0.27.3", "", { "os": "aix", "cpu": "ppc64" }, "sha512-9fJMTNFTWZMh5qwrBItuziu834eOCUcEqymSH7pY+zoMVEZg3gcPuBNxH1EvfVYe9h0x/Ptw8KBzv7qxb7l8dg=="], "@esbuild/android-arm": ["@esbuild/android-arm@0.27.3", "", { "os": "android", "cpu": "arm" }, 
"sha512-i5D1hPY7GIQmXlXhs2w8AWHhenb00+GxjxRncS2ZM7YNVGNfaMxgzSGuO8o8SJzRc/oZwU2bcScvVERk03QhzA=="], @@ -165,24 +164,24 @@ "@protobufjs/utf8": ["@protobufjs/utf8@1.1.0", "", {}, "sha512-Vvn3zZrhQZkkBE8LSuW3em98c0FwgO4nxzv6OdSxPKJIEKY2bGbHn+mhGIPerzI4twdxaP8/0+06HBpwf345Lw=="], - "@types/diff-match-patch": ["@types/diff-match-patch@1.0.36", "", {}, "sha512-xFdR6tkm0MWvBfO8xXCSsinYxHcqkQUlcHeSpMC2ukzOb6lwQAfDmW+Qt0AvlGd8HpsS28qKsB+oPeJn9I39jg=="], + "@standard-schema/spec": ["@standard-schema/spec@1.1.0", "", {}, "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="], "@types/node": ["@types/node@25.4.0", "", { "dependencies": { "undici-types": "~7.18.0" } }, "sha512-9wLpoeWuBlcbBpOY3XmzSTG3oscB6xjBEEtn+pYXTfhyXhIxC5FsBer2KTopBlvKEiW9l13po9fq+SJY/5lkhw=="], "@types/shimmer": ["@types/shimmer@1.2.0", "", {}, "sha512-UE7oxhQLLd9gub6JKIAhDq06T0F6FnztwMNRvYgjeQSBeMc1ZG/tA47EwfduvkuQS8apbkM/lpLpWsaCeYsXVg=="], + "@vercel/oidc": ["@vercel/oidc@3.1.0", "", {}, "sha512-Fw28YZpRnA3cAHHDlkt7xQHiJ0fcL+NRcIqsocZQUSmbzeIKRpwttJjik5ZGanXP+vlA4SbTg+AbA3bP363l+w=="], + "acorn": ["acorn@8.16.0", "", { "bin": { "acorn": "bin/acorn" } }, "sha512-UVJyE9MttOsBQIDKw1skb9nAwQuR5wuGD3+82K6JgJlm/Y+KI92oNsMNGZCYdDsVtRHSak0pcV5Dno5+4jh9sw=="], "acorn-import-attributes": ["acorn-import-attributes@1.9.5", "", { "peerDependencies": { "acorn": "^8" } }, "sha512-n02Vykv5uA3eHGM/Z2dQrcD56kL8TyDb2p1+0P83PClMnC/nc+anbQRhIOWnSq4Ke/KvDPrY3C9hDtC/A3eHnQ=="], - "ai": ["ai@4.3.19", "", { "dependencies": { "@ai-sdk/provider": "1.1.3", "@ai-sdk/provider-utils": "2.2.8", "@ai-sdk/react": "1.2.12", "@ai-sdk/ui-utils": "1.2.11", "@opentelemetry/api": "1.9.0", "jsondiffpatch": "0.6.0" }, "peerDependencies": { "react": "^18 || ^19 || ^19.0.0-rc", "zod": "^3.23.8" }, "optionalPeers": ["react"] }, "sha512-dIE2bfNpqHN3r6IINp9znguYdhIOheKW2LDigAMrgt/upT3B8eBGPSCblENvaZGoq+hxaN9fSMzjWpbqloP+7Q=="], + "ai": ["ai@5.0.154", "", { "dependencies": { "@ai-sdk/gateway": "2.0.58", 
"@ai-sdk/provider": "2.0.1", "@ai-sdk/provider-utils": "3.0.22", "@opentelemetry/api": "1.9.0" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-xdUSSfliDWIQq/9Easp1z5oRIFwNS07Ys4BCGDtroDcyEbNDMw5Rj1m2i7Fh8khPbAEByxFfWyUzlrPi2VIlDA=="], "ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="], "ansi-styles": ["ansi-styles@4.3.0", "", { "dependencies": { "color-convert": "^2.0.1" } }, "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg=="], - "chalk": ["chalk@5.6.2", "", {}, "sha512-7NzBL0rN6fMUW+f7A6Io4h40qQlG+xGmtMxfbnH/K7TAtt8JQWVQK+6g0UXKMeVJoyV5EkkNsErQ8pVD3bLHbA=="], - "cjs-module-lexer": ["cjs-module-lexer@1.4.3", "", {}, "sha512-9z8TZaGM1pfswYeXrUpzPrkx8UnWYdhJclsiYMm6x/w5+nN+8Tf/LnAgfLGQCm59qAOxU8WwHEq2vNwF6i4j+Q=="], "cliui": ["cliui@8.0.1", "", { "dependencies": { "string-width": "^4.2.0", "strip-ansi": "^6.0.1", "wrap-ansi": "^7.0.0" } }, "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ=="], @@ -193,16 +192,14 @@ "debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="], - "dequal": ["dequal@2.0.3", "", {}, "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA=="], - - "diff-match-patch": ["diff-match-patch@1.0.5", "", {}, "sha512-IayShXAgj/QMXgB0IWmKx+rOPuGMhqm5w6jvFxmVenXKIzRqTAAsbBPT3kWQeGANj3jGgvcvv4yK6SxqYmikgw=="], - "emoji-regex": ["emoji-regex@8.0.0", "", {}, "sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A=="], "esbuild": ["esbuild@0.27.3", "", { "optionalDependencies": { "@esbuild/aix-ppc64": "0.27.3", "@esbuild/android-arm": "0.27.3", "@esbuild/android-arm64": "0.27.3", "@esbuild/android-x64": "0.27.3", "@esbuild/darwin-arm64": "0.27.3", "@esbuild/darwin-x64": 
"0.27.3", "@esbuild/freebsd-arm64": "0.27.3", "@esbuild/freebsd-x64": "0.27.3", "@esbuild/linux-arm": "0.27.3", "@esbuild/linux-arm64": "0.27.3", "@esbuild/linux-ia32": "0.27.3", "@esbuild/linux-loong64": "0.27.3", "@esbuild/linux-mips64el": "0.27.3", "@esbuild/linux-ppc64": "0.27.3", "@esbuild/linux-riscv64": "0.27.3", "@esbuild/linux-s390x": "0.27.3", "@esbuild/linux-x64": "0.27.3", "@esbuild/netbsd-arm64": "0.27.3", "@esbuild/netbsd-x64": "0.27.3", "@esbuild/openbsd-arm64": "0.27.3", "@esbuild/openbsd-x64": "0.27.3", "@esbuild/openharmony-arm64": "0.27.3", "@esbuild/sunos-x64": "0.27.3", "@esbuild/win32-arm64": "0.27.3", "@esbuild/win32-ia32": "0.27.3", "@esbuild/win32-x64": "0.27.3" }, "bin": { "esbuild": "bin/esbuild" } }, "sha512-8VwMnyGCONIs6cWue2IdpHxHnAjzxnw2Zr7MkVxB2vjmQ2ivqGFb4LEG3SMnv0Gb2F/G/2yA8zUaiL1gywDCCg=="], "escalade": ["escalade@3.2.0", "", {}, "sha512-WUj2qlxaQtO4g6Pq5c29GTcWGDyd8itL8zTlipgECz3JesAiiOKotd8JU6otB3PACgG6xkJUyVhboMS+bje/jA=="], + "eventsource-parser": ["eventsource-parser@3.0.6", "", {}, "sha512-Vo1ab+QXPzZ4tCa8SwIHJFaSzy4R6SHf7BY79rFBDf0idraZWAkYrDjDj8uWaSm3S2TK+hJ7/t1CEmZ7jXw+pg=="], + "fsevents": ["fsevents@2.3.3", "", { "os": "darwin" }, "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw=="], "function-bind": ["function-bind@1.1.2", "", {}, "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA=="], @@ -221,8 +218,6 @@ "json-schema": ["json-schema@0.4.0", "", {}, "sha512-es94M3nTIfsEPisRafak+HDLfHXnKBhV3vU5eqPcS3flIWqcxJWgXHXiey3YrpaNsanY5ei1VoYEbOzijuq9BA=="], - "jsondiffpatch": ["jsondiffpatch@0.6.0", "", { "dependencies": { "@types/diff-match-patch": "^1.0.36", "chalk": "^5.3.0", "diff-match-patch": "^1.0.5" }, "bin": { "jsondiffpatch": "bin/jsondiffpatch.js" } }, "sha512-3QItJOXp2AP1uv7waBkao5nCvhEv+QmJAd38Ybq7wNI74Q+BBmnLn4EDKz6yI9xGAIQoUF87qHt+kc1IVxB4zQ=="], - "lodash.camelcase": ["lodash.camelcase@4.3.0", "", {}, 
"sha512-TwuEnCnxbc3rAvhf/LbG7tJUDzhqXyFnv3dtzLOPgCG/hODL7WFnsbwktkD7yUV0RrreP/l1PALq/YSg6VvjlA=="], "long": ["long@5.3.2", "", {}, "sha512-mNAgZ1GmyNhD7AuqnTG3/VQ26o760+ZYBPKjPvugO8+nLbYfX6TVpJPseBvopbdY+qpZ/lKUnmEc1LeZYS3QAA=="], @@ -237,8 +232,6 @@ "protobufjs": ["protobufjs@7.5.4", "", { "dependencies": { "@protobufjs/aspromise": "^1.1.2", "@protobufjs/base64": "^1.1.2", "@protobufjs/codegen": "^2.0.4", "@protobufjs/eventemitter": "^1.1.0", "@protobufjs/fetch": "^1.1.0", "@protobufjs/float": "^1.0.2", "@protobufjs/inquire": "^1.1.0", "@protobufjs/path": "^1.1.2", "@protobufjs/pool": "^1.1.0", "@protobufjs/utf8": "^1.1.0", "@types/node": ">=13.7.0", "long": "^5.0.0" } }, "sha512-CvexbZtbov6jW2eXAvLukXjXUW1TzFaivC46BpWc/3BpcCysb5Vffu+B3XHMm8lVEuy2Mm4XGex8hBSg1yapPg=="], - "react": ["react@19.2.4", "", {}, "sha512-9nfp2hYpCwOjAN+8TZFGhtWEwgvWHXqESH8qT89AT/lWklpLON22Lc8pEtnpsZz7VmawabSU0gCjnj8aC0euHQ=="], - "require-directory": ["require-directory@2.1.1", "", {}, "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q=="], "require-in-the-middle": ["require-in-the-middle@7.5.2", "", { "dependencies": { "debug": "^4.3.5", "module-details-from-path": "^1.0.3", "resolve": "^1.22.8" } }, "sha512-gAZ+kLqBdHarXB64XpAe2VCjB7rIRv+mU8tfRWziHRJ5umKsIHN2tLLv6EtMw7WCdP19S0ERVMldNvxYCHnhSQ=="], @@ -259,16 +252,10 @@ "supports-preserve-symlinks-flag": ["supports-preserve-symlinks-flag@1.0.0", "", {}, "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w=="], - "swr": ["swr@2.4.1", "", { "dependencies": { "dequal": "^2.0.3", "use-sync-external-store": "^1.6.0" }, "peerDependencies": { "react": "^16.11.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-2CC6CiKQtEwaEeNiqWTAw9PGykW8SR5zZX8MZk6TeAvEAnVS7Visz8WzphqgtQ8v2xz/4Q5K+j+SeMaKXeeQIA=="], - - "throttleit": ["throttleit@2.1.0", "", {}, "sha512-nt6AMGKW1p/70DF/hGBdJB57B8Tspmbp5gfJ8ilhLnt7kkr2ye7hzD6NVG8GGErk2HWF34igrL2CXmNIkzKqKw=="], - "tsx": 
["tsx@4.21.0", "", { "dependencies": { "esbuild": "~0.27.0", "get-tsconfig": "^4.7.5" }, "optionalDependencies": { "fsevents": "~2.3.3" }, "bin": { "tsx": "dist/cli.mjs" } }, "sha512-5C1sg4USs1lfG0GFb2RLXsdpXqBSEhAaA/0kPL01wxzpMqLILNxIxIOKiILz+cdg/pLnOUxFYOR5yhHU666wbw=="], "undici-types": ["undici-types@7.18.2", "", {}, "sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w=="], - "use-sync-external-store": ["use-sync-external-store@1.6.0", "", { "peerDependencies": { "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" } }, "sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w=="], - "wrap-ansi": ["wrap-ansi@7.0.0", "", { "dependencies": { "ansi-styles": "^4.0.0", "string-width": "^4.1.0", "strip-ansi": "^6.0.0" } }, "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q=="], "y18n": ["y18n@5.0.8", "", {}, "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA=="], @@ -279,7 +266,9 @@ "zod": ["zod@3.25.76", "", {}, "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ=="], - "zod-to-json-schema": ["zod-to-json-schema@3.25.1", "", { "peerDependencies": { "zod": "^3.25 || ^4" } }, "sha512-pM/SU9d3YAggzi6MtR4h7ruuQlqKtad8e9S0fmxcMi+ueAK5Korys/aWcV9LIIHTVbj01NdzxcnXSN+O74ZIVA=="], + "@ai-sdk/gateway/@ai-sdk/provider": ["@ai-sdk/provider@2.0.1", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-KCUwswvsC5VsW2PWFqF8eJgSCu5Ysj7m1TxiHTVA6g7k360bk0RNQENT8KTMAYEs+8fWPD3Uu4dEmzGHc+jGng=="], + + "@ai-sdk/gateway/@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@3.0.22", "", { "dependencies": { "@ai-sdk/provider": "2.0.1", "@standard-schema/spec": "^1.0.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-fFT1KfUUKktfAFm5mClJhS1oux9tP2qgzmEZVl5UdwltQ1LO/s8hd7znVrgKzivwv1s1FIPza0s9OpJaNB/vHw=="], 
"@opentelemetry/core/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="], @@ -290,5 +279,9 @@ "@opentelemetry/sdk-node/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="], "@opentelemetry/sdk-trace-base/@opentelemetry/semantic-conventions": ["@opentelemetry/semantic-conventions@1.28.0", "", {}, "sha512-lp4qAiMTD4sNWW4DbKLBkfiMZ4jbAboJIGOQr5DvciMRI494OapieI9qiODpOt0XBr1LjIDy1xAGAnVs5supTA=="], + + "ai/@ai-sdk/provider": ["@ai-sdk/provider@2.0.1", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-KCUwswvsC5VsW2PWFqF8eJgSCu5Ysj7m1TxiHTVA6g7k360bk0RNQENT8KTMAYEs+8fWPD3Uu4dEmzGHc+jGng=="], + + "ai/@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@3.0.22", "", { "dependencies": { "@ai-sdk/provider": "2.0.1", "@standard-schema/spec": "^1.0.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-fFT1KfUUKktfAFm5mClJhS1oux9tP2qgzmEZVl5UdwltQ1LO/s8hd7znVrgKzivwv1s1FIPza0s9OpJaNB/vHw=="], } } diff --git a/examples/node/vercel-ai-tracing/package-lock.json b/examples/node/vercel-ai-tracing/package-lock.json index 1836ae7d..8542dc83 100644 --- a/examples/node/vercel-ai-tracing/package-lock.json +++ b/examples/node/vercel-ai-tracing/package-lock.json @@ -16,7 +16,7 @@ "@opentelemetry/sdk-node": "^0.57.0", "@opentelemetry/sdk-trace-node": "^1.30.0", "@opentelemetry/semantic-conventions": "^1.30.0", - "ai": "^4.3.0", + "ai": "^5.0.52", "tsx": "^4.19.0", "zod": "^3.25.76" } @@ -35,22 +35,27 @@ "zod": "^3.0.0" } }, - "node_modules/@ai-sdk/openai": { - "version": "1.3.24", + "node_modules/@ai-sdk/gateway": { + "version": "2.0.58", + "resolved": "https://registry.npmjs.org/@ai-sdk/gateway/-/gateway-2.0.58.tgz", + "integrity": 
"sha512-HLPfF5k/rUre8fBMSJnJP6UL7ONx+W/BksTGdand0/Jkq5nCVlwmoUC4URc8PtZ+UtnMP6waQlD4b9/rPOHk7A==", "license": "Apache-2.0", "dependencies": { - "@ai-sdk/provider": "1.1.3", - "@ai-sdk/provider-utils": "2.2.8" + "@ai-sdk/provider": "2.0.1", + "@ai-sdk/provider-utils": "3.0.22", + "@vercel/oidc": "3.1.0" }, "engines": { "node": ">=18" }, "peerDependencies": { - "zod": "^3.0.0" + "zod": "^3.25.76 || ^4.1.8" } }, - "node_modules/@ai-sdk/provider": { - "version": "1.1.3", + "node_modules/@ai-sdk/gateway/node_modules/@ai-sdk/provider": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/@ai-sdk/provider/-/provider-2.0.1.tgz", + "integrity": "sha512-KCUwswvsC5VsW2PWFqF8eJgSCu5Ysj7m1TxiHTVA6g7k360bk0RNQENT8KTMAYEs+8fWPD3Uu4dEmzGHc+jGng==", "license": "Apache-2.0", "dependencies": { "json-schema": "^0.4.0" @@ -59,56 +64,56 @@ "node": ">=18" } }, - "node_modules/@ai-sdk/provider-utils": { - "version": "2.2.8", - "resolved": "https://registry.npmjs.org/@ai-sdk/provider-utils/-/provider-utils-2.2.8.tgz", - "integrity": "sha512-fqhG+4sCVv8x7nFzYnFo19ryhAa3w096Kmc3hWxMQfW/TubPOmt3A6tYZhl4mUfQWWQMsuSkLrtjlWuXBVSGQA==", + "node_modules/@ai-sdk/gateway/node_modules/@ai-sdk/provider-utils": { + "version": "3.0.22", + "resolved": "https://registry.npmjs.org/@ai-sdk/provider-utils/-/provider-utils-3.0.22.tgz", + "integrity": "sha512-fFT1KfUUKktfAFm5mClJhS1oux9tP2qgzmEZVl5UdwltQ1LO/s8hd7znVrgKzivwv1s1FIPza0s9OpJaNB/vHw==", "license": "Apache-2.0", "dependencies": { - "@ai-sdk/provider": "1.1.3", - "nanoid": "^3.3.8", - "secure-json-parse": "^2.7.0" + "@ai-sdk/provider": "2.0.1", + "@standard-schema/spec": "^1.0.0", + "eventsource-parser": "^3.0.6" }, "engines": { "node": ">=18" }, "peerDependencies": { - "zod": "^3.23.8" + "zod": "^3.25.76 || ^4.1.8" } }, - "node_modules/@ai-sdk/react": { - "version": "1.2.12", - "resolved": "https://registry.npmjs.org/@ai-sdk/react/-/react-1.2.12.tgz", - "integrity": 
"sha512-jK1IZZ22evPZoQW3vlkZ7wvjYGYF+tRBKXtrcolduIkQ/m/sOAVcVeVDUDvh1T91xCnWCdUGCPZg2avZ90mv3g==", + "node_modules/@ai-sdk/openai": { + "version": "1.3.24", "license": "Apache-2.0", "dependencies": { - "@ai-sdk/provider-utils": "2.2.8", - "@ai-sdk/ui-utils": "1.2.11", - "swr": "^2.2.5", - "throttleit": "2.1.0" + "@ai-sdk/provider": "1.1.3", + "@ai-sdk/provider-utils": "2.2.8" }, "engines": { "node": ">=18" }, "peerDependencies": { - "react": "^18 || ^19 || ^19.0.0-rc", - "zod": "^3.23.8" + "zod": "^3.0.0" + } + }, + "node_modules/@ai-sdk/provider": { + "version": "1.1.3", + "license": "Apache-2.0", + "dependencies": { + "json-schema": "^0.4.0" }, - "peerDependenciesMeta": { - "zod": { - "optional": true - } + "engines": { + "node": ">=18" } }, - "node_modules/@ai-sdk/ui-utils": { - "version": "1.2.11", - "resolved": "https://registry.npmjs.org/@ai-sdk/ui-utils/-/ui-utils-1.2.11.tgz", - "integrity": "sha512-3zcwCc8ezzFlwp3ZD15wAPjf2Au4s3vAbKsXQVyhxODHcmu0iyPO2Eua6D/vicq/AUm/BAo60r97O6HU+EI0+w==", + "node_modules/@ai-sdk/provider-utils": { + "version": "2.2.8", + "resolved": "https://registry.npmjs.org/@ai-sdk/provider-utils/-/provider-utils-2.2.8.tgz", + "integrity": "sha512-fqhG+4sCVv8x7nFzYnFo19ryhAa3w096Kmc3hWxMQfW/TubPOmt3A6tYZhl4mUfQWWQMsuSkLrtjlWuXBVSGQA==", "license": "Apache-2.0", "dependencies": { "@ai-sdk/provider": "1.1.3", - "@ai-sdk/provider-utils": "2.2.8", - "zod-to-json-schema": "^3.24.1" + "nanoid": "^3.3.8", + "secure-json-parse": "^2.7.0" }, "engines": { "node": ">=18" @@ -686,8 +691,10 @@ "version": "1.1.0", "license": "BSD-3-Clause" }, - "node_modules/@types/diff-match-patch": { - "version": "1.0.36", + "node_modules/@standard-schema/spec": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/@standard-schema/spec/-/spec-1.1.0.tgz", + "integrity": "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==", "license": "MIT" }, "node_modules/@types/node": { @@ -701,6 +708,15 @@ "version": 
"1.2.0", "license": "MIT" }, + "node_modules/@vercel/oidc": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/@vercel/oidc/-/oidc-3.1.0.tgz", + "integrity": "sha512-Fw28YZpRnA3cAHHDlkt7xQHiJ0fcL+NRcIqsocZQUSmbzeIKRpwttJjik5ZGanXP+vlA4SbTg+AbA3bP363l+w==", + "license": "Apache-2.0", + "engines": { + "node": ">= 20" + } + }, "node_modules/acorn": { "version": "8.16.0", "license": "MIT", @@ -719,27 +735,50 @@ } }, "node_modules/ai": { - "version": "4.3.19", + "version": "5.0.154", + "resolved": "https://registry.npmjs.org/ai/-/ai-5.0.154.tgz", + "integrity": "sha512-xdUSSfliDWIQq/9Easp1z5oRIFwNS07Ys4BCGDtroDcyEbNDMw5Rj1m2i7Fh8khPbAEByxFfWyUzlrPi2VIlDA==", "license": "Apache-2.0", "dependencies": { - "@ai-sdk/provider": "1.1.3", - "@ai-sdk/provider-utils": "2.2.8", - "@ai-sdk/react": "1.2.12", - "@ai-sdk/ui-utils": "1.2.11", - "@opentelemetry/api": "1.9.0", - "jsondiffpatch": "0.6.0" + "@ai-sdk/gateway": "2.0.58", + "@ai-sdk/provider": "2.0.1", + "@ai-sdk/provider-utils": "3.0.22", + "@opentelemetry/api": "1.9.0" }, "engines": { "node": ">=18" }, "peerDependencies": { - "react": "^18 || ^19 || ^19.0.0-rc", - "zod": "^3.23.8" + "zod": "^3.25.76 || ^4.1.8" + } + }, + "node_modules/ai/node_modules/@ai-sdk/provider": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/@ai-sdk/provider/-/provider-2.0.1.tgz", + "integrity": "sha512-KCUwswvsC5VsW2PWFqF8eJgSCu5Ysj7m1TxiHTVA6g7k360bk0RNQENT8KTMAYEs+8fWPD3Uu4dEmzGHc+jGng==", + "license": "Apache-2.0", + "dependencies": { + "json-schema": "^0.4.0" }, - "peerDependenciesMeta": { - "react": { - "optional": true - } + "engines": { + "node": ">=18" + } + }, + "node_modules/ai/node_modules/@ai-sdk/provider-utils": { + "version": "3.0.22", + "resolved": "https://registry.npmjs.org/@ai-sdk/provider-utils/-/provider-utils-3.0.22.tgz", + "integrity": "sha512-fFT1KfUUKktfAFm5mClJhS1oux9tP2qgzmEZVl5UdwltQ1LO/s8hd7znVrgKzivwv1s1FIPza0s9OpJaNB/vHw==", + "license": "Apache-2.0", + "dependencies": { + 
"@ai-sdk/provider": "2.0.1", + "@standard-schema/spec": "^1.0.0", + "eventsource-parser": "^3.0.6" + }, + "engines": { + "node": ">=18" + }, + "peerDependencies": { + "zod": "^3.25.76 || ^4.1.8" } }, "node_modules/ansi-regex": { @@ -762,16 +801,6 @@ "url": "https://github.com/chalk/ansi-styles?sponsor=1" } }, - "node_modules/chalk": { - "version": "5.6.2", - "license": "MIT", - "engines": { - "node": "^12.17.0 || ^14.13 || >=16.0.0" - }, - "funding": { - "url": "https://github.com/chalk/chalk?sponsor=1" - } - }, "node_modules/cjs-module-lexer": { "version": "1.4.3", "license": "MIT" @@ -817,19 +846,6 @@ } } }, - "node_modules/dequal": { - "version": "2.0.3", - "resolved": "https://registry.npmjs.org/dequal/-/dequal-2.0.3.tgz", - "integrity": "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==", - "license": "MIT", - "engines": { - "node": ">=6" - } - }, - "node_modules/diff-match-patch": { - "version": "1.0.5", - "license": "Apache-2.0" - }, "node_modules/emoji-regex": { "version": "8.0.0", "license": "MIT" @@ -880,6 +896,15 @@ "node": ">=6" } }, + "node_modules/eventsource-parser": { + "version": "3.0.6", + "resolved": "https://registry.npmjs.org/eventsource-parser/-/eventsource-parser-3.0.6.tgz", + "integrity": "sha512-Vo1ab+QXPzZ4tCa8SwIHJFaSzy4R6SHf7BY79rFBDf0idraZWAkYrDjDj8uWaSm3S2TK+hJ7/t1CEmZ7jXw+pg==", + "license": "MIT", + "engines": { + "node": ">=18.0.0" + } + }, "node_modules/fsevents": { "version": "2.3.3", "license": "MIT", @@ -959,21 +984,6 @@ "version": "0.4.0", "license": "(AFL-2.1 OR BSD-3-Clause)" }, - "node_modules/jsondiffpatch": { - "version": "0.6.0", - "license": "MIT", - "dependencies": { - "@types/diff-match-patch": "^1.0.36", - "chalk": "^5.3.0", - "diff-match-patch": "^1.0.5" - }, - "bin": { - "jsondiffpatch": "bin/jsondiffpatch.js" - }, - "engines": { - "node": "^18.0.0 || >=20.0.0" - } - }, "node_modules/lodash.camelcase": { "version": "4.3.0", "license": "MIT" @@ -1034,14 +1044,6 @@ "node": 
">=12.0.0" } }, - "node_modules/react": { - "version": "19.2.4", - "license": "MIT", - "peer": true, - "engines": { - "node": ">=0.10.0" - } - }, "node_modules/require-directory": { "version": "2.1.1", "license": "MIT", @@ -1138,31 +1140,6 @@ "url": "https://github.com/sponsors/ljharb" } }, - "node_modules/swr": { - "version": "2.4.1", - "resolved": "https://registry.npmjs.org/swr/-/swr-2.4.1.tgz", - "integrity": "sha512-2CC6CiKQtEwaEeNiqWTAw9PGykW8SR5zZX8MZk6TeAvEAnVS7Visz8WzphqgtQ8v2xz/4Q5K+j+SeMaKXeeQIA==", - "license": "MIT", - "dependencies": { - "dequal": "^2.0.3", - "use-sync-external-store": "^1.6.0" - }, - "peerDependencies": { - "react": "^16.11.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" - } - }, - "node_modules/throttleit": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/throttleit/-/throttleit-2.1.0.tgz", - "integrity": "sha512-nt6AMGKW1p/70DF/hGBdJB57B8Tspmbp5gfJ8ilhLnt7kkr2ye7hzD6NVG8GGErk2HWF34igrL2CXmNIkzKqKw==", - "license": "MIT", - "engines": { - "node": ">=18" - }, - "funding": { - "url": "https://github.com/sponsors/sindresorhus" - } - }, "node_modules/tsx": { "version": "4.21.0", "license": "MIT", @@ -1184,15 +1161,6 @@ "version": "7.18.2", "license": "MIT" }, - "node_modules/use-sync-external-store": { - "version": "1.6.0", - "resolved": "https://registry.npmjs.org/use-sync-external-store/-/use-sync-external-store-1.6.0.tgz", - "integrity": "sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w==", - "license": "MIT", - "peerDependencies": { - "react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0" - } - }, "node_modules/wrap-ansi": { "version": "7.0.0", "license": "MIT", @@ -1246,15 +1214,6 @@ "funding": { "url": "https://github.com/sponsors/colinhacks" } - }, - "node_modules/zod-to-json-schema": { - "version": "3.25.1", - "resolved": "https://registry.npmjs.org/zod-to-json-schema/-/zod-to-json-schema-3.25.1.tgz", - "integrity": 
"sha512-pM/SU9d3YAggzi6MtR4h7ruuQlqKtad8e9S0fmxcMi+ueAK5Korys/aWcV9LIIHTVbj01NdzxcnXSN+O74ZIVA==", - "license": "ISC", - "peerDependencies": { - "zod": "^3.25 || ^4" - } } } } diff --git a/examples/node/vercel-ai-tracing/package.json b/examples/node/vercel-ai-tracing/package.json index c79545d1..a073ee91 100644 --- a/examples/node/vercel-ai-tracing/package.json +++ b/examples/node/vercel-ai-tracing/package.json @@ -16,7 +16,7 @@ "@opentelemetry/sdk-node": "^0.57.0", "@opentelemetry/sdk-trace-node": "^1.30.0", "@opentelemetry/semantic-conventions": "^1.30.0", - "ai": "^4.3.0", + "ai": "^5.0.52", "tsx": "^4.19.0", "zod": "^3.25.76" } diff --git a/web/src/api/client/@tanstack/react-query.gen.ts b/web/src/api/client/@tanstack/react-query.gen.ts index 93c318ec..f854f6b0 100644 --- a/web/src/api/client/@tanstack/react-query.gen.ts +++ b/web/src/api/client/@tanstack/react-query.gen.ts @@ -1,8 +1,8 @@ // This file is auto-generated by @hey-api/openapi-ts -import { type Options, getPlatformInfo, chunkUploadOptions, createRelease, listReleaseFiles, uploadReleaseFile, recordEventMetrics, addSessionReplayEvents, initSessionReplay, recordSpeedMetrics, updateSpeedMetrics, getPricing, listProviderKeys, createProviderKey, testProviderKeyInline, deleteProviderKey, updateProviderKey, testProviderKeyById, getUsageByProvider, getConversations, getConversationDetail, getUsageRecent, getUsageSummary, getUsageTimeseries, getUsageTopModels, chatCompletions, embeddings, listModels, getActiveVisitors, getEventDetail, getEventVisitors, getEventsCount, getGeneralStats, getLiveVisitorsList, getPageFlow, getPageHourlySessions, getPagePathDetail, getPagePathVisitors, getPagePaths, getPagePathsSparklines, getRecentActivity, getSessionDetails, getSessionEvents, getSessionLogs, getVisitors, getVisitorByGuid, getVisitorById, getVisitorDetails, enrichVisitor, getVisitorInfo, getVisitorJourney, getVisitorSessions, getVisitorStats, listApiKeys, createApiKey, getApiKeyPermissions, deleteApiKey, 
getApiKey, updateApiKey, activateApiKey, deactivateApiKey, emailStatus, login, requestMagicLink, verifyMagicLink, requestPasswordReset, resetPassword, verifyEmail, verifyMfaChallenge, runExternalServiceBackup, listS3Sources, createS3Source, deleteS3Source, getS3Source, updateS3Source, listSourceBackups, runBackupForSource, listBackupSchedules, createBackupSchedule, deleteBackupSchedule, getBackupSchedule, listBackupsForSchedule, disableBackupSchedule, enableBackupSchedule, getBackup, blobDelete, blobList, blobPut, blobCopy, blobDisable, blobEnable, blobStatus, blobUpdate, blobDownload, getDashboardProjectsAnalytics, getActivityGraph, getScanByDeployment, listProviders, createProvider, deleteProvider, getProvider, updateProvider, listManagedDomains, addManagedDomain, testProviderConnection, listProviderZones, removeManagedDomain, verifyManagedDomain, lookupDnsARecords, listDomains, createDomain, getDomainByHost, cancelDomainOrder, getDomainOrder, createOrRecreateOrder, finalizeOrder, setupDnsChallenge, deleteDomain, getDomainById, getChallengeToken, getHttpChallengeDebug, provisionDomain, renewDomain, checkDomainStatus, listDomains2, createDomain2, getDomainByName, deleteDomain2, getDomain, getDomainDnsRecords, setupDns, verifyDomain, listProviders2, createProvider2, deleteProvider2, getProvider2, testProvider, listEmails, sendEmail, getEmailStats, validateEmail, getEmail, listServices, createService, listAvailableContainers, getServiceBySlug, importExternalService, listProjectServices, getProjectServiceEnvironmentVariables, getProvidersMetadata, getProviderMetadata, getServiceTypes, getServiceTypeParameters, deleteService, getService, updateService, getServicePreviewEnvironmentVariablesMasked, getServicePreviewEnvironmentVariableNames, listServiceProjects, linkServiceToProject, unlinkServiceFromProject, getServiceEnvironmentVariables, getServiceEnvironmentVariable, startService, stopService, upgradeService, listRootContainers, listContainersAtPath, listEntities, 
getEntityInfo, queryData, downloadObject, getContainerInfo, checkExplorerSupport, getFile, getIpGeolocation, listConnections, deleteConnection, activateConnection, deactivateConnection, listRepositoriesByConnection, syncRepositories, updateConnectionToken, validateConnection, listGitProviders, createGitProvider, createGithubPatProvider, createGitlabOauthProvider, createGitlabPatProvider, deleteProvider3, getGitProvider, activateProvider, handleGitProviderOauthCallback, getProviderConnections, deactivateProvider, checkProviderDeletionSafety, startGitProviderOauth, deleteProviderSafely, getPublicRepository, getPublicBranches, detectPublicPresets, discoverWorkloads, executeImport, createPlan, listSources, getImportStatus, getIncident, updateIncidentStatus, getIncidentUpdates, adminListNodes, registerNode, adminRemoveNode, adminGetNode, adminListNodeContainers, adminUndrainNode, adminDrainStatus, adminDrainNode, nodeHeartbeat, listIpAccessControl, createIpAccessControl, checkIpBlocked, deleteIpAccessControl, getIpAccessControl, updateIpAccessControl, kvDel, kvDisable, kvEnable, kvExpire, kvGet, kvIncr, kvKeys, kvSet, kvStatus, kvTtl, kvUpdate, listRoutes, createRoute, deleteRoute, getRoute, updateRoute, logout, getLogContext, searchLogs, tailLogs, deleteMonitor, getMonitor, getBucketedStatus, getCurrentMonitorStatus, getUptimeHistory, deletePreferences, getPreferences, updatePreferences, listNotificationProviders, createNotificationProvider, createEmailProvider, updateEmailProvider, createSlackProvider, updateSlackProvider, createWebhookProvider, updateWebhookProvider, deleteProvider4, getNotificationProvider, updateProvider2, testProvider2, listOrders, queryGenaiTraces, getGenaiTrace, getHealth, listInsights, queryLogs, listMetricNames, queryMetrics, getPipelineStats, getQuota, queryTraceSummaries, queryTraces, getTrace, ingestLogs, ingestMetrics, ingestTraces, ingestLogsByPath, ingestMetricsByPath, ingestTracesByPath, hasPerformanceMetrics, getPerformanceMetrics, 
getMetricsOverTime, getGroupedPageMetrics, getAccessInfo, getPrivateIp, getPublicIp, listPresets, generatePresetDockerfile, getProjects, createProject, getProjectBySlug, createProjectFromTemplate, getProjectStatistics, deleteProject, getProject, updateProject, getProjectDeployments, getLastDeployment, triggerProjectPipeline, getActiveVisitors2, getAggregatedBuckets, updateAutomaticDeploy, listCustomDomainsForProject, createCustomDomain, deleteCustomDomain, getCustomDomain, updateCustomDomain, linkCustomDomainToCertificate, updateProjectDeploymentConfig, getDeployment, cancelDeployment, getDeploymentJobs, getDeploymentJobLogs, tailDeploymentJobLogs, getDeploymentOperations, executeDeploymentOperation, getDeploymentOperationStatus, pauseDeployment, promoteDeployment, resumeDeployment, rollbackToDeployment, teardownDeployment, listDsns, createDsn, getOrCreateDsn, regenerateDsn, revokeDsn, getEnvironmentVariables, createEnvironmentVariable, getEnvironmentVariableValue, deleteEnvironmentVariable, updateEnvironmentVariable, getEnvironments, createEnvironment, deleteEnvironment, getEnvironment, getEnvironmentCrons, getCronById, getCronExecutions, getEnvironmentDomains, addEnvironmentDomain, deleteEnvironmentDomain, updateEnvironmentSettings, sleepEnvironment, teardownEnvironment, wakeEnvironment, getContainerLogs, listContainers, getContainerDetail, getContainerLogsById, getContainerMetrics, streamContainerMetrics, restartContainer, startContainer, stopContainer, deployFromImage, deployFromImageUpload, deployFromStatic, getErrorDashboardStats, listErrorGroups, getErrorGroup, updateErrorGroup, listErrorEvents, getErrorEvent, getErrorStats, getErrorTimeSeries, getEventsCount2, getEventTypeBreakdown, recordConsoleEvent, getPropertyBreakdown, getPropertyTimeline, getEventsTimeline, getUniqueEvents, listExternalImages, registerExternalImage, deleteExternalImage, getExternalImage, listFunnels, createFunnel, previewFunnelMetrics, deleteFunnel, updateFunnel, getFunnelMetrics, 
updateGitSettings, hasErrorGroups, hasAnalyticsEvents, getHourlyVisits, listExternalImages2, pushExternalImage, getExternalImage2, listIncidents, createIncident, getBucketedIncidents, purgeProjectLogs, listMonitors, createMonitor, deleteReleaseSourceMaps, listSourceMaps, uploadSourceMap, updateProjectSettings, listReleases, deleteSourceMap, listStaticBundles, deleteStaticBundle, getStaticBundle, getStatusOverview, getUniqueCounts, uploadStaticBundle, listProjectScans, triggerScan, getLatestScansPerEnvironment, getLatestScan, listWebhooks, createWebhook, deleteWebhook, getWebhook, updateWebhook, listDeliveries, getDelivery, retryDelivery, getProxyLogs, getProxyLogByRequestId, getTimeBucketStats, getTodayStats, getProxyLogById, listSyncedRepositories, getRepositoryByName, getAllRepositoriesByName, getRepositoryPresetByName, getRepositoryBranches, getRepositoryTags, getRepositoryPresetLive, getBranchesByRepositoryId, listCommitsByRepositoryId, checkCommitExists, getTagsByRepositoryId, getProjectSessionReplays, getSessionEvents2, getSettings, updateSettings, revokeJoinToken, generateJoinToken, getJoinTokenStatus, listTemplates, listTemplateTags, getTemplate, getCurrentUser, listUsers, createUser, updateSelf, disableMfa, setupMfa, verifyAndEnableMfa, deleteUser, updateUser, restoreUser, assignRole, removeRole, getVisitorSessions2, deleteSessionReplay, getSessionReplay, updateSessionDuration, getSessionReplayEvents, addEvents, deleteScan, getScan, getScanVulnerabilities, listEventTypes, triggerWeeklyDigest, listExternalPlugins, reloadPlugins, ingestSentryEnvelope, ingestSentryEvent, listAuditLogs, getAuditLog } from '../sdk.gen'; +import { type Options, getPlatformInfo, chunkUploadOptions, createRelease, listReleaseFiles, uploadReleaseFile, recordEventMetrics, addSessionReplayEvents, initSessionReplay, recordSpeedMetrics, updateSpeedMetrics, getPricing, listProviderKeys, createProviderKey, testProviderKeyInline, deleteProviderKey, updateProviderKey, testProviderKeyById, 
getUsageByProvider, getConversations, getConversationDetail, getUsageRecent, getUsageSummary, getUsageTimeseries, getUsageTopModels, chatCompletions, embeddings, listModels, getActiveVisitors, getEventDetail, getEventVisitors, getEventsCount, getGeneralStats, getLiveVisitorsList, getPageFlow, getPageHourlySessions, getPagePathDetail, getPagePathVisitors, getPagePaths, getPagePathsSparklines, getRecentActivity, getSessionDetails, getSessionEvents, getSessionLogs, getVisitors, getVisitorByGuid, getVisitorById, getVisitorDetails, enrichVisitor, getVisitorInfo, getVisitorJourney, getVisitorSessions, getVisitorStats, listApiKeys, createApiKey, getApiKeyPermissions, deleteApiKey, getApiKey, updateApiKey, activateApiKey, deactivateApiKey, emailStatus, login, requestMagicLink, verifyMagicLink, requestPasswordReset, resetPassword, verifyEmail, verifyMfaChallenge, runExternalServiceBackup, listS3Sources, createS3Source, deleteS3Source, getS3Source, updateS3Source, listSourceBackups, runBackupForSource, listBackupSchedules, createBackupSchedule, deleteBackupSchedule, getBackupSchedule, listBackupsForSchedule, disableBackupSchedule, enableBackupSchedule, getBackup, blobDelete, blobList, blobPut, blobCopy, blobDisable, blobEnable, blobStatus, blobUpdate, blobDownload, getDashboardProjectsAnalytics, getActivityGraph, getScanByDeployment, listProviders, createProvider, deleteProvider, getProvider, updateProvider, listManagedDomains, addManagedDomain, testProviderConnection, listProviderZones, removeManagedDomain, verifyManagedDomain, lookupDnsARecords, listDomains, createDomain, getDomainByHost, cancelDomainOrder, getDomainOrder, createOrRecreateOrder, finalizeOrder, setupDnsChallenge, deleteDomain, getDomainById, getChallengeToken, getHttpChallengeDebug, provisionDomain, renewDomain, checkDomainStatus, listDomains2, createDomain2, getDomainByName, deleteDomain2, getDomain, getDomainDnsRecords, setupDns, verifyDomain, listProviders2, createProvider2, deleteProvider2, 
getProvider2, testProvider, listEmails, sendEmail, getEmailStats, validateEmail, getEmail, listServices, createService, listAvailableContainers, getServiceBySlug, importExternalService, listProjectServices, getProjectServiceEnvironmentVariables, getProvidersMetadata, getProviderMetadata, getServiceTypes, getServiceTypeParameters, deleteService, getService, updateService, getServicePreviewEnvironmentVariablesMasked, getServicePreviewEnvironmentVariableNames, listServiceProjects, linkServiceToProject, unlinkServiceFromProject, getServiceEnvironmentVariables, getServiceEnvironmentVariable, retryCluster, startService, stopService, upgradeService, listRootContainers, listContainersAtPath, listEntities, getEntityInfo, queryData, downloadObject, getContainerInfo, checkExplorerSupport, getFile, getIpGeolocation, listConnections, deleteConnection, activateConnection, deactivateConnection, listRepositoriesByConnection, syncRepositories, updateConnectionToken, validateConnection, listGitProviders, createGitProvider, createGithubPatProvider, createGitlabOauthProvider, createGitlabPatProvider, deleteProvider3, getGitProvider, activateProvider, handleGitProviderOauthCallback, getProviderConnections, deactivateProvider, checkProviderDeletionSafety, startGitProviderOauth, deleteProviderSafely, getPublicRepository, getPublicBranches, detectPublicPresets, discoverWorkloads, executeImport, createPlan, listSources, getImportStatus, getIncident, updateIncidentStatus, getIncidentUpdates, adminListNodes, registerNode, adminRemoveNode, adminGetNode, adminListNodeContainers, adminUndrainNode, adminDrainStatus, adminDrainNode, nodeHeartbeat, getS3Credentials, listIpAccessControl, createIpAccessControl, checkIpBlocked, deleteIpAccessControl, getIpAccessControl, updateIpAccessControl, kvDel, kvDisable, kvEnable, kvExpire, kvGet, kvIncr, kvKeys, kvSet, kvStatus, kvTtl, kvUpdate, listRoutes, createRoute, deleteRoute, getRoute, updateRoute, logout, getLogContext, searchLogs, tailLogs, 
deleteMonitor, getMonitor, getBucketedStatus, getCurrentMonitorStatus, getUptimeHistory, deletePreferences, getPreferences, updatePreferences, listNotificationProviders, createNotificationProvider, createEmailProvider, updateEmailProvider, createSlackProvider, updateSlackProvider, createWebhookProvider, updateWebhookProvider, deleteProvider4, getNotificationProvider, updateProvider2, testProvider2, listOrders, queryGenaiTraces, getGenaiTrace, getHealth, listInsights, queryLogs, listMetricNames, queryMetrics, getPipelineStats, getQuota, queryTraceSummaries, queryTraces, getTrace, ingestLogs, ingestMetrics, ingestTraces, ingestLogsByPath, ingestMetricsByPath, ingestTracesByPath, hasPerformanceMetrics, getPerformanceMetrics, getMetricsOverTime, getGroupedPageMetrics, getAccessInfo, getPrivateIp, getPublicIp, listPresets, generatePresetDockerfile, getProjects, createProject, getProjectBySlug, createProjectFromTemplate, getProjectStatistics, deleteProject, getProject, updateProject, getProjectDeployments, getLastDeployment, triggerProjectPipeline, getActiveVisitors2, getAggregatedBuckets, updateAutomaticDeploy, listCustomDomainsForProject, createCustomDomain, deleteCustomDomain, getCustomDomain, updateCustomDomain, linkCustomDomainToCertificate, updateProjectDeploymentConfig, getDeployment, cancelDeployment, getDeploymentJobs, getDeploymentJobLogs, tailDeploymentJobLogs, getDeploymentOperations, executeDeploymentOperation, getDeploymentOperationStatus, pauseDeployment, promoteDeployment, resumeDeployment, rollbackToDeployment, teardownDeployment, listDsns, createDsn, getOrCreateDsn, regenerateDsn, revokeDsn, getEnvironmentVariables, createEnvironmentVariable, getEnvironmentVariableValue, deleteEnvironmentVariable, updateEnvironmentVariable, getEnvironments, createEnvironment, deleteEnvironment, getEnvironment, getEnvironmentCrons, getCronById, getCronExecutions, getEnvironmentDomains, addEnvironmentDomain, deleteEnvironmentDomain, updateEnvironmentSettings, 
sleepEnvironment, teardownEnvironment, wakeEnvironment, getContainerLogs, listContainers, getContainerDetail, getContainerLogsById, getContainerMetrics, streamContainerMetrics, restartContainer, startContainer, stopContainer, deployFromImage, deployFromImageUpload, deployFromStatic, listAlertRules, createAlertRule, deleteAlertRule, getAlertRule, updateAlertRule, getErrorDashboardStats, listErrorGroups, getErrorGroup, updateErrorGroup, listErrorEvents, getErrorEvent, getErrorStats, getErrorTimeSeries, getEventsCount2, getEventTypeBreakdown, recordConsoleEvent, getPropertyBreakdown, getPropertyTimeline, getEventsTimeline, getUniqueEvents, listExternalImages, registerExternalImage, deleteExternalImage, getExternalImage, listFunnels, createFunnel, previewFunnelMetrics, deleteFunnel, updateFunnel, getFunnelMetrics, updateGitSettings, hasErrorGroups, hasAnalyticsEvents, getHourlyVisits, listExternalImages2, pushExternalImage, getExternalImage2, listIncidents, createIncident, getBucketedIncidents, purgeProjectLogs, listMonitors, createMonitor, deleteReleaseSourceMaps, listSourceMaps, uploadSourceMap, updateProjectSettings, listReleases, deleteSourceMap, listStaticBundles, deleteStaticBundle, getStaticBundle, getStatusOverview, getUniqueCounts, uploadStaticBundle, listProjectScans, triggerScan, getLatestScansPerEnvironment, getLatestScan, listWebhooks, createWebhook, deleteWebhook, getWebhook, updateWebhook, listDeliveries, getDelivery, retryDelivery, getProxyLogs, getProxyLogByRequestId, getTimeBucketStats, getTodayStats, getProxyLogById, listSyncedRepositories, getRepositoryByName, getAllRepositoriesByName, getRepositoryPresetByName, getRepositoryBranches, getRepositoryTags, getRepositoryPresetLive, getBranchesByRepositoryId, listCommitsByRepositoryId, checkCommitExists, getTagsByRepositoryId, getProjectSessionReplays, getSessionEvents2, getSettings, updateSettings, revokeJoinToken, generateJoinToken, getJoinTokenStatus, getPublicSettings, listTemplates, 
listTemplateTags, getTemplate, getCurrentUser, listUsers, createUser, updateSelf, disableMfa, setupMfa, verifyAndEnableMfa, deleteUser, updateUser, restoreUser, assignRole, removeRole, getVisitorSessions2, deleteSessionReplay, getSessionReplay, updateSessionDuration, getSessionReplayEvents, addEvents, deleteScan, getScan, getScanVulnerabilities, listEventTypes, triggerWeeklyDigest, listExternalPlugins, reloadPlugins, ingestSentryEnvelope, ingestSentryEvent, listAuditLogs, getAuditLog } from '../sdk.gen'; import { queryOptions, type UseMutationOptions, type DefaultError, infiniteQueryOptions, type InfiniteData } from '@tanstack/react-query'; -import type { GetPlatformInfoData, ChunkUploadOptionsData, CreateReleaseData, CreateReleaseResponse, ListReleaseFilesData, UploadReleaseFileData, UploadReleaseFileResponse, RecordEventMetricsData, AddSessionReplayEventsData, AddSessionReplayEventsError, AddSessionReplayEventsResponse, InitSessionReplayData, InitSessionReplayError, InitSessionReplayResponse, RecordSpeedMetricsData, RecordSpeedMetricsError, UpdateSpeedMetricsData, UpdateSpeedMetricsError, GetPricingData, ListProviderKeysData, CreateProviderKeyData, CreateProviderKeyError, CreateProviderKeyResponse, TestProviderKeyInlineData, TestProviderKeyInlineError, TestProviderKeyInlineResponse, DeleteProviderKeyData, DeleteProviderKeyError, DeleteProviderKeyResponse, UpdateProviderKeyData, UpdateProviderKeyError, UpdateProviderKeyResponse, TestProviderKeyByIdData, TestProviderKeyByIdError, TestProviderKeyByIdResponse, GetUsageByProviderData, GetConversationsData, GetConversationDetailData, GetUsageRecentData, GetUsageSummaryData, GetUsageTimeseriesData, GetUsageTopModelsData, ChatCompletionsData, ChatCompletionsError, ChatCompletionsResponse, EmbeddingsData, EmbeddingsError, EmbeddingsResponse, ListModelsData, GetActiveVisitorsData, GetEventDetailData, GetEventVisitorsData, GetEventVisitorsResponse, GetEventsCountData, GetGeneralStatsData, GetLiveVisitorsListData, 
GetPageFlowData, GetPageHourlySessionsData, GetPagePathDetailData, GetPagePathVisitorsData, GetPagePathVisitorsResponse, GetPagePathsData, GetPagePathsSparklinesData, GetRecentActivityData, GetSessionDetailsData, GetSessionEventsData, GetSessionEventsResponse, GetSessionLogsData, GetSessionLogsResponse, GetVisitorsData, GetVisitorsResponse, GetVisitorByGuidData, GetVisitorByIdData, GetVisitorDetailsData, EnrichVisitorData, EnrichVisitorResponse2 as EnrichVisitorResponse, GetVisitorInfoData, GetVisitorJourneyData, GetVisitorSessionsData, GetVisitorStatsData, ListApiKeysData, ListApiKeysResponse, CreateApiKeyData, CreateApiKeyResponse2 as CreateApiKeyResponse, GetApiKeyPermissionsData, DeleteApiKeyData, DeleteApiKeyResponse, GetApiKeyData, UpdateApiKeyData, UpdateApiKeyResponse, ActivateApiKeyData, ActivateApiKeyResponse, DeactivateApiKeyData, DeactivateApiKeyResponse, EmailStatusData, LoginData, LoginResponse, RequestMagicLinkData, RequestMagicLinkResponse, VerifyMagicLinkData, RequestPasswordResetData, RequestPasswordResetResponse, ResetPasswordData, ResetPasswordResponse, VerifyEmailData, VerifyMfaChallengeData, VerifyMfaChallengeResponse, RunExternalServiceBackupData, RunExternalServiceBackupError, RunExternalServiceBackupResponse, ListS3SourcesData, CreateS3SourceData, CreateS3SourceError, CreateS3SourceResponse, DeleteS3SourceData, DeleteS3SourceError, DeleteS3SourceResponse, GetS3SourceData, UpdateS3SourceData, UpdateS3SourceError, UpdateS3SourceResponse, ListSourceBackupsData, RunBackupForSourceData, RunBackupForSourceError, RunBackupForSourceResponse, ListBackupSchedulesData, CreateBackupScheduleData, CreateBackupScheduleError, CreateBackupScheduleResponse, DeleteBackupScheduleData, DeleteBackupScheduleError, DeleteBackupScheduleResponse, GetBackupScheduleData, ListBackupsForScheduleData, DisableBackupScheduleData, DisableBackupScheduleResponse, EnableBackupScheduleData, EnableBackupScheduleResponse, GetBackupData, BlobDeleteData, BlobDeleteError, 
BlobDeleteResponse, BlobListData, BlobListError, BlobListResponse, BlobPutData, BlobPutError, BlobPutResponse, BlobCopyData, BlobCopyError, BlobCopyResponse, BlobDisableData, BlobDisableResponse, BlobEnableData, BlobEnableResponse, BlobStatusData, BlobUpdateData, BlobUpdateResponse, BlobDownloadData, GetDashboardProjectsAnalyticsData, GetActivityGraphData, GetScanByDeploymentData, ListProvidersData, CreateProviderData, CreateProviderResponse, DeleteProviderData, DeleteProviderResponse, GetProviderData, UpdateProviderData, UpdateProviderResponse, ListManagedDomainsData, AddManagedDomainData, AddManagedDomainResponse, TestProviderConnectionData, TestProviderConnectionResponse, ListProviderZonesData, RemoveManagedDomainData, RemoveManagedDomainResponse, VerifyManagedDomainData, VerifyManagedDomainResponse, LookupDnsARecordsData, ListDomainsData, ListDomainsResponse2 as ListDomainsResponse, CreateDomainData, CreateDomainResponse, GetDomainByHostData, CancelDomainOrderData, CancelDomainOrderResponse, GetDomainOrderData, CreateOrRecreateOrderData, CreateOrRecreateOrderResponse, FinalizeOrderData, FinalizeOrderResponse, SetupDnsChallengeData, SetupDnsChallengeResponse2 as SetupDnsChallengeResponse, DeleteDomainData, DeleteDomainResponse, GetDomainByIdData, GetChallengeTokenData, GetHttpChallengeDebugData, ProvisionDomainData, ProvisionDomainResponse, RenewDomainData, RenewDomainResponse, CheckDomainStatusData, ListDomains2Data, CreateDomain2Data, CreateDomain2Response, GetDomainByNameData, DeleteDomain2Data, DeleteDomain2Response, GetDomainData, GetDomainDnsRecordsData, SetupDnsData, SetupDnsResponse2 as SetupDnsResponse, VerifyDomainData, VerifyDomainResponse, ListProviders2Data, CreateProvider2Data, CreateProvider2Response, DeleteProvider2Data, DeleteProvider2Response, GetProvider2Data, TestProviderData, TestProviderResponse2 as TestProviderResponse, ListEmailsData, ListEmailsResponse, SendEmailData, SendEmailResponse, GetEmailStatsData, ValidateEmailData, 
ValidateEmailResponse2 as ValidateEmailResponse, GetEmailData, ListServicesData, ListServicesResponse, CreateServiceData, CreateServiceResponse, ListAvailableContainersData, GetServiceBySlugData, ImportExternalServiceData, ImportExternalServiceResponse, ListProjectServicesData, ListProjectServicesResponse, GetProjectServiceEnvironmentVariablesData, GetProvidersMetadataData, GetProviderMetadataData, GetServiceTypesData, GetServiceTypeParametersData, DeleteServiceData, DeleteServiceResponse, GetServiceData, UpdateServiceData, UpdateServiceResponse, GetServicePreviewEnvironmentVariablesMaskedData, GetServicePreviewEnvironmentVariableNamesData, ListServiceProjectsData, ListServiceProjectsResponse, LinkServiceToProjectData, LinkServiceToProjectResponse, UnlinkServiceFromProjectData, UnlinkServiceFromProjectResponse, GetServiceEnvironmentVariablesData, GetServiceEnvironmentVariableData, StartServiceData, StartServiceResponse, StopServiceData, StopServiceResponse, UpgradeServiceData, UpgradeServiceResponse, ListRootContainersData, ListContainersAtPathData, ListEntitiesData, GetEntityInfoData, QueryDataData, QueryDataResponse2 as QueryDataResponse, DownloadObjectData, GetContainerInfoData, CheckExplorerSupportData, GetFileData, GetIpGeolocationData, ListConnectionsData, ListConnectionsResponse, DeleteConnectionData, DeleteConnectionResponse, ActivateConnectionData, DeactivateConnectionData, ListRepositoriesByConnectionData, ListRepositoriesByConnectionResponse, SyncRepositoriesData, SyncRepositoriesResponse, UpdateConnectionTokenData, UpdateConnectionTokenResponse, ValidateConnectionData, ListGitProvidersData, CreateGitProviderData, CreateGitProviderResponse, CreateGithubPatProviderData, CreateGithubPatProviderResponse, CreateGitlabOauthProviderData, CreateGitlabOauthProviderResponse, CreateGitlabPatProviderData, CreateGitlabPatProviderResponse, DeleteProvider3Data, DeleteProvider3Response, GetGitProviderData, ActivateProviderData, HandleGitProviderOauthCallbackData, 
GetProviderConnectionsData, DeactivateProviderData, CheckProviderDeletionSafetyData, StartGitProviderOauthData, DeleteProviderSafelyData, DeleteProviderSafelyResponse, GetPublicRepositoryData, GetPublicBranchesData, DetectPublicPresetsData, DiscoverWorkloadsData, DiscoverWorkloadsResponse, ExecuteImportData, ExecuteImportResponse2 as ExecuteImportResponse, CreatePlanData, CreatePlanResponse2 as CreatePlanResponse, ListSourcesData, GetImportStatusData, GetIncidentData, UpdateIncidentStatusData, UpdateIncidentStatusResponse, GetIncidentUpdatesData, AdminListNodesData, RegisterNodeData, RegisterNodeResponse2 as RegisterNodeResponse, AdminRemoveNodeData, AdminRemoveNodeResponse, AdminGetNodeData, AdminListNodeContainersData, AdminUndrainNodeData, AdminUndrainNodeResponse, AdminDrainStatusData, AdminDrainNodeData, AdminDrainNodeResponse, NodeHeartbeatData, NodeHeartbeatResponse, ListIpAccessControlData, CreateIpAccessControlData, CreateIpAccessControlError, CreateIpAccessControlResponse, CheckIpBlockedData, DeleteIpAccessControlData, DeleteIpAccessControlError, DeleteIpAccessControlResponse, GetIpAccessControlData, UpdateIpAccessControlData, UpdateIpAccessControlError, UpdateIpAccessControlResponse, KvDelData, KvDelResponse, KvDisableData, KvDisableResponse, KvEnableData, KvEnableResponse, KvExpireData, KvExpireResponse, KvGetData, KvGetResponse, KvIncrData, KvIncrResponse, KvKeysData, KvKeysResponse, KvSetData, KvSetResponse, KvStatusData, KvTtlData, KvTtlResponse, KvUpdateData, KvUpdateResponse, ListRoutesData, CreateRouteData, CreateRouteResponse, DeleteRouteData, DeleteRouteResponse, GetRouteData, UpdateRouteData, UpdateRouteResponse, LogoutData, GetLogContextData, SearchLogsData, SearchLogsError, SearchLogsResponse2 as SearchLogsResponse, TailLogsData, DeleteMonitorData, DeleteMonitorResponse, GetMonitorData, GetBucketedStatusData, GetCurrentMonitorStatusData, GetUptimeHistoryData, DeletePreferencesData, DeletePreferencesResponse, GetPreferencesData, 
UpdatePreferencesData, UpdatePreferencesResponse, ListNotificationProvidersData, ListNotificationProvidersResponse, CreateNotificationProviderData, CreateNotificationProviderResponse, CreateEmailProviderData, CreateEmailProviderResponse, UpdateEmailProviderData, UpdateEmailProviderResponse, CreateSlackProviderData, CreateSlackProviderResponse, UpdateSlackProviderData, UpdateSlackProviderResponse, CreateWebhookProviderData, CreateWebhookProviderResponse, UpdateWebhookProviderData, UpdateWebhookProviderResponse, DeleteProvider4Data, DeleteProvider4Response, GetNotificationProviderData, UpdateProvider2Data, UpdateProvider2Response, TestProvider2Data, TestProvider2Response, ListOrdersData, ListOrdersResponse2 as ListOrdersResponse, QueryGenaiTracesData, QueryGenaiTracesError, QueryGenaiTracesResponse, GetGenaiTraceData, GetHealthData, ListInsightsData, ListInsightsError, ListInsightsResponse, QueryLogsData, QueryLogsError, QueryLogsResponse, ListMetricNamesData, QueryMetricsData, GetPipelineStatsData, GetQuotaData, QueryTraceSummariesData, QueryTraceSummariesError, QueryTraceSummariesResponse, QueryTracesData, QueryTracesError, QueryTracesResponse, GetTraceData, IngestLogsData, IngestLogsError, IngestMetricsData, IngestMetricsError, IngestTracesData, IngestTracesError, IngestLogsByPathData, IngestLogsByPathError, IngestMetricsByPathData, IngestMetricsByPathError, IngestTracesByPathData, IngestTracesByPathError, HasPerformanceMetricsData, GetPerformanceMetricsData, GetMetricsOverTimeData, GetGroupedPageMetricsData, GetAccessInfoData, GetPrivateIpData, GetPublicIpData, ListPresetsData, GeneratePresetDockerfileData, GeneratePresetDockerfileResponse, GetProjectsData, GetProjectsResponse, CreateProjectData, CreateProjectResponse, GetProjectBySlugData, CreateProjectFromTemplateData, CreateProjectFromTemplateResponse2 as CreateProjectFromTemplateResponse, GetProjectStatisticsData, DeleteProjectData, DeleteProjectResponse, GetProjectData, UpdateProjectData, 
UpdateProjectResponse, GetProjectDeploymentsData, GetProjectDeploymentsResponse, GetLastDeploymentData, TriggerProjectPipelineData, TriggerProjectPipelineResponse, GetActiveVisitors2Data, GetAggregatedBucketsData, UpdateAutomaticDeployData, UpdateAutomaticDeployResponse, ListCustomDomainsForProjectData, CreateCustomDomainData, CreateCustomDomainResponse, DeleteCustomDomainData, DeleteCustomDomainResponse, GetCustomDomainData, UpdateCustomDomainData, UpdateCustomDomainResponse, LinkCustomDomainToCertificateData, LinkCustomDomainToCertificateResponse, UpdateProjectDeploymentConfigData, UpdateProjectDeploymentConfigResponse, GetDeploymentData, CancelDeploymentData, CancelDeploymentResponse, GetDeploymentJobsData, GetDeploymentJobLogsData, TailDeploymentJobLogsData, GetDeploymentOperationsData, ExecuteDeploymentOperationData, ExecuteDeploymentOperationResponse, GetDeploymentOperationStatusData, PauseDeploymentData, PauseDeploymentResponse, PromoteDeploymentData, PromoteDeploymentResponse, ResumeDeploymentData, ResumeDeploymentResponse, RollbackToDeploymentData, RollbackToDeploymentResponse, TeardownDeploymentData, TeardownDeploymentResponse, ListDsnsData, CreateDsnData, CreateDsnResponse, GetOrCreateDsnData, GetOrCreateDsnResponse, RegenerateDsnData, RegenerateDsnResponse, RevokeDsnData, RevokeDsnResponse, GetEnvironmentVariablesData, CreateEnvironmentVariableData, CreateEnvironmentVariableResponse, GetEnvironmentVariableValueData, DeleteEnvironmentVariableData, DeleteEnvironmentVariableResponse, UpdateEnvironmentVariableData, UpdateEnvironmentVariableResponse, GetEnvironmentsData, CreateEnvironmentData, CreateEnvironmentResponse, DeleteEnvironmentData, DeleteEnvironmentResponse, GetEnvironmentData, GetEnvironmentCronsData, GetCronByIdData, GetCronExecutionsData, GetCronExecutionsResponse, GetEnvironmentDomainsData, AddEnvironmentDomainData, AddEnvironmentDomainResponse, DeleteEnvironmentDomainData, DeleteEnvironmentDomainResponse, UpdateEnvironmentSettingsData, 
UpdateEnvironmentSettingsResponse, SleepEnvironmentData, SleepEnvironmentResponse, TeardownEnvironmentData, TeardownEnvironmentResponse, WakeEnvironmentData, WakeEnvironmentResponse, GetContainerLogsData, ListContainersData, GetContainerDetailData, GetContainerLogsByIdData, GetContainerMetricsData, StreamContainerMetricsData, RestartContainerData, RestartContainerResponse, StartContainerData, StartContainerResponse, StopContainerData, StopContainerResponse, DeployFromImageData, DeployFromImageResponse, DeployFromImageUploadData, DeployFromImageUploadResponse, DeployFromStaticData, DeployFromStaticResponse, GetErrorDashboardStatsData, ListErrorGroupsData, ListErrorGroupsResponse, GetErrorGroupData, UpdateErrorGroupData, ListErrorEventsData, ListErrorEventsResponse, GetErrorEventData, GetErrorStatsData, GetErrorTimeSeriesData, GetEventsCount2Data, GetEventTypeBreakdownData, RecordConsoleEventData, GetPropertyBreakdownData, GetPropertyTimelineData, GetEventsTimelineData, GetUniqueEventsData, GetUniqueEventsResponse, ListExternalImagesData, ListExternalImagesResponse, RegisterExternalImageData, RegisterExternalImageResponse, DeleteExternalImageData, DeleteExternalImageResponse, GetExternalImageData, ListFunnelsData, CreateFunnelData, CreateFunnelResponse2 as CreateFunnelResponse, PreviewFunnelMetricsData, PreviewFunnelMetricsResponse, DeleteFunnelData, UpdateFunnelData, GetFunnelMetricsData, UpdateGitSettingsData, UpdateGitSettingsResponse, HasErrorGroupsData, HasAnalyticsEventsData, GetHourlyVisitsData, ListExternalImages2Data, PushExternalImageData, PushExternalImageResponse, GetExternalImage2Data, ListIncidentsData, CreateIncidentData, CreateIncidentResponse, GetBucketedIncidentsData, PurgeProjectLogsData, PurgeProjectLogsError, ListMonitorsData, CreateMonitorData, CreateMonitorResponse, DeleteReleaseSourceMapsData, DeleteReleaseSourceMapsResponse, ListSourceMapsData, UploadSourceMapData, UploadSourceMapResponse, UpdateProjectSettingsData, 
UpdateProjectSettingsResponse, ListReleasesData, DeleteSourceMapData, DeleteSourceMapResponse, ListStaticBundlesData, ListStaticBundlesResponse, DeleteStaticBundleData, DeleteStaticBundleResponse, GetStaticBundleData, GetStatusOverviewData, GetUniqueCountsData, UploadStaticBundleData, UploadStaticBundleResponse, ListProjectScansData, ListProjectScansError, ListProjectScansResponse, TriggerScanData, TriggerScanError, TriggerScanResponse2 as TriggerScanResponse, GetLatestScansPerEnvironmentData, GetLatestScanData, ListWebhooksData, ListWebhooksResponse, CreateWebhookData, CreateWebhookResponse, DeleteWebhookData, DeleteWebhookResponse, GetWebhookData, UpdateWebhookData, UpdateWebhookResponse, ListDeliveriesData, GetDeliveryData, RetryDeliveryData, RetryDeliveryResponse, GetProxyLogsData, GetProxyLogsResponse, GetProxyLogByRequestIdData, GetTimeBucketStatsData, GetTodayStatsData, GetProxyLogByIdData, ListSyncedRepositoriesData, ListSyncedRepositoriesResponse, GetRepositoryByNameData, GetAllRepositoriesByNameData, GetRepositoryPresetByNameData, GetRepositoryBranchesData, GetRepositoryTagsData, GetRepositoryPresetLiveData, GetBranchesByRepositoryIdData, ListCommitsByRepositoryIdData, CheckCommitExistsData, GetTagsByRepositoryIdData, GetProjectSessionReplaysData, GetProjectSessionReplaysError, GetProjectSessionReplaysResponse2 as GetProjectSessionReplaysResponse, GetSessionEvents2Data, GetSettingsData, UpdateSettingsData, UpdateSettingsResponse, RevokeJoinTokenData, RevokeJoinTokenResponse, GenerateJoinTokenData, GenerateJoinTokenResponse2 as GenerateJoinTokenResponse, GetJoinTokenStatusData, ListTemplatesData, ListTemplateTagsData, GetTemplateData, GetCurrentUserData, ListUsersData, CreateUserData, CreateUserResponse, UpdateSelfData, UpdateSelfResponse, DisableMfaData, DisableMfaResponse, SetupMfaData, SetupMfaResponse, VerifyAndEnableMfaData, VerifyAndEnableMfaResponse, DeleteUserData, DeleteUserResponse, UpdateUserData, UpdateUserResponse, RestoreUserData, 
RestoreUserResponse, AssignRoleData, RemoveRoleData, RemoveRoleResponse, GetVisitorSessions2Data, GetVisitorSessions2Error, GetVisitorSessions2Response, DeleteSessionReplayData, DeleteSessionReplayError, GetSessionReplayData, UpdateSessionDurationData, UpdateSessionDurationError, UpdateSessionDurationResponse2 as UpdateSessionDurationResponse, GetSessionReplayEventsData, AddEventsData, AddEventsError, AddEventsResponse2 as AddEventsResponse, DeleteScanData, DeleteScanError, DeleteScanResponse, GetScanData, GetScanVulnerabilitiesData, GetScanVulnerabilitiesError, GetScanVulnerabilitiesResponse, ListEventTypesData, TriggerWeeklyDigestData, TriggerWeeklyDigestResponse, ListExternalPluginsData, ReloadPluginsData, ReloadPluginsResponse, IngestSentryEnvelopeData, IngestSentryEventData, IngestSentryEventResponse, ListAuditLogsData, ListAuditLogsResponse, GetAuditLogData } from '../types.gen'; +import type { GetPlatformInfoData, ChunkUploadOptionsData, CreateReleaseData, CreateReleaseResponse, ListReleaseFilesData, UploadReleaseFileData, UploadReleaseFileResponse, RecordEventMetricsData, AddSessionReplayEventsData, AddSessionReplayEventsError, AddSessionReplayEventsResponse, InitSessionReplayData, InitSessionReplayError, InitSessionReplayResponse, RecordSpeedMetricsData, RecordSpeedMetricsError, UpdateSpeedMetricsData, UpdateSpeedMetricsError, GetPricingData, ListProviderKeysData, CreateProviderKeyData, CreateProviderKeyError, CreateProviderKeyResponse, TestProviderKeyInlineData, TestProviderKeyInlineError, TestProviderKeyInlineResponse, DeleteProviderKeyData, DeleteProviderKeyError, DeleteProviderKeyResponse, UpdateProviderKeyData, UpdateProviderKeyError, UpdateProviderKeyResponse, TestProviderKeyByIdData, TestProviderKeyByIdError, TestProviderKeyByIdResponse, GetUsageByProviderData, GetConversationsData, GetConversationDetailData, GetUsageRecentData, GetUsageSummaryData, GetUsageTimeseriesData, GetUsageTopModelsData, ChatCompletionsData, ChatCompletionsError, 
ChatCompletionsResponse, EmbeddingsData, EmbeddingsError, EmbeddingsResponse, ListModelsData, GetActiveVisitorsData, GetEventDetailData, GetEventVisitorsData, GetEventVisitorsResponse, GetEventsCountData, GetGeneralStatsData, GetLiveVisitorsListData, GetPageFlowData, GetPageHourlySessionsData, GetPagePathDetailData, GetPagePathVisitorsData, GetPagePathVisitorsResponse, GetPagePathsData, GetPagePathsSparklinesData, GetRecentActivityData, GetSessionDetailsData, GetSessionEventsData, GetSessionEventsResponse, GetSessionLogsData, GetSessionLogsResponse, GetVisitorsData, GetVisitorsResponse, GetVisitorByGuidData, GetVisitorByIdData, GetVisitorDetailsData, EnrichVisitorData, EnrichVisitorResponse2 as EnrichVisitorResponse, GetVisitorInfoData, GetVisitorJourneyData, GetVisitorSessionsData, GetVisitorStatsData, ListApiKeysData, ListApiKeysResponse, CreateApiKeyData, CreateApiKeyResponse2 as CreateApiKeyResponse, GetApiKeyPermissionsData, DeleteApiKeyData, DeleteApiKeyResponse, GetApiKeyData, UpdateApiKeyData, UpdateApiKeyResponse, ActivateApiKeyData, ActivateApiKeyResponse, DeactivateApiKeyData, DeactivateApiKeyResponse, EmailStatusData, LoginData, LoginResponse, RequestMagicLinkData, RequestMagicLinkResponse, VerifyMagicLinkData, RequestPasswordResetData, RequestPasswordResetResponse, ResetPasswordData, ResetPasswordResponse, VerifyEmailData, VerifyMfaChallengeData, VerifyMfaChallengeResponse, RunExternalServiceBackupData, RunExternalServiceBackupError, RunExternalServiceBackupResponse, ListS3SourcesData, CreateS3SourceData, CreateS3SourceError, CreateS3SourceResponse, DeleteS3SourceData, DeleteS3SourceError, DeleteS3SourceResponse, GetS3SourceData, UpdateS3SourceData, UpdateS3SourceError, UpdateS3SourceResponse, ListSourceBackupsData, RunBackupForSourceData, RunBackupForSourceError, RunBackupForSourceResponse, ListBackupSchedulesData, CreateBackupScheduleData, CreateBackupScheduleError, CreateBackupScheduleResponse, DeleteBackupScheduleData, DeleteBackupScheduleError, 
DeleteBackupScheduleResponse, GetBackupScheduleData, ListBackupsForScheduleData, DisableBackupScheduleData, DisableBackupScheduleResponse, EnableBackupScheduleData, EnableBackupScheduleResponse, GetBackupData, BlobDeleteData, BlobDeleteError, BlobDeleteResponse, BlobListData, BlobListError, BlobListResponse, BlobPutData, BlobPutError, BlobPutResponse, BlobCopyData, BlobCopyError, BlobCopyResponse, BlobDisableData, BlobDisableResponse, BlobEnableData, BlobEnableResponse, BlobStatusData, BlobUpdateData, BlobUpdateResponse, BlobDownloadData, GetDashboardProjectsAnalyticsData, GetActivityGraphData, GetScanByDeploymentData, ListProvidersData, CreateProviderData, CreateProviderResponse, DeleteProviderData, DeleteProviderResponse, GetProviderData, UpdateProviderData, UpdateProviderResponse, ListManagedDomainsData, AddManagedDomainData, AddManagedDomainResponse, TestProviderConnectionData, TestProviderConnectionResponse, ListProviderZonesData, RemoveManagedDomainData, RemoveManagedDomainResponse, VerifyManagedDomainData, VerifyManagedDomainResponse, LookupDnsARecordsData, ListDomainsData, ListDomainsResponse2 as ListDomainsResponse, CreateDomainData, CreateDomainResponse, GetDomainByHostData, CancelDomainOrderData, CancelDomainOrderResponse, GetDomainOrderData, CreateOrRecreateOrderData, CreateOrRecreateOrderResponse, FinalizeOrderData, FinalizeOrderResponse, SetupDnsChallengeData, SetupDnsChallengeResponse2 as SetupDnsChallengeResponse, DeleteDomainData, DeleteDomainResponse, GetDomainByIdData, GetChallengeTokenData, GetHttpChallengeDebugData, ProvisionDomainData, ProvisionDomainResponse, RenewDomainData, RenewDomainResponse, CheckDomainStatusData, ListDomains2Data, CreateDomain2Data, CreateDomain2Response, GetDomainByNameData, DeleteDomain2Data, DeleteDomain2Response, GetDomainData, GetDomainDnsRecordsData, SetupDnsData, SetupDnsResponse2 as SetupDnsResponse, VerifyDomainData, VerifyDomainResponse, ListProviders2Data, CreateProvider2Data, CreateProvider2Response, 
DeleteProvider2Data, DeleteProvider2Response, GetProvider2Data, TestProviderData, TestProviderResponse2 as TestProviderResponse, ListEmailsData, ListEmailsResponse, SendEmailData, SendEmailResponse, GetEmailStatsData, ValidateEmailData, ValidateEmailResponse2 as ValidateEmailResponse, GetEmailData, ListServicesData, ListServicesResponse, CreateServiceData, CreateServiceResponse, ListAvailableContainersData, GetServiceBySlugData, ImportExternalServiceData, ImportExternalServiceResponse, ListProjectServicesData, ListProjectServicesResponse, GetProjectServiceEnvironmentVariablesData, GetProvidersMetadataData, GetProviderMetadataData, GetServiceTypesData, GetServiceTypeParametersData, DeleteServiceData, DeleteServiceResponse, GetServiceData, UpdateServiceData, UpdateServiceResponse, GetServicePreviewEnvironmentVariablesMaskedData, GetServicePreviewEnvironmentVariableNamesData, ListServiceProjectsData, ListServiceProjectsResponse, LinkServiceToProjectData, LinkServiceToProjectResponse, UnlinkServiceFromProjectData, UnlinkServiceFromProjectResponse, GetServiceEnvironmentVariablesData, GetServiceEnvironmentVariableData, RetryClusterData, RetryClusterResponse, StartServiceData, StartServiceResponse, StopServiceData, StopServiceResponse, UpgradeServiceData, UpgradeServiceResponse, ListRootContainersData, ListContainersAtPathData, ListEntitiesData, GetEntityInfoData, QueryDataData, QueryDataResponse2 as QueryDataResponse, DownloadObjectData, GetContainerInfoData, CheckExplorerSupportData, GetFileData, GetIpGeolocationData, ListConnectionsData, ListConnectionsResponse, DeleteConnectionData, DeleteConnectionResponse, ActivateConnectionData, DeactivateConnectionData, ListRepositoriesByConnectionData, ListRepositoriesByConnectionResponse, SyncRepositoriesData, SyncRepositoriesResponse, UpdateConnectionTokenData, UpdateConnectionTokenResponse, ValidateConnectionData, ListGitProvidersData, CreateGitProviderData, CreateGitProviderResponse, CreateGithubPatProviderData, 
CreateGithubPatProviderResponse, CreateGitlabOauthProviderData, CreateGitlabOauthProviderResponse, CreateGitlabPatProviderData, CreateGitlabPatProviderResponse, DeleteProvider3Data, DeleteProvider3Response, GetGitProviderData, ActivateProviderData, HandleGitProviderOauthCallbackData, GetProviderConnectionsData, DeactivateProviderData, CheckProviderDeletionSafetyData, StartGitProviderOauthData, DeleteProviderSafelyData, DeleteProviderSafelyResponse, GetPublicRepositoryData, GetPublicBranchesData, DetectPublicPresetsData, DiscoverWorkloadsData, DiscoverWorkloadsResponse, ExecuteImportData, ExecuteImportResponse2 as ExecuteImportResponse, CreatePlanData, CreatePlanResponse2 as CreatePlanResponse, ListSourcesData, GetImportStatusData, GetIncidentData, UpdateIncidentStatusData, UpdateIncidentStatusResponse, GetIncidentUpdatesData, AdminListNodesData, RegisterNodeData, RegisterNodeResponse2 as RegisterNodeResponse, AdminRemoveNodeData, AdminRemoveNodeResponse, AdminGetNodeData, AdminListNodeContainersData, AdminUndrainNodeData, AdminUndrainNodeResponse, AdminDrainStatusData, AdminDrainNodeData, AdminDrainNodeResponse, NodeHeartbeatData, NodeHeartbeatResponse, GetS3CredentialsData, ListIpAccessControlData, CreateIpAccessControlData, CreateIpAccessControlError, CreateIpAccessControlResponse, CheckIpBlockedData, DeleteIpAccessControlData, DeleteIpAccessControlError, DeleteIpAccessControlResponse, GetIpAccessControlData, UpdateIpAccessControlData, UpdateIpAccessControlError, UpdateIpAccessControlResponse, KvDelData, KvDelResponse, KvDisableData, KvDisableResponse, KvEnableData, KvEnableResponse, KvExpireData, KvExpireResponse, KvGetData, KvGetResponse, KvIncrData, KvIncrResponse, KvKeysData, KvKeysResponse, KvSetData, KvSetResponse, KvStatusData, KvTtlData, KvTtlResponse, KvUpdateData, KvUpdateResponse, ListRoutesData, CreateRouteData, CreateRouteResponse, DeleteRouteData, DeleteRouteResponse, GetRouteData, UpdateRouteData, UpdateRouteResponse, LogoutData, GetLogContextData, 
SearchLogsData, SearchLogsError, SearchLogsResponse2 as SearchLogsResponse, TailLogsData, DeleteMonitorData, DeleteMonitorResponse, GetMonitorData, GetBucketedStatusData, GetCurrentMonitorStatusData, GetUptimeHistoryData, DeletePreferencesData, DeletePreferencesResponse, GetPreferencesData, UpdatePreferencesData, UpdatePreferencesResponse, ListNotificationProvidersData, ListNotificationProvidersResponse, CreateNotificationProviderData, CreateNotificationProviderResponse, CreateEmailProviderData, CreateEmailProviderResponse, UpdateEmailProviderData, UpdateEmailProviderResponse, CreateSlackProviderData, CreateSlackProviderResponse, UpdateSlackProviderData, UpdateSlackProviderResponse, CreateWebhookProviderData, CreateWebhookProviderResponse, UpdateWebhookProviderData, UpdateWebhookProviderResponse, DeleteProvider4Data, DeleteProvider4Response, GetNotificationProviderData, UpdateProvider2Data, UpdateProvider2Response, TestProvider2Data, TestProvider2Response, ListOrdersData, ListOrdersResponse2 as ListOrdersResponse, QueryGenaiTracesData, QueryGenaiTracesError, QueryGenaiTracesResponse, GetGenaiTraceData, GetHealthData, ListInsightsData, ListInsightsError, ListInsightsResponse, QueryLogsData, QueryLogsError, QueryLogsResponse, ListMetricNamesData, QueryMetricsData, GetPipelineStatsData, GetQuotaData, QueryTraceSummariesData, QueryTraceSummariesError, QueryTraceSummariesResponse, QueryTracesData, QueryTracesError, QueryTracesResponse, GetTraceData, IngestLogsData, IngestLogsError, IngestMetricsData, IngestMetricsError, IngestTracesData, IngestTracesError, IngestLogsByPathData, IngestLogsByPathError, IngestMetricsByPathData, IngestMetricsByPathError, IngestTracesByPathData, IngestTracesByPathError, HasPerformanceMetricsData, GetPerformanceMetricsData, GetMetricsOverTimeData, GetGroupedPageMetricsData, GetAccessInfoData, GetPrivateIpData, GetPublicIpData, ListPresetsData, GeneratePresetDockerfileData, GeneratePresetDockerfileResponse, GetProjectsData, 
GetProjectsResponse, CreateProjectData, CreateProjectResponse, GetProjectBySlugData, CreateProjectFromTemplateData, CreateProjectFromTemplateResponse2 as CreateProjectFromTemplateResponse, GetProjectStatisticsData, DeleteProjectData, DeleteProjectResponse, GetProjectData, UpdateProjectData, UpdateProjectResponse, GetProjectDeploymentsData, GetProjectDeploymentsResponse, GetLastDeploymentData, TriggerProjectPipelineData, TriggerProjectPipelineResponse, GetActiveVisitors2Data, GetAggregatedBucketsData, UpdateAutomaticDeployData, UpdateAutomaticDeployResponse, ListCustomDomainsForProjectData, CreateCustomDomainData, CreateCustomDomainResponse, DeleteCustomDomainData, DeleteCustomDomainResponse, GetCustomDomainData, UpdateCustomDomainData, UpdateCustomDomainResponse, LinkCustomDomainToCertificateData, LinkCustomDomainToCertificateResponse, UpdateProjectDeploymentConfigData, UpdateProjectDeploymentConfigResponse, GetDeploymentData, CancelDeploymentData, CancelDeploymentResponse, GetDeploymentJobsData, GetDeploymentJobLogsData, TailDeploymentJobLogsData, GetDeploymentOperationsData, ExecuteDeploymentOperationData, ExecuteDeploymentOperationResponse, GetDeploymentOperationStatusData, PauseDeploymentData, PauseDeploymentResponse, PromoteDeploymentData, PromoteDeploymentResponse, ResumeDeploymentData, ResumeDeploymentResponse, RollbackToDeploymentData, RollbackToDeploymentResponse, TeardownDeploymentData, TeardownDeploymentResponse, ListDsnsData, CreateDsnData, CreateDsnResponse, GetOrCreateDsnData, GetOrCreateDsnResponse, RegenerateDsnData, RegenerateDsnResponse, RevokeDsnData, RevokeDsnResponse, GetEnvironmentVariablesData, CreateEnvironmentVariableData, CreateEnvironmentVariableResponse, GetEnvironmentVariableValueData, DeleteEnvironmentVariableData, DeleteEnvironmentVariableResponse, UpdateEnvironmentVariableData, UpdateEnvironmentVariableResponse, GetEnvironmentsData, CreateEnvironmentData, CreateEnvironmentResponse, DeleteEnvironmentData, DeleteEnvironmentResponse, 
GetEnvironmentData, GetEnvironmentCronsData, GetCronByIdData, GetCronExecutionsData, GetCronExecutionsResponse, GetEnvironmentDomainsData, AddEnvironmentDomainData, AddEnvironmentDomainResponse, DeleteEnvironmentDomainData, DeleteEnvironmentDomainResponse, UpdateEnvironmentSettingsData, UpdateEnvironmentSettingsResponse, SleepEnvironmentData, SleepEnvironmentResponse, TeardownEnvironmentData, TeardownEnvironmentResponse, WakeEnvironmentData, WakeEnvironmentResponse, GetContainerLogsData, ListContainersData, GetContainerDetailData, GetContainerLogsByIdData, GetContainerMetricsData, StreamContainerMetricsData, RestartContainerData, RestartContainerResponse, StartContainerData, StartContainerResponse, StopContainerData, StopContainerResponse, DeployFromImageData, DeployFromImageResponse, DeployFromImageUploadData, DeployFromImageUploadResponse, DeployFromStaticData, DeployFromStaticResponse, ListAlertRulesData, CreateAlertRuleData, CreateAlertRuleResponse, DeleteAlertRuleData, DeleteAlertRuleResponse, GetAlertRuleData, UpdateAlertRuleData, UpdateAlertRuleResponse, GetErrorDashboardStatsData, ListErrorGroupsData, ListErrorGroupsResponse, GetErrorGroupData, UpdateErrorGroupData, ListErrorEventsData, ListErrorEventsResponse, GetErrorEventData, GetErrorStatsData, GetErrorTimeSeriesData, GetEventsCount2Data, GetEventTypeBreakdownData, RecordConsoleEventData, GetPropertyBreakdownData, GetPropertyTimelineData, GetEventsTimelineData, GetUniqueEventsData, GetUniqueEventsResponse, ListExternalImagesData, ListExternalImagesResponse, RegisterExternalImageData, RegisterExternalImageResponse, DeleteExternalImageData, DeleteExternalImageResponse, GetExternalImageData, ListFunnelsData, CreateFunnelData, CreateFunnelResponse2 as CreateFunnelResponse, PreviewFunnelMetricsData, PreviewFunnelMetricsResponse, DeleteFunnelData, UpdateFunnelData, GetFunnelMetricsData, UpdateGitSettingsData, UpdateGitSettingsResponse, HasErrorGroupsData, HasAnalyticsEventsData, GetHourlyVisitsData, 
ListExternalImages2Data, PushExternalImageData, PushExternalImageResponse, GetExternalImage2Data, ListIncidentsData, CreateIncidentData, CreateIncidentResponse, GetBucketedIncidentsData, PurgeProjectLogsData, PurgeProjectLogsError, ListMonitorsData, CreateMonitorData, CreateMonitorResponse, DeleteReleaseSourceMapsData, DeleteReleaseSourceMapsResponse, ListSourceMapsData, UploadSourceMapData, UploadSourceMapResponse, UpdateProjectSettingsData, UpdateProjectSettingsResponse, ListReleasesData, DeleteSourceMapData, DeleteSourceMapResponse, ListStaticBundlesData, ListStaticBundlesResponse, DeleteStaticBundleData, DeleteStaticBundleResponse, GetStaticBundleData, GetStatusOverviewData, GetUniqueCountsData, UploadStaticBundleData, UploadStaticBundleResponse, ListProjectScansData, ListProjectScansError, ListProjectScansResponse, TriggerScanData, TriggerScanError, TriggerScanResponse2 as TriggerScanResponse, GetLatestScansPerEnvironmentData, GetLatestScanData, ListWebhooksData, ListWebhooksResponse, CreateWebhookData, CreateWebhookResponse, DeleteWebhookData, DeleteWebhookResponse, GetWebhookData, UpdateWebhookData, UpdateWebhookResponse, ListDeliveriesData, GetDeliveryData, RetryDeliveryData, RetryDeliveryResponse, GetProxyLogsData, GetProxyLogsResponse, GetProxyLogByRequestIdData, GetTimeBucketStatsData, GetTodayStatsData, GetProxyLogByIdData, ListSyncedRepositoriesData, ListSyncedRepositoriesResponse, GetRepositoryByNameData, GetAllRepositoriesByNameData, GetRepositoryPresetByNameData, GetRepositoryBranchesData, GetRepositoryTagsData, GetRepositoryPresetLiveData, GetBranchesByRepositoryIdData, ListCommitsByRepositoryIdData, CheckCommitExistsData, GetTagsByRepositoryIdData, GetProjectSessionReplaysData, GetProjectSessionReplaysError, GetProjectSessionReplaysResponse2 as GetProjectSessionReplaysResponse, GetSessionEvents2Data, GetSettingsData, UpdateSettingsData, UpdateSettingsResponse, RevokeJoinTokenData, RevokeJoinTokenResponse, GenerateJoinTokenData, 
GenerateJoinTokenResponse2 as GenerateJoinTokenResponse, GetJoinTokenStatusData, GetPublicSettingsData, ListTemplatesData, ListTemplateTagsData, GetTemplateData, GetCurrentUserData, ListUsersData, CreateUserData, CreateUserResponse, UpdateSelfData, UpdateSelfResponse, DisableMfaData, DisableMfaResponse, SetupMfaData, SetupMfaResponse, VerifyAndEnableMfaData, VerifyAndEnableMfaResponse, DeleteUserData, DeleteUserResponse, UpdateUserData, UpdateUserResponse, RestoreUserData, RestoreUserResponse, AssignRoleData, RemoveRoleData, RemoveRoleResponse, GetVisitorSessions2Data, GetVisitorSessions2Error, GetVisitorSessions2Response, DeleteSessionReplayData, DeleteSessionReplayError, GetSessionReplayData, UpdateSessionDurationData, UpdateSessionDurationError, UpdateSessionDurationResponse2 as UpdateSessionDurationResponse, GetSessionReplayEventsData, AddEventsData, AddEventsError, AddEventsResponse2 as AddEventsResponse, DeleteScanData, DeleteScanError, DeleteScanResponse, GetScanData, GetScanVulnerabilitiesData, GetScanVulnerabilitiesError, GetScanVulnerabilitiesResponse, ListEventTypesData, TriggerWeeklyDigestData, TriggerWeeklyDigestResponse, ListExternalPluginsData, ReloadPluginsData, ReloadPluginsResponse, IngestSentryEnvelopeData, IngestSentryEventData, IngestSentryEventResponse, ListAuditLogsData, ListAuditLogsResponse, GetAuditLogData } from '../types.gen'; import { client } from '../client.gen'; export type QueryKey = [ @@ -3357,6 +3357,25 @@ export const getServiceEnvironmentVariableOptions = (options: Options>): UseMutationOptions> => { + const mutationOptions: UseMutationOptions> = { + mutationFn: async (fnOptions) => { + const { data } = await retryCluster({ + ...options, + ...fnOptions, + throwOnError: true + }); + return data; + } + }; + return mutationOptions; +}; + /** * Start an external service */ @@ -4451,6 +4470,29 @@ export const nodeHeartbeatMutation = (options?: Partial) => createQueryKey('getS3Credentials', options); + +/** + * Get decrypted S3 
credentials for a backup/restore operation. + * Agents call this endpoint to receive the S3 credentials they need to upload + * or download backups. The credentials are decrypted from the stored S3 source + * and returned over the authenticated TLS/WireGuard channel. + */ +export const getS3CredentialsOptions = (options: Options) => { + return queryOptions({ + queryFn: async ({ queryKey, signal }) => { + const { data } = await getS3Credentials({ + ...options, + ...queryKey[0], + signal, + throwOnError: true + }); + return data; + }, + queryKey: getS3CredentialsQueryKey(options) + }); +}; + export const listIpAccessControlQueryKey = (options?: Options) => createQueryKey('listIpAccessControl', options); /** @@ -7115,8 +7157,8 @@ export const updateEnvironmentSettingsMutation = (options?: Partial>): UseMutationOptions> => { const mutationOptions: UseMutationOptions> = { @@ -7152,8 +7194,9 @@ export const teardownEnvironmentMutation = (options?: Partial>): UseMutationOptions> => { const mutationOptions: UseMutationOptions> = { @@ -7400,6 +7443,97 @@ export const deployFromStaticMutation = (options?: Partial) => createQueryKey('listAlertRules', options); + +/** + * List all alert rules for a project + */ +export const listAlertRulesOptions = (options: Options) => { + return queryOptions({ + queryFn: async ({ queryKey, signal }) => { + const { data } = await listAlertRules({ + ...options, + ...queryKey[0], + signal, + throwOnError: true + }); + return data; + }, + queryKey: listAlertRulesQueryKey(options) + }); +}; + +/** + * Create a new alert rule + */ +export const createAlertRuleMutation = (options?: Partial>): UseMutationOptions> => { + const mutationOptions: UseMutationOptions> = { + mutationFn: async (fnOptions) => { + const { data } = await createAlertRule({ + ...options, + ...fnOptions, + throwOnError: true + }); + return data; + } + }; + return mutationOptions; +}; + +/** + * Delete an alert rule + */ +export const deleteAlertRuleMutation = (options?: 
Partial>): UseMutationOptions> => { + const mutationOptions: UseMutationOptions> = { + mutationFn: async (fnOptions) => { + const { data } = await deleteAlertRule({ + ...options, + ...fnOptions, + throwOnError: true + }); + return data; + } + }; + return mutationOptions; +}; + +export const getAlertRuleQueryKey = (options: Options) => createQueryKey('getAlertRule', options); + +/** + * Get a specific alert rule + */ +export const getAlertRuleOptions = (options: Options) => { + return queryOptions({ + queryFn: async ({ queryKey, signal }) => { + const { data } = await getAlertRule({ + ...options, + ...queryKey[0], + signal, + throwOnError: true + }); + return data; + }, + queryKey: getAlertRuleQueryKey(options) + }); +}; + +/** + * Update an existing alert rule + */ +export const updateAlertRuleMutation = (options?: Partial>): UseMutationOptions> => { + const mutationOptions: UseMutationOptions> = { + mutationFn: async (fnOptions) => { + const { data } = await updateAlertRule({ + ...options, + ...fnOptions, + throwOnError: true + }); + return data; + } + }; + return mutationOptions; +}; + export const getErrorDashboardStatsQueryKey = (options: Options) => createQueryKey('getErrorDashboardStats', options); /** @@ -9343,6 +9477,28 @@ export const getJoinTokenStatusOptions = (options?: Options) => createQueryKey('getPublicSettings', options); + +/** + * Get public settings (no authentication required) + * Returns non-sensitive feature flags like demo mode status. + * This endpoint is intentionally unauthenticated so the login page can use it. 
+ */ +export const getPublicSettingsOptions = (options?: Options) => { + return queryOptions({ + queryFn: async ({ queryKey, signal }) => { + const { data } = await getPublicSettings({ + ...options, + ...queryKey[0], + signal, + throwOnError: true + }); + return data; + }, + queryKey: getPublicSettingsQueryKey(options) + }); +}; + export const listTemplatesQueryKey = (options?: Options) => createQueryKey('listTemplates', options); /** diff --git a/web/src/api/client/sdk.gen.ts b/web/src/api/client/sdk.gen.ts index 98f481fc..b9559891 100644 --- a/web/src/api/client/sdk.gen.ts +++ b/web/src/api/client/sdk.gen.ts @@ -1,7 +1,7 @@ // This file is auto-generated by @hey-api/openapi-ts import type { Options as ClientOptions, Client, TDataShape } from './client'; -import type { GetPlatformInfoData, GetPlatformInfoResponses, GetPlatformInfoErrors, ChunkUploadOptionsData, ChunkUploadOptionsResponses, CreateReleaseData, CreateReleaseResponses, CreateReleaseErrors, ListReleaseFilesData, ListReleaseFilesResponses, ListReleaseFilesErrors, UploadReleaseFileData, UploadReleaseFileResponses, UploadReleaseFileErrors, RecordEventMetricsData, RecordEventMetricsResponses, RecordEventMetricsErrors, AddSessionReplayEventsData, AddSessionReplayEventsResponses, AddSessionReplayEventsErrors, InitSessionReplayData, InitSessionReplayResponses, InitSessionReplayErrors, RecordSpeedMetricsData, RecordSpeedMetricsResponses, RecordSpeedMetricsErrors, UpdateSpeedMetricsData, UpdateSpeedMetricsResponses, UpdateSpeedMetricsErrors, GetPricingData, GetPricingResponses, GetPricingErrors, ListProviderKeysData, ListProviderKeysResponses, ListProviderKeysErrors, CreateProviderKeyData, CreateProviderKeyResponses, CreateProviderKeyErrors, TestProviderKeyInlineData, TestProviderKeyInlineResponses, TestProviderKeyInlineErrors, DeleteProviderKeyData, DeleteProviderKeyResponses, DeleteProviderKeyErrors, UpdateProviderKeyData, UpdateProviderKeyResponses, UpdateProviderKeyErrors, TestProviderKeyByIdData, 
TestProviderKeyByIdResponses, TestProviderKeyByIdErrors, GetUsageByProviderData, GetUsageByProviderResponses, GetUsageByProviderErrors, GetConversationsData, GetConversationsResponses, GetConversationsErrors, GetConversationDetailData, GetConversationDetailResponses, GetConversationDetailErrors, GetUsageRecentData, GetUsageRecentResponses, GetUsageRecentErrors, GetUsageSummaryData, GetUsageSummaryResponses, GetUsageSummaryErrors, GetUsageTimeseriesData, GetUsageTimeseriesResponses, GetUsageTimeseriesErrors, GetUsageTopModelsData, GetUsageTopModelsResponses, GetUsageTopModelsErrors, ChatCompletionsData, ChatCompletionsResponses, ChatCompletionsErrors, EmbeddingsData, EmbeddingsResponses, EmbeddingsErrors, ListModelsData, ListModelsResponses, ListModelsErrors, GetActiveVisitorsData, GetActiveVisitorsResponses, GetActiveVisitorsErrors, GetEventDetailData, GetEventDetailResponses, GetEventDetailErrors, GetEventVisitorsData, GetEventVisitorsResponses, GetEventVisitorsErrors, GetEventsCountData, GetEventsCountResponses, GetEventsCountErrors, GetGeneralStatsData, GetGeneralStatsResponses, GetGeneralStatsErrors, GetLiveVisitorsListData, GetLiveVisitorsListResponses, GetLiveVisitorsListErrors, GetPageFlowData, GetPageFlowResponses, GetPageFlowErrors, GetPageHourlySessionsData, GetPageHourlySessionsResponses, GetPageHourlySessionsErrors, GetPagePathDetailData, GetPagePathDetailResponses, GetPagePathDetailErrors, GetPagePathVisitorsData, GetPagePathVisitorsResponses, GetPagePathVisitorsErrors, GetPagePathsData, GetPagePathsResponses, GetPagePathsErrors, GetPagePathsSparklinesData, GetPagePathsSparklinesResponses, GetPagePathsSparklinesErrors, GetRecentActivityData, GetRecentActivityResponses, GetRecentActivityErrors, GetSessionDetailsData, GetSessionDetailsResponses, GetSessionDetailsErrors, GetSessionEventsData, GetSessionEventsResponses, GetSessionEventsErrors, GetSessionLogsData, GetSessionLogsResponses, GetSessionLogsErrors, GetVisitorsData, GetVisitorsResponses, 
GetVisitorsErrors, GetVisitorByGuidData, GetVisitorByGuidResponses, GetVisitorByGuidErrors, GetVisitorByIdData, GetVisitorByIdResponses, GetVisitorByIdErrors, GetVisitorDetailsData, GetVisitorDetailsResponses, GetVisitorDetailsErrors, EnrichVisitorData, EnrichVisitorResponses, EnrichVisitorErrors, GetVisitorInfoData, GetVisitorInfoResponses, GetVisitorInfoErrors, GetVisitorJourneyData, GetVisitorJourneyResponses, GetVisitorJourneyErrors, GetVisitorSessionsData, GetVisitorSessionsResponses, GetVisitorSessionsErrors, GetVisitorStatsData, GetVisitorStatsResponses, GetVisitorStatsErrors, ListApiKeysData, ListApiKeysResponses, ListApiKeysErrors, CreateApiKeyData, CreateApiKeyResponses, CreateApiKeyErrors, GetApiKeyPermissionsData, GetApiKeyPermissionsResponses, GetApiKeyPermissionsErrors, DeleteApiKeyData, DeleteApiKeyResponses, DeleteApiKeyErrors, GetApiKeyData, GetApiKeyResponses, GetApiKeyErrors, UpdateApiKeyData, UpdateApiKeyResponses, UpdateApiKeyErrors, ActivateApiKeyData, ActivateApiKeyResponses, ActivateApiKeyErrors, DeactivateApiKeyData, DeactivateApiKeyResponses, DeactivateApiKeyErrors, EmailStatusData, EmailStatusResponses, EmailStatusErrors, LoginData, LoginResponses, LoginErrors, RequestMagicLinkData, RequestMagicLinkResponses, RequestMagicLinkErrors, VerifyMagicLinkData, VerifyMagicLinkResponses, VerifyMagicLinkErrors, RequestPasswordResetData, RequestPasswordResetResponses, RequestPasswordResetErrors, ResetPasswordData, ResetPasswordResponses, ResetPasswordErrors, VerifyEmailData, VerifyEmailResponses, VerifyEmailErrors, VerifyMfaChallengeData, VerifyMfaChallengeResponses, VerifyMfaChallengeErrors, RunExternalServiceBackupData, RunExternalServiceBackupResponses, RunExternalServiceBackupErrors, ListS3SourcesData, ListS3SourcesResponses, ListS3SourcesErrors, CreateS3SourceData, CreateS3SourceResponses, CreateS3SourceErrors, DeleteS3SourceData, DeleteS3SourceResponses, DeleteS3SourceErrors, GetS3SourceData, GetS3SourceResponses, GetS3SourceErrors, 
UpdateS3SourceData, UpdateS3SourceResponses, UpdateS3SourceErrors, ListSourceBackupsData, ListSourceBackupsResponses, ListSourceBackupsErrors, RunBackupForSourceData, RunBackupForSourceResponses, RunBackupForSourceErrors, ListBackupSchedulesData, ListBackupSchedulesResponses, ListBackupSchedulesErrors, CreateBackupScheduleData, CreateBackupScheduleResponses, CreateBackupScheduleErrors, DeleteBackupScheduleData, DeleteBackupScheduleResponses, DeleteBackupScheduleErrors, GetBackupScheduleData, GetBackupScheduleResponses, GetBackupScheduleErrors, ListBackupsForScheduleData, ListBackupsForScheduleResponses, ListBackupsForScheduleErrors, DisableBackupScheduleData, DisableBackupScheduleResponses, DisableBackupScheduleErrors, EnableBackupScheduleData, EnableBackupScheduleResponses, EnableBackupScheduleErrors, GetBackupData, GetBackupResponses, GetBackupErrors, BlobDeleteData, BlobDeleteResponses, BlobDeleteErrors, BlobListData, BlobListResponses, BlobListErrors, BlobPutData, BlobPutResponses, BlobPutErrors, BlobCopyData, BlobCopyResponses, BlobCopyErrors, BlobDisableData, BlobDisableResponses, BlobDisableErrors, BlobEnableData, BlobEnableResponses, BlobEnableErrors, BlobStatusData, BlobStatusResponses, BlobStatusErrors, BlobUpdateData, BlobUpdateResponses, BlobUpdateErrors, BlobDownloadData, BlobDownloadResponses, BlobDownloadErrors, BlobHeadData, BlobHeadResponses, BlobHeadErrors, GetDashboardProjectsAnalyticsData, GetDashboardProjectsAnalyticsResponses, GetDashboardProjectsAnalyticsErrors, GetActivityGraphData, GetActivityGraphResponses, GetActivityGraphErrors, GetScanByDeploymentData, GetScanByDeploymentResponses, GetScanByDeploymentErrors, ListProvidersData, ListProvidersResponses, ListProvidersErrors, CreateProviderData, CreateProviderResponses, CreateProviderErrors, DeleteProviderData, DeleteProviderResponses, DeleteProviderErrors, GetProviderData, GetProviderResponses, GetProviderErrors, UpdateProviderData, UpdateProviderResponses, UpdateProviderErrors, 
ListManagedDomainsData, ListManagedDomainsResponses, ListManagedDomainsErrors, AddManagedDomainData, AddManagedDomainResponses, AddManagedDomainErrors, TestProviderConnectionData, TestProviderConnectionResponses, TestProviderConnectionErrors, ListProviderZonesData, ListProviderZonesResponses, ListProviderZonesErrors, RemoveManagedDomainData, RemoveManagedDomainResponses, RemoveManagedDomainErrors, VerifyManagedDomainData, VerifyManagedDomainResponses, VerifyManagedDomainErrors, LookupDnsARecordsData, LookupDnsARecordsResponses, LookupDnsARecordsErrors, ListDomainsData, ListDomainsResponses, ListDomainsErrors, CreateDomainData, CreateDomainResponses, CreateDomainErrors, GetDomainByHostData, GetDomainByHostResponses, GetDomainByHostErrors, CancelDomainOrderData, CancelDomainOrderResponses, CancelDomainOrderErrors, GetDomainOrderData, GetDomainOrderResponses, GetDomainOrderErrors, CreateOrRecreateOrderData, CreateOrRecreateOrderResponses, CreateOrRecreateOrderErrors, FinalizeOrderData, FinalizeOrderResponses, FinalizeOrderErrors, SetupDnsChallengeData, SetupDnsChallengeResponses, SetupDnsChallengeErrors, DeleteDomainData, DeleteDomainResponses, DeleteDomainErrors, GetDomainByIdData, GetDomainByIdResponses, GetDomainByIdErrors, GetChallengeTokenData, GetChallengeTokenResponses, GetChallengeTokenErrors, GetHttpChallengeDebugData, GetHttpChallengeDebugResponses, GetHttpChallengeDebugErrors, ProvisionDomainData, ProvisionDomainResponses, ProvisionDomainErrors, RenewDomainData, RenewDomainResponses, RenewDomainErrors, CheckDomainStatusData, CheckDomainStatusResponses, CheckDomainStatusErrors, ListDomains2Data, ListDomains2Responses, ListDomains2Errors, CreateDomain2Data, CreateDomain2Responses, CreateDomain2Errors, GetDomainByNameData, GetDomainByNameResponses, GetDomainByNameErrors, DeleteDomain2Data, DeleteDomain2Responses, DeleteDomain2Errors, GetDomainData, GetDomainResponses, GetDomainErrors, GetDomainDnsRecordsData, GetDomainDnsRecordsResponses, 
GetDomainDnsRecordsErrors, SetupDnsData, SetupDnsResponses, SetupDnsErrors, VerifyDomainData, VerifyDomainResponses, VerifyDomainErrors, ListProviders2Data, ListProviders2Responses, ListProviders2Errors, CreateProvider2Data, CreateProvider2Responses, CreateProvider2Errors, DeleteProvider2Data, DeleteProvider2Responses, DeleteProvider2Errors, GetProvider2Data, GetProvider2Responses, GetProvider2Errors, TestProviderData, TestProviderResponses, TestProviderErrors, ListEmailsData, ListEmailsResponses, ListEmailsErrors, SendEmailData, SendEmailResponses, SendEmailErrors, GetEmailStatsData, GetEmailStatsResponses, GetEmailStatsErrors, ValidateEmailData, ValidateEmailResponses, ValidateEmailErrors, GetEmailData, GetEmailResponses, GetEmailErrors, ListServicesData, ListServicesResponses, ListServicesErrors, CreateServiceData, CreateServiceResponses, CreateServiceErrors, ListAvailableContainersData, ListAvailableContainersResponses, ListAvailableContainersErrors, GetServiceBySlugData, GetServiceBySlugResponses, GetServiceBySlugErrors, ImportExternalServiceData, ImportExternalServiceResponses, ImportExternalServiceErrors, ListProjectServicesData, ListProjectServicesResponses, ListProjectServicesErrors, GetProjectServiceEnvironmentVariablesData, GetProjectServiceEnvironmentVariablesResponses, GetProjectServiceEnvironmentVariablesErrors, GetProvidersMetadataData, GetProvidersMetadataResponses, GetProvidersMetadataErrors, GetProviderMetadataData, GetProviderMetadataResponses, GetProviderMetadataErrors, GetServiceTypesData, GetServiceTypesResponses, GetServiceTypesErrors, GetServiceTypeParametersData, GetServiceTypeParametersResponses, GetServiceTypeParametersErrors, DeleteServiceData, DeleteServiceResponses, DeleteServiceErrors, GetServiceData, GetServiceResponses, GetServiceErrors, UpdateServiceData, UpdateServiceResponses, UpdateServiceErrors, GetServicePreviewEnvironmentVariablesMaskedData, GetServicePreviewEnvironmentVariablesMaskedResponses, 
GetServicePreviewEnvironmentVariablesMaskedErrors, GetServicePreviewEnvironmentVariableNamesData, GetServicePreviewEnvironmentVariableNamesResponses, GetServicePreviewEnvironmentVariableNamesErrors, ListServiceProjectsData, ListServiceProjectsResponses, ListServiceProjectsErrors, LinkServiceToProjectData, LinkServiceToProjectResponses, LinkServiceToProjectErrors, UnlinkServiceFromProjectData, UnlinkServiceFromProjectResponses, UnlinkServiceFromProjectErrors, GetServiceEnvironmentVariablesData, GetServiceEnvironmentVariablesResponses, GetServiceEnvironmentVariablesErrors, GetServiceEnvironmentVariableData, GetServiceEnvironmentVariableResponses, GetServiceEnvironmentVariableErrors, StartServiceData, StartServiceResponses, StartServiceErrors, StopServiceData, StopServiceResponses, StopServiceErrors, UpgradeServiceData, UpgradeServiceResponses, UpgradeServiceErrors, ListRootContainersData, ListRootContainersResponses, ListRootContainersErrors, ListContainersAtPathData, ListContainersAtPathResponses, ListContainersAtPathErrors, ListEntitiesData, ListEntitiesResponses, ListEntitiesErrors, GetEntityInfoData, GetEntityInfoResponses, GetEntityInfoErrors, QueryDataData, QueryDataResponses, QueryDataErrors, DownloadObjectData, DownloadObjectResponses, DownloadObjectErrors, GetContainerInfoData, GetContainerInfoResponses, GetContainerInfoErrors, CheckExplorerSupportData, CheckExplorerSupportResponses, CheckExplorerSupportErrors, GetFileData, GetFileResponses, GetFileErrors, GetIpGeolocationData, GetIpGeolocationResponses, GetIpGeolocationErrors, ListConnectionsData, ListConnectionsResponses, ListConnectionsErrors, DeleteConnectionData, DeleteConnectionResponses, DeleteConnectionErrors, ActivateConnectionData, ActivateConnectionResponses, ActivateConnectionErrors, DeactivateConnectionData, DeactivateConnectionResponses, DeactivateConnectionErrors, ListRepositoriesByConnectionData, ListRepositoriesByConnectionResponses, ListRepositoriesByConnectionErrors, SyncRepositoriesData, 
SyncRepositoriesResponses, SyncRepositoriesErrors, UpdateConnectionTokenData, UpdateConnectionTokenResponses, UpdateConnectionTokenErrors, ValidateConnectionData, ValidateConnectionResponses, ValidateConnectionErrors, ListGitProvidersData, ListGitProvidersResponses, ListGitProvidersErrors, CreateGitProviderData, CreateGitProviderResponses, CreateGitProviderErrors, CreateGithubPatProviderData, CreateGithubPatProviderResponses, CreateGithubPatProviderErrors, CreateGitlabOauthProviderData, CreateGitlabOauthProviderResponses, CreateGitlabOauthProviderErrors, CreateGitlabPatProviderData, CreateGitlabPatProviderResponses, CreateGitlabPatProviderErrors, DeleteProvider3Data, DeleteProvider3Responses, DeleteProvider3Errors, GetGitProviderData, GetGitProviderResponses, GetGitProviderErrors, ActivateProviderData, ActivateProviderResponses, ActivateProviderErrors, HandleGitProviderOauthCallbackData, HandleGitProviderOauthCallbackErrors, GetProviderConnectionsData, GetProviderConnectionsResponses, GetProviderConnectionsErrors, DeactivateProviderData, DeactivateProviderResponses, DeactivateProviderErrors, CheckProviderDeletionSafetyData, CheckProviderDeletionSafetyResponses, CheckProviderDeletionSafetyErrors, StartGitProviderOauthData, StartGitProviderOauthErrors, DeleteProviderSafelyData, DeleteProviderSafelyResponses, DeleteProviderSafelyErrors, GetPublicRepositoryData, GetPublicRepositoryResponses, GetPublicRepositoryErrors, GetPublicBranchesData, GetPublicBranchesResponses, GetPublicBranchesErrors, DetectPublicPresetsData, DetectPublicPresetsResponses, DetectPublicPresetsErrors, DiscoverWorkloadsData, DiscoverWorkloadsResponses, DiscoverWorkloadsErrors, ExecuteImportData, ExecuteImportResponses, ExecuteImportErrors, CreatePlanData, CreatePlanResponses, CreatePlanErrors, ListSourcesData, ListSourcesResponses, ListSourcesErrors, GetImportStatusData, GetImportStatusResponses, GetImportStatusErrors, GetIncidentData, GetIncidentResponses, GetIncidentErrors, 
UpdateIncidentStatusData, UpdateIncidentStatusResponses, UpdateIncidentStatusErrors, GetIncidentUpdatesData, GetIncidentUpdatesResponses, GetIncidentUpdatesErrors, AdminListNodesData, AdminListNodesResponses, AdminListNodesErrors, RegisterNodeData, RegisterNodeResponses, RegisterNodeErrors, AdminRemoveNodeData, AdminRemoveNodeResponses, AdminRemoveNodeErrors, AdminGetNodeData, AdminGetNodeResponses, AdminGetNodeErrors, AdminListNodeContainersData, AdminListNodeContainersResponses, AdminListNodeContainersErrors, AdminUndrainNodeData, AdminUndrainNodeResponses, AdminUndrainNodeErrors, AdminDrainStatusData, AdminDrainStatusResponses, AdminDrainStatusErrors, AdminDrainNodeData, AdminDrainNodeResponses, AdminDrainNodeErrors, NodeHeartbeatData, NodeHeartbeatResponses, NodeHeartbeatErrors, ListIpAccessControlData, ListIpAccessControlResponses, ListIpAccessControlErrors, CreateIpAccessControlData, CreateIpAccessControlResponses, CreateIpAccessControlErrors, CheckIpBlockedData, CheckIpBlockedResponses, CheckIpBlockedErrors, DeleteIpAccessControlData, DeleteIpAccessControlResponses, DeleteIpAccessControlErrors, GetIpAccessControlData, GetIpAccessControlResponses, GetIpAccessControlErrors, UpdateIpAccessControlData, UpdateIpAccessControlResponses, UpdateIpAccessControlErrors, KvDelData, KvDelResponses, KvDelErrors, KvDisableData, KvDisableResponses, KvDisableErrors, KvEnableData, KvEnableResponses, KvEnableErrors, KvExpireData, KvExpireResponses, KvExpireErrors, KvGetData, KvGetResponses, KvGetErrors, KvIncrData, KvIncrResponses, KvIncrErrors, KvKeysData, KvKeysResponses, KvKeysErrors, KvSetData, KvSetResponses, KvSetErrors, KvStatusData, KvStatusResponses, KvStatusErrors, KvTtlData, KvTtlResponses, KvTtlErrors, KvUpdateData, KvUpdateResponses, KvUpdateErrors, ListRoutesData, ListRoutesResponses, ListRoutesErrors, CreateRouteData, CreateRouteResponses, CreateRouteErrors, DeleteRouteData, DeleteRouteResponses, DeleteRouteErrors, GetRouteData, GetRouteResponses, GetRouteErrors, 
UpdateRouteData, UpdateRouteResponses, UpdateRouteErrors, LogoutData, LogoutResponses, LogoutErrors, GetLogContextData, GetLogContextResponses, GetLogContextErrors, SearchLogsData, SearchLogsResponses, SearchLogsErrors, TailLogsData, TailLogsResponses, TailLogsErrors, DeleteMonitorData, DeleteMonitorResponses, DeleteMonitorErrors, GetMonitorData, GetMonitorResponses, GetMonitorErrors, GetBucketedStatusData, GetBucketedStatusResponses, GetBucketedStatusErrors, GetCurrentMonitorStatusData, GetCurrentMonitorStatusResponses, GetCurrentMonitorStatusErrors, GetUptimeHistoryData, GetUptimeHistoryResponses, GetUptimeHistoryErrors, DeletePreferencesData, DeletePreferencesResponses, DeletePreferencesErrors, GetPreferencesData, GetPreferencesResponses, GetPreferencesErrors, UpdatePreferencesData, UpdatePreferencesResponses, UpdatePreferencesErrors, ListNotificationProvidersData, ListNotificationProvidersResponses, ListNotificationProvidersErrors, CreateNotificationProviderData, CreateNotificationProviderResponses, CreateNotificationProviderErrors, CreateEmailProviderData, CreateEmailProviderResponses, CreateEmailProviderErrors, UpdateEmailProviderData, UpdateEmailProviderResponses, UpdateEmailProviderErrors, CreateSlackProviderData, CreateSlackProviderResponses, CreateSlackProviderErrors, UpdateSlackProviderData, UpdateSlackProviderResponses, UpdateSlackProviderErrors, CreateWebhookProviderData, CreateWebhookProviderResponses, CreateWebhookProviderErrors, UpdateWebhookProviderData, UpdateWebhookProviderResponses, UpdateWebhookProviderErrors, DeleteProvider4Data, DeleteProvider4Responses, DeleteProvider4Errors, GetNotificationProviderData, GetNotificationProviderResponses, GetNotificationProviderErrors, UpdateProvider2Data, UpdateProvider2Responses, UpdateProvider2Errors, TestProvider2Data, TestProvider2Responses, TestProvider2Errors, ListOrdersData, ListOrdersResponses, ListOrdersErrors, QueryGenaiTracesData, QueryGenaiTracesResponses, QueryGenaiTracesErrors, 
GetGenaiTraceData, GetGenaiTraceResponses, GetGenaiTraceErrors, GetHealthData, GetHealthResponses, GetHealthErrors, ListInsightsData, ListInsightsResponses, ListInsightsErrors, QueryLogsData, QueryLogsResponses, QueryLogsErrors, ListMetricNamesData, ListMetricNamesResponses, ListMetricNamesErrors, QueryMetricsData, QueryMetricsResponses, QueryMetricsErrors, GetPipelineStatsData, GetPipelineStatsResponses, GetPipelineStatsErrors, GetQuotaData, GetQuotaResponses, GetQuotaErrors, QueryTraceSummariesData, QueryTraceSummariesResponses, QueryTraceSummariesErrors, QueryTracesData, QueryTracesResponses, QueryTracesErrors, GetTraceData, GetTraceResponses, GetTraceErrors, IngestLogsData, IngestLogsResponses, IngestLogsErrors, IngestMetricsData, IngestMetricsResponses, IngestMetricsErrors, IngestTracesData, IngestTracesResponses, IngestTracesErrors, IngestLogsByPathData, IngestLogsByPathResponses, IngestLogsByPathErrors, IngestMetricsByPathData, IngestMetricsByPathResponses, IngestMetricsByPathErrors, IngestTracesByPathData, IngestTracesByPathResponses, IngestTracesByPathErrors, HasPerformanceMetricsData, HasPerformanceMetricsResponses, HasPerformanceMetricsErrors, GetPerformanceMetricsData, GetPerformanceMetricsResponses, GetPerformanceMetricsErrors, GetMetricsOverTimeData, GetMetricsOverTimeResponses, GetMetricsOverTimeErrors, GetGroupedPageMetricsData, GetGroupedPageMetricsResponses, GetGroupedPageMetricsErrors, GetAccessInfoData, GetAccessInfoResponses, GetAccessInfoErrors, GetPrivateIpData, GetPrivateIpResponses, GetPrivateIpErrors, GetPublicIpData, GetPublicIpResponses, GetPublicIpErrors, ListPresetsData, ListPresetsResponses, ListPresetsErrors, GeneratePresetDockerfileData, GeneratePresetDockerfileResponses, GeneratePresetDockerfileErrors, GetProjectsData, GetProjectsResponses, GetProjectsErrors, CreateProjectData, CreateProjectResponses, CreateProjectErrors, GetProjectBySlugData, GetProjectBySlugResponses, GetProjectBySlugErrors, CreateProjectFromTemplateData, 
CreateProjectFromTemplateResponses, CreateProjectFromTemplateErrors, GetProjectStatisticsData, GetProjectStatisticsResponses, GetProjectStatisticsErrors, DeleteProjectData, DeleteProjectResponses, DeleteProjectErrors, GetProjectData, GetProjectResponses, GetProjectErrors, UpdateProjectData, UpdateProjectResponses, UpdateProjectErrors, GetProjectDeploymentsData, GetProjectDeploymentsResponses, GetProjectDeploymentsErrors, GetLastDeploymentData, GetLastDeploymentResponses, GetLastDeploymentErrors, TriggerProjectPipelineData, TriggerProjectPipelineResponses, TriggerProjectPipelineErrors, GetActiveVisitors2Data, GetActiveVisitors2Responses, GetActiveVisitors2Errors, GetAggregatedBucketsData, GetAggregatedBucketsResponses, GetAggregatedBucketsErrors, UpdateAutomaticDeployData, UpdateAutomaticDeployResponses, UpdateAutomaticDeployErrors, ListCustomDomainsForProjectData, ListCustomDomainsForProjectResponses, ListCustomDomainsForProjectErrors, CreateCustomDomainData, CreateCustomDomainResponses, CreateCustomDomainErrors, DeleteCustomDomainData, DeleteCustomDomainResponses, DeleteCustomDomainErrors, GetCustomDomainData, GetCustomDomainResponses, GetCustomDomainErrors, UpdateCustomDomainData, UpdateCustomDomainResponses, UpdateCustomDomainErrors, LinkCustomDomainToCertificateData, LinkCustomDomainToCertificateResponses, LinkCustomDomainToCertificateErrors, UpdateProjectDeploymentConfigData, UpdateProjectDeploymentConfigResponses, UpdateProjectDeploymentConfigErrors, GetDeploymentData, GetDeploymentResponses, GetDeploymentErrors, CancelDeploymentData, CancelDeploymentResponses, CancelDeploymentErrors, GetDeploymentJobsData, GetDeploymentJobsResponses, GetDeploymentJobsErrors, GetDeploymentJobLogsData, GetDeploymentJobLogsResponses, GetDeploymentJobLogsErrors, TailDeploymentJobLogsData, TailDeploymentJobLogsErrors, GetDeploymentOperationsData, GetDeploymentOperationsResponses, GetDeploymentOperationsErrors, ExecuteDeploymentOperationData, ExecuteDeploymentOperationResponses, 
ExecuteDeploymentOperationErrors, GetDeploymentOperationStatusData, GetDeploymentOperationStatusResponses, GetDeploymentOperationStatusErrors, PauseDeploymentData, PauseDeploymentResponses, PauseDeploymentErrors, PromoteDeploymentData, PromoteDeploymentResponses, PromoteDeploymentErrors, ResumeDeploymentData, ResumeDeploymentResponses, ResumeDeploymentErrors, RollbackToDeploymentData, RollbackToDeploymentResponses, RollbackToDeploymentErrors, TeardownDeploymentData, TeardownDeploymentResponses, TeardownDeploymentErrors, ListDsnsData, ListDsnsResponses, CreateDsnData, CreateDsnResponses, CreateDsnErrors, GetOrCreateDsnData, GetOrCreateDsnResponses, GetOrCreateDsnErrors, RegenerateDsnData, RegenerateDsnResponses, RegenerateDsnErrors, RevokeDsnData, RevokeDsnResponses, RevokeDsnErrors, GetEnvironmentVariablesData, GetEnvironmentVariablesResponses, GetEnvironmentVariablesErrors, CreateEnvironmentVariableData, CreateEnvironmentVariableResponses, CreateEnvironmentVariableErrors, GetEnvironmentVariableValueData, GetEnvironmentVariableValueResponses, GetEnvironmentVariableValueErrors, DeleteEnvironmentVariableData, DeleteEnvironmentVariableResponses, DeleteEnvironmentVariableErrors, UpdateEnvironmentVariableData, UpdateEnvironmentVariableResponses, UpdateEnvironmentVariableErrors, GetEnvironmentsData, GetEnvironmentsResponses, GetEnvironmentsErrors, CreateEnvironmentData, CreateEnvironmentResponses, CreateEnvironmentErrors, DeleteEnvironmentData, DeleteEnvironmentResponses, DeleteEnvironmentErrors, GetEnvironmentData, GetEnvironmentResponses, GetEnvironmentErrors, GetEnvironmentCronsData, GetEnvironmentCronsResponses, GetEnvironmentCronsErrors, GetCronByIdData, GetCronByIdResponses, GetCronByIdErrors, GetCronExecutionsData, GetCronExecutionsResponses, GetCronExecutionsErrors, GetEnvironmentDomainsData, GetEnvironmentDomainsResponses, GetEnvironmentDomainsErrors, AddEnvironmentDomainData, AddEnvironmentDomainResponses, AddEnvironmentDomainErrors, 
DeleteEnvironmentDomainData, DeleteEnvironmentDomainResponses, DeleteEnvironmentDomainErrors, UpdateEnvironmentSettingsData, UpdateEnvironmentSettingsResponses, UpdateEnvironmentSettingsErrors, SleepEnvironmentData, SleepEnvironmentResponses, SleepEnvironmentErrors, TeardownEnvironmentData, TeardownEnvironmentResponses, TeardownEnvironmentErrors, WakeEnvironmentData, WakeEnvironmentResponses, WakeEnvironmentErrors, GetContainerLogsData, GetContainerLogsErrors, ListContainersData, ListContainersResponses, ListContainersErrors, GetContainerDetailData, GetContainerDetailResponses, GetContainerDetailErrors, GetContainerLogsByIdData, GetContainerLogsByIdErrors, GetContainerMetricsData, GetContainerMetricsResponses, GetContainerMetricsErrors, StreamContainerMetricsData, StreamContainerMetricsResponses, StreamContainerMetricsErrors, RestartContainerData, RestartContainerResponses, RestartContainerErrors, StartContainerData, StartContainerResponses, StartContainerErrors, StopContainerData, StopContainerResponses, StopContainerErrors, DeployFromImageData, DeployFromImageResponses, DeployFromImageErrors, DeployFromImageUploadData, DeployFromImageUploadResponses, DeployFromImageUploadErrors, DeployFromStaticData, DeployFromStaticResponses, DeployFromStaticErrors, GetErrorDashboardStatsData, GetErrorDashboardStatsResponses, GetErrorDashboardStatsErrors, ListErrorGroupsData, ListErrorGroupsResponses, ListErrorGroupsErrors, GetErrorGroupData, GetErrorGroupResponses, GetErrorGroupErrors, UpdateErrorGroupData, UpdateErrorGroupResponses, UpdateErrorGroupErrors, ListErrorEventsData, ListErrorEventsResponses, ListErrorEventsErrors, GetErrorEventData, GetErrorEventResponses, GetErrorEventErrors, GetErrorStatsData, GetErrorStatsResponses, GetErrorStatsErrors, GetErrorTimeSeriesData, GetErrorTimeSeriesResponses, GetErrorTimeSeriesErrors, GetEventsCount2Data, GetEventsCount2Responses, GetEventsCount2Errors, GetEventTypeBreakdownData, GetEventTypeBreakdownResponses, 
GetEventTypeBreakdownErrors, RecordConsoleEventData, RecordConsoleEventResponses, RecordConsoleEventErrors, GetPropertyBreakdownData, GetPropertyBreakdownResponses, GetPropertyBreakdownErrors, GetPropertyTimelineData, GetPropertyTimelineResponses, GetPropertyTimelineErrors, GetEventsTimelineData, GetEventsTimelineResponses, GetEventsTimelineErrors, GetUniqueEventsData, GetUniqueEventsResponses, GetUniqueEventsErrors, ListExternalImagesData, ListExternalImagesResponses, ListExternalImagesErrors, RegisterExternalImageData, RegisterExternalImageResponses, RegisterExternalImageErrors, DeleteExternalImageData, DeleteExternalImageResponses, DeleteExternalImageErrors, GetExternalImageData, GetExternalImageResponses, GetExternalImageErrors, ListFunnelsData, ListFunnelsResponses, ListFunnelsErrors, CreateFunnelData, CreateFunnelResponses, CreateFunnelErrors, PreviewFunnelMetricsData, PreviewFunnelMetricsResponses, PreviewFunnelMetricsErrors, DeleteFunnelData, DeleteFunnelResponses, DeleteFunnelErrors, UpdateFunnelData, UpdateFunnelResponses, UpdateFunnelErrors, GetFunnelMetricsData, GetFunnelMetricsResponses, GetFunnelMetricsErrors, UpdateGitSettingsData, UpdateGitSettingsResponses, UpdateGitSettingsErrors, HasErrorGroupsData, HasErrorGroupsResponses, HasErrorGroupsErrors, HasAnalyticsEventsData, HasAnalyticsEventsResponses, HasAnalyticsEventsErrors, GetHourlyVisitsData, GetHourlyVisitsResponses, GetHourlyVisitsErrors, ListExternalImages2Data, ListExternalImages2Responses, ListExternalImages2Errors, PushExternalImageData, PushExternalImageResponses, PushExternalImageErrors, GetExternalImage2Data, GetExternalImage2Responses, GetExternalImage2Errors, ListIncidentsData, ListIncidentsResponses, ListIncidentsErrors, CreateIncidentData, CreateIncidentResponses, CreateIncidentErrors, GetBucketedIncidentsData, GetBucketedIncidentsResponses, GetBucketedIncidentsErrors, PurgeProjectLogsData, PurgeProjectLogsResponses, PurgeProjectLogsErrors, ListMonitorsData, ListMonitorsResponses, 
ListMonitorsErrors, CreateMonitorData, CreateMonitorResponses, CreateMonitorErrors, DeleteReleaseSourceMapsData, DeleteReleaseSourceMapsResponses, DeleteReleaseSourceMapsErrors, ListSourceMapsData, ListSourceMapsResponses, ListSourceMapsErrors, UploadSourceMapData, UploadSourceMapResponses, UploadSourceMapErrors, UpdateProjectSettingsData, UpdateProjectSettingsResponses, UpdateProjectSettingsErrors, ListReleasesData, ListReleasesResponses, ListReleasesErrors, DeleteSourceMapData, DeleteSourceMapResponses, DeleteSourceMapErrors, ListStaticBundlesData, ListStaticBundlesResponses, ListStaticBundlesErrors, DeleteStaticBundleData, DeleteStaticBundleResponses, DeleteStaticBundleErrors, GetStaticBundleData, GetStaticBundleResponses, GetStaticBundleErrors, GetStatusOverviewData, GetStatusOverviewResponses, GetStatusOverviewErrors, GetUniqueCountsData, GetUniqueCountsResponses, GetUniqueCountsErrors, UploadStaticBundleData, UploadStaticBundleResponses, UploadStaticBundleErrors, ListProjectScansData, ListProjectScansResponses, ListProjectScansErrors, TriggerScanData, TriggerScanResponses, TriggerScanErrors, GetLatestScansPerEnvironmentData, GetLatestScansPerEnvironmentResponses, GetLatestScansPerEnvironmentErrors, GetLatestScanData, GetLatestScanResponses, GetLatestScanErrors, ListWebhooksData, ListWebhooksResponses, ListWebhooksErrors, CreateWebhookData, CreateWebhookResponses, CreateWebhookErrors, DeleteWebhookData, DeleteWebhookResponses, DeleteWebhookErrors, GetWebhookData, GetWebhookResponses, GetWebhookErrors, UpdateWebhookData, UpdateWebhookResponses, UpdateWebhookErrors, ListDeliveriesData, ListDeliveriesResponses, ListDeliveriesErrors, GetDeliveryData, GetDeliveryResponses, GetDeliveryErrors, RetryDeliveryData, RetryDeliveryResponses, RetryDeliveryErrors, GetProxyLogsData, GetProxyLogsResponses, GetProxyLogsErrors, GetProxyLogByRequestIdData, GetProxyLogByRequestIdResponses, GetProxyLogByRequestIdErrors, GetTimeBucketStatsData, GetTimeBucketStatsResponses, 
GetTimeBucketStatsErrors, GetTodayStatsData, GetTodayStatsResponses, GetTodayStatsErrors, GetProxyLogByIdData, GetProxyLogByIdResponses, GetProxyLogByIdErrors, ListSyncedRepositoriesData, ListSyncedRepositoriesResponses, ListSyncedRepositoriesErrors, GetRepositoryByNameData, GetRepositoryByNameResponses, GetRepositoryByNameErrors, GetAllRepositoriesByNameData, GetAllRepositoriesByNameResponses, GetAllRepositoriesByNameErrors, GetRepositoryPresetByNameData, GetRepositoryPresetByNameResponses, GetRepositoryPresetByNameErrors, GetRepositoryBranchesData, GetRepositoryBranchesResponses, GetRepositoryBranchesErrors, GetRepositoryTagsData, GetRepositoryTagsResponses, GetRepositoryTagsErrors, GetRepositoryPresetLiveData, GetRepositoryPresetLiveResponses, GetRepositoryPresetLiveErrors, GetBranchesByRepositoryIdData, GetBranchesByRepositoryIdResponses, GetBranchesByRepositoryIdErrors, ListCommitsByRepositoryIdData, ListCommitsByRepositoryIdResponses, ListCommitsByRepositoryIdErrors, CheckCommitExistsData, CheckCommitExistsResponses, CheckCommitExistsErrors, GetTagsByRepositoryIdData, GetTagsByRepositoryIdResponses, GetTagsByRepositoryIdErrors, GetProjectSessionReplaysData, GetProjectSessionReplaysResponses, GetProjectSessionReplaysErrors, GetSessionEvents2Data, GetSessionEvents2Responses, GetSessionEvents2Errors, GetSettingsData, GetSettingsResponses, GetSettingsErrors, UpdateSettingsData, UpdateSettingsResponses, UpdateSettingsErrors, RevokeJoinTokenData, RevokeJoinTokenResponses, RevokeJoinTokenErrors, GenerateJoinTokenData, GenerateJoinTokenResponses, GenerateJoinTokenErrors, GetJoinTokenStatusData, GetJoinTokenStatusResponses, GetJoinTokenStatusErrors, ListTemplatesData, ListTemplatesResponses, ListTemplatesErrors, ListTemplateTagsData, ListTemplateTagsResponses, ListTemplateTagsErrors, GetTemplateData, GetTemplateResponses, GetTemplateErrors, GetCurrentUserData, GetCurrentUserResponses, GetCurrentUserErrors, ListUsersData, ListUsersResponses, ListUsersErrors, 
CreateUserData, CreateUserResponses, CreateUserErrors, UpdateSelfData, UpdateSelfResponses, UpdateSelfErrors, DisableMfaData, DisableMfaResponses, DisableMfaErrors, SetupMfaData, SetupMfaResponses, SetupMfaErrors, VerifyAndEnableMfaData, VerifyAndEnableMfaResponses, VerifyAndEnableMfaErrors, DeleteUserData, DeleteUserResponses, DeleteUserErrors, UpdateUserData, UpdateUserResponses, UpdateUserErrors, RestoreUserData, RestoreUserResponses, RestoreUserErrors, AssignRoleData, AssignRoleResponses, AssignRoleErrors, RemoveRoleData, RemoveRoleResponses, RemoveRoleErrors, GetVisitorSessions2Data, GetVisitorSessions2Responses, GetVisitorSessions2Errors, DeleteSessionReplayData, DeleteSessionReplayResponses, DeleteSessionReplayErrors, GetSessionReplayData, GetSessionReplayResponses, GetSessionReplayErrors, UpdateSessionDurationData, UpdateSessionDurationResponses, UpdateSessionDurationErrors, GetSessionReplayEventsData, GetSessionReplayEventsResponses, GetSessionReplayEventsErrors, AddEventsData, AddEventsResponses, AddEventsErrors, DeleteScanData, DeleteScanResponses, DeleteScanErrors, GetScanData, GetScanResponses, GetScanErrors, GetScanVulnerabilitiesData, GetScanVulnerabilitiesResponses, GetScanVulnerabilitiesErrors, ListEventTypesData, ListEventTypesResponses, TriggerWeeklyDigestData, TriggerWeeklyDigestResponses, TriggerWeeklyDigestErrors, ListExternalPluginsData, ListExternalPluginsResponses, ReloadPluginsData, ReloadPluginsResponses, ReloadPluginsErrors, IngestSentryEnvelopeData, IngestSentryEnvelopeResponses, IngestSentryEnvelopeErrors, IngestSentryEventData, IngestSentryEventResponses, IngestSentryEventErrors, ListAuditLogsData, ListAuditLogsResponses, ListAuditLogsErrors, GetAuditLogData, GetAuditLogResponses, GetAuditLogErrors } from './types.gen';
+import type { GetPlatformInfoData, GetPlatformInfoResponses, GetPlatformInfoErrors, ChunkUploadOptionsData, ChunkUploadOptionsResponses, CreateReleaseData, CreateReleaseResponses, CreateReleaseErrors,
ListReleaseFilesData, ListReleaseFilesResponses, ListReleaseFilesErrors, UploadReleaseFileData, UploadReleaseFileResponses, UploadReleaseFileErrors, RecordEventMetricsData, RecordEventMetricsResponses, RecordEventMetricsErrors, AddSessionReplayEventsData, AddSessionReplayEventsResponses, AddSessionReplayEventsErrors, InitSessionReplayData, InitSessionReplayResponses, InitSessionReplayErrors, RecordSpeedMetricsData, RecordSpeedMetricsResponses, RecordSpeedMetricsErrors, UpdateSpeedMetricsData, UpdateSpeedMetricsResponses, UpdateSpeedMetricsErrors, GetPricingData, GetPricingResponses, GetPricingErrors, ListProviderKeysData, ListProviderKeysResponses, ListProviderKeysErrors, CreateProviderKeyData, CreateProviderKeyResponses, CreateProviderKeyErrors, TestProviderKeyInlineData, TestProviderKeyInlineResponses, TestProviderKeyInlineErrors, DeleteProviderKeyData, DeleteProviderKeyResponses, DeleteProviderKeyErrors, UpdateProviderKeyData, UpdateProviderKeyResponses, UpdateProviderKeyErrors, TestProviderKeyByIdData, TestProviderKeyByIdResponses, TestProviderKeyByIdErrors, GetUsageByProviderData, GetUsageByProviderResponses, GetUsageByProviderErrors, GetConversationsData, GetConversationsResponses, GetConversationsErrors, GetConversationDetailData, GetConversationDetailResponses, GetConversationDetailErrors, GetUsageRecentData, GetUsageRecentResponses, GetUsageRecentErrors, GetUsageSummaryData, GetUsageSummaryResponses, GetUsageSummaryErrors, GetUsageTimeseriesData, GetUsageTimeseriesResponses, GetUsageTimeseriesErrors, GetUsageTopModelsData, GetUsageTopModelsResponses, GetUsageTopModelsErrors, ChatCompletionsData, ChatCompletionsResponses, ChatCompletionsErrors, EmbeddingsData, EmbeddingsResponses, EmbeddingsErrors, ListModelsData, ListModelsResponses, ListModelsErrors, GetActiveVisitorsData, GetActiveVisitorsResponses, GetActiveVisitorsErrors, GetEventDetailData, GetEventDetailResponses, GetEventDetailErrors, GetEventVisitorsData, GetEventVisitorsResponses, 
GetEventVisitorsErrors, GetEventsCountData, GetEventsCountResponses, GetEventsCountErrors, GetGeneralStatsData, GetGeneralStatsResponses, GetGeneralStatsErrors, GetLiveVisitorsListData, GetLiveVisitorsListResponses, GetLiveVisitorsListErrors, GetPageFlowData, GetPageFlowResponses, GetPageFlowErrors, GetPageHourlySessionsData, GetPageHourlySessionsResponses, GetPageHourlySessionsErrors, GetPagePathDetailData, GetPagePathDetailResponses, GetPagePathDetailErrors, GetPagePathVisitorsData, GetPagePathVisitorsResponses, GetPagePathVisitorsErrors, GetPagePathsData, GetPagePathsResponses, GetPagePathsErrors, GetPagePathsSparklinesData, GetPagePathsSparklinesResponses, GetPagePathsSparklinesErrors, GetRecentActivityData, GetRecentActivityResponses, GetRecentActivityErrors, GetSessionDetailsData, GetSessionDetailsResponses, GetSessionDetailsErrors, GetSessionEventsData, GetSessionEventsResponses, GetSessionEventsErrors, GetSessionLogsData, GetSessionLogsResponses, GetSessionLogsErrors, GetVisitorsData, GetVisitorsResponses, GetVisitorsErrors, GetVisitorByGuidData, GetVisitorByGuidResponses, GetVisitorByGuidErrors, GetVisitorByIdData, GetVisitorByIdResponses, GetVisitorByIdErrors, GetVisitorDetailsData, GetVisitorDetailsResponses, GetVisitorDetailsErrors, EnrichVisitorData, EnrichVisitorResponses, EnrichVisitorErrors, GetVisitorInfoData, GetVisitorInfoResponses, GetVisitorInfoErrors, GetVisitorJourneyData, GetVisitorJourneyResponses, GetVisitorJourneyErrors, GetVisitorSessionsData, GetVisitorSessionsResponses, GetVisitorSessionsErrors, GetVisitorStatsData, GetVisitorStatsResponses, GetVisitorStatsErrors, ListApiKeysData, ListApiKeysResponses, ListApiKeysErrors, CreateApiKeyData, CreateApiKeyResponses, CreateApiKeyErrors, GetApiKeyPermissionsData, GetApiKeyPermissionsResponses, GetApiKeyPermissionsErrors, DeleteApiKeyData, DeleteApiKeyResponses, DeleteApiKeyErrors, GetApiKeyData, GetApiKeyResponses, GetApiKeyErrors, UpdateApiKeyData, UpdateApiKeyResponses, UpdateApiKeyErrors, 
ActivateApiKeyData, ActivateApiKeyResponses, ActivateApiKeyErrors, DeactivateApiKeyData, DeactivateApiKeyResponses, DeactivateApiKeyErrors, EmailStatusData, EmailStatusResponses, EmailStatusErrors, LoginData, LoginResponses, LoginErrors, RequestMagicLinkData, RequestMagicLinkResponses, RequestMagicLinkErrors, VerifyMagicLinkData, VerifyMagicLinkResponses, VerifyMagicLinkErrors, RequestPasswordResetData, RequestPasswordResetResponses, RequestPasswordResetErrors, ResetPasswordData, ResetPasswordResponses, ResetPasswordErrors, VerifyEmailData, VerifyEmailResponses, VerifyEmailErrors, VerifyMfaChallengeData, VerifyMfaChallengeResponses, VerifyMfaChallengeErrors, RunExternalServiceBackupData, RunExternalServiceBackupResponses, RunExternalServiceBackupErrors, ListS3SourcesData, ListS3SourcesResponses, ListS3SourcesErrors, CreateS3SourceData, CreateS3SourceResponses, CreateS3SourceErrors, DeleteS3SourceData, DeleteS3SourceResponses, DeleteS3SourceErrors, GetS3SourceData, GetS3SourceResponses, GetS3SourceErrors, UpdateS3SourceData, UpdateS3SourceResponses, UpdateS3SourceErrors, ListSourceBackupsData, ListSourceBackupsResponses, ListSourceBackupsErrors, RunBackupForSourceData, RunBackupForSourceResponses, RunBackupForSourceErrors, ListBackupSchedulesData, ListBackupSchedulesResponses, ListBackupSchedulesErrors, CreateBackupScheduleData, CreateBackupScheduleResponses, CreateBackupScheduleErrors, DeleteBackupScheduleData, DeleteBackupScheduleResponses, DeleteBackupScheduleErrors, GetBackupScheduleData, GetBackupScheduleResponses, GetBackupScheduleErrors, ListBackupsForScheduleData, ListBackupsForScheduleResponses, ListBackupsForScheduleErrors, DisableBackupScheduleData, DisableBackupScheduleResponses, DisableBackupScheduleErrors, EnableBackupScheduleData, EnableBackupScheduleResponses, EnableBackupScheduleErrors, GetBackupData, GetBackupResponses, GetBackupErrors, BlobDeleteData, BlobDeleteResponses, BlobDeleteErrors, BlobListData, BlobListResponses, BlobListErrors, 
BlobPutData, BlobPutResponses, BlobPutErrors, BlobCopyData, BlobCopyResponses, BlobCopyErrors, BlobDisableData, BlobDisableResponses, BlobDisableErrors, BlobEnableData, BlobEnableResponses, BlobEnableErrors, BlobStatusData, BlobStatusResponses, BlobStatusErrors, BlobUpdateData, BlobUpdateResponses, BlobUpdateErrors, BlobDownloadData, BlobDownloadResponses, BlobDownloadErrors, BlobHeadData, BlobHeadResponses, BlobHeadErrors, GetDashboardProjectsAnalyticsData, GetDashboardProjectsAnalyticsResponses, GetDashboardProjectsAnalyticsErrors, GetActivityGraphData, GetActivityGraphResponses, GetActivityGraphErrors, GetScanByDeploymentData, GetScanByDeploymentResponses, GetScanByDeploymentErrors, ListProvidersData, ListProvidersResponses, ListProvidersErrors, CreateProviderData, CreateProviderResponses, CreateProviderErrors, DeleteProviderData, DeleteProviderResponses, DeleteProviderErrors, GetProviderData, GetProviderResponses, GetProviderErrors, UpdateProviderData, UpdateProviderResponses, UpdateProviderErrors, ListManagedDomainsData, ListManagedDomainsResponses, ListManagedDomainsErrors, AddManagedDomainData, AddManagedDomainResponses, AddManagedDomainErrors, TestProviderConnectionData, TestProviderConnectionResponses, TestProviderConnectionErrors, ListProviderZonesData, ListProviderZonesResponses, ListProviderZonesErrors, RemoveManagedDomainData, RemoveManagedDomainResponses, RemoveManagedDomainErrors, VerifyManagedDomainData, VerifyManagedDomainResponses, VerifyManagedDomainErrors, LookupDnsARecordsData, LookupDnsARecordsResponses, LookupDnsARecordsErrors, ListDomainsData, ListDomainsResponses, ListDomainsErrors, CreateDomainData, CreateDomainResponses, CreateDomainErrors, GetDomainByHostData, GetDomainByHostResponses, GetDomainByHostErrors, CancelDomainOrderData, CancelDomainOrderResponses, CancelDomainOrderErrors, GetDomainOrderData, GetDomainOrderResponses, GetDomainOrderErrors, CreateOrRecreateOrderData, CreateOrRecreateOrderResponses, CreateOrRecreateOrderErrors, 
FinalizeOrderData, FinalizeOrderResponses, FinalizeOrderErrors, SetupDnsChallengeData, SetupDnsChallengeResponses, SetupDnsChallengeErrors, DeleteDomainData, DeleteDomainResponses, DeleteDomainErrors, GetDomainByIdData, GetDomainByIdResponses, GetDomainByIdErrors, GetChallengeTokenData, GetChallengeTokenResponses, GetChallengeTokenErrors, GetHttpChallengeDebugData, GetHttpChallengeDebugResponses, GetHttpChallengeDebugErrors, ProvisionDomainData, ProvisionDomainResponses, ProvisionDomainErrors, RenewDomainData, RenewDomainResponses, RenewDomainErrors, CheckDomainStatusData, CheckDomainStatusResponses, CheckDomainStatusErrors, ListDomains2Data, ListDomains2Responses, ListDomains2Errors, CreateDomain2Data, CreateDomain2Responses, CreateDomain2Errors, GetDomainByNameData, GetDomainByNameResponses, GetDomainByNameErrors, DeleteDomain2Data, DeleteDomain2Responses, DeleteDomain2Errors, GetDomainData, GetDomainResponses, GetDomainErrors, GetDomainDnsRecordsData, GetDomainDnsRecordsResponses, GetDomainDnsRecordsErrors, SetupDnsData, SetupDnsResponses, SetupDnsErrors, VerifyDomainData, VerifyDomainResponses, VerifyDomainErrors, ListProviders2Data, ListProviders2Responses, ListProviders2Errors, CreateProvider2Data, CreateProvider2Responses, CreateProvider2Errors, DeleteProvider2Data, DeleteProvider2Responses, DeleteProvider2Errors, GetProvider2Data, GetProvider2Responses, GetProvider2Errors, TestProviderData, TestProviderResponses, TestProviderErrors, ListEmailsData, ListEmailsResponses, ListEmailsErrors, SendEmailData, SendEmailResponses, SendEmailErrors, GetEmailStatsData, GetEmailStatsResponses, GetEmailStatsErrors, ValidateEmailData, ValidateEmailResponses, ValidateEmailErrors, GetEmailData, GetEmailResponses, GetEmailErrors, ListServicesData, ListServicesResponses, ListServicesErrors, CreateServiceData, CreateServiceResponses, CreateServiceErrors, ListAvailableContainersData, ListAvailableContainersResponses, ListAvailableContainersErrors, GetServiceBySlugData, 
GetServiceBySlugResponses, GetServiceBySlugErrors, ImportExternalServiceData, ImportExternalServiceResponses, ImportExternalServiceErrors, ListProjectServicesData, ListProjectServicesResponses, ListProjectServicesErrors, GetProjectServiceEnvironmentVariablesData, GetProjectServiceEnvironmentVariablesResponses, GetProjectServiceEnvironmentVariablesErrors, GetProvidersMetadataData, GetProvidersMetadataResponses, GetProvidersMetadataErrors, GetProviderMetadataData, GetProviderMetadataResponses, GetProviderMetadataErrors, GetServiceTypesData, GetServiceTypesResponses, GetServiceTypesErrors, GetServiceTypeParametersData, GetServiceTypeParametersResponses, GetServiceTypeParametersErrors, DeleteServiceData, DeleteServiceResponses, DeleteServiceErrors, GetServiceData, GetServiceResponses, GetServiceErrors, UpdateServiceData, UpdateServiceResponses, UpdateServiceErrors, GetServicePreviewEnvironmentVariablesMaskedData, GetServicePreviewEnvironmentVariablesMaskedResponses, GetServicePreviewEnvironmentVariablesMaskedErrors, GetServicePreviewEnvironmentVariableNamesData, GetServicePreviewEnvironmentVariableNamesResponses, GetServicePreviewEnvironmentVariableNamesErrors, ListServiceProjectsData, ListServiceProjectsResponses, ListServiceProjectsErrors, LinkServiceToProjectData, LinkServiceToProjectResponses, LinkServiceToProjectErrors, UnlinkServiceFromProjectData, UnlinkServiceFromProjectResponses, UnlinkServiceFromProjectErrors, GetServiceEnvironmentVariablesData, GetServiceEnvironmentVariablesResponses, GetServiceEnvironmentVariablesErrors, GetServiceEnvironmentVariableData, GetServiceEnvironmentVariableResponses, GetServiceEnvironmentVariableErrors, RetryClusterData, RetryClusterResponses, RetryClusterErrors, StartServiceData, StartServiceResponses, StartServiceErrors, StopServiceData, StopServiceResponses, StopServiceErrors, UpgradeServiceData, UpgradeServiceResponses, UpgradeServiceErrors, ListRootContainersData, ListRootContainersResponses, ListRootContainersErrors, 
ListContainersAtPathData, ListContainersAtPathResponses, ListContainersAtPathErrors, ListEntitiesData, ListEntitiesResponses, ListEntitiesErrors, GetEntityInfoData, GetEntityInfoResponses, GetEntityInfoErrors, QueryDataData, QueryDataResponses, QueryDataErrors, DownloadObjectData, DownloadObjectResponses, DownloadObjectErrors, GetContainerInfoData, GetContainerInfoResponses, GetContainerInfoErrors, CheckExplorerSupportData, CheckExplorerSupportResponses, CheckExplorerSupportErrors, GetFileData, GetFileResponses, GetFileErrors, GetIpGeolocationData, GetIpGeolocationResponses, GetIpGeolocationErrors, ListConnectionsData, ListConnectionsResponses, ListConnectionsErrors, DeleteConnectionData, DeleteConnectionResponses, DeleteConnectionErrors, ActivateConnectionData, ActivateConnectionResponses, ActivateConnectionErrors, DeactivateConnectionData, DeactivateConnectionResponses, DeactivateConnectionErrors, ListRepositoriesByConnectionData, ListRepositoriesByConnectionResponses, ListRepositoriesByConnectionErrors, SyncRepositoriesData, SyncRepositoriesResponses, SyncRepositoriesErrors, UpdateConnectionTokenData, UpdateConnectionTokenResponses, UpdateConnectionTokenErrors, ValidateConnectionData, ValidateConnectionResponses, ValidateConnectionErrors, ListGitProvidersData, ListGitProvidersResponses, ListGitProvidersErrors, CreateGitProviderData, CreateGitProviderResponses, CreateGitProviderErrors, CreateGithubPatProviderData, CreateGithubPatProviderResponses, CreateGithubPatProviderErrors, CreateGitlabOauthProviderData, CreateGitlabOauthProviderResponses, CreateGitlabOauthProviderErrors, CreateGitlabPatProviderData, CreateGitlabPatProviderResponses, CreateGitlabPatProviderErrors, DeleteProvider3Data, DeleteProvider3Responses, DeleteProvider3Errors, GetGitProviderData, GetGitProviderResponses, GetGitProviderErrors, ActivateProviderData, ActivateProviderResponses, ActivateProviderErrors, HandleGitProviderOauthCallbackData, HandleGitProviderOauthCallbackErrors, 
GetProviderConnectionsData, GetProviderConnectionsResponses, GetProviderConnectionsErrors, DeactivateProviderData, DeactivateProviderResponses, DeactivateProviderErrors, CheckProviderDeletionSafetyData, CheckProviderDeletionSafetyResponses, CheckProviderDeletionSafetyErrors, StartGitProviderOauthData, StartGitProviderOauthErrors, DeleteProviderSafelyData, DeleteProviderSafelyResponses, DeleteProviderSafelyErrors, GetPublicRepositoryData, GetPublicRepositoryResponses, GetPublicRepositoryErrors, GetPublicBranchesData, GetPublicBranchesResponses, GetPublicBranchesErrors, DetectPublicPresetsData, DetectPublicPresetsResponses, DetectPublicPresetsErrors, DiscoverWorkloadsData, DiscoverWorkloadsResponses, DiscoverWorkloadsErrors, ExecuteImportData, ExecuteImportResponses, ExecuteImportErrors, CreatePlanData, CreatePlanResponses, CreatePlanErrors, ListSourcesData, ListSourcesResponses, ListSourcesErrors, GetImportStatusData, GetImportStatusResponses, GetImportStatusErrors, GetIncidentData, GetIncidentResponses, GetIncidentErrors, UpdateIncidentStatusData, UpdateIncidentStatusResponses, UpdateIncidentStatusErrors, GetIncidentUpdatesData, GetIncidentUpdatesResponses, GetIncidentUpdatesErrors, AdminListNodesData, AdminListNodesResponses, AdminListNodesErrors, RegisterNodeData, RegisterNodeResponses, RegisterNodeErrors, AdminRemoveNodeData, AdminRemoveNodeResponses, AdminRemoveNodeErrors, AdminGetNodeData, AdminGetNodeResponses, AdminGetNodeErrors, AdminListNodeContainersData, AdminListNodeContainersResponses, AdminListNodeContainersErrors, AdminUndrainNodeData, AdminUndrainNodeResponses, AdminUndrainNodeErrors, AdminDrainStatusData, AdminDrainStatusResponses, AdminDrainStatusErrors, AdminDrainNodeData, AdminDrainNodeResponses, AdminDrainNodeErrors, NodeHeartbeatData, NodeHeartbeatResponses, NodeHeartbeatErrors, GetS3CredentialsData, GetS3CredentialsResponses, GetS3CredentialsErrors, ListIpAccessControlData, ListIpAccessControlResponses, ListIpAccessControlErrors, 
CreateIpAccessControlData, CreateIpAccessControlResponses, CreateIpAccessControlErrors, CheckIpBlockedData, CheckIpBlockedResponses, CheckIpBlockedErrors, DeleteIpAccessControlData, DeleteIpAccessControlResponses, DeleteIpAccessControlErrors, GetIpAccessControlData, GetIpAccessControlResponses, GetIpAccessControlErrors, UpdateIpAccessControlData, UpdateIpAccessControlResponses, UpdateIpAccessControlErrors, KvDelData, KvDelResponses, KvDelErrors, KvDisableData, KvDisableResponses, KvDisableErrors, KvEnableData, KvEnableResponses, KvEnableErrors, KvExpireData, KvExpireResponses, KvExpireErrors, KvGetData, KvGetResponses, KvGetErrors, KvIncrData, KvIncrResponses, KvIncrErrors, KvKeysData, KvKeysResponses, KvKeysErrors, KvSetData, KvSetResponses, KvSetErrors, KvStatusData, KvStatusResponses, KvStatusErrors, KvTtlData, KvTtlResponses, KvTtlErrors, KvUpdateData, KvUpdateResponses, KvUpdateErrors, ListRoutesData, ListRoutesResponses, ListRoutesErrors, CreateRouteData, CreateRouteResponses, CreateRouteErrors, DeleteRouteData, DeleteRouteResponses, DeleteRouteErrors, GetRouteData, GetRouteResponses, GetRouteErrors, UpdateRouteData, UpdateRouteResponses, UpdateRouteErrors, LogoutData, LogoutResponses, LogoutErrors, GetLogContextData, GetLogContextResponses, GetLogContextErrors, SearchLogsData, SearchLogsResponses, SearchLogsErrors, TailLogsData, TailLogsResponses, TailLogsErrors, DeleteMonitorData, DeleteMonitorResponses, DeleteMonitorErrors, GetMonitorData, GetMonitorResponses, GetMonitorErrors, GetBucketedStatusData, GetBucketedStatusResponses, GetBucketedStatusErrors, GetCurrentMonitorStatusData, GetCurrentMonitorStatusResponses, GetCurrentMonitorStatusErrors, GetUptimeHistoryData, GetUptimeHistoryResponses, GetUptimeHistoryErrors, DeletePreferencesData, DeletePreferencesResponses, DeletePreferencesErrors, GetPreferencesData, GetPreferencesResponses, GetPreferencesErrors, UpdatePreferencesData, UpdatePreferencesResponses, UpdatePreferencesErrors, 
ListNotificationProvidersData, ListNotificationProvidersResponses, ListNotificationProvidersErrors, CreateNotificationProviderData, CreateNotificationProviderResponses, CreateNotificationProviderErrors, CreateEmailProviderData, CreateEmailProviderResponses, CreateEmailProviderErrors, UpdateEmailProviderData, UpdateEmailProviderResponses, UpdateEmailProviderErrors, CreateSlackProviderData, CreateSlackProviderResponses, CreateSlackProviderErrors, UpdateSlackProviderData, UpdateSlackProviderResponses, UpdateSlackProviderErrors, CreateWebhookProviderData, CreateWebhookProviderResponses, CreateWebhookProviderErrors, UpdateWebhookProviderData, UpdateWebhookProviderResponses, UpdateWebhookProviderErrors, DeleteProvider4Data, DeleteProvider4Responses, DeleteProvider4Errors, GetNotificationProviderData, GetNotificationProviderResponses, GetNotificationProviderErrors, UpdateProvider2Data, UpdateProvider2Responses, UpdateProvider2Errors, TestProvider2Data, TestProvider2Responses, TestProvider2Errors, ListOrdersData, ListOrdersResponses, ListOrdersErrors, QueryGenaiTracesData, QueryGenaiTracesResponses, QueryGenaiTracesErrors, GetGenaiTraceData, GetGenaiTraceResponses, GetGenaiTraceErrors, GetHealthData, GetHealthResponses, GetHealthErrors, ListInsightsData, ListInsightsResponses, ListInsightsErrors, QueryLogsData, QueryLogsResponses, QueryLogsErrors, ListMetricNamesData, ListMetricNamesResponses, ListMetricNamesErrors, QueryMetricsData, QueryMetricsResponses, QueryMetricsErrors, GetPipelineStatsData, GetPipelineStatsResponses, GetPipelineStatsErrors, GetQuotaData, GetQuotaResponses, GetQuotaErrors, QueryTraceSummariesData, QueryTraceSummariesResponses, QueryTraceSummariesErrors, QueryTracesData, QueryTracesResponses, QueryTracesErrors, GetTraceData, GetTraceResponses, GetTraceErrors, IngestLogsData, IngestLogsResponses, IngestLogsErrors, IngestMetricsData, IngestMetricsResponses, IngestMetricsErrors, IngestTracesData, IngestTracesResponses, IngestTracesErrors, 
IngestLogsByPathData, IngestLogsByPathResponses, IngestLogsByPathErrors, IngestMetricsByPathData, IngestMetricsByPathResponses, IngestMetricsByPathErrors, IngestTracesByPathData, IngestTracesByPathResponses, IngestTracesByPathErrors, HasPerformanceMetricsData, HasPerformanceMetricsResponses, HasPerformanceMetricsErrors, GetPerformanceMetricsData, GetPerformanceMetricsResponses, GetPerformanceMetricsErrors, GetMetricsOverTimeData, GetMetricsOverTimeResponses, GetMetricsOverTimeErrors, GetGroupedPageMetricsData, GetGroupedPageMetricsResponses, GetGroupedPageMetricsErrors, GetAccessInfoData, GetAccessInfoResponses, GetAccessInfoErrors, GetPrivateIpData, GetPrivateIpResponses, GetPrivateIpErrors, GetPublicIpData, GetPublicIpResponses, GetPublicIpErrors, ListPresetsData, ListPresetsResponses, ListPresetsErrors, GeneratePresetDockerfileData, GeneratePresetDockerfileResponses, GeneratePresetDockerfileErrors, GetProjectsData, GetProjectsResponses, GetProjectsErrors, CreateProjectData, CreateProjectResponses, CreateProjectErrors, GetProjectBySlugData, GetProjectBySlugResponses, GetProjectBySlugErrors, CreateProjectFromTemplateData, CreateProjectFromTemplateResponses, CreateProjectFromTemplateErrors, GetProjectStatisticsData, GetProjectStatisticsResponses, GetProjectStatisticsErrors, DeleteProjectData, DeleteProjectResponses, DeleteProjectErrors, GetProjectData, GetProjectResponses, GetProjectErrors, UpdateProjectData, UpdateProjectResponses, UpdateProjectErrors, GetProjectDeploymentsData, GetProjectDeploymentsResponses, GetProjectDeploymentsErrors, GetLastDeploymentData, GetLastDeploymentResponses, GetLastDeploymentErrors, TriggerProjectPipelineData, TriggerProjectPipelineResponses, TriggerProjectPipelineErrors, GetActiveVisitors2Data, GetActiveVisitors2Responses, GetActiveVisitors2Errors, GetAggregatedBucketsData, GetAggregatedBucketsResponses, GetAggregatedBucketsErrors, UpdateAutomaticDeployData, UpdateAutomaticDeployResponses, UpdateAutomaticDeployErrors, 
ListCustomDomainsForProjectData, ListCustomDomainsForProjectResponses, ListCustomDomainsForProjectErrors, CreateCustomDomainData, CreateCustomDomainResponses, CreateCustomDomainErrors, DeleteCustomDomainData, DeleteCustomDomainResponses, DeleteCustomDomainErrors, GetCustomDomainData, GetCustomDomainResponses, GetCustomDomainErrors, UpdateCustomDomainData, UpdateCustomDomainResponses, UpdateCustomDomainErrors, LinkCustomDomainToCertificateData, LinkCustomDomainToCertificateResponses, LinkCustomDomainToCertificateErrors, UpdateProjectDeploymentConfigData, UpdateProjectDeploymentConfigResponses, UpdateProjectDeploymentConfigErrors, GetDeploymentData, GetDeploymentResponses, GetDeploymentErrors, CancelDeploymentData, CancelDeploymentResponses, CancelDeploymentErrors, GetDeploymentJobsData, GetDeploymentJobsResponses, GetDeploymentJobsErrors, GetDeploymentJobLogsData, GetDeploymentJobLogsResponses, GetDeploymentJobLogsErrors, TailDeploymentJobLogsData, TailDeploymentJobLogsErrors, GetDeploymentOperationsData, GetDeploymentOperationsResponses, GetDeploymentOperationsErrors, ExecuteDeploymentOperationData, ExecuteDeploymentOperationResponses, ExecuteDeploymentOperationErrors, GetDeploymentOperationStatusData, GetDeploymentOperationStatusResponses, GetDeploymentOperationStatusErrors, PauseDeploymentData, PauseDeploymentResponses, PauseDeploymentErrors, PromoteDeploymentData, PromoteDeploymentResponses, PromoteDeploymentErrors, ResumeDeploymentData, ResumeDeploymentResponses, ResumeDeploymentErrors, RollbackToDeploymentData, RollbackToDeploymentResponses, RollbackToDeploymentErrors, TeardownDeploymentData, TeardownDeploymentResponses, TeardownDeploymentErrors, ListDsnsData, ListDsnsResponses, CreateDsnData, CreateDsnResponses, CreateDsnErrors, GetOrCreateDsnData, GetOrCreateDsnResponses, GetOrCreateDsnErrors, RegenerateDsnData, RegenerateDsnResponses, RegenerateDsnErrors, RevokeDsnData, RevokeDsnResponses, RevokeDsnErrors, GetEnvironmentVariablesData, 
GetEnvironmentVariablesResponses, GetEnvironmentVariablesErrors, CreateEnvironmentVariableData, CreateEnvironmentVariableResponses, CreateEnvironmentVariableErrors, GetEnvironmentVariableValueData, GetEnvironmentVariableValueResponses, GetEnvironmentVariableValueErrors, DeleteEnvironmentVariableData, DeleteEnvironmentVariableResponses, DeleteEnvironmentVariableErrors, UpdateEnvironmentVariableData, UpdateEnvironmentVariableResponses, UpdateEnvironmentVariableErrors, GetEnvironmentsData, GetEnvironmentsResponses, GetEnvironmentsErrors, CreateEnvironmentData, CreateEnvironmentResponses, CreateEnvironmentErrors, DeleteEnvironmentData, DeleteEnvironmentResponses, DeleteEnvironmentErrors, GetEnvironmentData, GetEnvironmentResponses, GetEnvironmentErrors, GetEnvironmentCronsData, GetEnvironmentCronsResponses, GetEnvironmentCronsErrors, GetCronByIdData, GetCronByIdResponses, GetCronByIdErrors, GetCronExecutionsData, GetCronExecutionsResponses, GetCronExecutionsErrors, GetEnvironmentDomainsData, GetEnvironmentDomainsResponses, GetEnvironmentDomainsErrors, AddEnvironmentDomainData, AddEnvironmentDomainResponses, AddEnvironmentDomainErrors, DeleteEnvironmentDomainData, DeleteEnvironmentDomainResponses, DeleteEnvironmentDomainErrors, UpdateEnvironmentSettingsData, UpdateEnvironmentSettingsResponses, UpdateEnvironmentSettingsErrors, SleepEnvironmentData, SleepEnvironmentResponses, SleepEnvironmentErrors, TeardownEnvironmentData, TeardownEnvironmentResponses, TeardownEnvironmentErrors, WakeEnvironmentData, WakeEnvironmentResponses, WakeEnvironmentErrors, GetContainerLogsData, GetContainerLogsErrors, ListContainersData, ListContainersResponses, ListContainersErrors, GetContainerDetailData, GetContainerDetailResponses, GetContainerDetailErrors, GetContainerLogsByIdData, GetContainerLogsByIdErrors, GetContainerMetricsData, GetContainerMetricsResponses, GetContainerMetricsErrors, StreamContainerMetricsData, StreamContainerMetricsResponses, StreamContainerMetricsErrors, 
RestartContainerData, RestartContainerResponses, RestartContainerErrors, StartContainerData, StartContainerResponses, StartContainerErrors, StopContainerData, StopContainerResponses, StopContainerErrors, DeployFromImageData, DeployFromImageResponses, DeployFromImageErrors, DeployFromImageUploadData, DeployFromImageUploadResponses, DeployFromImageUploadErrors, DeployFromStaticData, DeployFromStaticResponses, DeployFromStaticErrors, ListAlertRulesData, ListAlertRulesResponses, ListAlertRulesErrors, CreateAlertRuleData, CreateAlertRuleResponses, CreateAlertRuleErrors, DeleteAlertRuleData, DeleteAlertRuleResponses, DeleteAlertRuleErrors, GetAlertRuleData, GetAlertRuleResponses, GetAlertRuleErrors, UpdateAlertRuleData, UpdateAlertRuleResponses, UpdateAlertRuleErrors, GetErrorDashboardStatsData, GetErrorDashboardStatsResponses, GetErrorDashboardStatsErrors, ListErrorGroupsData, ListErrorGroupsResponses, ListErrorGroupsErrors, GetErrorGroupData, GetErrorGroupResponses, GetErrorGroupErrors, UpdateErrorGroupData, UpdateErrorGroupResponses, UpdateErrorGroupErrors, ListErrorEventsData, ListErrorEventsResponses, ListErrorEventsErrors, GetErrorEventData, GetErrorEventResponses, GetErrorEventErrors, GetErrorStatsData, GetErrorStatsResponses, GetErrorStatsErrors, GetErrorTimeSeriesData, GetErrorTimeSeriesResponses, GetErrorTimeSeriesErrors, GetEventsCount2Data, GetEventsCount2Responses, GetEventsCount2Errors, GetEventTypeBreakdownData, GetEventTypeBreakdownResponses, GetEventTypeBreakdownErrors, RecordConsoleEventData, RecordConsoleEventResponses, RecordConsoleEventErrors, GetPropertyBreakdownData, GetPropertyBreakdownResponses, GetPropertyBreakdownErrors, GetPropertyTimelineData, GetPropertyTimelineResponses, GetPropertyTimelineErrors, GetEventsTimelineData, GetEventsTimelineResponses, GetEventsTimelineErrors, GetUniqueEventsData, GetUniqueEventsResponses, GetUniqueEventsErrors, ListExternalImagesData, ListExternalImagesResponses, ListExternalImagesErrors, 
RegisterExternalImageData, RegisterExternalImageResponses, RegisterExternalImageErrors, DeleteExternalImageData, DeleteExternalImageResponses, DeleteExternalImageErrors, GetExternalImageData, GetExternalImageResponses, GetExternalImageErrors, ListFunnelsData, ListFunnelsResponses, ListFunnelsErrors, CreateFunnelData, CreateFunnelResponses, CreateFunnelErrors, PreviewFunnelMetricsData, PreviewFunnelMetricsResponses, PreviewFunnelMetricsErrors, DeleteFunnelData, DeleteFunnelResponses, DeleteFunnelErrors, UpdateFunnelData, UpdateFunnelResponses, UpdateFunnelErrors, GetFunnelMetricsData, GetFunnelMetricsResponses, GetFunnelMetricsErrors, UpdateGitSettingsData, UpdateGitSettingsResponses, UpdateGitSettingsErrors, HasErrorGroupsData, HasErrorGroupsResponses, HasErrorGroupsErrors, HasAnalyticsEventsData, HasAnalyticsEventsResponses, HasAnalyticsEventsErrors, GetHourlyVisitsData, GetHourlyVisitsResponses, GetHourlyVisitsErrors, ListExternalImages2Data, ListExternalImages2Responses, ListExternalImages2Errors, PushExternalImageData, PushExternalImageResponses, PushExternalImageErrors, GetExternalImage2Data, GetExternalImage2Responses, GetExternalImage2Errors, ListIncidentsData, ListIncidentsResponses, ListIncidentsErrors, CreateIncidentData, CreateIncidentResponses, CreateIncidentErrors, GetBucketedIncidentsData, GetBucketedIncidentsResponses, GetBucketedIncidentsErrors, PurgeProjectLogsData, PurgeProjectLogsResponses, PurgeProjectLogsErrors, ListMonitorsData, ListMonitorsResponses, ListMonitorsErrors, CreateMonitorData, CreateMonitorResponses, CreateMonitorErrors, DeleteReleaseSourceMapsData, DeleteReleaseSourceMapsResponses, DeleteReleaseSourceMapsErrors, ListSourceMapsData, ListSourceMapsResponses, ListSourceMapsErrors, UploadSourceMapData, UploadSourceMapResponses, UploadSourceMapErrors, UpdateProjectSettingsData, UpdateProjectSettingsResponses, UpdateProjectSettingsErrors, ListReleasesData, ListReleasesResponses, ListReleasesErrors, DeleteSourceMapData, 
DeleteSourceMapResponses, DeleteSourceMapErrors, ListStaticBundlesData, ListStaticBundlesResponses, ListStaticBundlesErrors, DeleteStaticBundleData, DeleteStaticBundleResponses, DeleteStaticBundleErrors, GetStaticBundleData, GetStaticBundleResponses, GetStaticBundleErrors, GetStatusOverviewData, GetStatusOverviewResponses, GetStatusOverviewErrors, GetUniqueCountsData, GetUniqueCountsResponses, GetUniqueCountsErrors, UploadStaticBundleData, UploadStaticBundleResponses, UploadStaticBundleErrors, ListProjectScansData, ListProjectScansResponses, ListProjectScansErrors, TriggerScanData, TriggerScanResponses, TriggerScanErrors, GetLatestScansPerEnvironmentData, GetLatestScansPerEnvironmentResponses, GetLatestScansPerEnvironmentErrors, GetLatestScanData, GetLatestScanResponses, GetLatestScanErrors, ListWebhooksData, ListWebhooksResponses, ListWebhooksErrors, CreateWebhookData, CreateWebhookResponses, CreateWebhookErrors, DeleteWebhookData, DeleteWebhookResponses, DeleteWebhookErrors, GetWebhookData, GetWebhookResponses, GetWebhookErrors, UpdateWebhookData, UpdateWebhookResponses, UpdateWebhookErrors, ListDeliveriesData, ListDeliveriesResponses, ListDeliveriesErrors, GetDeliveryData, GetDeliveryResponses, GetDeliveryErrors, RetryDeliveryData, RetryDeliveryResponses, RetryDeliveryErrors, GetProxyLogsData, GetProxyLogsResponses, GetProxyLogsErrors, GetProxyLogByRequestIdData, GetProxyLogByRequestIdResponses, GetProxyLogByRequestIdErrors, GetTimeBucketStatsData, GetTimeBucketStatsResponses, GetTimeBucketStatsErrors, GetTodayStatsData, GetTodayStatsResponses, GetTodayStatsErrors, GetProxyLogByIdData, GetProxyLogByIdResponses, GetProxyLogByIdErrors, ListSyncedRepositoriesData, ListSyncedRepositoriesResponses, ListSyncedRepositoriesErrors, GetRepositoryByNameData, GetRepositoryByNameResponses, GetRepositoryByNameErrors, GetAllRepositoriesByNameData, GetAllRepositoriesByNameResponses, GetAllRepositoriesByNameErrors, GetRepositoryPresetByNameData, 
GetRepositoryPresetByNameResponses, GetRepositoryPresetByNameErrors, GetRepositoryBranchesData, GetRepositoryBranchesResponses, GetRepositoryBranchesErrors, GetRepositoryTagsData, GetRepositoryTagsResponses, GetRepositoryTagsErrors, GetRepositoryPresetLiveData, GetRepositoryPresetLiveResponses, GetRepositoryPresetLiveErrors, GetBranchesByRepositoryIdData, GetBranchesByRepositoryIdResponses, GetBranchesByRepositoryIdErrors, ListCommitsByRepositoryIdData, ListCommitsByRepositoryIdResponses, ListCommitsByRepositoryIdErrors, CheckCommitExistsData, CheckCommitExistsResponses, CheckCommitExistsErrors, GetTagsByRepositoryIdData, GetTagsByRepositoryIdResponses, GetTagsByRepositoryIdErrors, GetProjectSessionReplaysData, GetProjectSessionReplaysResponses, GetProjectSessionReplaysErrors, GetSessionEvents2Data, GetSessionEvents2Responses, GetSessionEvents2Errors, GetSettingsData, GetSettingsResponses, GetSettingsErrors, UpdateSettingsData, UpdateSettingsResponses, UpdateSettingsErrors, RevokeJoinTokenData, RevokeJoinTokenResponses, RevokeJoinTokenErrors, GenerateJoinTokenData, GenerateJoinTokenResponses, GenerateJoinTokenErrors, GetJoinTokenStatusData, GetJoinTokenStatusResponses, GetJoinTokenStatusErrors, GetPublicSettingsData, GetPublicSettingsResponses, GetPublicSettingsErrors, ListTemplatesData, ListTemplatesResponses, ListTemplatesErrors, ListTemplateTagsData, ListTemplateTagsResponses, ListTemplateTagsErrors, GetTemplateData, GetTemplateResponses, GetTemplateErrors, GetCurrentUserData, GetCurrentUserResponses, GetCurrentUserErrors, ListUsersData, ListUsersResponses, ListUsersErrors, CreateUserData, CreateUserResponses, CreateUserErrors, UpdateSelfData, UpdateSelfResponses, UpdateSelfErrors, DisableMfaData, DisableMfaResponses, DisableMfaErrors, SetupMfaData, SetupMfaResponses, SetupMfaErrors, VerifyAndEnableMfaData, VerifyAndEnableMfaResponses, VerifyAndEnableMfaErrors, DeleteUserData, DeleteUserResponses, DeleteUserErrors, UpdateUserData, UpdateUserResponses, 
UpdateUserErrors, RestoreUserData, RestoreUserResponses, RestoreUserErrors, AssignRoleData, AssignRoleResponses, AssignRoleErrors, RemoveRoleData, RemoveRoleResponses, RemoveRoleErrors, GetVisitorSessions2Data, GetVisitorSessions2Responses, GetVisitorSessions2Errors, DeleteSessionReplayData, DeleteSessionReplayResponses, DeleteSessionReplayErrors, GetSessionReplayData, GetSessionReplayResponses, GetSessionReplayErrors, UpdateSessionDurationData, UpdateSessionDurationResponses, UpdateSessionDurationErrors, GetSessionReplayEventsData, GetSessionReplayEventsResponses, GetSessionReplayEventsErrors, AddEventsData, AddEventsResponses, AddEventsErrors, DeleteScanData, DeleteScanResponses, DeleteScanErrors, GetScanData, GetScanResponses, GetScanErrors, GetScanVulnerabilitiesData, GetScanVulnerabilitiesResponses, GetScanVulnerabilitiesErrors, ListEventTypesData, ListEventTypesResponses, TriggerWeeklyDigestData, TriggerWeeklyDigestResponses, TriggerWeeklyDigestErrors, ListExternalPluginsData, ListExternalPluginsResponses, ReloadPluginsData, ReloadPluginsResponses, ReloadPluginsErrors, IngestSentryEnvelopeData, IngestSentryEnvelopeResponses, IngestSentryEnvelopeErrors, IngestSentryEventData, IngestSentryEventResponses, IngestSentryEventErrors, ListAuditLogsData, ListAuditLogsResponses, ListAuditLogsErrors, GetAuditLogData, GetAuditLogResponses, GetAuditLogErrors } from './types.gen'; import { client } from './client.gen'; export type Options = ClientOptions & { @@ -2498,6 +2498,28 @@ export const getServiceEnvironmentVariable = (options: Options) => { + return (options.client ?? client).post({ + security: [ + { + scheme: 'bearer', + type: 'http' + } + ], + url: '/external-services/{id}/retry', + ...options, + headers: { + 'Content-Type': 'application/json', + ...options.headers + } + }); +}; + /** * Start an external service */ @@ -3376,6 +3398,19 @@ export const nodeHeartbeat = (options: Opt }); }; +/** + * Get decrypted S3 credentials for a backup/restore operation. 
+ * Agents call this endpoint to receive the S3 credentials they need to upload + * or download backups. The credentials are decrypted from the stored S3 source + * and returned over the authenticated TLS/WireGuard channel. + */ +export const getS3Credentials = (options: Options) => { + return (options.client ?? client).get({ + url: '/internal/nodes/{node_id}/s3-credentials/{s3_source_id}', + ...options + }); +}; + /** * List all IP access control rules */ @@ -5335,8 +5370,8 @@ export const updateEnvironmentSettings = ( /** * Sleep an on-demand environment - * Manually put an on-demand environment to sleep. Sets `sleeping = true`. - * The proxy will stop sending traffic and the idle sweep will stop containers. + * Manually put an on-demand environment to sleep. Stops containers and sets + * `sleeping = true`. If no OnDemandWaker is available, falls back to DB flag only. */ export const sleepEnvironment = (options: Options) => { return (options.client ?? client).post({ @@ -5358,8 +5393,9 @@ export const teardownEnvironment = (option /** * Wake a sleeping on-demand environment * Manually wake an environment that has been put to sleep by the on-demand - * idle timeout. Sets `sleeping = false` on the environment. The proxy will - * detect the state change and start containers on the next request. + * idle timeout. Starts containers, waits for health checks, then sets + * `sleeping = false`. If no OnDemandWaker is available (proxy not running + * in same process), falls back to setting the DB flag only. */ export const wakeEnvironment = (options: Options) => { return (options.client ?? client).post({ @@ -5541,6 +5577,64 @@ export const deployFromStatic = (options: }); }; +/** + * List all alert rules for a project + */ +export const listAlertRules = (options: Options) => { + return (options.client ?? 
client).get({ + url: '/projects/{project_id}/error-alert-rules', + ...options + }); +}; + +/** + * Create a new alert rule + */ +export const createAlertRule = (options: Options) => { + return (options.client ?? client).post({ + url: '/projects/{project_id}/error-alert-rules', + ...options, + headers: { + 'Content-Type': 'application/json', + ...options.headers + } + }); +}; + +/** + * Delete an alert rule + */ +export const deleteAlertRule = (options: Options) => { + return (options.client ?? client).delete({ + url: '/projects/{project_id}/error-alert-rules/{rule_id}', + ...options + }); +}; + +/** + * Get a specific alert rule + */ +export const getAlertRule = (options: Options) => { + return (options.client ?? client).get({ + url: '/projects/{project_id}/error-alert-rules/{rule_id}', + ...options + }); +}; + +/** + * Update an existing alert rule + */ +export const updateAlertRule = (options: Options) => { + return (options.client ?? client).put({ + url: '/projects/{project_id}/error-alert-rules/{rule_id}', + ...options, + headers: { + 'Content-Type': 'application/json', + ...options.headers + } + }); +}; + /** * Get error dashboard statistics */ @@ -6881,6 +6975,18 @@ export const getJoinTokenStatus = (options }); }; +/** + * Get public settings (no authentication required) + * Returns non-sensitive feature flags like demo mode status. + * This endpoint is intentionally unauthenticated so the login page can use it. + */ +export const getPublicSettings = (options?: Options) => { + return (options?.client ?? client).get({ + url: '/settings/public', + ...options + }); +}; + /** * List all available templates * Returns a list of all public templates, optionally filtered by tag or featured status. 
diff --git a/web/src/api/client/types.gen.ts b/web/src/api/client/types.gen.ts index 5c9d216e..240d9c16 100644 --- a/web/src/api/client/types.gen.ts +++ b/web/src/api/client/types.gen.ts @@ -239,6 +239,21 @@ export type AggregatedBucketsResponse = { export type AggregationLevel = 'events' | 'sessions' | 'visitors'; +export type AlertRuleResponse = { + cooldown_minutes: number; + created_at: string; + enabled: boolean; + environment_filter?: number | null; + error_level_filter?: string | null; + id: number; + name: string; + notification_priority: string; + project_id: number; + trigger_config: unknown; + trigger_type: string; + updated_at: string; +}; + export type AnalyticsMetrics = { average_visit_duration: number; bounce_rate: number; @@ -703,6 +718,20 @@ export type CliLoginRequest = { username: string; }; +/** + * Request spec for a single cluster member. + */ +export type ClusterMemberRequest = { + /** + * Target worker node ID. Omit or null to run on the control plane. + */ + node_id?: number | null; + /** + * Service-type-specific role (e.g., "monitor", "primary", "replica") + */ + role: string; +}; + export type CommitExistsResponse = { commit_sha?: string | null; exists: boolean; @@ -1098,6 +1127,35 @@ export type CopyBlobRequest = { toPathname: string; }; +export type CreateAlertRuleRequest = { + /** + * Minimum minutes between notifications for same rule+group + */ + cooldown_minutes?: number; + enabled?: boolean; + /** + * Optional environment ID to filter alerts + */ + environment_filter?: number | null; + /** + * Optional error type/level filter + */ + error_level_filter?: string | null; + name: string; + /** + * Notification priority: Low, Normal, High, Critical + */ + notification_priority?: string; + /** + * Trigger-specific configuration (e.g., {"count": 100, "window_minutes": 60} for frequency) + */ + trigger_config?: unknown; + /** + * Trigger type: new_issue, regression, frequency, new_user, user_count, status_change + */ + trigger_type: 
string; +}; + export type CreateApiKeyRequest = { expires_at?: string | null; name: string; @@ -1212,11 +1270,23 @@ export type CreateEnvironmentVariableRequest = { }; export type CreateExternalServiceRequest = { + /** + * Cluster member specifications. Required when topology is "cluster". + */ + members?: Array; name: string; + /** + * Target node ID for the service. Omit or null to run on the control plane. + */ + node_id?: number | null; parameters: { [key: string]: unknown; }; service_type: ServiceTypeRoute; + /** + * Service topology: "standalone" (default) or "cluster" (HA multi-member). + */ + topology?: string; version?: string | null; }; @@ -3062,12 +3132,22 @@ export type EnvironmentResponse = { created_at: number; current_deployment_id?: number | null; deployment_config?: null | DeploymentConfig; + /** + * Estimated time (epoch millis) when the environment will go to sleep + * based on last activity + idle timeout. NULL when sleeping or on-demand disabled. + */ + estimated_sleep_at?: number | null; id: number; /** * Indicates if this is a preview environment (auto-created per branch) * For preview environments, 'branch' contains the feature branch name */ is_preview: boolean; + /** + * Last proxied request timestamp (epoch millis) for on-demand environments. + * NULL when on-demand is disabled or no traffic has been received yet. + */ + last_activity_at?: number | null; main_url: string; name: string; project_id: number; @@ -3726,10 +3806,26 @@ export type ExternalServiceDetails = { export type ExternalServiceInfo = { connection_info?: string | null; created_at: string; + /** + * Error message from failed initialization. + */ + error_message?: string | null; id: number; + /** + * Cluster members (empty for standalone services). + */ + members?: Array; name: string; + /** + * Node ID where the service runs. Null means control plane (local). 
+ */ + node_id?: number | null; service_type: ServiceTypeRoute; status: string; + /** + * Service topology: "standalone" (single container) or "cluster" (HA multi-member). + */ + topology: string; updated_at: string; version?: string | null; }; @@ -6498,6 +6594,24 @@ export type PaginationParams = { per_page?: number; }; +/** + * Password protection configuration + * + * When enabled, the proxy shows an HTML password form before allowing access. + * After the user enters the correct password, an HMAC-signed cookie is set + * so subsequent requests pass through without re-entering the password. + */ +export type PasswordProtectionConfig = { + /** + * Whether password protection is enabled + */ + enabled: boolean; + /** + * The bcrypt-hashed password (never stored or returned in plaintext) + */ + passwordHash: string; +}; + export type PathVisitors = { name: string; percentage: number; @@ -7288,6 +7402,16 @@ export type PublicRepositoryInfo = { stars: number; }; +/** + * Public settings response containing only non-sensitive feature flags + */ +export type PublicSettingsResponse = { + /** + * Whether demo mode is enabled + */ + demo_enabled: boolean; +}; + export type PurgeLogsRequest = { /** * Delete all logs before this timestamp (ISO 8601) @@ -7687,6 +7811,18 @@ export type ResourceLimitsResponse = { memory_request?: number | null; }; +/** + * Request body for retrying a failed cluster initialization. + */ +export type RetryClusterRequest = { + /** + * Cluster member specifications (same format as create). + * If omitted, the original member configuration is reconstructed from + * the preserved service_members records. + */ + members?: Array; +}; + /** * Risk level for a migration step */ @@ -7764,6 +7900,18 @@ export type RunExternalServiceBackupRequest = { s3_source_id: number; }; +/** + * S3 credentials distributed to agents for backup/restore operations. 
+ */ +export type S3CredentialsResponse = { + access_key_id: string; + bucket_name: string; + endpoint?: string | null; + force_path_style: boolean; + region: string; + secret_key: string; +}; + /** * Response type for S3 source */ @@ -7893,6 +8041,7 @@ export type SecurityConfig = { enabled?: boolean | null; geoRestrictions?: null | GeoRestrictionsConfig; headers?: null | SecurityHeadersConfig; + passwordProtection?: null | PasswordProtectionConfig; rateLimiting?: null | RateLimitConfig; }; @@ -8086,6 +8235,20 @@ export type ServiceAccessInfo = { */ export type ServiceAction = 'create' | 'link-external' | 'skip'; +/** + * Public info about a cluster member. + */ +export type ServiceMemberInfo = { + container_name: string; + hostname?: string | null; + id: number; + node_id?: number | null; + ordinal: number; + port?: number | null; + role: string; + status: string; +}; + export type ServiceParameter = { choices?: Array | null; default_value?: string | null; @@ -9402,6 +9565,17 @@ export type UnsupportedFeature = { reason: string; }; +export type UpdateAlertRuleRequest = { + cooldown_minutes?: number | null; + enabled?: boolean | null; + environment_filter?: number | null; + error_level_filter?: string | null; + name?: string | null; + notification_priority?: string | null; + trigger_config?: unknown; + trigger_type?: string | null; +}; + export type UpdateApiKeyRequest = { expires_at?: string | null; is_active?: boolean | null; @@ -9522,6 +9696,13 @@ export type UpdateEnvironmentSettingsRequest = { * idle_timeout_seconds of no traffic and started on the next request. */ on_demand?: boolean | null; + /** + * Set a password to protect this environment. The proxy will show an HTML + * password form before allowing access. The password is bcrypt-hashed + * server-side and never stored in plaintext. + * Send an empty string to remove password protection. 
+ */ + password?: string | null; /** * Enable/disable performance metrics collection */ @@ -16443,6 +16624,39 @@ export type GetServiceEnvironmentVariableResponses = { export type GetServiceEnvironmentVariableResponse = GetServiceEnvironmentVariableResponses[keyof GetServiceEnvironmentVariableResponses]; +export type RetryClusterData = { + body: RetryClusterRequest; + path: { + id: number; + }; + query?: never; + url: '/external-services/{id}/retry'; +}; + +export type RetryClusterErrors = { + /** + * Service is not a failed cluster + */ + 400: unknown; + /** + * Service not found + */ + 404: unknown; + /** + * Internal server error + */ + 500: unknown; +}; + +export type RetryClusterResponses = { + /** + * Cluster retry initiated + */ + 200: ExternalServiceInfo; +}; + +export type RetryClusterResponse = RetryClusterResponses[keyof RetryClusterResponses]; + export type StartServiceData = { body?: never; path: { @@ -18486,6 +18700,46 @@ export type NodeHeartbeatResponses = { export type NodeHeartbeatResponse = NodeHeartbeatResponses[keyof NodeHeartbeatResponses]; +export type GetS3CredentialsData = { + body?: never; + path: { + /** + * Node ID + */ + node_id: number; + /** + * S3 source ID + */ + s3_source_id: number; + }; + query?: never; + url: '/internal/nodes/{node_id}/s3-credentials/{s3_source_id}'; +}; + +export type GetS3CredentialsErrors = { + /** + * Unauthorized + */ + 401: unknown; + /** + * S3 source not found + */ + 404: unknown; + /** + * Internal server error + */ + 500: unknown; +}; + +export type GetS3CredentialsResponses = { + /** + * S3 credentials + */ + 200: S3CredentialsResponse; +}; + +export type GetS3CredentialsResponse = GetS3CredentialsResponses[keyof GetS3CredentialsResponses]; + export type ListIpAccessControlData = { body?: never; path?: never; @@ -23380,6 +23634,10 @@ export type SleepEnvironmentErrors = { * Environment not found */ 404: unknown; + /** + * Too many state transitions, retry after cooldown + */ + 429: unknown; /** * 
Internal server error */ @@ -23456,6 +23714,10 @@ export type WakeEnvironmentErrors = { * Environment not found */ 404: unknown; + /** + * Too many state transitions, retry after cooldown + */ + 429: unknown; /** * Internal server error */ @@ -24002,6 +24264,178 @@ export type DeployFromStaticResponses = { export type DeployFromStaticResponse = DeployFromStaticResponses[keyof DeployFromStaticResponses]; +export type ListAlertRulesData = { + body?: never; + path: { + /** + * Project ID + */ + project_id: number; + }; + query?: never; + url: '/projects/{project_id}/error-alert-rules'; +}; + +export type ListAlertRulesErrors = { + /** + * Internal server error + */ + 500: unknown; +}; + +export type ListAlertRulesResponses = { + /** + * List of alert rules + */ + 200: Array; +}; + +export type ListAlertRulesResponse = ListAlertRulesResponses[keyof ListAlertRulesResponses]; + +export type CreateAlertRuleData = { + body: CreateAlertRuleRequest; + path: { + /** + * Project ID + */ + project_id: number; + }; + query?: never; + url: '/projects/{project_id}/error-alert-rules'; +}; + +export type CreateAlertRuleErrors = { + /** + * Validation error + */ + 400: unknown; + /** + * Internal server error + */ + 500: unknown; +}; + +export type CreateAlertRuleResponses = { + /** + * Alert rule created + */ + 201: AlertRuleResponse; +}; + +export type CreateAlertRuleResponse = CreateAlertRuleResponses[keyof CreateAlertRuleResponses]; + +export type DeleteAlertRuleData = { + body?: never; + path: { + /** + * Project ID + */ + project_id: number; + /** + * Alert rule ID + */ + rule_id: number; + }; + query?: never; + url: '/projects/{project_id}/error-alert-rules/{rule_id}'; +}; + +export type DeleteAlertRuleErrors = { + /** + * Alert rule not found + */ + 404: unknown; + /** + * Internal server error + */ + 500: unknown; +}; + +export type DeleteAlertRuleResponses = { + /** + * Alert rule deleted + */ + 204: void; +}; + +export type DeleteAlertRuleResponse = 
DeleteAlertRuleResponses[keyof DeleteAlertRuleResponses]; + +export type GetAlertRuleData = { + body?: never; + path: { + /** + * Project ID + */ + project_id: number; + /** + * Alert rule ID + */ + rule_id: number; + }; + query?: never; + url: '/projects/{project_id}/error-alert-rules/{rule_id}'; +}; + +export type GetAlertRuleErrors = { + /** + * Alert rule not found + */ + 404: unknown; + /** + * Internal server error + */ + 500: unknown; +}; + +export type GetAlertRuleResponses = { + /** + * Alert rule details + */ + 200: AlertRuleResponse; +}; + +export type GetAlertRuleResponse = GetAlertRuleResponses[keyof GetAlertRuleResponses]; + +export type UpdateAlertRuleData = { + body: UpdateAlertRuleRequest; + path: { + /** + * Project ID + */ + project_id: number; + /** + * Alert rule ID + */ + rule_id: number; + }; + query?: never; + url: '/projects/{project_id}/error-alert-rules/{rule_id}'; +}; + +export type UpdateAlertRuleErrors = { + /** + * Validation error + */ + 400: unknown; + /** + * Alert rule not found + */ + 404: unknown; + /** + * Internal server error + */ + 500: unknown; +}; + +export type UpdateAlertRuleResponses = { + /** + * Alert rule updated + */ + 200: AlertRuleResponse; +}; + +export type UpdateAlertRuleResponse = UpdateAlertRuleResponses[keyof UpdateAlertRuleResponses]; + export type GetErrorDashboardStatsData = { body?: never; path: { @@ -27781,6 +28215,29 @@ export type GetJoinTokenStatusResponses = { export type GetJoinTokenStatusResponse = GetJoinTokenStatusResponses[keyof GetJoinTokenStatusResponses]; +export type GetPublicSettingsData = { + body?: never; + path?: never; + query?: never; + url: '/settings/public'; +}; + +export type GetPublicSettingsErrors = { + /** + * Internal server error + */ + 500: unknown; +}; + +export type GetPublicSettingsResponses = { + /** + * Public settings + */ + 200: PublicSettingsResponse; +}; + +export type GetPublicSettingsResponse = GetPublicSettingsResponses[keyof GetPublicSettingsResponses]; + export 
type ListTemplatesData = { body?: never; path?: never; diff --git a/web/src/components/analytics/LiveGlobe.tsx b/web/src/components/analytics/LiveGlobe.tsx index f3b33b08..cf27ff05 100644 --- a/web/src/components/analytics/LiveGlobe.tsx +++ b/web/src/components/analytics/LiveGlobe.tsx @@ -375,7 +375,7 @@ export function LiveGlobePage({ project }: LiveGlobePageProps) { return (
{/* Header */} -
+
-
+
{/* Pause/Resume */}
) : ( -
+
{sortedItems.map((item) => { const Icon = selectedChannel ? null diff --git a/web/src/components/analytics/overview/DevicesChart.tsx b/web/src/components/analytics/overview/DevicesChart.tsx index 6eda3d10..2078ffbe 100644 --- a/web/src/components/analytics/overview/DevicesChart.tsx +++ b/web/src/components/analytics/overview/DevicesChart.tsx @@ -12,7 +12,7 @@ import { import { useQuery } from '@tanstack/react-query' import { format } from 'date-fns' import type { LucideIcon } from 'lucide-react' -import { Monitor, Smartphone, Tablet } from 'lucide-react' +import { BarChart3, Monitor, Smartphone, Tablet } from 'lucide-react' import * as React from 'react' const DEVICE_ICONS: Record = { @@ -105,6 +105,7 @@ export function DevicesChart({
) : !sortedDevices.length ? (
+

No data available for the selected period

diff --git a/web/src/components/analytics/overview/LanguagesChart.tsx b/web/src/components/analytics/overview/LanguagesChart.tsx index 9babfc46..fa9a6abc 100644 --- a/web/src/components/analytics/overview/LanguagesChart.tsx +++ b/web/src/components/analytics/overview/LanguagesChart.tsx @@ -166,7 +166,7 @@ export function LanguagesChart({

) : ( -
+
{sortedLanguages.map((lang) => (
diff --git a/web/src/components/analytics/overview/OperatingSystemChart.tsx b/web/src/components/analytics/overview/OperatingSystemChart.tsx index 513e768d..7553d0aa 100644 --- a/web/src/components/analytics/overview/OperatingSystemChart.tsx +++ b/web/src/components/analytics/overview/OperatingSystemChart.tsx @@ -173,7 +173,7 @@ export function OperatingSystemChart({

) : ( -
+
{sortedItems.map((item) => (
) : ( -
+
{sortedPages.map((page) => (
) : ( -
+
{sortedReferrers.map((referrer) => (
)}
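Editor's note — the JsonSchemaForm change in the next hunk filters `hiddenFields` out of both the rendered field list and the submitted values. Stripped of the React form machinery, the filtering reduces to two pure functions; the function names here are illustrative:

```typescript
// Hidden properties are dropped from the rendered field list...
function visibleProperties(
  properties: Record<string, unknown>,
  hiddenFields: string[]
): string[] {
  return Object.keys(properties).filter((name) => !hiddenFields.includes(name));
}

// ...and from the values submitted back to the API.
function stripHidden(
  values: Record<string, unknown>,
  hiddenFields: string[]
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(values).filter(([key]) => !hiddenFields.includes(key))
  );
}

const props = { host: {}, port: {}, internal_token: {} };
console.log(visibleProperties(props, ['internal_token'])); // ['host', 'port']
```

Filtering in both places matters: excluding a field only from rendering would still leak its (empty) value into the submitted payload.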
diff --git a/web/src/components/forms/JsonSchemaForm.tsx b/web/src/components/forms/JsonSchemaForm.tsx index 06a15b5d..3340fa94 100644 --- a/web/src/components/forms/JsonSchemaForm.tsx +++ b/web/src/components/forms/JsonSchemaForm.tsx @@ -89,6 +89,11 @@ interface JsonSchemaFormProps { * @default [['host', 'port'], ['username', 'password']] */ pairedFields?: [string, string][] + + /** + * Fields to hide from the form (they won't be rendered or submitted) + */ + hiddenFields?: string[] } /** @@ -108,11 +113,15 @@ export function JsonSchemaForm({ ['host', 'port'], ['username', 'password'], ], + hiddenFields = [], }: JsonSchemaFormProps) { - // Get list of property names in order + // Get list of property names in order, excluding hidden fields const propertyNames = useMemo( - () => Object.keys(schema.properties), - [schema.properties] + () => + Object.keys(schema.properties).filter( + (name) => !hiddenFields.includes(name) + ), + [schema.properties, hiddenFields] ) // Create Zod schema from JSON Schema @@ -191,6 +200,9 @@ export function JsonSchemaForm({ const cleanedValues: Record = {} Object.entries(values).forEach(([key, value]) => { + // Skip hidden fields + if (hiddenFields.includes(key)) return + const prop = schema.properties[key] const types = Array.isArray(prop.type) ? 
prop.type : [prop.type] const isNullable = types.includes('null') diff --git a/web/src/components/funnel/FunnelCard.tsx b/web/src/components/funnel/FunnelCard.tsx index a7c3f0fe..859f0543 100644 --- a/web/src/components/funnel/FunnelCard.tsx +++ b/web/src/components/funnel/FunnelCard.tsx @@ -8,9 +8,7 @@ import { CardTitle, } from '@/components/ui/card' import { getFunnelMetricsOptions } from '@/api/client/@tanstack/react-query.gen' -import { formatDateForAPI } from '@/lib/date' import { useQuery } from '@tanstack/react-query' -import { subDays } from 'date-fns' import { Users, TrendingUp, @@ -20,9 +18,15 @@ import { ChevronRight, } from 'lucide-react' +interface DateRangeQuery { + start_date: string + end_date: string +} + interface FunnelCardProps { funnel: FunnelResponse project: ProjectResponse + dateRange: DateRangeQuery onDelete: () => void onView: () => void onEdit: () => void @@ -31,6 +35,7 @@ interface FunnelCardProps { export function FunnelCard({ funnel, project, + dateRange, onDelete, onView, onEdit, @@ -45,10 +50,7 @@ export function FunnelCard({ project_id: project.id, funnel_id: funnel.id, }, - query: { - start_date: formatDateForAPI(subDays(new Date(), 30)), - end_date: formatDateForAPI(new Date()), - }, + query: dateRange, }), retry: false, }) diff --git a/web/src/components/funnel/FunnelManagement.tsx b/web/src/components/funnel/FunnelManagement.tsx index 15204d9b..9e3b0bec 100644 --- a/web/src/components/funnel/FunnelManagement.tsx +++ b/web/src/components/funnel/FunnelManagement.tsx @@ -1,13 +1,24 @@ import { ProjectResponse, FunnelResponse } from '@/api/client/types.gen' import { Button } from '@/components/ui/button' import { Card, CardContent, CardHeader } from '@/components/ui/card' +import { Calendar } from '@/components/ui/calendar' +import { + Popover, + PopoverContent, + PopoverTrigger, +} from '@/components/ui/popover' import { Skeleton } from '@/components/ui/skeleton' import { listFunnelsOptions, deleteFunnelMutation, } from 
'@/api/client/@tanstack/react-query.gen' +import { formatDateForAPI } from '@/lib/date' +import { cn } from '@/lib/utils' import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query' -import { BarChart3, Plus } from 'lucide-react' +import { format, subDays } from 'date-fns' +import { BarChart3, Calendar as CalendarIcon, Plus } from 'lucide-react' +import * as React from 'react' +import { DateRange } from 'react-day-picker' import { useNavigate } from 'react-router-dom' import { FunnelCard } from './FunnelCard' @@ -18,6 +29,18 @@ interface FunnelManagementProps { export function FunnelManagement({ project }: FunnelManagementProps) { const queryClient = useQueryClient() const navigate = useNavigate() + const [dateRange, setDateRange] = React.useState({ + from: subDays(new Date(), 30), + to: new Date(), + }) + + const dateRangeQuery = React.useMemo(() => { + if (!dateRange?.from || !dateRange?.to) return null + return { + start_date: formatDateForAPI(dateRange.from), + end_date: formatDateForAPI(dateRange.to), + } + }, [dateRange]) const { data: funnels, @@ -55,21 +78,60 @@ export function FunnelManagement({ project }: FunnelManagementProps) { return (
-
+

Funnels

Track user conversion through defined steps

- +
+ + + + + + date > new Date()} + /> + + + +
{isLoading ? ( @@ -138,6 +200,7 @@ export function FunnelManagement({ project }: FunnelManagementProps) { key={funnel.id} funnel={funnel} project={project} + dateRange={dateRangeQuery!} onDelete={() => handleDelete(funnel.id)} onView={() => navigate( diff --git a/web/src/components/monitoring/AlertRulesManagement.tsx b/web/src/components/monitoring/AlertRulesManagement.tsx new file mode 100644 index 00000000..6cfd867e --- /dev/null +++ b/web/src/components/monitoring/AlertRulesManagement.tsx @@ -0,0 +1,268 @@ +import { + deleteAlertRuleMutation, + listAlertRulesOptions, + updateAlertRuleMutation, + getProjectsOptions, +} from '@/api/client/@tanstack/react-query.gen' +import { AlertRuleResponse } from '@/api/client/types.gen' +import { Button } from '@/components/ui/button' +import { + Card, + CardContent, + CardHeader, + CardTitle, +} from '@/components/ui/card' +import { + DropdownMenu, + DropdownMenuContent, + DropdownMenuItem, + DropdownMenuSeparator, + DropdownMenuTrigger, +} from '@/components/ui/dropdown-menu' +import { EmptyState } from '@/components/ui/empty-state' +import { + Select, + SelectContent, + SelectItem, + SelectTrigger, + SelectValue, +} from '@/components/ui/select' +import { Switch } from '@/components/ui/switch' +import { Badge } from '@/components/ui/badge' +import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query' +import { AlertTriangle, EllipsisVertical, Plus, ShieldAlert } from 'lucide-react' +import { useMemo, useState } from 'react' +import { useNavigate } from 'react-router-dom' +import { toast } from 'sonner' + +const TRIGGER_TYPES = [ + { value: 'new_issue', label: 'New Issue' }, + { value: 'regression', label: 'Regression' }, + { value: 'frequency', label: 'Frequency' }, + { value: 'new_user', label: 'New User Affected' }, + { value: 'user_count', label: 'User Count' }, + { value: 'status_change', label: 'Status Change' }, +] as const + +function triggerTypeLabel(type: string): string { + return 
TRIGGER_TYPES.find((t) => t.value === type)?.label ?? type +} + +function priorityVariant(priority: string): 'default' | 'secondary' | 'destructive' | 'outline' { + switch (priority) { + case 'Critical': return 'destructive' + case 'High': return 'default' + case 'Normal': return 'secondary' + default: return 'outline' + } +} + +function renderTriggerConfig(rule: AlertRuleResponse) { + const config = (rule.trigger_config ?? {}) as Record + switch (rule.trigger_type) { + case 'frequency': + return ( +

+ Threshold: {String(config.count ?? '—')} events / {String(config.window_minutes ?? '—')} min +

+ ) + case 'user_count': + return

User threshold: {String(config.threshold ?? '—')}

+ default: + return null + } +} + +interface AlertRulesManagementProps { + projectId?: number +} + +export function AlertRulesManagement({ projectId: fixedProjectId }: AlertRulesManagementProps = {}) { + const queryClient = useQueryClient() + const navigate = useNavigate() + const [selectedProjectId, setSelectedProjectId] = useState(null) + + const { data: projects, isLoading: projectsLoading } = useQuery({ + ...getProjectsOptions(), + enabled: !fixedProjectId, + }) + + const projectList = projects?.projects + const projectId = fixedProjectId ?? selectedProjectId ?? projectList?.[0]?.id ?? null + const showProjectSelector = !fixedProjectId && (projectList?.length ?? 0) > 1 + + const { data: rules, isLoading: rulesLoading } = useQuery({ + ...listAlertRulesOptions({ + path: { project_id: projectId! }, + }), + enabled: !!projectId, + }) + + const updateMutation = useMutation({ + ...updateAlertRuleMutation(), + meta: { errorTitle: 'Failed to update alert rule' }, + onSuccess: () => { + queryClient.invalidateQueries({ predicate: (query) => (query.queryKey[0] as Record)?._id === 'listAlertRules' }) + }, + }) + + const deleteMutation = useMutation({ + ...deleteAlertRuleMutation(), + meta: { errorTitle: 'Failed to delete alert rule' }, + onSuccess: () => { + toast.success('Alert rule deleted') + queryClient.invalidateQueries({ predicate: (query) => (query.queryKey[0] as Record)?._id === 'listAlertRules' }) + }, + }) + + const handleDelete = async (rule: AlertRuleResponse) => { + if (!projectId) return + await deleteMutation.mutateAsync({ + path: { project_id: projectId, rule_id: rule.id }, + }) + } + + const handleToggleEnabled = async (rule: AlertRuleResponse) => { + if (!projectId) return + await updateMutation.mutateAsync({ + path: { project_id: projectId, rule_id: rule.id }, + body: { enabled: !rule.enabled }, + }) + } + + const hasRules = useMemo(() => rules && rules.length > 0, [rules]) + + if (!fixedProjectId && projectsLoading) { + return ( +
+
+
+ ) + } + + if (!fixedProjectId && !projectList?.length) { + return ( + + ) + } + + return ( +
+
+
+

Error Alert Rules

+

+ Configure rules that trigger notifications when errors match certain conditions. +

+
+
+ {showProjectSelector && ( + + )} + +
+
+ + {rulesLoading ? ( +
+
+
+ ) : !hasRules ? ( + navigate('new')}> + + Add Rule + + } + /> + ) : ( +
+ {rules?.map((rule) => ( + navigate(`${rule.id}/edit`)}> + +
+ + {rule.name} + +

+ {triggerTypeLabel(rule.trigger_type)} +

+
+
e.stopPropagation()}> + handleToggleEnabled(rule)} + disabled={updateMutation.isPending} + /> + + + + + + navigate(`${rule.id}/edit`)}> + Edit + + + handleDelete(rule)} + > + Delete + + + +
+
+ +
+ + {rule.notification_priority} + + + {triggerTypeLabel(rule.trigger_type)} + + {rule.error_level_filter && ( + + {rule.error_level_filter} + + )} +
+
+

Cooldown: {rule.cooldown_minutes} min

+ {renderTriggerConfig(rule)} +
+
+
+ ))} +
+ )} +
+ ) +} diff --git a/web/src/components/monitoring/MonitoringSettings.tsx b/web/src/components/monitoring/MonitoringSettings.tsx index 05ae0a37..615f3190 100644 --- a/web/src/components/monitoring/MonitoringSettings.tsx +++ b/web/src/components/monitoring/MonitoringSettings.tsx @@ -42,6 +42,7 @@ import { type RouteAlertsFormData, type WeeklyDigestFormData, } from './schemas' +import { AlertRulesManagement } from './AlertRulesManagement' import { ResourceMonitoring } from './ResourceMonitoring' interface AlertComponentProps { @@ -863,7 +864,7 @@ function WeeklyDigest({ export function MonitoringSettings() { const navigate = useNavigate() const { section } = useParams() - const currentSection = section || 'project' + const currentSection = section || 'resources' const { data: preferences, isLoading } = useQuery({ queryKey: ['preferences'], @@ -878,13 +879,9 @@ export function MonitoringSettings() { } const settingsSections = [ - { id: 'resources', label: 'Resources' }, - { id: 'project', label: 'Project Health' }, - { id: 'domains', label: 'Domains' }, - { id: 'backups', label: 'Backups' }, - { id: 'routes', label: 'Routes' }, + { id: 'resources', label: 'Health' }, + { id: 'alerts', label: 'Alerts' }, { id: 'notifications', label: 'Notifications' }, - { id: 'digest', label: 'Weekly Digest' }, ] as const const handleProjectSave = async (data: ProjectAlertsFormData) => { @@ -1045,7 +1042,7 @@ export function MonitoringSettings() { } const renderContent = () => { - // Resources tab doesn't depend on preferences + // Health tab doesn't depend on preferences if (currentSection === 'resources') { return } @@ -1128,44 +1125,51 @@ export function MonitoringSettings() { } switch (currentSection) { - case 'project': + case 'resources': + return // handled by early return above, kept for switch exhaustiveness + case 'alerts': return ( - - ) - case 'domains': - return ( - - ) - case 'backups': - return ( - - ) - case 'routes': - return ( - +
+ + + + + + + + + + + + + +
) case 'notifications': return ( - - ) - case 'digest': - return ( - +
+ + + + + + +
) default: return null @@ -1218,11 +1222,7 @@ export function MonitoringSettings() {
{/* Content - Shared between mobile and desktop */} - {currentSection === 'resources' ? ( - renderContent() - ) : ( - {renderContent()} - )} + {renderContent()}
) } diff --git a/web/src/components/project/GitImportClone.tsx b/web/src/components/project/GitImportClone.tsx index f137196f..eba19fc6 100644 --- a/web/src/components/project/GitImportClone.tsx +++ b/web/src/components/project/GitImportClone.tsx @@ -20,17 +20,18 @@ import { import { Button } from '@/components/ui/button' import { Input } from '@/components/ui/input' import { Label } from '@/components/ui/label' -import { Tabs, TabsContent, TabsList, TabsTrigger } from '@/components/ui/tabs' import { ProjectConfigurator } from '@/components/project/ProjectConfigurator' import { RepositoryList } from '@/components/repositories/RepositoryList' import { TemplateList, TemplateConfigurator } from '@/components/templates' import { ManualProjectConfigurator } from '@/components/project/ManualProjectConfigurator' import type { RepositoryResponse, TemplateResponse } from '@/api/client/types.gen' -import { GitBranch, ChevronLeft, Link as LinkIcon, Loader2, Gitlab, LayoutTemplate, Container } from 'lucide-react' +import { GitBranch, ChevronLeft, Link as LinkIcon, Loader2, Gitlab, LayoutTemplate, Container, FolderGit2 } from 'lucide-react' import Github from '@/icons/Github' import { toast } from 'sonner' import { Badge } from '@/components/ui/badge' +type ProjectSource = 'templates' | 'browse' | 'git-url' | 'manual' + /** Parsed git URL info for public repositories */ interface ParsedGitUrl { provider: 'github' | 'gitlab' @@ -98,13 +99,13 @@ export function GitImportClone({ mode = 'navigation', onProjectCreated, }: GitImportCloneProps) { + const [selectedSource, setSelectedSource] = useState(null) const [selectedConnection, setSelectedConnection] = useState< string | undefined >() const [selectedRepository, setSelectedRepository] = useState(null) const [selectedTemplate, setSelectedTemplate] = useState(null) - const [showManualDeploy, setShowManualDeploy] = useState(false) const [gitUrl, setGitUrl] = useState('') const [useGitUrl, setUseGitUrl] = useState(false) const 
[parsedPublicRepo, setParsedPublicRepo] = useState(null) @@ -234,10 +235,13 @@ export function GitImportClone({
@@ -250,28 +254,6 @@ export function GitImportClone({ ) } - // Show ManualProjectConfigurator when manual deploy mode is selected - if (showManualDeploy) { - return ( -
-
- -
- - setShowManualDeploy(false)} - /> -
- ) - } - // Show ProjectConfigurator when: // 1. In inline mode with authenticated repo selected, OR // 2. Using Git URL with public repo selected (works in both modes) @@ -288,10 +270,11 @@ export function GitImportClone({ onClick={() => { setSelectedRepository(null) setUseGitUrl(false) + setSelectedSource(null) }} > - Back to {useGitUrl ? 'Git URL' : 'Repositories'} + Back to Create Project
@@ -418,90 +401,192 @@ export function GitImportClone({ } } + // Source selection step + if (!selectedSource) { + return ( + + + + + Create New Project + + + +

+ Choose how you want to set up your project +

+
+ + + + + + + +
+
+
+ ) + } + + // Selected source content return ( - - - Create New Project - +
+ + + {selectedSource === 'templates' && 'Choose a Template'} + {selectedSource === 'browse' && 'Import Repository'} + {selectedSource === 'git-url' && 'Import from Git URL'} + {selectedSource === 'manual' && 'Manual Deployment'} + +
- - - - - Templates - - Browse Repositories - - - Git URL - - - - Manual - - - - - - - - -
- -
+ {selectedSource === 'templates' && ( + + )} + + {selectedSource === 'browse' && ( +
+ {selectedConnection && ( )} - +
+ )} - + {selectedSource === 'git-url' && ( +
- - -
-
- -

Manual Deployment

-

- Deploy a pre-built Docker image or static files bundle without connecting to a Git repository. -

-
- -
-
-
-

Supported deployment methods:

-
-
- -
-

Docker Image

-

- Deploy from DockerHub, GHCR, or any container registry -

-
-
-
- -
-

Static Files

-

- Upload pre-built static files as tar.gz or zip -

-
-
-
-
-
-
- +
+ )} + + {selectedSource === 'manual' && ( +
+ setSelectedSource(null)} + /> +
+ )} ) diff --git a/web/src/components/project/ProjectAnalytics.tsx b/web/src/components/project/ProjectAnalytics.tsx index 0b6a673c..c4923908 100644 --- a/web/src/components/project/ProjectAnalytics.tsx +++ b/web/src/components/project/ProjectAnalytics.tsx @@ -73,7 +73,13 @@ import { CreateFunnel } from '@/pages/CreateFunnel' import { EditFunnel } from '@/pages/EditFunnel' import RequestLogs from '@/pages/RequestLogs' import { useQuery, useQueryClient } from '@tanstack/react-query' -import { format, subDays } from 'date-fns' +import { format } from 'date-fns' +import { + getDateRangeFromFilter, + QUICK_FILTERS, + type QuickFilter, + type AnalyticsDateFilter, +} from '@/hooks/useAnalyticsDateRange' import { Calendar as CalendarIcon, Code2, @@ -304,32 +310,32 @@ export function VisitorChart({ return (
-
- {getChartTitle()} +
+ {getChartTitle()}
{onZoom && ( Drag on chart to zoom )} -
+
setAggregationLevel('events')} > Events setAggregationLevel('sessions')} > Sessions setAggregationLevel('visitors')} > Visitors @@ -410,17 +416,6 @@ export function VisitorChart({ ) } -const QUICK_FILTERS = [ - { label: 'Today', value: 'today' }, - { label: 'Yesterday', value: 'yesterday' }, - { label: 'Last 24 hours', value: '24hours' }, - { label: 'Last 7 Days', value: '7days' }, - { label: 'Last 30 Days', value: '30days' }, - { label: 'Custom', value: 'custom' }, -] as const - -type QuickFilter = (typeof QUICK_FILTERS)[number]['value'] - interface AnalyticsFiltersProps { project: ProjectResponse activeFilter: QuickFilter @@ -526,23 +521,28 @@ function AnalyticsFilters({ variant={activeFilter === 'custom' ? 'default' : 'outline'} size="sm" className={cn( - 'min-w-[140px]', + 'sm:min-w-[140px]', !dateRange?.from && 'text-muted-foreground' )} > - - {dateRange?.from ? ( - dateRange.to ? ( - <> - {format(dateRange.from, 'LLL dd, y HH:mm')} -{' '} - {format(dateRange.to, 'LLL dd, y HH:mm')} - + + + {dateRange?.from ? ( + dateRange.to ? ( + <> + {format(dateRange.from, 'LLL dd, y HH:mm')} -{' '} + {format(dateRange.to, 'LLL dd, y HH:mm')} + + ) : ( + format(dateRange.from, 'LLL dd, y HH:mm') + ) ) : ( - format(dateRange.from, 'LLL dd, y HH:mm') - ) - ) : ( - Custom range - )} + 'Custom range' + )} + + + {dateRange?.from ? format(dateRange.from, 'MM/dd') : 'Custom'} + @@ -554,7 +554,7 @@ function AnalyticsFilters({ } selected={dateRange} onSelect={onDateRangeChange} - numberOfMonths={2} + numberOfMonths={typeof window !== 'undefined' && window.innerWidth < 640 ? 
1 : 2} disabled={(date) => date > new Date()} toDate={new Date()} fromDate={ @@ -659,56 +659,7 @@ function PagesTab({ project }: PagesTabProps) { const [isRefreshing, setIsRefreshing] = React.useState(false) const queryClient = useQueryClient() - const getDateRange = React.useCallback(() => { - const now = new Date() - if (dateFilter.quickFilter === 'custom' && dateFilter.dateRange) { - return { - startDate: dateFilter.dateRange.from, - endDate: dateFilter.dateRange.to, - } - } - - switch (dateFilter.quickFilter) { - case 'today': - return { - startDate: new Date(now.setHours(0, 0, 0, 0)), - endDate: new Date(now.setHours(23, 59, 59, 999)), - } - case 'yesterday': { - const yesterday = new Date(now) - yesterday.setDate(yesterday.getDate() - 1) - return { - startDate: new Date(yesterday.setHours(0, 0, 0, 0)), - endDate: new Date(yesterday.setHours(23, 59, 59, 999)), - } - } - case '24hours': { - const twentyFourHoursAgo = new Date(now) - twentyFourHoursAgo.setHours(twentyFourHoursAgo.getHours() - 24) - return { - startDate: twentyFourHoursAgo, - endDate: now, - } - } - case '7days': - return { - startDate: subDays(now, 7), - endDate: now, - } - case '30days': - return { - startDate: subDays(now, 30), - endDate: now, - } - default: - return { - startDate: subDays(now, 7), - endDate: now, - } - } - }, [dateFilter]) - - const { startDate, endDate } = getDateRange() + const { startDate, endDate } = getDateRangeFromFilter(dateFilter) const handleRefresh = React.useCallback(() => { setIsRefreshing(true) @@ -826,56 +777,7 @@ function EventDetailTab({ project }: EventDetailTabProps) { const [isRefreshing, setIsRefreshing] = React.useState(false) const queryClient = useQueryClient() - const getDateRange = React.useCallback(() => { - const now = new Date() - if (dateFilter.quickFilter === 'custom' && dateFilter.dateRange) { - return { - startDate: dateFilter.dateRange.from, - endDate: dateFilter.dateRange.to, - } - } - - switch (dateFilter.quickFilter) { - case 'today': - 
return { - startDate: new Date(now.setHours(0, 0, 0, 0)), - endDate: new Date(now.setHours(23, 59, 59, 999)), - } - case 'yesterday': { - const yesterday = new Date(now) - yesterday.setDate(yesterday.getDate() - 1) - return { - startDate: new Date(yesterday.setHours(0, 0, 0, 0)), - endDate: new Date(yesterday.setHours(23, 59, 59, 999)), - } - } - case '24hours': { - const twentyFourHoursAgo = new Date(now) - twentyFourHoursAgo.setHours(twentyFourHoursAgo.getHours() - 24) - return { - startDate: twentyFourHoursAgo, - endDate: now, - } - } - case '7days': - return { - startDate: subDays(now, 7), - endDate: now, - } - case '30days': - return { - startDate: subDays(now, 30), - endDate: now, - } - default: - return { - startDate: subDays(now, 7), - endDate: now, - } - } - }, [dateFilter]) - - const { startDate, endDate } = getDateRange() + const { startDate, endDate } = getDateRangeFromFilter(dateFilter) const handleRefresh = React.useCallback(() => { setIsRefreshing(true) @@ -949,56 +851,7 @@ function SessionReplaysTab({ project }: SessionReplaysTabProps) { const [isRefreshing, setIsRefreshing] = React.useState(false) const queryClient = useQueryClient() - const getDateRange = React.useCallback(() => { - const now = new Date() - if (dateFilter.quickFilter === 'custom' && dateFilter.dateRange) { - return { - startDate: dateFilter.dateRange.from, - endDate: dateFilter.dateRange.to, - } - } - - switch (dateFilter.quickFilter) { - case 'today': - return { - startDate: new Date(now.setHours(0, 0, 0, 0)), - endDate: new Date(now.setHours(23, 59, 59, 999)), - } - case 'yesterday': { - const yesterday = new Date(now) - yesterday.setDate(yesterday.getDate() - 1) - return { - startDate: new Date(yesterday.setHours(0, 0, 0, 0)), - endDate: new Date(yesterday.setHours(23, 59, 59, 999)), - } - } - case '24hours': { - const twentyFourHoursAgo = new Date(now) - twentyFourHoursAgo.setHours(twentyFourHoursAgo.getHours() - 24) - return { - startDate: twentyFourHoursAgo, - endDate: now, 
- } - } - case '7days': - return { - startDate: subDays(now, 7), - endDate: now, - } - case '30days': - return { - startDate: subDays(now, 30), - endDate: now, - } - default: - return { - startDate: subDays(now, 7), - endDate: now, - } - } - }, [dateFilter]) - - const { startDate, endDate } = getDateRange() + const { startDate, endDate } = getDateRangeFromFilter(dateFilter) const handleRefresh = React.useCallback(() => { setIsRefreshing(true) @@ -1063,56 +916,7 @@ function JourneyTab({ project }: JourneyTabProps) { const [isRefreshing, setIsRefreshing] = React.useState(false) const queryClient = useQueryClient() - const getDateRange = React.useCallback(() => { - const now = new Date() - if (dateFilter.quickFilter === 'custom' && dateFilter.dateRange) { - return { - startDate: dateFilter.dateRange.from, - endDate: dateFilter.dateRange.to, - } - } - - switch (dateFilter.quickFilter) { - case 'today': - return { - startDate: new Date(now.setHours(0, 0, 0, 0)), - endDate: new Date(now.setHours(23, 59, 59, 999)), - } - case 'yesterday': { - const yesterday = new Date(now) - yesterday.setDate(yesterday.getDate() - 1) - return { - startDate: new Date(yesterday.setHours(0, 0, 0, 0)), - endDate: new Date(yesterday.setHours(23, 59, 59, 999)), - } - } - case '24hours': { - const twentyFourHoursAgo = new Date(now) - twentyFourHoursAgo.setHours(twentyFourHoursAgo.getHours() - 24) - return { - startDate: twentyFourHoursAgo, - endDate: now, - } - } - case '7days': - return { - startDate: subDays(now, 7), - endDate: now, - } - case '30days': - return { - startDate: subDays(now, 30), - endDate: now, - } - default: - return { - startDate: subDays(now, 7), - endDate: now, - } - } - }, [dateFilter]) - - const { startDate, endDate } = getDateRange() + const { startDate, endDate } = getDateRangeFromFilter(dateFilter) const handleRefresh = React.useCallback(() => { setIsRefreshing(true) @@ -1163,10 +967,6 @@ function JourneyTab({ project }: JourneyTabProps) { interface 
ProjectAnalyticsProps { project: ProjectResponse } -interface AnalyticsDateFilter { - quickFilter: QuickFilter - dateRange: DateRange | undefined -} export function ProjectAnalytics({ project }: ProjectAnalyticsProps) { return ( @@ -1267,58 +1067,7 @@ function ProjectAnalyticsOverview({ project }: ProjectAnalyticsOverviewProps) { const [isRefreshing, setIsRefreshing] = React.useState(false) const [showSetupOverride] = React.useState(false) const queryClient = useQueryClient() - const getDateRange = React.useCallback(() => { - const now = new Date() - - if (dateFilter.quickFilter === 'custom' && dateFilter.dateRange) { - return { - startDate: dateFilter.dateRange.from, - endDate: dateFilter.dateRange.to, - } - } - - switch (dateFilter.quickFilter) { - case 'today': - return { - startDate: new Date(now.setHours(0, 0, 0, 0)), - endDate: new Date(now.setHours(23, 59, 59, 999)), - } - case 'yesterday': { - const yesterday = new Date(now) - yesterday.setDate(yesterday.getDate() - 1) - return { - startDate: new Date(yesterday.setHours(0, 0, 0, 0)), - endDate: new Date(yesterday.setHours(23, 59, 59, 999)), - } - } - case '24hours': { - const twentyFourHoursAgo = new Date(now) - twentyFourHoursAgo.setHours(twentyFourHoursAgo.getHours() - 24) - return { - startDate: twentyFourHoursAgo, - endDate: now, - } - } - case '7days': { - const sevenDaysAgo = new Date(now) - sevenDaysAgo.setDate(sevenDaysAgo.getDate() - 7) - return { - startDate: new Date(sevenDaysAgo.setHours(0, 0, 0, 0)), - endDate: new Date(now.setHours(23, 59, 59, 999)), - } - } - case '30days': - default: { - const thirtyDaysAgo = new Date(now) - thirtyDaysAgo.setDate(thirtyDaysAgo.getDate() - 30) - return { - startDate: new Date(thirtyDaysAgo.setHours(0, 0, 0, 0)), - endDate: new Date(now.setHours(23, 59, 59, 999)), - } - } - } - }, [dateFilter]) - const { startDate, endDate } = getDateRange() + const { startDate, endDate } = getDateRangeFromFilter(dateFilter) // Chart zoom handler — sets a custom date range 
from drag selection const handleChartZoom = React.useCallback( @@ -1471,25 +1220,25 @@ function ProjectAnalyticsOverview({ project }: ProjectAnalyticsOverviewProps) { navigate(`/projects/${project.slug}/analytics/globe`) } > - -
- -
+ +
+ +

Visitor Globe

-

+

See where your visitors are coming from on an interactive 3D globe

-
{/* Analytics Charts */} -
+
-

Deployments

+
+

Deployments

+ +
{isExpanded && ( -
+
{item.subItems.map((subItem) => { const subActive = isActive(subItem.url) return ( @@ -427,7 +381,7 @@ export function ProjectDetailSidebar({ project }: ProjectDetailSidebarProps) { to={`/projects/${project.slug}/${subItem.url}`} onClick={closeSheet} className={cn( - 'rounded-lg px-3 py-1.5 text-sm transition-all hover:bg-accent', + 'rounded-md px-2 py-1 text-xs transition-all hover:bg-accent', subActive ? 'bg-accent text-accent-foreground font-medium' : 'text-muted-foreground' diff --git a/web/src/components/project/ProjectMonitors.tsx b/web/src/components/project/ProjectMonitors.tsx index e12c1a62..f5f1f34e 100644 --- a/web/src/components/project/ProjectMonitors.tsx +++ b/web/src/components/project/ProjectMonitors.tsx @@ -6,7 +6,7 @@ import { getEnvironmentsOptions, getCurrentMonitorStatusOptions, } from '@/api/client/@tanstack/react-query.gen' -import { ProjectResponse, MonitorResponse } from '@/api/client' +import { ProjectResponse, MonitorResponse, EnvironmentResponse } from '@/api/client' import { Button } from '@/components/ui/button' import { Card, CardContent } from '@/components/ui/card' import { Badge } from '@/components/ui/badge' @@ -15,6 +15,7 @@ import { ErrorAlert } from '@/components/utils/ErrorAlert' import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query' import { Activity, + Moon, Plus, Trash2, ExternalLink, @@ -91,9 +92,11 @@ interface MonitorCardProps { monitor: MonitorResponse projectSlug: string onDelete: () => void + environment?: EnvironmentResponse } -function MonitorCard({ monitor, projectSlug, onDelete }: MonitorCardProps) { +function MonitorCard({ monitor, projectSlug, onDelete, environment }: MonitorCardProps) { + const isOnDemand = environment?.deployment_config?.onDemand === true const { startDate, endDate } = useMemo(() => { const now = new Date() return { @@ -177,42 +180,63 @@ function MonitorCard({ monitor, projectSlug, onDelete }: MonitorCardProps) {
{/* Status Timeline */}
-
-
- {uptimePercentage?.toFixed(0) ?? 'N/A'}% -
-
- - Healthy -
-
- {/* Mini Status Timeline */} -
- {statusData?.buckets && statusData.buckets.length > 0 - ? statusData.buckets - .slice(-48) - .map((bucket, idx) => ( -
- )) - : Array.from({ length: 48 }).map((_, idx) => ( + {isOnDemand ? ( + <> +
+
+ + On-demand {environment?.sleeping ? '· Sleeping' : '· Awake'} +
+
+
+ {Array.from({ length: 48 }).map((_, idx) => (
))} -
+
+ + ) : ( + <> +
+
+ {uptimePercentage?.toFixed(0) ?? 'N/A'}% +
+
+ + Healthy +
+
+ {/* Mini Status Timeline */} +
+ {statusData?.buckets && statusData.buckets.length > 0 + ? statusData.buckets + .slice(-48) + .map((bucket, idx) => ( +
+ )) + : Array.from({ length: 48 }).map((_, idx) => ( +
+ ))} +
+ + )}
@@ -517,6 +541,7 @@ export function ProjectMonitors({ project }: ProjectMonitorsProps) { monitor={monitor} projectSlug={project.slug} onDelete={() => setMonitorToDelete(monitor.id)} + environment={environments?.find((env) => env.id === monitor.environment_id)} /> ))}
diff --git a/web/src/components/project/ProjectOverview.tsx b/web/src/components/project/ProjectOverview.tsx index dcadbe4d..8ef58387 100644 --- a/web/src/components/project/ProjectOverview.tsx +++ b/web/src/components/project/ProjectOverview.tsx @@ -63,6 +63,7 @@ function getChangeDisplay(change: number | undefined, inverse = false) { } } + export function ProjectOverview({ project, lastDeployment, @@ -145,6 +146,7 @@ export function ProjectOverview({ } return false // Stop polling when deployment has screenshot or failed }, + refetchOnWindowFocus: true, }) // Use fresh deployment data if available, otherwise fall back to passed prop diff --git a/web/src/components/project/settings/environments/CreateEnvironmentDialog.tsx b/web/src/components/project/settings/environments/CreateEnvironmentDialog.tsx index df1dfc5c..d2e1990b 100644 --- a/web/src/components/project/settings/environments/CreateEnvironmentDialog.tsx +++ b/web/src/components/project/settings/environments/CreateEnvironmentDialog.tsx @@ -64,9 +64,9 @@ export function CreateEnvironmentDialog({ return ( - diff --git a/web/src/components/project/settings/environments/EnvironmentConfigurationCard.tsx b/web/src/components/project/settings/environments/EnvironmentConfigurationCard.tsx index 8ba96292..3a260899 100644 --- a/web/src/components/project/settings/environments/EnvironmentConfigurationCard.tsx +++ b/web/src/components/project/settings/environments/EnvironmentConfigurationCard.tsx @@ -26,7 +26,7 @@ import { SelectValue, } from '@/components/ui/select' import { useMutation, useQuery } from '@tanstack/react-query' -import { GitBranch, Loader2, Moon, Network, Plus, Shield, X } from 'lucide-react' +import { GitBranch, KeyRound, Loader2, Moon, Network, Plus, Shield, X } from 'lucide-react' import { useEffect, useState } from 'react' import { toast } from 'sonner' @@ -81,6 +81,8 @@ export function EnvironmentConfigurationCard({ on_demand: environment.deployment_config?.onDemand ?? 
false, idle_timeout_seconds: environment.deployment_config?.idleTimeoutSeconds?.toString() ?? '300', wake_timeout_seconds: environment.deployment_config?.wakeTimeoutSeconds?.toString() ?? '30', + password_enabled: environment.deployment_config?.security?.passwordProtection?.enabled ?? false, + password: '', security: { enabled: environment.deployment_config?.security?.enabled ?? false, headers: { @@ -132,6 +134,8 @@ export function EnvironmentConfigurationCard({ on_demand: environment.deployment_config?.onDemand ?? false, idle_timeout_seconds: environment.deployment_config?.idleTimeoutSeconds?.toString() ?? '300', wake_timeout_seconds: environment.deployment_config?.wakeTimeoutSeconds?.toString() ?? '30', + password_enabled: environment.deployment_config?.security?.passwordProtection?.enabled ?? false, + password: '', security: { enabled: environment.deployment_config?.security?.enabled ?? false, headers: { @@ -213,6 +217,11 @@ export function EnvironmentConfigurationCard({ ? parseInt(formData.wake_timeout_seconds) : null, security: formData.security, + password: formData.password_enabled + ? (formData.password || null) + : (formData.password_enabled === false && environment.deployment_config?.security?.passwordProtection?.enabled + ? '' + : null), }, }) } @@ -394,12 +403,12 @@ export function EnvironmentConfigurationCard({

On-Demand (Scale-to-Zero)

-
-
+
+

- Automatically stop containers after a period of inactivity - and start them when a new request arrives. + Automatically stop containers after idle timeout + and restart on the next request.

{/* Protected environment toggle */} -
-
+
+

- Git pushes will not auto-deploy to this environment. - Deployments must be promoted from another environment. + Git pushes will not auto-deploy. Deployments must be promoted from another environment.

{/* Anti-affinity toggle */} -
-
+
+

- Spread replicas across different nodes. When enabled, - no two replicas of this environment land on the same - node (best-effort if more replicas than nodes). + Spread replicas across different nodes (best-effort).

+
{ if (newLabelKey.trim() && newLabelValue.trim()) { @@ -673,8 +680,8 @@ export function EnvironmentConfigurationCard({
-
-
+
+

Enable attack mode for development/testing @@ -691,8 +698,8 @@ export function EnvironmentConfigurationCard({ />

-
-
+
+
@@ -749,6 +756,57 @@ export function EnvironmentConfigurationCard({
)} + {/* Password Protection */} +
+
+ +

+ Require a password to access this environment. +

+
+ + setFormData((prev) => ({ + ...prev, + password_enabled: checked, + password: checked ? prev.password : '', + })) + } + /> +
+ + {formData.password_enabled && ( +
+
+ + + setFormData((prev) => ({ + ...prev, + password: e.target.value, + })) + } + placeholder={ + environment.deployment_config?.security?.passwordProtection?.enabled + ? 'Leave empty to keep current password' + : 'Enter a password' + } + /> +

+ {environment.deployment_config?.security?.passwordProtection?.enabled + ? 'A password is currently set. Enter a new one to change it, or leave empty to keep the current password.' + : 'Set a password that visitors must enter to access this environment. The password is securely hashed.'} +

+
+
+ )} +

Rate Limiting

@@ -809,6 +867,7 @@ export function EnvironmentConfigurationCard({ diff --git a/web/src/components/settings/SettingsLayout.tsx b/web/src/components/settings/SettingsLayout.tsx index f178518f..1552f931 100644 --- a/web/src/components/settings/SettingsLayout.tsx +++ b/web/src/components/settings/SettingsLayout.tsx @@ -1,19 +1,26 @@ import { cn } from '@/lib/utils' import { Bell, + ChevronLeft, + Cloud, + Database, DatabaseBackup, + GitBranch, Globe, HardDrive, Key, + Mail, Monitor, Network, Puzzle, Server, Settings2, Shield, + Sparkles, Users, type LucideIcon, } from 'lucide-react' +import { useState } from 'react' import { Link, Outlet, useLocation } from 'react-router-dom' interface SettingsNavItem { @@ -45,6 +52,20 @@ const settingsNavGroups: SettingsNavGroup[] = [ { label: 'Infrastructure', items: [ + { title: 'Domains', url: '/domains', icon: Globe }, + { title: 'Storage', url: '/storage', icon: Database }, + { title: 'Email', url: '/email', icon: Mail }, + { title: 'AI Gateway', url: '/ai-gateway', icon: Sparkles }, + { + title: 'Git Providers', + url: '/git-providers', + icon: GitBranch, + }, + { + title: 'DNS Providers', + url: '/dns-providers', + icon: Cloud, + }, { title: 'Load Balancer', url: '/settings/load-balancer', @@ -89,57 +110,113 @@ function isActive(pathname: string, url: string): boolean { return pathname.startsWith(url) } +function findActiveItem(pathname: string): SettingsNavItem | undefined { + for (const group of settingsNavGroups) { + for (const item of group.items) { + if (isActive(pathname, item.url)) return item + } + } + return undefined +} + +function SettingsNav({ onClick }: { onClick?: () => void }) { + const location = useLocation() + + return ( + + ) +} + export function SettingsLayout() { const location = useLocation() + const [showNav, setShowNav] = useState(false) + const activeItem = findActiveItem(location.pathname) return (
-

Settings

-

+ {/* Mobile: breadcrumb back button when viewing content */} +

+ {activeItem && !showNav && ( + <> + + / + + )} +

+ {showNav || !activeItem ? 'Settings' : activeItem.title} +

+
+ {/* Desktop: always show Settings heading */} +

+ Settings +

+

Manage your platform configuration

+ {(showNav || !activeItem) && ( +

+ Manage your platform configuration +

+ )}
-
- {/* Inner sidebar */} - - {/* Content area */} + {/* Desktop: sidebar + content side by side */} +
+
+ +
+ + {/* Mobile: drill-in — show nav OR content */} +
+ {showNav || !activeItem ? ( + setShowNav(false)} /> + ) : ( +
+ +
+ )} +
) diff --git a/web/src/components/storage/CreateServiceButton.tsx b/web/src/components/storage/CreateServiceButton.tsx index f660c010..175b604a 100644 --- a/web/src/components/storage/CreateServiceButton.tsx +++ b/web/src/components/storage/CreateServiceButton.tsx @@ -12,7 +12,15 @@ import { getProvidersMetadataOptions } from '@/api/client/@tanstack/react-query. import { useQuery } from '@tanstack/react-query' import { useNavigate } from 'react-router-dom' -export function CreateServiceButton({ onSuccess }: { onSuccess?: () => void }) { +export function CreateServiceButton({ + onSuccess, + open, + onOpenChange, +}: { + onSuccess?: () => void + open?: boolean + onOpenChange?: (open: boolean) => void +}) { const navigate = useNavigate() const { data: providers, isLoading } = useQuery({ @@ -20,7 +28,7 @@ export function CreateServiceButton({ onSuccess }: { onSuccess?: () => void }) { }) return ( - + -
+
{[...Array(Math.min(5, totalPages))].map((_, idx) => { const pageNum = page - 2 + idx if (pageNum < 1 || pageNum > totalPages) return null @@ -294,7 +301,7 @@ export function VisitorsList({ project }: VisitorsListProps) { } disabled={page === totalPages} > - Next + Next
@@ -380,12 +387,12 @@ function VisitorRow({ visitor, onClick }: VisitorRowProps) { {/* Source / Referrer */} - + {/* Browser / OS */} - +
{browserInfo?.name || 'Unknown'} diff --git a/web/src/hooks/useAnalyticsDateRange.ts b/web/src/hooks/useAnalyticsDateRange.ts new file mode 100644 index 00000000..640abb7e --- /dev/null +++ b/web/src/hooks/useAnalyticsDateRange.ts @@ -0,0 +1,70 @@ +import { subDays } from 'date-fns' +import type { DateRange } from 'react-day-picker' + +export const QUICK_FILTERS = [ + { label: 'Today', value: 'today' }, + { label: 'Yesterday', value: 'yesterday' }, + { label: 'Last 24 hours', value: '24hours' }, + { label: 'Last 7 Days', value: '7days' }, + { label: 'Last 30 Days', value: '30days' }, + { label: 'Custom', value: 'custom' }, +] as const + +export type QuickFilter = (typeof QUICK_FILTERS)[number]['value'] + +export interface AnalyticsDateFilter { + quickFilter: QuickFilter + dateRange: DateRange | undefined +} + +export function getDateRangeFromFilter(dateFilter: AnalyticsDateFilter): { + startDate: Date | undefined + endDate: Date | undefined +} { + const now = new Date() + if (dateFilter.quickFilter === 'custom' && dateFilter.dateRange) { + return { + startDate: dateFilter.dateRange.from, + endDate: dateFilter.dateRange.to, + } + } + + switch (dateFilter.quickFilter) { + case 'today': + return { + startDate: new Date(now.setHours(0, 0, 0, 0)), + endDate: new Date(now.setHours(23, 59, 59, 999)), + } + case 'yesterday': { + const yesterday = new Date(now) + yesterday.setDate(yesterday.getDate() - 1) + return { + startDate: new Date(yesterday.setHours(0, 0, 0, 0)), + endDate: new Date(yesterday.setHours(23, 59, 59, 999)), + } + } + case '24hours': { + const twentyFourHoursAgo = new Date(now) + twentyFourHoursAgo.setHours(twentyFourHoursAgo.getHours() - 24) + return { + startDate: twentyFourHoursAgo, + endDate: now, + } + } + case '7days': + return { + startDate: subDays(now, 7), + endDate: now, + } + case '30days': + return { + startDate: subDays(now, 30), + endDate: now, + } + default: + return { + startDate: subDays(now, 7), + endDate: now, + } + } +} diff --git 
a/web/src/hooks/useDashboardHealth.ts b/web/src/hooks/useDashboardHealth.ts new file mode 100644 index 00000000..fde2a96e --- /dev/null +++ b/web/src/hooks/useDashboardHealth.ts @@ -0,0 +1,35 @@ +import { client } from '@/api/client/client.gen' +import { useQuery } from '@tanstack/react-query' + +export interface ProjectMonitorHealth { + project_id: number + /** "operational" | "degraded" | "down" | "no_monitors" */ + status: string +} + +export interface ProjectsMonitorHealthResponse { + projects: Record +} + +async function fetchProjectsHealth( + projectIds: number[] +): Promise { + const response = await client.get({ + url: '/monitors-health/projects', + query: { + project_ids: projectIds.join(','), + }, + security: [{ scheme: 'bearer', type: 'http' }], + }) + return response.data as ProjectsMonitorHealthResponse +} + +export function useDashboardHealth(projectIds: number[]) { + return useQuery({ + queryKey: ['dashboard-projects-health', projectIds], + queryFn: () => fetchProjectsHealth(projectIds), + enabled: projectIds.length > 0, + staleTime: 1000 * 30, + refetchInterval: 1000 * 30, + }) +} diff --git a/web/src/pages/AddGitProvider.tsx b/web/src/pages/AddGitProvider.tsx index 2b20df9a..0578d05d 100644 --- a/web/src/pages/AddGitProvider.tsx +++ b/web/src/pages/AddGitProvider.tsx @@ -16,7 +16,7 @@ export function AddGitProvider() { useEffect(() => { setBreadcrumbs([ - { label: 'Git Sources', href: '/git-providers' }, + { label: 'Git Providers', href: '/git-providers' }, { label: 'Add Provider' }, ]) }, [setBreadcrumbs]) diff --git a/web/src/pages/AddNotificationProvider.tsx b/web/src/pages/AddNotificationProvider.tsx index 7e4d7de8..ada54c4e 100644 --- a/web/src/pages/AddNotificationProvider.tsx +++ b/web/src/pages/AddNotificationProvider.tsx @@ -91,7 +91,7 @@ export function AddNotificationProvider() { useEffect(() => { setBreadcrumbs([ { label: 'Monitoring & Alerts', href: '/monitoring' }, - { label: 'Providers', href: '/settings/notifications' }, + { label: 
'Providers', href: '/monitoring/notifications' }, { label: 'Add Provider' }, ]) }, [setBreadcrumbs]) @@ -135,7 +135,7 @@ export function AddNotificationProvider() { setCurrentStep('complete') toast.success('Email provider added successfully') setTimeout(() => { - navigate('/settings/notifications') + navigate('/monitoring/notifications') }, 2000) }, }) @@ -149,7 +149,7 @@ export function AddNotificationProvider() { setCurrentStep('complete') toast.success('Slack provider added successfully') setTimeout(() => { - navigate('/settings/notifications') + navigate('/monitoring/notifications') }, 2000) }, }) @@ -163,7 +163,7 @@ export function AddNotificationProvider() { setCurrentStep('complete') toast.success('Webhook provider added successfully') setTimeout(() => { - navigate('/settings/notifications') + navigate('/monitoring/notifications') }, 2000) }, }) @@ -326,7 +326,7 @@ export function AddNotificationProvider() {
- diff --git a/web/src/pages/AiGateway.tsx b/web/src/pages/AiGateway.tsx index 927bffe7..26609912 100644 --- a/web/src/pages/AiGateway.tsx +++ b/web/src/pages/AiGateway.tsx @@ -2895,18 +2895,18 @@ console.log(response.choices[0].message.content);`,
{inlineTestResult && (
{inlineTestResult.success ? ( - + ) : ( - + )} - + {inlineTestResult.success ? 'Key is valid' : inlineTestResult.error || 'Key test failed'} diff --git a/web/src/pages/AlertRuleForm.tsx b/web/src/pages/AlertRuleForm.tsx new file mode 100644 index 00000000..ec67fe5e --- /dev/null +++ b/web/src/pages/AlertRuleForm.tsx @@ -0,0 +1,468 @@ +import { + createAlertRuleMutation, + getAlertRuleOptions, + updateAlertRuleMutation, +} from '@/api/client/@tanstack/react-query.gen' +import { CreateAlertRuleRequest } from '@/api/client/types.gen' +import { Button } from '@/components/ui/button' +import { + Card, + CardContent, + CardDescription, + CardHeader, + CardTitle, +} from '@/components/ui/card' +import { + Form, + FormControl, + FormDescription, + FormField, + FormItem, + FormLabel, + FormMessage, +} from '@/components/ui/form' +import { Input } from '@/components/ui/input' +import { + Select, + SelectContent, + SelectItem, + SelectTrigger, + SelectValue, +} from '@/components/ui/select' +import { Switch } from '@/components/ui/switch' +import { Skeleton } from '@/components/ui/skeleton' +import { zodResolver } from '@hookform/resolvers/zod' +import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query' +import { ArrowLeft } from 'lucide-react' +import { useMemo } from 'react' +import { useForm } from 'react-hook-form' +import { useNavigate, useParams } from 'react-router-dom' +import { toast } from 'sonner' +import { z } from 'zod' + +const TRIGGER_TYPES = [ + { value: 'new_issue', label: 'New Issue', description: 'Fires once when a new error group is first created. Does not fire again for subsequent events in the same group.' }, + { value: 'regression', label: 'Regression', description: 'Fires when a new event arrives for an error group that was previously marked as "resolved" or "ignored". The group is automatically re-opened.' 
}, + { value: 'frequency', label: 'Frequency', description: 'Fires when the number of events in an error group exceeds the configured threshold within the time window. Can fire repeatedly after cooldown expires.' }, + { value: 'new_user', label: 'New User Affected', description: 'Fires when the first event with user context (user ID, email, or visitor ID) is added to an error group.' }, + { value: 'user_count', label: 'User Count', description: 'Fires when the number of unique users affected by an error group reaches the configured threshold.' }, + { value: 'status_change', label: 'Status Change', description: 'Fires when an error group status is manually changed (e.g. resolved, assigned, ignored).' }, +] as const + +const PRIORITIES = [ + { value: 'Low', label: 'Low' }, + { value: 'Normal', label: 'Normal' }, + { value: 'High', label: 'High' }, + { value: 'Critical', label: 'Critical' }, +] as const + +const alertRuleSchema = z.object({ + name: z.string().min(1, 'Name is required'), + trigger_type: z.string().min(1, 'Trigger type is required'), + trigger_config: z.object({ + count: z.number().min(1).optional(), + window_minutes: z.number().min(1).optional(), + threshold: z.number().min(1).optional(), + }).optional(), + cooldown_minutes: z.number().min(0), + notification_priority: z.string(), + environment_filter: z.number().nullable().optional(), + error_level_filter: z.string().nullable().optional(), + enabled: z.boolean(), +}) + +type AlertRuleFormData = z.infer + +function needsConfig(triggerType: string): boolean { + return ['frequency', 'user_count'].includes(triggerType) +} + +interface AlertRuleFormProps { + projectId: number +} + +export function AlertRuleForm({ projectId }: AlertRuleFormProps) { + const navigate = useNavigate() + const queryClient = useQueryClient() + const { ruleId } = useParams() + const isEditing = !!ruleId + + const { data: existingRule, isLoading: ruleLoading } = useQuery({ + ...getAlertRuleOptions({ + path: { project_id: projectId, 
rule_id: Number(ruleId) }, + }), + enabled: isEditing, + }) + + const defaultValues = useMemo(() => { + if (existingRule) { + const config = (existingRule.trigger_config ?? {}) as Record + return { + name: existingRule.name, + trigger_type: existingRule.trigger_type, + trigger_config: { + count: (config.count as number) ?? undefined, + window_minutes: (config.window_minutes as number) ?? undefined, + threshold: (config.threshold as number) ?? undefined, + }, + cooldown_minutes: existingRule.cooldown_minutes, + notification_priority: existingRule.notification_priority, + environment_filter: existingRule.environment_filter ?? null, + error_level_filter: existingRule.error_level_filter ?? null, + enabled: existingRule.enabled, + } + } + return { + name: '', + trigger_type: 'new_issue', + trigger_config: {}, + cooldown_minutes: 60, + notification_priority: 'High', + environment_filter: null, + error_level_filter: null, + enabled: true, + } + }, [existingRule]) + + const form = useForm({ + resolver: zodResolver(alertRuleSchema), + values: defaultValues, + }) + + const watchedTriggerType = form.watch('trigger_type') + + const createMutation = useMutation({ + ...createAlertRuleMutation(), + meta: { errorTitle: 'Failed to create alert rule' }, + onSuccess: () => { + toast.success('Alert rule created') + queryClient.invalidateQueries({ predicate: (query) => (query.queryKey[0] as Record)?._id === 'listAlertRules' }) + navigate(-1) + }, + }) + + const updateMutation = useMutation({ + ...updateAlertRuleMutation(), + meta: { errorTitle: 'Failed to update alert rule' }, + onSuccess: () => { + toast.success('Alert rule updated') + queryClient.invalidateQueries({ predicate: (query) => (query.queryKey[0] as Record)?._id === 'listAlertRules' }) + navigate(-1) + }, + }) + + const isMutating = createMutation.isPending || updateMutation.isPending + + const onSubmit = async (data: AlertRuleFormData) => { + const triggerConfig = needsConfig(data.trigger_type) ? 
data.trigger_config : undefined + + if (isEditing) { + await updateMutation.mutateAsync({ + path: { project_id: projectId, rule_id: Number(ruleId) }, + body: { + name: data.name, + trigger_type: data.trigger_type, + trigger_config: triggerConfig, + cooldown_minutes: data.cooldown_minutes, + notification_priority: data.notification_priority, + environment_filter: data.environment_filter, + error_level_filter: data.error_level_filter, + enabled: data.enabled, + }, + }) + } else { + await createMutation.mutateAsync({ + path: { project_id: projectId }, + body: { + name: data.name, + trigger_type: data.trigger_type, + trigger_config: triggerConfig, + cooldown_minutes: data.cooldown_minutes, + notification_priority: data.notification_priority, + environment_filter: data.environment_filter, + error_level_filter: data.error_level_filter, + enabled: data.enabled, + } as CreateAlertRuleRequest, + }) + } + } + + if (isEditing && ruleLoading) { + return ( +
+
+ + +
+ +
+ ) + } + + return ( +
+
+ +
+

+ {isEditing ? 'Edit Alert Rule' : 'Create Alert Rule'} +

+

+ {isEditing + ? 'Update the alert rule configuration.' + : 'Configure a new error alert rule.'} +

+
+
+ + + + Rule Configuration + + Define when and how this alert should fire. + + + +
+ + ( + + Name + + + + + + )} + /> + + ( + + Trigger Type + + + {TRIGGER_TYPES.find((t) => t.value === field.value)?.description} + + + + )} + /> + + {needsConfig(watchedTriggerType) && ( +
+

Trigger Configuration

+ {watchedTriggerType === 'frequency' && ( + <> + ( + + Event Count + + + field.onChange(e.target.value ? Number(e.target.value) : undefined) + } + /> + + + Number of events to trigger the alert + + + + )} + /> + ( + + Time Window (minutes) + + + field.onChange(e.target.value ? Number(e.target.value) : undefined) + } + /> + + + + )} + /> + + )} + {watchedTriggerType === 'user_count' && ( + ( + + User Threshold + + + field.onChange(e.target.value ? Number(e.target.value) : undefined) + } + /> + + + Trigger when this many unique users are affected + + + + )} + /> + )} +
+ )} + + ( + + Priority + + + Adds a prefix to email subjects (e.g. [CRITICAL]) and is included in webhook/Slack payloads. Use it to filter or sort notifications in your inbox. + + + + )} + /> + + ( + + Cooldown (minutes) + + field.onChange(Number(e.target.value))} + /> + + + Minimum time between notifications for the same rule and error group. After an alert fires, it won't fire again for this group until the cooldown expires and a new matching event arrives. + + + + )} + /> + + ( + + Error Level Filter + + + Only trigger for errors at this level + + + + )} + /> + + ( + +
+ Enabled + + Activate this rule immediately + +
+ + + +
+ )} + /> + +
+ + +
+ + +
+
+
+ ) +} diff --git a/web/src/pages/CreateServiceNew.tsx b/web/src/pages/CreateServiceNew.tsx index e95aa92c..19f048ec 100644 --- a/web/src/pages/CreateServiceNew.tsx +++ b/web/src/pages/CreateServiceNew.tsx @@ -1,17 +1,29 @@ import { + adminListNodesOptions, createServiceMutation, getProviderMetadataOptions, getServiceTypeParametersOptions, } from '@/api/client/@tanstack/react-query.gen' -import { ServiceTypeRoute } from '@/api/client/types.gen' +import { + ClusterMemberRequest, + NodeInfoResponse, + ServiceTypeRoute, +} from '@/api/client/types.gen' import { Button } from '@/components/ui/button' import { JsonSchemaForm } from '@/components/forms/JsonSchemaForm' import { Input } from '@/components/ui/input' import { Label } from '@/components/ui/label' +import { + Select, + SelectContent, + SelectItem, + SelectTrigger, + SelectValue, +} from '@/components/ui/select' import { useBreadcrumbs } from '@/contexts/BreadcrumbContext' import { useMutation, useQuery } from '@tanstack/react-query' import { customAlphabet } from 'nanoid' -import { ArrowLeft } from 'lucide-react' +import { ArrowLeft, Plus, Server, Trash2 } from 'lucide-react' import { useEffect, useMemo, useState } from 'react' import { Link, useNavigate, useSearchParams } from 'react-router-dom' import { toast } from 'sonner' @@ -19,6 +31,221 @@ import { toast } from 'sonner' // Create a custom nanoid with lowercase alphanumeric characters const generateId = customAlphabet('0123456789abcdefghijklmnopqrstuvwxyz', 4) +/** Service types that support HA cluster topology */ +const CLUSTER_SERVICE_TYPES: ServiceTypeRoute[] = ['postgres'] + +/** Default cluster roles for each service type */ +const DEFAULT_CLUSTER_ROLES: Record = { + postgres: ['monitor', 'primary', 'replica'], +} + +const ROLE_DESCRIPTIONS: Record = { + monitor: 'pg_auto_failover monitor — coordinates failover', + primary: 'Read-write primary node', + replica: 'Read-only hot standby', +} + +function ClusterMemberConfig({ + members, + 
onMembersChange, + nodes, + serviceType, +}: { + members: ClusterMemberRequest[] + onMembersChange: (members: ClusterMemberRequest[]) => void + nodes: NodeInfoResponse[] + serviceType: string +}) { + const roles = DEFAULT_CLUSTER_ROLES[serviceType] || [] + + const addMember = () => { + // Default to replica if we already have all required roles + const hasMonitor = members.some((m) => m.role === 'monitor') + const hasPrimary = members.some((m) => m.role === 'primary') + const defaultRole = !hasMonitor + ? 'monitor' + : !hasPrimary + ? 'primary' + : 'replica' + onMembersChange([...members, { role: defaultRole, node_id: null }]) + } + + const removeMember = (index: number) => { + onMembersChange(members.filter((_, i) => i !== index)) + } + + const updateMember = ( + index: number, + field: keyof ClusterMemberRequest, + value: string | number | null + ) => { + const updated = [...members] + if (field === 'node_id') { + updated[index] = { + ...updated[index], + node_id: value === null ? null : Number(value), + } + } else { + updated[index] = { ...updated[index], [field]: value as string } + } + onMembersChange(updated) + } + + // Validation: warn about missing required roles + const hasMonitor = members.some((m) => m.role === 'monitor') + const hasPrimary = members.some((m) => m.role === 'primary') + const hasReplica = members.some((m) => m.role === 'replica') + const allHaveNodes = members.every((m) => m.node_id !== null) + + return ( +
+
+
+ +

+ Assign each member to a different node for true HA +

+
+ +
+ + {members.length === 0 && ( +
+ No members configured. Add at least a monitor, primary, and replica. +
+ )} + +
+ {members.map((member, index) => ( +
+
+ {index + 1} +
+ +
+
+ + + {ROLE_DESCRIPTIONS[member.role] && ( +

+ {ROLE_DESCRIPTIONS[member.role]} +

+ )} +
+ +
+ + +
+
+ + +
+ ))} +
+ + {members.length > 0 && (!hasMonitor || !hasPrimary || !hasReplica) && ( +
+ A PostgreSQL cluster requires at least:{' '} + + 1 monitor + + ,{' '} + + 1 primary + + ,{' '} + + 1 replica + +
+ )} + + {members.length >= 3 && + hasMonitor && + hasPrimary && + hasReplica && + !allHaveNodes && ( +
+ For true high availability, assign each member to a different node. + Members on the control plane share the same machine. +
+ )} + + {members.length >= 3 && hasMonitor && hasPrimary && hasReplica && allHaveNodes && ( +
+ Cluster configuration looks good. Members will communicate via their + private addresses. +
+ )} +
+ ) +} + export function CreateService() { const navigate = useNavigate() const [searchParams] = useSearchParams() @@ -31,6 +258,54 @@ export function CreateService() { ) const [serviceName, setServiceName] = useState(defaultName) + const supportsCluster = useMemo( + () => + serviceType !== null && + CLUSTER_SERVICE_TYPES.includes(serviceType as ServiceTypeRoute), + [serviceType] + ) + const [topology, setTopology] = useState<'standalone' | 'cluster'>( + 'standalone' + ) + const [clusterMembers, setClusterMembers] = useState( + [] + ) + + // Fetch available nodes to determine if cluster topology can be offered + const { data: nodesResponse } = useQuery({ + ...adminListNodesOptions(), + enabled: supportsCluster, + }) + const nodes = useMemo( + () => + (nodesResponse?.nodes ?? []).filter( + (n: NodeInfoResponse) => n.status === 'active' + ), + [nodesResponse] + ) + const hasWorkerNodes = useMemo(() => nodes.length > 0, [nodes]) + + // Reset to standalone if no worker nodes are available + useEffect(() => { + if (!hasWorkerNodes && topology === 'cluster') { + setTopology('standalone') + } + }, [hasWorkerNodes, topology]) + + // When switching to cluster topology, pre-populate default members + useEffect(() => { + if (topology === 'cluster' && clusterMembers.length === 0 && serviceType) { + const defaultRoles = DEFAULT_CLUSTER_ROLES[serviceType] + if (defaultRoles) { + setClusterMembers( + defaultRoles.map((role) => ({ role, node_id: null })) + ) + } + } + if (topology === 'standalone') { + setClusterMembers([]) + } + }, [topology, serviceType]) useEffect(() => { setBreadcrumbs([ @@ -65,7 +340,11 @@ export function CreateService() { errorTitle: 'Failed to create service', }, onSuccess: (data) => { - toast.success('Service created successfully') + if (data.status === 'creating') { + toast.success('Cluster creation started — tracking progress...') + } else { + toast.success('Service created successfully') + } navigate(`/storage/${data.id}`) }, }) @@ -89,11 +368,22 @@ 
export function CreateService() { } }) + // For cluster topology, remove standalone-only params so the backend uses HA defaults + if (topology === 'cluster') { + delete cleanedParameters['docker_image'] + delete cleanedParameters['host'] + delete cleanedParameters['port'] + } + await createServiceMut.mutateAsync({ body: { service_type: serviceType as ServiceTypeRoute, name: serviceName, parameters: cleanedParameters, + ...(topology === 'cluster' && { + topology: 'cluster', + members: clusterMembers, + }), }, }) } @@ -198,6 +488,69 @@ export function CreateService() {

+ {/* Topology Selector (only for service types that support clustering AND when worker nodes exist) */} + {supportsCluster && hasWorkerNodes && ( +
+
+ +

+ Choose standalone for a single instance, or cluster for + high availability with automatic failover +

+
+
+ + +
+ + {topology === 'cluster' && ( + <> +

+ Docker image will be set to{' '} + + gotempsh/postgres-ha:18-bookworm + {' '} + automatically (includes pg_auto_failover). +

+ + + )} +
+ )} + {/* JSON Schema Form for Parameters */} navigate('/storage')} submitText="Create Service" isSubmitting={createServiceMut.isPending} + hiddenFields={ + topology === 'cluster' + ? ['host', 'port', 'docker_image'] + : [] + } />
diff --git a/web/src/pages/Dashboard.tsx b/web/src/pages/Dashboard.tsx index 9d1bf0ca..25877ef1 100644 --- a/web/src/pages/Dashboard.tsx +++ b/web/src/pages/Dashboard.tsx @@ -13,6 +13,7 @@ import { Badge } from '@/components/ui/badge' import { Button } from '@/components/ui/button' import { useBreadcrumbs } from '@/contexts/BreadcrumbContext' import { useDashboardAnalytics } from '@/hooks/useDashboardAnalytics' +import { useDashboardHealth } from '@/hooks/useDashboardHealth' import { usePageTitle } from '@/hooks/usePageTitle' import { useQuery } from '@tanstack/react-query' import { subDays } from 'date-fns' @@ -116,6 +117,9 @@ export function Dashboard() { endDate ) + // Batch fetch health summaries for all visible projects + const dashboardHealth = useDashboardHealth(projectIds) + // Fetch general stats const generalStatsQuery = useQuery({ ...getGeneralStatsOptions({ @@ -311,6 +315,11 @@ export function Dashboard() { } analyticsLoading={dashboardAnalytics.isLoading} analyticsError={dashboardAnalytics.isError} + health={ + dashboardHealth.data?.projects?.[ + String(project.id) + ] + } /> ))}
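Editor's note on the `useAnalyticsDateRange` hook added above: the quick-filter → date-range mapping is essentially a pure function. A minimal sketch of that logic follows (simplified and illustrative — it omits the `custom` `DateRange` branch and the `date-fns` dependency, and copies dates before mutating them rather than leaning on `setHours` mutating `now` in place):

```typescript
// Sketch of getDateRangeFromFilter's quick-filter logic (names illustrative).
type QuickFilter = 'today' | 'yesterday' | '24hours' | '7days' | '30days'

function subDays(date: Date, days: number): Date {
  const d = new Date(date)
  d.setDate(d.getDate() - days)
  return d
}

function startOfDay(date: Date): Date {
  const d = new Date(date)
  d.setHours(0, 0, 0, 0)
  return d
}

function endOfDay(date: Date): Date {
  const d = new Date(date)
  d.setHours(23, 59, 59, 999)
  return d
}

function rangeFor(
  filter: QuickFilter,
  now: Date = new Date()
): { startDate: Date; endDate: Date } {
  switch (filter) {
    case 'today':
      return { startDate: startOfDay(now), endDate: endOfDay(now) }
    case 'yesterday': {
      const y = subDays(now, 1)
      return { startDate: startOfDay(y), endDate: endOfDay(y) }
    }
    case '24hours': {
      const start = new Date(now)
      start.setHours(start.getHours() - 24)
      return { startDate: start, endDate: now }
    }
    case '7days':
      return { startDate: subDays(now, 7), endDate: now }
    case '30days':
      return { startDate: subDays(now, 30), endDate: now }
    default:
      // Mirrors the hook's fallback: last 7 days.
      return { startDate: subDays(now, 7), endDate: now }
  }
}
```

Copying into fresh `Date` objects avoids the subtle footgun in the original `today` branch, where `now.setHours(...)` mutates the shared `now` instance between computing `startDate` and `endDate` (correct today, but fragile under refactoring).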
diff --git a/web/src/pages/EditNotificationProvider.tsx b/web/src/pages/EditNotificationProvider.tsx index d29a1905..763fd31e 100644 --- a/web/src/pages/EditNotificationProvider.tsx +++ b/web/src/pages/EditNotificationProvider.tsx @@ -63,7 +63,7 @@ export function EditNotificationProvider() { if (provider) { setBreadcrumbs([ { label: 'Monitoring & Alerts', href: '/monitoring' }, - { label: 'Providers', href: '/settings/notifications' }, + { label: 'Providers', href: '/monitoring/notifications' }, { label: provider.name || 'Edit Provider' }, ]) } @@ -145,7 +145,7 @@ export function EditNotificationProvider() { onSuccess: () => { toast.success('Email provider updated successfully') queryClient.invalidateQueries({ queryKey: ['getNotificationProviders'] }) - navigate('/settings/notifications') + navigate('/monitoring/notifications') }, }) @@ -157,7 +157,7 @@ export function EditNotificationProvider() { onSuccess: () => { toast.success('Slack provider updated successfully') queryClient.invalidateQueries({ queryKey: ['getNotificationProviders'] }) - navigate('/settings/notifications') + navigate('/monitoring/notifications') }, }) @@ -169,7 +169,7 @@ export function EditNotificationProvider() { onSuccess: () => { toast.success('Webhook provider updated successfully') queryClient.invalidateQueries({ queryKey: ['getNotificationProviders'] }) - navigate('/settings/notifications') + navigate('/monitoring/notifications') }, }) @@ -190,7 +190,7 @@ export function EditNotificationProvider() { onSuccess: () => { toast.success('Provider deleted successfully') queryClient.invalidateQueries({ queryKey: ['getNotificationProviders'] }) - navigate('/settings/notifications') + navigate('/monitoring/notifications') }, }) @@ -293,7 +293,7 @@ export function EditNotificationProvider() { Failed to load provider. Please check the ID and try again. - @@ -309,7 +309,7 @@ export function EditNotificationProvider() {
) diff --git a/web/src/pages/EnvironmentsTabsView.tsx b/web/src/pages/EnvironmentsTabsView.tsx index 032c9c08..75fa092b 100644 --- a/web/src/pages/EnvironmentsTabsView.tsx +++ b/web/src/pages/EnvironmentsTabsView.tsx @@ -74,9 +74,9 @@ export function EnvironmentsTabsView({ return (
-
-
-

Environments

+
+
+

Environments

Manage and monitor your environments

@@ -108,10 +108,10 @@ export function EnvironmentsTabsView({ onValueChange={(value) => setSelectedEnvId(parseInt(value))} className="flex flex-col h-full" > -
+
-
-

Environments

+
+

Environments

- +
+ + {environments.map((env) => ( + + {env.name} + + ))} + + { + await createEnv.mutateAsync({ + path: { project_id: project.id || 0 }, + body: values, + }) + }} + /> +
+ {environments.map((env) => ( { const [isLoading, setIsLoading] = useState(false) const [isDemoLoading, setIsDemoLoading] = useState(false) + const { data: publicSettings } = useQuery({ + queryKey: ['public-settings'], + queryFn: async () => { + const res = await fetch('/api/settings/public') + if (!res.ok) return { demo_enabled: false } + return res.json() as Promise<{ demo_enabled: boolean }> + }, + staleTime: 5 * 60 * 1000, + }) const navigate = useNavigate() const queryClient = useQueryClient() const { refetch } = useAuth() @@ -102,29 +111,33 @@ export const Login = () => { isLoading={isLoading || login.isPending} /> -
-
- -
-
- - Or continue with - -
-
+ {publicSettings?.demo_enabled && ( + <> +
+
+ +
+
+ + Or continue with + +
+
- -

- Explore analytics and monitoring with sample data -

+ +

+ Explore analytics and monitoring with sample data +

+ + )}
) diff --git a/web/src/pages/ProjectDetail.tsx b/web/src/pages/ProjectDetail.tsx index d45d7243..d0d080f4 100644 --- a/web/src/pages/ProjectDetail.tsx +++ b/web/src/pages/ProjectDetail.tsx @@ -30,6 +30,8 @@ import { SecurityOverview } from './security/SecurityOverview' import { ScanDetail } from './security/ScanDetail' import { VulnerabilityDetailPage } from './security/VulnerabilityDetailPage' +import { AlertRulesManagement } from '@/components/monitoring/AlertRulesManagement' +import { AlertRuleForm } from '@/pages/AlertRuleForm' import { ErrorAlert } from '@/components/utils/ErrorAlert' import { useBreadcrumbs } from '@/contexts/BreadcrumbContext' import { useAuth } from '@/contexts/AuthContext' @@ -390,6 +392,18 @@ export function ProjectDetail() { path="errors" element={} /> + } + /> + } + /> + } + /> } diff --git a/web/src/pages/Projects.tsx b/web/src/pages/Projects.tsx index 5dbe7748..a3b05292 100644 --- a/web/src/pages/Projects.tsx +++ b/web/src/pages/Projects.tsx @@ -3,6 +3,7 @@ import { useBreadcrumbs } from '@/contexts/BreadcrumbContext' import { useAuth } from '@/contexts/AuthContext' import { useKeyboardShortcut } from '@/hooks/useKeyboardShortcut' import { useDashboardAnalytics } from '@/hooks/useDashboardAnalytics' +import { useDashboardHealth } from '@/hooks/useDashboardHealth' import { usePageTitle } from '@/hooks/usePageTitle' import { ProjectCard } from '@/components/dashboard/ProjectCard' import { ProjectCardSkeleton } from '@/components/skeletons/ProjectCardSkeleton' @@ -100,6 +101,8 @@ export function Projects() { endDate ) + const dashboardHealth = useDashboardHealth(projectIds) + return (
{/* Header */} @@ -224,6 +227,9 @@ export function Projects() { } analyticsLoading={dashboardAnalytics.isLoading} analyticsError={dashboardAnalytics.isError} + health={ + dashboardHealth.data?.projects?.[String(project.id)] + } /> ))} diff --git a/web/src/pages/ServiceDetail.tsx b/web/src/pages/ServiceDetail.tsx index 40ca609d..93aeeb7d 100644 --- a/web/src/pages/ServiceDetail.tsx +++ b/web/src/pages/ServiceDetail.tsx @@ -53,6 +53,7 @@ import { MoreVertical, Pencil, RefreshCcw, + Server, Trash2, } from 'lucide-react' import { useEffect, useState } from 'react' @@ -70,6 +71,7 @@ export function ServiceDetail() { const [isBackupDialogOpen, setIsBackupDialogOpen] = useState(false) const [isStopDialogOpen, setIsStopDialogOpen] = useState(false) const [error, setError] = useState(null) + const [prevStatus, setPrevStatus] = useState(undefined) const [visibleParameters, setVisibleParameters] = useState>( new Set() ) @@ -84,6 +86,10 @@ export function ServiceDetail() { path: { id: parseInt(id!) }, }), enabled: !!id, + refetchInterval: (query) => { + const status = query.state.data?.service?.status + return status === 'creating' ? 
2000 : false + }, }) // Query for environment variables @@ -127,6 +133,19 @@ export function ServiceDetail() { usePageTitle(service?.service?.name || 'Service Details') + // Notify when cluster creation completes or fails + useEffect(() => { + const currentStatus = service?.service?.status + if (prevStatus === 'creating' && currentStatus === 'running') { + toast.success('Cluster created successfully') + } else if (prevStatus === 'creating' && currentStatus === 'failed') { + toast.error('Cluster creation failed') + } + if (currentStatus) { + setPrevStatus(currentStatus) + } + }, [service?.service?.status, prevStatus]) + const startService = useMutation({ ...startServiceMutation(), meta: { @@ -149,6 +168,37 @@ export function ServiceDetail() { }, }) + const retryCluster = useMutation({ + mutationFn: async (options: { + path: { id: number } + body: { members: { role: string; node_id?: number }[] } + }) => { + const response = await fetch( + `/api/external-services/${options.path.id}/retry`, + { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + credentials: 'include', + body: JSON.stringify(options.body), + } + ) + if (!response.ok) { + const error = await response.json().catch(() => ({})) + throw new Error(error.detail || 'Retry failed') + } + return response.json() + }, + onSuccess: () => { + toast.success('Cluster retry initiated') + refetch() + }, + onError: (error: Error) => { + toast.error('Failed to retry cluster', { + description: error.message, + }) + }, + }) + const deleteService = useMutation({ ...deleteServiceMutation(), meta: { @@ -272,6 +322,12 @@ export function ServiceDetail() { /> {service.service.service_type} + {service.service.topology === 'cluster' && ( + + + Cluster + + )}

Created @@ -411,6 +467,154 @@ export function ServiceDetail() { + {/* Cluster Creation Progress */} + {service.service.topology === 'cluster' && + service.service.status === 'creating' && ( + + + + + Creating cluster members... + {' '} + This may take a minute. Members will appear below as they are + provisioned. + + + )} + + {/* Cluster Creation Failed */} + {service.service.topology === 'cluster' && + service.service.status === 'failed' && ( + + + +

+ + Cluster creation failed. + {' '} + {(service.service as Record).error_message + ? String( + (service.service as Record) + .error_message + ) + : 'An unknown error occurred.'} +
+ + + + )} + + {/* Cluster Members Section */} + {service.service.topology === 'cluster' && + service.service.members && + service.service.members.length > 0 && ( + + + + Cluster Members + + {service.service.members.length} + + + + pg_auto_failover cluster nodes + + + +
+ {service.service.members.map((member) => ( +
+
+ {member.status === 'creating' ? ( + + ) : ( + + )} +
+
+ + {member.container_name} + + + {member.role} + +
+
+ {member.hostname && ( + {member.hostname} + )} + {member.port && :{member.port}} + {member.node_id && ( + + (node {member.node_id}) + + )} +
+
+
+ + {member.status === 'creating' && ( + + )} + {member.status} + +
+ ))} +
+
+
+ )} + {/* Service Configuration Section */} diff --git a/web/src/pages/Settings.tsx b/web/src/pages/Settings.tsx index b155b62b..cfd0c2ba 100644 --- a/web/src/pages/Settings.tsx +++ b/web/src/pages/Settings.tsx @@ -24,15 +24,17 @@ import { useUpdateSettings, type PlatformSettings, } from '@/hooks/useSettings' +import { client } from '@/api/client/client.gen' import { AlertCircle, Globe, Image, Link, Loader2, + RefreshCw, Save, } from 'lucide-react' -import { useEffect } from 'react' +import { useEffect, useState } from 'react' import { useForm, useWatch } from 'react-hook-form' import { toast } from 'sonner' @@ -45,6 +47,7 @@ export function Settings() { const { setBreadcrumbs } = useBreadcrumbs() const { data: settings, isLoading, error } = useSettings() const updateSettings = useUpdateSettings() + const [isRefreshingRoutes, setIsRefreshingRoutes] = useState(false) const { register, @@ -250,6 +253,57 @@ export function Settings() { + + + + + Route Table + + + Manually refresh the proxy route table from the database. Use this if + routes appear out of sync after deployments or configuration changes. + + + + + + + {isDirty && (
diff --git a/web/src/pages/Storage.tsx b/web/src/pages/Storage.tsx index d674891b..3ebf0f37 100644 --- a/web/src/pages/Storage.tsx +++ b/web/src/pages/Storage.tsx @@ -31,6 +31,7 @@ export function Storage() { const [searchParams, setSearchParams] = useSearchParams() const [isEditDialogOpen, setIsEditDialogOpen] = useState(false) const [selectedService, setSelectedService] = useState(null) + const [isCreateDropdownOpen, setIsCreateDropdownOpen] = useState(false) // Get active tab from URL or default to 'external' const activeTab = searchParams.get('tab') || 'external' @@ -52,8 +53,11 @@ export function Storage() { setBreadcrumbs([{ label: 'Storage', href: '/storage' }]) }, [setBreadcrumbs]) - // Keyboard shortcut: N to create new service (navigate to create page) - useKeyboardShortcut({ key: 'n', path: '/storage/create' }) + // Keyboard shortcut: N to open the create service dropdown + useKeyboardShortcut({ + key: 'n', + callback: () => setIsCreateDropdownOpen(true), + }) usePageTitle('Storage') @@ -114,7 +118,11 @@ export function Storage() {
refetch()} /> - refetch()} /> + refetch()} + open={isCreateDropdownOpen} + onOpenChange={setIsCreateDropdownOpen} + />
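The `useEffect` added to ServiceDetail.tsx earlier in this diff fires a toast only on the `creating → running` and `creating → failed` transitions, tracking the previous status in state. That transition check boils down to a pure function, sketched here for clarity — `clusterTransitionMessage` is a hypothetical helper name, not part of the change:

```typescript
type ToastMessage = { kind: 'success' | 'error'; text: string }

// Map a cluster status transition to the toast that should fire, if any.
// Only transitions out of 'creating' are interesting; everything else,
// including the initial render (where prev is undefined), is a no-op.
function clusterTransitionMessage(
  prev: string | undefined,
  current: string | undefined
): ToastMessage | null {
  if (prev === 'creating' && current === 'running') {
    return { kind: 'success', text: 'Cluster created successfully' }
  }
  if (prev === 'creating' && current === 'failed') {
    return { kind: 'error', text: 'Cluster creation failed' }
  }
  return null
}
```

Keeping the transition logic pure like this makes it easy to unit-test without mounting the component; the effect body then reduces to calling the helper and dispatching `toast.success` / `toast.error` on a non-null result.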