diff --git a/README.md b/README.md index 46459e1..be0e4dd 100644 --- a/README.md +++ b/README.md @@ -17,8 +17,9 @@ The public JS package lives in `js/packages/core` as `@cynos/core`. The Rust cra - `observe()`: cached query execution, callback receives the full current result set when it changes. - `changes()`: cached query execution, callback receives the full current result set immediately and on later changes. - `trace()`: incremental view maintenance (IVM), callback receives `{ added, removed }` deltas for incrementalizable queries. +- A shared live runtime abstraction in `cynos-database` that batches table changes once and fans them out to row subscriptions and GraphQL subscriptions across snapshot and delta backends. - Prepared query handles via `prepare()`, which reuse the compiled physical plan and expose `exec()`, `execBinary()`, and `getSchemaLayout()` for repeated execution. -- A Rust/WASM-first GraphQL adapter via `cynos-gql`, including generated SDL, root `query` / `mutation` / `subscription` fields, prepared GraphQL operations, and planner-backed root-table execution for `where` / `orderBy` / `limit` / `offset`. +- A Rust/WASM-first GraphQL adapter via `cynos-gql`, including generated SDL, root `query` / `mutation` / `subscription` fields, prepared GraphQL operations, planner-backed root-table execution for `where` / `orderBy` / `limit` / `offset`, and batched nested relation rendering to avoid in-memory N+1 work during live payload assembly. - Compiled single-table execution fast paths that can fuse scan/filter/project work and apply row-local reactive patches for simple subscriptions instead of always re-running the full query. - Binary query results via `execBinary()` + `getSchemaLayout()` + `ResultSet` for low-overhead WASM-to-JS transfer. - JSONB building blocks including a compact binary codec, a JSONPath subset parser/evaluator, JSONB operators, and extraction helpers for GIN indexing. 
@@ -115,6 +116,8 @@ Application code Operationally there are two query delivery paths: +Both paths sit under one live runtime control plane in `cynos-database`: row subscriptions and GraphQL subscriptions share the same change registry / flush machinery, while the GraphQL layer stays an adapter that selects the snapshot or delta kernel per query shape. + 1. Cached execution path: - `SelectBuilder` produces a logical plan. - `cynos-query` rewrites and lowers it to a physical plan plus a cached `PlanExecutionArtifact`. @@ -343,6 +346,8 @@ const prepared = db.prepareGraphql( const sub = prepared.subscribe(); ``` +Runtime note: GraphQL subscriptions compile the root field into the existing planner path, then choose a snapshot/re-query or delta/IVM backend depending on query shape. Nested relation payloads are assembled with Rust-side batching so multi-level relations do not degrade into row-by-row in-memory N+1 fetch patterns. + ### Filters, ordering, and scalars The generated schema includes table-specific inputs such as: @@ -394,11 +399,10 @@ Current scalar mapping: The current GraphQL surface is intentionally focused on table access and live queries. Today it does not support: - fragments -- directives - full GraphQL introspection - multi-root subscriptions -`__typename` is supported, but GraphQL subscriptions must select exactly one concrete root field. +`@include`, `@skip`, and `__typename` are supported, but GraphQL subscriptions must select exactly one concrete root field. ## Reactive Modes @@ -625,7 +629,7 @@ Notes: - Cynos is in-memory only. There is no durable on-disk storage engine yet. - Transactions are journaled commit/rollback over in-memory state; this is not durable storage in the traditional ACID database sense. - `trace()` only works for plans that the physical planner can lower to incremental dataflow. 
-- The GraphQL layer is currently a focused table/query subset: fragments, directives, and full introspection are not implemented yet, and subscriptions currently require a single concrete root field. +- The GraphQL layer is currently a focused table/query subset: fragments, full introspection, and multi-root subscriptions are not implemented yet, and subscriptions currently require a single concrete root field. - Storage/query integration currently materializes B+Tree and GIN indexes from schema definitions. A standalone hash index implementation also exists in the workspace, but it is not the default secondary-index path in `RowStore` today. - JavaScript `Int64` values are exposed through JS-friendly paths as numbers, so values outside the safe integer range lose precision unless the calling pattern is designed around that limitation. diff --git a/crates/database/live-runtime-unification.md b/crates/database/live-runtime-unification.md new file mode 100644 index 0000000..ad53c6b --- /dev/null +++ b/crates/database/live-runtime-unification.md @@ -0,0 +1,758 @@ +# Live Runtime Unification Design + +Status: proposed +Owner: cynos-database / cynos-gql +Scope: `crates/database`, `crates/gql`, `js/packages/core` + +## 1. Context + +Cynos currently exposes four live-oriented APIs: + +- `observe()` +- `changes()` +- `trace()` +- `subscribeGraphql()` / `PreparedGraphqlQuery.subscribe()` + +At the execution-kernel level, these APIs are backed by two different mechanisms: + +1. Re-query / cached-plan / reactive-patch +2. 
DBSP-style incremental view maintenance (`trace()` / dataflow / delta propagation) + +At the runtime/control-plane level, however, the implementation has already started to diverge into three live shapes: + +- SQL rows snapshot live (`observe()` / `changes()`) +- SQL rows delta live (`trace()`) +- GraphQL payload snapshot live (`subscribeGraphql()`) + +This split is visible in the current code layout: + +- `crates/database/src/query_builder.rs` + - `observe()` / `changes()` build `ReQueryObservable` + - `trace()` builds `ObservableQuery` +- `crates/database/src/database.rs` + - `subscribeGraphql()` binds GraphQL and creates a GraphQL-specific subscription path +- `crates/database/src/reactive_bridge.rs` + - contains SQL re-query runtime pieces + - contains IVM registration path + - currently also contains GraphQL-specific subscription runtime pieces +- `crates/gql/src/plan.rs` + - lowers GraphQL root fields to existing planner-backed logical plans +- `crates/gql/src/execute.rs` + - renders rows into GraphQL payload trees +- `crates/gql/src/bind.rs` + - now also knows how to collect GraphQL dependency tables from nested selections + +This works, but if left unchecked it will gradually produce an expensive maintenance shape: + +- duplicate invalidation logic +- duplicate subscription lifecycle management +- duplicate flush/GC behavior +- duplicated performance work across SQL and GraphQL live paths +- pressure to create a third de facto runtime instead of keeping GraphQL as an upper-layer bridge + +The long-term direction should be: + +- exactly two live execution kernels +- one shared live runtime control plane +- SQL and GraphQL implemented as adapters on top of the shared runtime +- GraphQL able to choose `Snapshot` or `Delta` backend per query shape + +## 2. 
Problem Statement + +We need a unified live abstraction that satisfies all of the following: + +- does not regress `observe()` / `changes()` performance +- does not regress `trace()` / DBSP-IVM performance +- does not break existing unit, wasm, browser, or performance tests +- moves GraphQL to a bridge/compiler/adapter role instead of an independent runtime role +- allows GraphQL live execution to select `requery` or `ivm` backend per query +- preserves correctness for nested GraphQL relations and directive-pruned selections + +The key architectural constraint is: + +- unify the control plane +- do not flatten the hot execution paths into one generic slow abstraction + +## 3. Goals + +### 3.1 Primary goals + +- Introduce a shared live runtime control plane for all live APIs. +- Preserve exactly two execution kernels: + - `SnapshotKernel` + - `DeltaKernel` +- Move GraphQL live from a dedicated runtime lane to an adapter-driven model. +- Add a backend selector for GraphQL live queries. +- Keep current API surfaces stable. + +### 3.2 Secondary goals + +- Reduce long-term maintenance risk by centralizing: + - dependency registration + - pending-change batching + - flush scheduling + - subscription lifecycle + - keepalive/GC +- Make backend selection explicit and testable. +- Establish a capability matrix for GraphQL-on-IVM instead of ad hoc special cases. + +## 4. 
Non-Goals + +The initial unification is not required to deliver all of the following on day one: + +- full GraphQL tree delta patch output to JS +- IVM support for all GraphQL query shapes +- multi-root GraphQL subscriptions +- GraphQL fragment support +- GraphQL payload delta protocol (`path/op/value`) for JS consumers + +The first complete version may legitimately do this: + +- unify runtime control flow immediately +- allow GraphQL to select `SnapshotKernel` or `DeltaKernel` +- have `DeltaKernel` support only a restricted GraphQL subset at first +- automatically fall back to `SnapshotKernel` for the rest + +## 5. Design Principles + +1. Control-plane unification, hot-path specialization + - shared registry and lifecycle management + - specialized snapshot and delta kernels + +2. Cold-path selection, not hot-path guessing + - backend selection happens at compile/register time + - no per-row or per-delta backend branching in hot loops + +3. Strongly typed kernel plans + - avoid trait-object-heavy execution in hot paths + - prefer enums and concrete structs for runtime dispatch boundaries + +4. GraphQL remains a semantic adapter + - GraphQL owns selection semantics and payload assembly + - GraphQL does not own an independent live runtime + +5. Performance gates are first-class requirements + - benchmark regressions fail the work + - compatibility tests remain green throughout + +## 6. 
Current-State Summary + +### 6.1 SQL snapshot live + +Current path: + +- `SelectBuilder.observe()` in `crates/database/src/query_builder.rs` +- compiles `LogicalPlan` -> `CompiledPhysicalPlan` +- initializes `ReQueryObservable` +- registers dependencies into `QueryRegistry` +- `changes()` simply wraps `observe()` into `JsChangesStream` + +Properties: + +- full current-result snapshot delivery +- re-query on change, with row-local patch fast path when available +- performance-sensitive and already tuned + +### 6.2 SQL delta live + +Current path: + +- `SelectBuilder.trace()` in `crates/database/src/query_builder.rs` +- compiles physical plan -> dataflow via `compile_to_dataflow()` +- initializes `ObservableQuery` +- registers with `register_ivm()` + +Properties: + +- delta-oriented +- truly distinct execution kernel +- must not be slowed down by snapshot semantics + +### 6.3 GraphQL live + +Current path: + +- `Database.subscribeGraphql()` in `crates/database/src/database.rs` +- bind GraphQL -> `BoundOperation` +- root field -> planner-backed plan via `crates/gql/src/plan.rs` +- nested payload assembly via `crates/gql/src/execute.rs` +- dependency collection via `crates/gql/src/bind.rs` +- GraphQL subscription runtime currently hosted in `crates/database/src/reactive_bridge.rs` + +Properties: + +- root query can reuse planner/cached-plan machinery +- nested relations currently require payload re-rendering semantics +- GraphQL already differs from SQL live in payload shape, not just transport shape + +## 7. 
Target Architecture + +### 7.1 High-level structure + +```text +API Surfaces +├─ observe() / changes() +├─ trace() +└─ subscribeGraphql() + +Shared Live Runtime Control Plane +├─ LivePlan +├─ LiveRegistry +├─ LiveHandle / subscription lifecycle +├─ pending change batching +├─ flush scheduling +└─ GC / keepalive + +Execution Kernels +├─ SnapshotKernel (requery / cached-plan / patch) +└─ DeltaKernel (DBSP-IVM / ObservableQuery / dataflow) + +Adapters +├─ RowsSnapshotAdapter +├─ RowsDeltaAdapter +├─ GraphqlSnapshotAdapter +└─ GraphqlDeltaAdapter +``` + +### 7.2 Rule of ownership + +- kernels own execution +- runtime owns lifecycle and routing +- adapters own output shaping +- GraphQL compiler owns semantic analysis and backend selection + +## 8. Proposed Core Types + +### 8.1 `LiveEngineKind` + +```rust +pub enum LiveEngineKind { + Snapshot, + Delta, +} +``` + +### 8.2 `LiveOutputKind` + +```rust +pub enum LiveOutputKind { + RowsSnapshot, + RowsDelta, + GraphqlSnapshot, + GraphqlDelta, +} +``` + +`GraphqlDelta` is included in the design even if the first complete implementation only emits full GraphQL payload snapshots for delta-backed subscriptions. + +### 8.3 `LiveDependencySet` + +```rust +pub struct LiveDependencySet { + pub tables: Vec<TableId>, + pub root_tables: Vec<TableId>, +} +``` + +Notes: + +- `tables` is the complete invalidation set. +- `root_tables` is specifically needed by GraphQL snapshot adapters to distinguish: + - root-row maintenance events + - nested relation invalidation events + +### 8.4 `KernelPlan` + +```rust +pub enum KernelPlan { + Snapshot(SnapshotPlan), + Delta(DeltaPlan), +} +``` + +`SnapshotPlan` should wrap existing planner-backed artifacts directly. +`DeltaPlan` should wrap existing dataflow compilation output directly. 
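The root/non-root split in `LiveDependencySet` is what lets a GraphQL snapshot adapter skip root re-execution for relation-only changes. A minimal illustrative sketch of that classification — assuming an integer `TableId`; not the real implementation:

```rust
// Illustrative sketch: how a GraphQL snapshot adapter could classify an
// invalidation event using the LiveDependencySet root/non-root split.
// `TableId = u32` is an assumption for the sketch.

pub type TableId = u32;

pub struct LiveDependencySet {
    pub tables: Vec<TableId>,      // complete invalidation set
    pub root_tables: Vec<TableId>, // tables backing the root field
}

#[derive(Debug, PartialEq)]
pub enum Invalidation {
    /// Root rows may have changed: try the row patch, else re-execute root.
    RootRows,
    /// Only nested relations changed: re-render payload from cached roots.
    RelationOnly,
    /// Not a dependency of this subscription: ignore.
    Unrelated,
}

pub fn classify(deps: &LiveDependencySet, changed: TableId) -> Invalidation {
    if deps.root_tables.contains(&changed) {
        Invalidation::RootRows
    } else if deps.tables.contains(&changed) {
        Invalidation::RelationOnly
    } else {
        Invalidation::Unrelated
    }
}
```

The point of keeping this a data-level distinction is that the adapter can make the root-vs-relation decision without re-deriving anything from the GraphQL selection tree at flush time.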
+ +### 8.5 `AdapterPlan` + +```rust +pub enum AdapterPlan { + RowsSnapshot(RowsSnapshotAdapterPlan), + RowsDelta(RowsDeltaAdapterPlan), + GraphqlSnapshot(GraphqlSnapshotAdapterPlan), + GraphqlDelta(GraphqlDeltaAdapterPlan), +} +``` + +This plan is built once and used to instantiate concrete live subscriptions. + +### 8.6 `LivePlan` + +```rust +pub struct LivePlan { + pub engine: LiveEngineKind, + pub output: LiveOutputKind, + pub dependencies: LiveDependencySet, + pub kernel: KernelPlan, + pub adapter: AdapterPlan, +} +``` + +## 9. Shared Runtime Control Plane + +### 9.1 `LiveRegistry` + +`LiveRegistry` replaces the current role split inside `QueryRegistry` without flattening the kernels into one path. + +Responsibilities: + +- register subscriptions against dependency tables +- accumulate pending changes by table +- schedule microtask flushes in wasm +- perform synchronous flushes in tests/native paths +- drop dead subscriptions +- coordinate keepalive semantics + +Internal shape should still preserve two execution lanes: + +- snapshot lane +- delta lane + +Important detail: + +- unification happens at registration/routing/flush boundaries +- not inside the hot row-processing or delta-propagation loops + +### 9.2 Subscription lifecycle + +The runtime should define a shared subscription-handle model: + +- `LiveHandle` +- keepalive subscription +- unsubscribe function generation +- active subscription counting +- GC after flush + +This consolidates the currently duplicated lifecycle behavior across SQL and GraphQL live code. + +### 9.3 Flush protocol + +A single flush cycle should do this: + +1. drain pending deltas/changes +2. dispatch to delta lane +3. dispatch to snapshot lane +4. run adapter-level diffing and notification +5. GC dead subscriptions + +The exact ordering may be tuned, but it must remain deterministic and benchmarked. + +## 10. SnapshotKernel + +### 10.1 Scope + +`SnapshotKernel` is the formalized home for the current re-query runtime. 
+ +It should directly preserve the current performance-critical pieces: + +- `CompiledPhysicalPlan` +- `execute_compiled_physical_plan_with_summary()` +- `apply_reactive_patch()` +- row-summary comparison + +### 10.2 Expected behavior + +For SQL rows adapters: + +- no semantic change from current `observe()` / `changes()` behavior + +For GraphQL adapters: + +- root-table changes: + - try root rows patch + - if unsupported, re-execute root query +- non-root dependency changes: + - do not force root query re-execution + - re-render GraphQL payload from cached root rows +- notify only if final payload changes + +### 10.3 Performance contract + +The current fast path for SQL snapshot live must remain intact. +No GraphQL-specific logic may leak into the SQL rows hot path. + +## 11. DeltaKernel + +### 11.1 Scope + +`DeltaKernel` is the formalized home for the current `trace()` / DBSP-IVM runtime. + +It should directly preserve: + +- `compile_to_dataflow()` +- `ObservableQuery` +- current delta propagation path +- `register_ivm()`-style dependency extraction + +### 11.2 Expected behavior + +For SQL rows delta adapters: + +- identical semantics to current `trace()` + +For GraphQL delta adapters: + +- only enabled when both relational and GraphQL payload capability checks succeed +- otherwise not instantiated at all; the selector falls back to `SnapshotKernel` + +### 11.3 Performance contract + +The current `trace()` / IVM path must not pay for snapshot or GraphQL logic in its delta hot loops. + +## 12. 
Adapter Layer + +### 12.1 `RowsSnapshotAdapter` + +Used by: + +- `observe()` +- `changes()` + +Responsibilities: + +- expose current rows snapshot +- deliver rows snapshot callbacks +- preserve current `JsObservableQuery` / `JsChangesStream` behavior + +### 12.2 `RowsDeltaAdapter` + +Used by: + +- `trace()` + +Responsibilities: + +- expose `{ added, removed }` +- preserve current `JsIvmObservableQuery` behavior + +### 12.3 `GraphqlSnapshotAdapter` + +Used by: + +- GraphQL subscriptions that fall back to `SnapshotKernel` +- potentially all GraphQL subscriptions in the first unification step + +Responsibilities: + +- render root rows into GraphQL payload tree +- differentiate root-table vs non-root-table invalidation +- reuse cached root rows whenever possible +- maintain payload summary and exact equality fallback +- emit full GraphQL payload snapshots + +### 12.4 `GraphqlDeltaAdapter` + +Used by: + +- GraphQL subscriptions eligible for `DeltaKernel` + +Responsibilities: + +- consume relational deltas +- maintain GraphQL node/bucket state incrementally +- update only affected subtrees internally +- emit full GraphQL payload snapshots externally in v1 +- optionally evolve into true GraphQL delta output in a future phase + +## 13. GraphQL Live Compiler + +GraphQL should move toward a compiler role, not a runtime-owner role. 
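A minimal sketch of what that compiler boundary could emit. The names here (`GraphqlCapabilities`, `compile_graphql_live`) are illustrative assumptions, not the real `cynos-gql` API; the essential property is that the backend choice is a cold-path decision made once, at compile/register time:

```rust
// Illustrative sketch of the compiler-role boundary: capability analysis
// in, explicit backend choice out. Hot loops never re-evaluate this.

#[derive(Debug, PartialEq, Clone, Copy)]
pub enum LiveEngineKind {
    Snapshot,
    Delta,
}

/// Results of the two independent capability checks.
pub struct GraphqlCapabilities {
    pub relational_incrementalizable: bool,
    pub graphql_payload_incrementalizable: bool,
}

/// Stand-in for the compiler output: the backend choice plus whatever
/// plan pieces the adapters need (omitted here).
pub struct GraphqlLivePlan {
    pub engine: LiveEngineKind,
}

pub fn compile_graphql_live(caps: &GraphqlCapabilities) -> GraphqlLivePlan {
    let engine = if caps.relational_incrementalizable && caps.graphql_payload_incrementalizable {
        LiveEngineKind::Delta
    } else {
        LiveEngineKind::Snapshot // conservative fallback
    };
    GraphqlLivePlan { engine }
}
```

Because the decision is a pure function of the capability analysis, backend-selection tests can assert on it directly without spinning up a live runtime.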
+ +### 13.1 Inputs + +- bound GraphQL operation +- GraphQL catalog +- table id map +- schema cache state + +### 13.2 Outputs + +A `GraphqlLivePlan` that contains: + +- root field analysis +- dependency set from the full selection tree +- root relational plan candidate +- GraphQL payload adapter plan +- backend selection result + +### 13.3 Existing code that can be reused + +- root planner lowering from `crates/gql/src/plan.rs` +- dependency collection from `crates/gql/src/bind.rs` +- payload rendering semantics from `crates/gql/src/execute.rs` + +### 13.4 Selector split + +The selector should be explicit and testable: + +- `relational_incrementalizable` +- `graphql_payload_incrementalizable` + +Backend choice rule: + +```text +if relational_incrementalizable && graphql_payload_incrementalizable: + use DeltaKernel +else: + use SnapshotKernel +``` + +## 14. GraphQL IVM Capability Matrix (v1) + +The first version must be strict. +It is better to miss some IVM opportunities than to produce an unstable or slow hybrid path. + +### 14.1 Eligible in v1 + +- exactly one concrete root subscription field +- root field is planner-lowerable query/subscription read +- root relational plan is incrementalizable by current dataflow compiler +- selections consist of: + - scalar columns + - aliases + - `__typename` + - directive-pruned fields (`@include` / `@skip` already resolved during binding) + - simple FK-backed relations +- nested relations do not use unsupported per-parent pagination semantics +- nested filters are within the current incrementalizable predicate subset +- ordering is absent or within a specifically supported stable subset + +### 14.2 Fallback to SnapshotKernel in v1 + +- multi-root subscriptions +- unsupported directives +- unsupported nested ordering/pagination +- unsupported nested filter shapes +- any GraphQL tree shape whose incremental payload maintenance is not explicitly implemented + +## 15. 
Data Flow by Surface + +### 15.1 `observe()` / `changes()` + +```text +SelectBuilder + -> LivePlan(engine = Snapshot, output = RowsSnapshot) + -> SnapshotKernel + -> RowsSnapshotAdapter + -> JsObservableQuery / JsChangesStream +``` + +### 15.2 `trace()` + +```text +SelectBuilder + -> LivePlan(engine = Delta, output = RowsDelta) + -> DeltaKernel + -> RowsDeltaAdapter + -> JsIvmObservableQuery +``` + +### 15.3 GraphQL fallback snapshot path + +```text +subscribeGraphql() + -> bind GraphQL + -> GraphqlLiveCompiler + -> selector chooses Snapshot + -> LivePlan(engine = Snapshot, output = GraphqlSnapshot) + -> SnapshotKernel (root rows) + -> GraphqlSnapshotAdapter (payload tree) + -> JsGraphqlSubscription +``` + +### 15.4 GraphQL eligible delta path + +```text +subscribeGraphql() + -> bind GraphQL + -> GraphqlLiveCompiler + -> selector chooses Delta + -> LivePlan(engine = Delta, output = GraphqlSnapshot/GraphqlDelta) + -> DeltaKernel (relational delta) + -> GraphqlDeltaAdapter (tree maintenance) + -> JsGraphqlSubscription +``` + +## 16. Migration Plan + +The target architecture should be delivered through bounded implementation steps, but the intended end-state is a single unified runtime model. 
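That end-state can be modeled in miniature. The sketch below is an illustrative toy, not the real `cynos-database` registry: it shows the batch-then-flush contract the shared control plane is expected to keep — table-keyed registration, batched pending changes, one deterministic flush that fires each affected subscription once, and post-flush GC:

```rust
// Toy model of the shared live-registry contract. Real kernels/adapters
// are replaced by bare subscription ids; only the control flow is shown.
use std::collections::{HashMap, HashSet};

pub type TableId = u32;
pub type SubId = u64;

#[derive(Default)]
pub struct LiveRegistry {
    subs_by_table: HashMap<TableId, Vec<SubId>>,
    pending: HashSet<TableId>, // batched; no per-change notification
    dead: HashSet<SubId>,      // unsubscribed, removed at next flush
}

impl LiveRegistry {
    pub fn register(&mut self, sub: SubId, tables: &[TableId]) {
        for &t in tables {
            self.subs_by_table.entry(t).or_default().push(sub);
        }
    }

    pub fn on_table_change(&mut self, table: TableId) {
        self.pending.insert(table);
    }

    pub fn unsubscribe(&mut self, sub: SubId) {
        self.dead.insert(sub);
    }

    /// One flush cycle: drain pending tables, collect affected live subs
    /// (deduplicated so a sub touched via several tables fires once), GC dead.
    pub fn flush(&mut self) -> Vec<SubId> {
        let pending: Vec<TableId> = self.pending.drain().collect();
        let mut affected: Vec<SubId> = Vec::new();
        for table in pending {
            if let Some(subs) = self.subs_by_table.get(&table) {
                for &s in subs {
                    if !self.dead.contains(&s) && !affected.contains(&s) {
                        affected.push(s);
                    }
                }
            }
        }
        // GC step: physically drop unsubscribed handles after the flush.
        let dead = std::mem::take(&mut self.dead);
        for subs in self.subs_by_table.values_mut() {
            subs.retain(|s| !dead.contains(s));
        }
        affected.sort_unstable(); // deterministic dispatch order
        affected
    }
}
```

In the real runtime the flush would dispatch to the delta lane, then the snapshot lane, then run adapter diffing; the toy collapses all of that into the returned id list.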
+ +### Step 1: Introduce `LivePlan` and shared runtime skeleton + +- add shared types +- add shared registry lifecycle model +- keep existing surfaces working +- do not change kernel hot paths yet + +### Step 2: Move SQL snapshot live onto the shared runtime + +- adapt `observe()` and `changes()` +- preserve `ReQueryObservable` semantics internally as `SnapshotKernel` +- prove zero semantic drift with existing tests + +### Step 3: Move SQL delta live onto the shared runtime + +- adapt `trace()` +- preserve `ObservableQuery` semantics internally as `DeltaKernel` +- keep current incremental performance characteristics intact + +### Step 4: Move GraphQL live to `GraphqlSnapshotAdapter` on unified runtime + +- remove GraphQL-specific runtime ownership +- keep current correctness for nested relation invalidation +- preserve current payload semantics + +### Step 5: Introduce GraphQL backend selector + +- add capability analysis +- wire `Snapshot` vs `Delta` backend decision into GraphQL live compiler +- default to conservative fallback + +### Step 6: Implement first `GraphqlDeltaAdapter` subset + +- restricted query shape only +- preserve full GraphQL payload snapshot API externally +- validate equivalence against snapshot backend and one-shot GraphQL execution + +### Step 7: Remove obsolete GraphQL-specific runtime branches + +- final cleanup in `reactive_bridge.rs` +- GraphQL remains compiler/adapter only + +## 17. 
Testing Plan + +### 17.1 Compatibility tests + +Must remain green: + +- all current Rust unit tests +- all wasm tests +- all browser tests +- all GraphQL tests + +### 17.2 New runtime tests + +Add tests for: + +- unified registry batching semantics +- subscription lifecycle and GC +- identical `observe()` behavior pre/post runtime unification +- identical `trace()` behavior pre/post runtime unification + +### 17.3 GraphQL backend-selection tests + +Add tests that assert a query shape selects: + +- `SnapshotKernel` when expected +- `DeltaKernel` when expected + +### 17.4 GraphQL equivalence tests + +For GraphQL queries eligible for `DeltaKernel`: + +- one-shot `graphql()` result +- snapshot-backed live result +- delta-backed live result + +must match after each mutation sequence. + +### 17.5 Randomized regression tests + +For a fixed schema and random mutation stream: + +- apply random inserts/updates/deletes +- compare live GraphQL state against fresh one-shot execution +- run for both eligible and fallback query shapes + +## 18. Performance Gates + +Performance is a release gate for this work. 
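A no-regression gate ultimately reduces to a median comparison against a noise threshold. The helpers below are an illustrative sketch of that check only — the real gates run through the existing JS benchmark suites listed under practical enforcement:

```rust
// Illustrative benchmark-gate helpers: compute a run's median and compare
// it to a baseline with a relative noise tolerance.

/// Median of a sample set (sorts in place).
pub fn median(samples: &mut [f64]) -> f64 {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = samples.len();
    if n % 2 == 1 {
        samples[n / 2]
    } else {
        (samples[n / 2 - 1] + samples[n / 2]) / 2.0
    }
}

/// A run passes the no-regression gate when its median stays within the
/// noise threshold of the baseline (e.g. `noise_frac = 0.05` for 5%).
pub fn within_noise(baseline_median: f64, candidate_median: f64, noise_frac: f64) -> bool {
    candidate_median <= baseline_median * (1.0 + noise_frac)
}
```

Encoding the gate as a pure comparison keeps "beyond noise threshold" an explicit, reviewable number rather than a judgment call made per benchmark run.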
+ +### 18.1 No-regression gates + +- `observe()` benchmark median must not regress beyond noise threshold +- `changes()` benchmark median must not regress beyond noise threshold +- `trace()` benchmark median must not regress beyond noise threshold +- GraphQL snapshot fallback path must not regress relative to the current implementation + +### 18.2 Positive expectation gates + +- GraphQL delta-backed eligible queries should outperform snapshot fallback on mutation-heavy workloads +- non-eligible GraphQL queries must still benefit from root-row reuse and relation-only re-rendering + +### 18.3 Practical enforcement + +Use existing benchmark/test surfaces where possible: + +- `js/packages/core/tests/performance.test.ts` +- `js/packages/core/tests/live-query-throughput.test.ts` +- `js/packages/core/tests/comprehensive-perf.test.ts` +- `js/packages/core/tests/graphql.test.ts` +- relevant Rust-side unit and wasm tests in `crates/database` and `crates/gql` + +## 19. Risks and Mitigations + +### Risk 1: Over-abstracting the hot path + +Mitigation: + +- keep kernels concrete +- use typed `KernelPlan` enums +- avoid per-row trait-object dispatch + +### Risk 2: GraphQL IVM capability surface is over-claimed + +Mitigation: + +- strict capability matrix +- explicit fallback rules +- contract tests for backend selection + +### Risk 3: Runtime unification accidentally changes subscription lifecycle semantics + +Mitigation: + +- dedicated lifecycle tests +- preserve keepalive/unsubscribe behavior during migration + +### Risk 4: GraphQL remains effectively a third runtime in disguise + +Mitigation: + +- force GraphQL onto shared `LivePlan` / `LiveRegistry` +- keep GraphQL code limited to compiler + adapter modules + +## 20. 
Acceptance Criteria + +This design is considered implemented when all of the following are true: + +- there is one shared live runtime control plane +- there are exactly two execution kernels (`SnapshotKernel`, `DeltaKernel`) +- `observe()` and `changes()` run through the shared runtime and keep their current performance/behavior +- `trace()` runs through the shared runtime and keeps its current performance/behavior +- GraphQL subscriptions no longer own an independent runtime lane +- GraphQL subscriptions are compiled into adapter-backed live plans +- GraphQL backend selection exists and is tested +- unsupported GraphQL-on-IVM shapes correctly fall back to snapshot backend +- all existing tests remain green +- performance benchmarks show no meaningful regressions on current hot paths + +## 21. Long-Term Outcome + +If this design is followed, Cynos ends up with: + +- two kernel implementations, not three +- one shared live runtime, not per-surface lifecycle logic +- GraphQL as a true upper-layer bridge +- a clean path for GraphQL to choose `requery` or `ivm` by query shape +- a sustainable foundation for future GraphQL live work without permanently fragmenting the runtime diff --git a/crates/database/src/database.rs b/crates/database/src/database.rs index 804dc26..fe1d967 100644 --- a/crates/database/src/database.rs +++ b/crates/database/src/database.rs @@ -5,18 +5,19 @@ use crate::binary_protocol::SchemaLayoutCache; use crate::convert::{gql_response_to_js, js_to_gql_variables}; +use crate::dataflow_compiler::compile_to_dataflow; +use crate::live_runtime::{LiveDependencySet, LivePlan, LiveRegistry}; use crate::query_builder::{DeleteBuilder, InsertBuilder, SelectBuilder, UpdateBuilder}; -use crate::reactive_bridge::{JsGraphqlSubscription, QueryRegistry, ReQueryObservable}; +use crate::reactive_bridge::JsGraphqlSubscription; use crate::table::{JsTable, JsTableBuilder}; use crate::transaction::JsTransaction; use alloc::rc::Rc; use alloc::string::{String, ToString}; 
-#[cfg(feature = "benchmark")] use alloc::vec::Vec; use core::cell::RefCell; use cynos_core::Row; -use cynos_incremental::Delta; use cynos_gql::{PreparedQuery as GqlPreparedQuery, SchemaCache as GraphqlSchemaCache}; +use cynos_incremental::Delta; use cynos_query::plan_cache::PlanCache; use cynos_reactive::TableId; use cynos_storage::TableCache; @@ -33,7 +34,7 @@ use wasm_bindgen::prelude::*; pub struct Database { name: String, cache: Rc>, - query_registry: Rc>, + query_registry: Rc>, table_id_map: Rc>>, next_table_id: Rc>, schema_layout_cache: Rc>, @@ -46,7 +47,7 @@ pub struct Database { #[wasm_bindgen] pub struct PreparedGraphqlQuery { cache: Rc>, - query_registry: Rc>, + query_registry: Rc>, table_id_map: Rc>>, graphql_schema_cache: Rc>, schema_epoch: Rc>, @@ -58,7 +59,7 @@ impl Database { /// Creates a new database instance. #[wasm_bindgen(constructor)] pub fn new(name: &str) -> Self { - let query_registry = Rc::new(RefCell::new(QueryRegistry::new())); + let query_registry = Rc::new(RefCell::new(LiveRegistry::new())); // Set self reference for microtask scheduling query_registry .borrow_mut() @@ -584,7 +585,7 @@ fn bind_graphql_operation( fn execute_graphql_bound_operation( cache: Rc>, - query_registry: Rc>, + query_registry: Rc>, table_id_map: Rc>>, catalog: cynos_gql::GraphqlCatalog, bound: cynos_gql::BoundOperation, @@ -606,11 +607,28 @@ fn execute_graphql_bound_operation( fn create_graphql_subscription( cache: Rc>, - query_registry: Rc>, + query_registry: Rc>, table_id_map: Rc>>, catalog: cynos_gql::GraphqlCatalog, bound: cynos_gql::BoundOperation, ) -> Result { + let live_plan = compile_graphql_live_plan(cache.clone(), table_id_map, catalog, bound)?; + Ok(match live_plan.descriptor.engine { + crate::live_runtime::LiveEngineKind::Snapshot => { + live_plan.materialize_graphql_snapshot(cache, query_registry) + } + crate::live_runtime::LiveEngineKind::Delta => { + live_plan.materialize_graphql_delta(cache, query_registry) + } + }) +} + +fn 
compile_graphql_live_plan( + cache: Rc>, + table_id_map: Rc>>, + catalog: cynos_gql::GraphqlCatalog, + bound: cynos_gql::BoundOperation, +) -> Result { if bound.kind != cynos_gql::OperationType::Subscription { return Err(JsValue::from_str( "subscribeGraphql() only accepts subscription operations", @@ -633,49 +651,159 @@ fn create_graphql_subscription( )); } - let plan = cynos_gql::build_root_field_plan(&catalog, &field) + let root_plan = cynos_gql::build_root_field_plan(&catalog, &field) .map_err(|error| JsValue::from_str(error.message()))?; - let dependent_tables = plan.logical_plan.collect_tables(); - - let cache_ref = cache.clone(); - let cache_borrow = cache_ref.borrow(); - let compiled_plan = - crate::query_engine::compile_cached_plan(&cache_borrow, &plan.table_name, plan.logical_plan); - let initial_output = - crate::query_engine::execute_compiled_physical_plan_with_summary(&cache_borrow, &compiled_plan) - .map_err(|error| JsValue::from_str(&alloc::format!("Query execution error: {:?}", error)))?; - drop(cache_borrow); - - let observable = ReQueryObservable::new_with_summary( - compiled_plan, - cache_ref.clone(), - initial_output.rows, - initial_output.summary, - ); - let observable_rc = Rc::new(RefCell::new(observable)); + let mut root_dependency_tables = root_plan.logical_plan.collect_tables(); + if !root_dependency_tables + .iter() + .any(|table| table == &root_plan.table_name) + { + root_dependency_tables.push(root_plan.table_name.clone()); + } + let all_dependency_tables = cynos_gql::bind::collect_dependency_tables(&field); + let (dependency_set, dependency_table_bindings) = { + let table_id_map = table_id_map.borrow(); + let dependency_table_bindings = + build_graphql_dependency_bindings(&table_id_map, &all_dependency_tables)?; + let dependency_set = build_graphql_dependency_set( + &table_id_map, + &dependency_table_bindings, + &root_dependency_tables, + )?; + (dependency_set, dependency_table_bindings) + }; { + let cache_borrow = cache.borrow(); let 
table_id_map = table_id_map.borrow();
-        let mut registry = query_registry.borrow_mut();
-        for table in dependent_tables {
-            let table_id = table_id_map
-                .get(&table)
-                .copied()
-                .ok_or_else(|| JsValue::from_str(&alloc::format!("Table ID not found: {}", table)))?;
-            registry.register(observable_rc.clone(), table_id);
+        if cynos_gql::bind::is_delta_capable_root_field(&field) {
+            if let Some(live_plan) = build_graphql_delta_live_plan(
+                &cache_borrow,
+                &table_id_map,
+                dependency_set.clone(),
+                catalog.clone(),
+                field.clone(),
+                dependency_table_bindings.clone(),
+                &root_plan,
+            )? {
+                return Ok(live_plan);
+            }
+        }
         }
     }
 
-    Ok(JsGraphqlSubscription::new(
-        observable_rc,
-        cache_ref,
+    let cache_borrow = cache.borrow();
+    let compiled_plan = crate::query_engine::compile_cached_plan(
+        &cache_borrow,
+        &root_plan.table_name,
+        root_plan.logical_plan,
+    );
+    let initial_output = crate::query_engine::execute_compiled_physical_plan_with_summary(
+        &cache_borrow,
+        &compiled_plan,
+    )
+    .map_err(|error| JsValue::from_str(&alloc::format!("Query execution error: {:?}", error)))?;
+
+    Ok(LivePlan::graphql_snapshot(
+        dependency_set,
+        compiled_plan,
+        initial_output.rows,
+        initial_output.summary,
         catalog,
         field,
+        dependency_table_bindings,
     ))
 }
 
+fn build_graphql_dependency_bindings(
+    table_id_map: &hashbrown::HashMap<String, TableId>,
+    dependency_tables: &[String],
+) -> Result<Vec<(TableId, String)>, JsValue> {
+    let mut bindings = dependency_tables
+        .iter()
+        .map(|table| {
+            table_id_map
+                .get(table)
+                .copied()
+                .map(|table_id| (table_id, table.clone()))
+                .ok_or_else(|| JsValue::from_str(&alloc::format!("Table ID not found: {}", table)))
+        })
+        .collect::<Result<Vec<_>, _>>()?;
+    bindings.sort_unstable_by(|(left_id, left_name), (right_id, right_name)| {
+        left_id
+            .cmp(right_id)
+            .then_with(|| left_name.cmp(right_name))
+    });
+    bindings.dedup_by(|left, right| left.0 == right.0);
+    Ok(bindings)
+}
+
+fn build_graphql_dependency_set(
+    table_id_map: &hashbrown::HashMap<String, TableId>,
+    dependency_table_bindings: &[(TableId, String)],
+    root_tables: &[String],
+) -> Result<LiveDependencySet, JsValue> {
+    let dependency_table_ids = dependency_table_bindings
+        .iter()
+        .map(|(table_id, _)| *table_id)
+        .collect::<Vec<_>>();
+    let root_table_ids = root_tables
+        .iter()
+        .map(|table| {
+            table_id_map
+                .get(table)
+                .copied()
+                .ok_or_else(|| JsValue::from_str(&alloc::format!("Table ID not found: {}", table)))
+        })
+        .collect::<Result<Vec<_>, _>>()?;
+    Ok(LiveDependencySet::graphql(
+        dependency_table_ids,
+        root_table_ids,
+    ))
+}
+
+fn build_graphql_delta_live_plan(
+    cache: &TableCache,
+    table_id_map: &hashbrown::HashMap<String, TableId>,
+    dependency_set: LiveDependencySet,
+    catalog: cynos_gql::GraphqlCatalog,
+    field: cynos_gql::bind::BoundRootField,
+    dependency_table_bindings: Vec<(TableId, String)>,
+    root_plan: &cynos_gql::RootFieldPlan,
+) -> Result<Option<LivePlan>, JsValue> {
+    let store = cache.get_table(&root_plan.table_name).ok_or_else(|| {
+        JsValue::from_str(&alloc::format!("Table not found: {}", root_plan.table_name))
+    })?;
+
+    let physical_plan = crate::query_engine::compile_plan(
+        cache,
+        &root_plan.table_name,
+        root_plan.logical_plan.clone(),
+    );
+    let mut table_schemas = hashbrown::HashMap::new();
+    table_schemas.insert(root_plan.table_name.clone(), store.schema().clone());
+    let Some(compile_result) = compile_to_dataflow(&physical_plan, table_id_map, &table_schemas)
+    else {
+        return Ok(None);
+    };
+
+    let initial_rows =
+        crate::query_engine::execute_physical_plan(cache, &physical_plan).map_err(|error| {
+            JsValue::from_str(&alloc::format!("Query execution error: {:?}", error))
+        })?;
+    let initial_owned = initial_rows.iter().map(|row| (**row).clone()).collect();
+
+    Ok(Some(LivePlan::graphql_delta(
+        dependency_set,
+        compile_result.dataflow,
+        initial_owned,
+        catalog,
+        field,
+        dependency_table_bindings,
+    )))
+}
+
 fn notify_graphql_changes(
-    query_registry: Rc>,
+    query_registry: Rc>,
     table_id_map: Rc<RefCell<hashbrown::HashMap<String, TableId>>>,
     changes: &[cynos_gql::TableChange],
 ) {
@@ -710,7 +838,7 @@ fn notify_graphql_changes(
     let mut registry =
query_registry.borrow_mut();
     for (table_name, (deltas, changed_ids)) in aggregated {
         if let Some(table_id) = table_id_map.get(&table_name).copied() {
-            registry.on_table_change_ivm(table_id, deltas, &changed_ids);
+            registry.on_table_change_delta(table_id, deltas, &changed_ids);
         }
     }
 }
@@ -723,7 +851,7 @@ impl Database {
     }
 
     /// Gets the query registry (for internal use).
-    pub(crate) fn query_registry(&self) -> Rc> {
+    pub(crate) fn query_registry(&self) -> Rc> {
         self.query_registry.clone()
     }
 
@@ -747,6 +875,7 @@ impl Database {
 #[cfg(test)]
 mod tests {
     use super::*;
+    use crate::live_runtime::LiveEngineKind;
     use crate::table::{ColumnOptions, ForeignKeyOptions};
     use crate::JsDataType;
     use cynos_core::{Row, Value};
@@ -754,6 +883,80 @@ mod tests {
     wasm_bindgen_test_configure!(run_in_browser);
 
+    fn setup_graphql_users_db() -> Database {
+        let db = Database::new("graphql");
+        let users = db
+            .create_table("users")
+            .column(
+                "id",
+                JsDataType::Int64,
+                Some(ColumnOptions::new().set_primary_key(true)),
+            )
+            .column("name", JsDataType::String, None);
+        db.register_table(&users).unwrap();
+        db
+    }
+
+    fn setup_graphql_users_posts_db() -> Database {
+        let db = setup_graphql_users_db();
+        let posts = db
+            .create_table("posts")
+            .column(
+                "id",
+                JsDataType::Int64,
+                Some(ColumnOptions::new().set_primary_key(true)),
+            )
+            .column("author_id", JsDataType::Int64, None)
+            .column("title", JsDataType::String, None)
+            .foreign_key(
+                "fk_posts_author",
+                "author_id",
+                "users",
+                "id",
+                Some(
+                    ForeignKeyOptions::new()
+                        .set_field_name("author")
+                        .set_reverse_field_name("posts"),
+                ),
+            );
+        db.register_table(&posts).unwrap();
+        db
+    }
+
+    fn collect_titles(array: &js_sys::Array) -> Vec<String> {
+        let mut titles = Vec::with_capacity(array.length() as usize);
+        for index in 0..array.length() {
+            let item = array.get(index);
+            let title = js_sys::Reflect::get(&item, &JsValue::from_str("title"))
+                .unwrap()
+                .as_string()
+                .unwrap();
+            titles.push(title);
+        }
+        titles.sort();
+        titles
+    }
+
+    fn compile_subscription_engine(db: &Database, query: &str) -> LiveEngineKind {
+        let prepared = GqlPreparedQuery::parse_with_operation(query, None).unwrap();
+        let variables = cynos_gql::VariableValues::default();
+        let cache = db.cache.borrow();
+        let (catalog, bound) = bind_graphql_operation(
+            &prepared,
+            &cache,
+            &db.graphql_schema_cache,
+            &db.schema_epoch,
+            &variables,
+        )
+        .unwrap();
+        drop(cache);
+
+        compile_graphql_live_plan(db.cache.clone(), db.table_id_map.clone(), catalog, bound)
+            .unwrap()
+            .descriptor
+            .engine
+    }
+
     #[wasm_bindgen_test]
     fn test_database_new() {
         let db = Database::new("test");
@@ -1114,6 +1317,476 @@ mod tests {
         assert_eq!(name, "Alice");
     }
 
+    #[wasm_bindgen_test]
+    fn test_graphql_live_selector_chooses_delta_for_scalar_root_subscription() {
+        let db = setup_graphql_users_db();
+        let engine = compile_subscription_engine(
+            &db,
+            "subscription UserCard { usersByPk(pk: { id: 1 }) { id name } }",
+        );
+        assert_eq!(engine, LiveEngineKind::Delta);
+    }
+
+    #[wasm_bindgen_test]
+    fn test_graphql_live_selector_chooses_delta_for_nested_relations_without_sorting() {
+        let db = setup_graphql_users_posts_db();
+        let engine = compile_subscription_engine(
+            &db,
+            "subscription PostAuthorGraph { postsByPk(pk: { id: 10 }) { id title author { id name posts { id title } } } }",
+        );
+        assert_eq!(engine, LiveEngineKind::Delta);
+    }
+
+    #[wasm_bindgen_test]
+    fn test_graphql_live_selector_falls_back_to_snapshot_for_nested_relation_sorting() {
+        let db = setup_graphql_users_posts_db();
+        let engine = compile_subscription_engine(
+            &db,
+            "subscription UserCard { usersByPk(pk: { id: 1 }) { id name posts(orderBy: [{ field: ID, direction: ASC }]) { id title } } }",
+        );
+        assert_eq!(engine, LiveEngineKind::Snapshot);
+    }
+
+    #[wasm_bindgen_test]
+    fn test_database_graphql_delta_subscription_tracks_scalar_root_updates() {
+        let db = setup_graphql_users_db();
+
+        let subscription = db
+            .subscribe_graphql(
+                "subscription UserCard {
usersByPk(pk: { id: 1 }) { id name } }", + None, + None, + ) + .unwrap(); + + assert_eq!( + compile_subscription_engine( + &db, + "subscription UserCard { usersByPk(pk: { id: 1 }) { id name } }" + ), + LiveEngineKind::Delta + ); + + let initial = subscription.get_result(); + let initial_data = js_sys::Reflect::get(&initial, &JsValue::from_str("data")).unwrap(); + let initial_user = + js_sys::Reflect::get(&initial_data, &JsValue::from_str("usersByPk")).unwrap(); + assert!(initial_user.is_null()); + + db.graphql( + "mutation { insertUsers(input: [{ id: 1, name: \"Alice\" }]) { id name } }", + None, + None, + ) + .unwrap(); + db.query_registry.borrow_mut().flush(); + + let inserted = subscription.get_result(); + let inserted_data = js_sys::Reflect::get(&inserted, &JsValue::from_str("data")).unwrap(); + let inserted_user = + js_sys::Reflect::get(&inserted_data, &JsValue::from_str("usersByPk")).unwrap(); + let inserted_name = js_sys::Reflect::get(&inserted_user, &JsValue::from_str("name")) + .unwrap() + .as_string() + .unwrap(); + assert_eq!(inserted_name, "Alice"); + + db.graphql( + "mutation { updateUsers(where: { id: { eq: 1 } }, set: { name: \"Alicia\" }) { id name } }", + None, + None, + ) + .unwrap(); + db.query_registry.borrow_mut().flush(); + + let updated = subscription.get_result(); + let updated_data = js_sys::Reflect::get(&updated, &JsValue::from_str("data")).unwrap(); + let updated_user = + js_sys::Reflect::get(&updated_data, &JsValue::from_str("usersByPk")).unwrap(); + let updated_name = js_sys::Reflect::get(&updated_user, &JsValue::from_str("name")) + .unwrap() + .as_string() + .unwrap(); + assert_eq!(updated_name, "Alicia"); + } + + #[wasm_bindgen_test] + fn test_database_graphql_delta_subscription_tracks_multilevel_nested_relation_changes() { + let db = setup_graphql_users_posts_db(); + + assert_eq!( + compile_subscription_engine( + &db, + "subscription PostAuthorGraph { postsByPk(pk: { id: 10 }) { id title author { id name posts { id title } } } }", + 
), + LiveEngineKind::Delta + ); + + db.cache + .borrow_mut() + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 2, + alloc::vec![Value::Int64(2), Value::String("Bob".into())], + )) + .unwrap(); + db.cache + .borrow_mut() + .get_table_mut("posts") + .unwrap() + .insert(Row::new( + 10, + alloc::vec![ + Value::Int64(10), + Value::Int64(2), + Value::String("First".into()), + ], + )) + .unwrap(); + + let subscription = db + .subscribe_graphql( + "subscription PostAuthorGraph { postsByPk(pk: { id: 10 }) { id title author { id name posts { id title } } } }", + None, + None, + ) + .unwrap(); + + let initial = subscription.get_result(); + let initial_data = js_sys::Reflect::get(&initial, &JsValue::from_str("data")).unwrap(); + let initial_post = + js_sys::Reflect::get(&initial_data, &JsValue::from_str("postsByPk")).unwrap(); + let initial_author = + js_sys::Reflect::get(&initial_post, &JsValue::from_str("author")).unwrap(); + let initial_posts = js_sys::Array::from( + &js_sys::Reflect::get(&initial_author, &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!(collect_titles(&initial_posts), vec!["First".to_string()]); + + db.graphql( + "mutation { insertPosts(input: [{ id: 11, author_id: 2, title: \"Second\" }]) { id title } }", + None, + None, + ) + .unwrap(); + db.query_registry.borrow_mut().flush(); + + let updated = subscription.get_result(); + let updated_data = js_sys::Reflect::get(&updated, &JsValue::from_str("data")).unwrap(); + let updated_post = + js_sys::Reflect::get(&updated_data, &JsValue::from_str("postsByPk")).unwrap(); + let updated_author = + js_sys::Reflect::get(&updated_post, &JsValue::from_str("author")).unwrap(); + let updated_posts = js_sys::Array::from( + &js_sys::Reflect::get(&updated_author, &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!( + collect_titles(&updated_posts), + vec!["First".to_string(), "Second".to_string()] + ); + } + + #[wasm_bindgen_test] + fn 
test_database_graphql_delta_subscription_reparents_nested_relation_membership() { + let db = setup_graphql_users_posts_db(); + + db.cache + .borrow_mut() + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 2, + alloc::vec![Value::Int64(2), Value::String("Bob".into())], + )) + .unwrap(); + db.cache + .borrow_mut() + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 3, + alloc::vec![Value::Int64(3), Value::String("Cara".into())], + )) + .unwrap(); + db.cache + .borrow_mut() + .get_table_mut("posts") + .unwrap() + .insert(Row::new( + 10, + alloc::vec![ + Value::Int64(10), + Value::Int64(2), + Value::String("First".into()), + ], + )) + .unwrap(); + db.cache + .borrow_mut() + .get_table_mut("posts") + .unwrap() + .insert(Row::new( + 11, + alloc::vec![ + Value::Int64(11), + Value::Int64(2), + Value::String("Second".into()), + ], + )) + .unwrap(); + + let subscription = db + .subscribe_graphql( + "subscription PostAuthorGraph { postsByPk(pk: { id: 10 }) { id title author { id name posts { id title } } } }", + None, + None, + ) + .unwrap(); + + let initial = subscription.get_result(); + let initial_data = js_sys::Reflect::get(&initial, &JsValue::from_str("data")).unwrap(); + let initial_post = + js_sys::Reflect::get(&initial_data, &JsValue::from_str("postsByPk")).unwrap(); + let initial_author = + js_sys::Reflect::get(&initial_post, &JsValue::from_str("author")).unwrap(); + let initial_posts = js_sys::Array::from( + &js_sys::Reflect::get(&initial_author, &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!( + collect_titles(&initial_posts), + vec!["First".to_string(), "Second".to_string()] + ); + + db.graphql( + "mutation { updatePosts(where: { id: { eq: 11 } }, set: { author_id: 3 }) { id title } }", + None, + None, + ) + .unwrap(); + db.query_registry.borrow_mut().flush(); + + let updated = subscription.get_result(); + let updated_data = js_sys::Reflect::get(&updated, &JsValue::from_str("data")).unwrap(); + let updated_post = + 
js_sys::Reflect::get(&updated_data, &JsValue::from_str("postsByPk")).unwrap(); + let updated_author = + js_sys::Reflect::get(&updated_post, &JsValue::from_str("author")).unwrap(); + let updated_posts = js_sys::Array::from( + &js_sys::Reflect::get(&updated_author, &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!(collect_titles(&updated_posts), vec!["First".to_string()]); + } + + #[wasm_bindgen_test] + fn test_database_graphql_subscription_tracks_nested_relation_changes() { + let db = Database::new("test"); + + let users = db + .create_table("users") + .column( + "id", + JsDataType::Int64, + Some(ColumnOptions::new().set_primary_key(true)), + ) + .column("name", JsDataType::String, None); + db.register_table(&users).unwrap(); + + let posts = db + .create_table("posts") + .column( + "id", + JsDataType::Int64, + Some(ColumnOptions::new().set_primary_key(true)), + ) + .column("author_id", JsDataType::Int64, None) + .column("title", JsDataType::String, None) + .foreign_key( + "fk_posts_author", + "author_id", + "users", + "id", + Some( + ForeignKeyOptions::new() + .set_field_name("author") + .set_reverse_field_name("posts"), + ), + ); + db.register_table(&posts).unwrap(); + + db.cache + .borrow_mut() + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 1, + alloc::vec![Value::Int64(1), Value::String("Alice".into())], + )) + .unwrap(); + + let subscription = db + .subscribe_graphql( + "subscription WatchUsersWithPosts { users(orderBy: [{ field: ID, direction: ASC }]) { id name posts(orderBy: [{ field: ID, direction: ASC }]) { id title } } }", + None, + None, + ) + .unwrap(); + + let initial = subscription.get_result(); + let initial_data = js_sys::Reflect::get(&initial, &JsValue::from_str("data")).unwrap(); + let initial_users = js_sys::Array::from( + &js_sys::Reflect::get(&initial_data, &JsValue::from_str("users")).unwrap(), + ); + assert_eq!(initial_users.length(), 1); + let initial_user = initial_users.get(0); + let initial_posts = js_sys::Array::from( + 
&js_sys::Reflect::get(&initial_user, &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!(initial_posts.length(), 0); + + db.graphql( + "mutation { insertPosts(input: [{ id: 10, author_id: 1, title: \"Hello\" }]) { id title } }", + None, + None, + ) + .unwrap(); + db.query_registry.borrow_mut().flush(); + + let inserted = subscription.get_result(); + let inserted_data = js_sys::Reflect::get(&inserted, &JsValue::from_str("data")).unwrap(); + let inserted_users = js_sys::Array::from( + &js_sys::Reflect::get(&inserted_data, &JsValue::from_str("users")).unwrap(), + ); + let inserted_user = inserted_users.get(0); + let inserted_posts = js_sys::Array::from( + &js_sys::Reflect::get(&inserted_user, &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!(inserted_posts.length(), 1); + let inserted_post = inserted_posts.get(0); + let title = js_sys::Reflect::get(&inserted_post, &JsValue::from_str("title")) + .unwrap() + .as_string() + .unwrap(); + assert_eq!(title, "Hello"); + + db.graphql( + "mutation { updatePosts(where: { id: { eq: 10 } }, set: { title: \"Updated\" }) { id title } }", + None, + None, + ) + .unwrap(); + db.query_registry.borrow_mut().flush(); + + let updated = subscription.get_result(); + let updated_data = js_sys::Reflect::get(&updated, &JsValue::from_str("data")).unwrap(); + let updated_users = js_sys::Array::from( + &js_sys::Reflect::get(&updated_data, &JsValue::from_str("users")).unwrap(), + ); + let updated_user = updated_users.get(0); + let updated_posts = js_sys::Array::from( + &js_sys::Reflect::get(&updated_user, &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!(updated_posts.length(), 1); + let updated_post = updated_posts.get(0); + let title = js_sys::Reflect::get(&updated_post, &JsValue::from_str("title")) + .unwrap() + .as_string() + .unwrap(); + assert_eq!(title, "Updated"); + } + + #[wasm_bindgen_test] + fn test_database_graphql_snapshot_subscription_reparents_nested_relation_membership() { + let db = 
setup_graphql_users_posts_db(); + + db.cache + .borrow_mut() + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 1, + alloc::vec![Value::Int64(1), Value::String("Alice".into())], + )) + .unwrap(); + db.cache + .borrow_mut() + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 2, + alloc::vec![Value::Int64(2), Value::String("Bob".into())], + )) + .unwrap(); + db.cache + .borrow_mut() + .get_table_mut("posts") + .unwrap() + .insert(Row::new( + 10, + alloc::vec![ + Value::Int64(10), + Value::Int64(1), + Value::String("Hello".into()), + ], + )) + .unwrap(); + + let subscription = db + .subscribe_graphql( + "subscription WatchUsersWithPosts { users(orderBy: [{ field: ID, direction: ASC }]) { id name posts(orderBy: [{ field: ID, direction: ASC }]) { id title } } }", + None, + None, + ) + .unwrap(); + + assert_eq!( + compile_subscription_engine( + &db, + "subscription WatchUsersWithPosts { users(orderBy: [{ field: ID, direction: ASC }]) { id name posts(orderBy: [{ field: ID, direction: ASC }]) { id title } } }" + ), + LiveEngineKind::Snapshot + ); + + let initial = subscription.get_result(); + let initial_data = js_sys::Reflect::get(&initial, &JsValue::from_str("data")).unwrap(); + let initial_users = js_sys::Array::from( + &js_sys::Reflect::get(&initial_data, &JsValue::from_str("users")).unwrap(), + ); + let user_one_posts = js_sys::Array::from( + &js_sys::Reflect::get(&initial_users.get(0), &JsValue::from_str("posts")).unwrap(), + ); + let user_two_posts = js_sys::Array::from( + &js_sys::Reflect::get(&initial_users.get(1), &JsValue::from_str("posts")).unwrap(), + ); + assert_eq!(collect_titles(&user_one_posts), vec!["Hello".to_string()]); + assert!(collect_titles(&user_two_posts).is_empty()); + + db.graphql( + "mutation { updatePosts(where: { id: { eq: 10 } }, set: { author_id: 2 }) { id title } }", + None, + None, + ) + .unwrap(); + db.query_registry.borrow_mut().flush(); + + let updated = subscription.get_result(); + let updated_data = 
js_sys::Reflect::get(&updated, &JsValue::from_str("data")).unwrap(); + let updated_users = js_sys::Array::from( + &js_sys::Reflect::get(&updated_data, &JsValue::from_str("users")).unwrap(), + ); + let updated_user_one_posts = js_sys::Array::from( + &js_sys::Reflect::get(&updated_users.get(0), &JsValue::from_str("posts")).unwrap(), + ); + let updated_user_two_posts = js_sys::Array::from( + &js_sys::Reflect::get(&updated_users.get(1), &JsValue::from_str("posts")).unwrap(), + ); + assert!(collect_titles(&updated_user_one_posts).is_empty()); + assert_eq!( + collect_titles(&updated_user_two_posts), + vec!["Hello".to_string()] + ); + } + #[wasm_bindgen_test] fn test_prepared_graphql_supports_mutation_and_subscription() { let db = Database::new("test"); @@ -1144,7 +1817,12 @@ mod tests { .unwrap(); let variables = js_sys::Object::new(); - js_sys::Reflect::set(&variables, &JsValue::from_str("id"), &JsValue::from_f64(2.0)).unwrap(); + js_sys::Reflect::set( + &variables, + &JsValue::from_str("id"), + &JsValue::from_f64(2.0), + ) + .unwrap(); js_sys::Reflect::set( &variables, &JsValue::from_str("name"), @@ -1164,7 +1842,8 @@ mod tests { let payload = subscription.get_result(); let data = js_sys::Reflect::get(&payload, &JsValue::from_str("data")).unwrap(); - let users = js_sys::Array::from(&js_sys::Reflect::get(&data, &JsValue::from_str("users")).unwrap()); + let users = + js_sys::Array::from(&js_sys::Reflect::get(&data, &JsValue::from_str("users")).unwrap()); assert_eq!(users.length(), 1); let first = users.get(0); let id = js_sys::Reflect::get(&first, &JsValue::from_str("id")) diff --git a/crates/database/src/dataflow_compiler.rs b/crates/database/src/dataflow_compiler.rs index dc9de82..cd80740 100644 --- a/crates/database/src/dataflow_compiler.rs +++ b/crates/database/src/dataflow_compiler.rs @@ -14,13 +14,14 @@ use alloc::boxed::Box; use alloc::string::String; use alloc::vec::Vec; -use cynos_core::{Row, Value}; +use cynos_core::{schema::Table, Row, Value}; use 
cynos_incremental::{
     AggregateType, DataflowNode, JoinType as IvmJoinType, KeyExtractorFn, TableId,
 };
+use cynos_index::KeyRange;
 use cynos_query::ast::JoinType as QueryJoinType;
 use cynos_query::ast::{AggregateFunc, BinaryOp, Expr, UnaryOp};
-use cynos_query::planner::PhysicalPlan;
+use cynos_query::planner::{IndexBounds, PhysicalPlan};
 use hashbrown::HashMap;
 
 /// Result of compiling a PhysicalPlan to a DataflowNode.
@@ -119,6 +120,12 @@ struct CompiledNode {
     layout: CompileLayout,
 }
 
+#[derive(Clone)]
+struct IndexedColumnRef {
+    name: String,
+    index: usize,
+}
+
 /// Compiles a PhysicalPlan into a DataflowNode for IVM.
 ///
 /// Returns None if the plan contains non-incrementalizable operators
@@ -126,43 +133,258 @@ struct CompiledNode {
 pub fn compile_to_dataflow(
     plan: &PhysicalPlan,
     table_id_map: &HashMap<String, TableId>,
-    table_column_counts: &HashMap<String, usize>,
+    table_schemas: &HashMap<String, Table>,
 ) -> Option<CompileResult> {
     if !plan.is_incrementalizable() {
         return None;
     }
 
     let mut table_ids = table_id_map.clone();
-    let compiled = compile_node(plan, &mut table_ids, table_column_counts)?;
+    let compiled = compile_node(plan, &mut table_ids, table_schemas)?;
 
     Some(CompileResult {
         dataflow: compiled.dataflow,
         table_ids,
     })
 }
 
+fn compile_source_node(
+    table: &str,
+    table_ids: &mut HashMap<String, TableId>,
+    table_schemas: &HashMap<String, Table>,
+) -> Option<CompiledNode> {
+    let table_id = get_or_assign_table_id(table, table_ids);
+    let column_count = table_schemas.get(table)?.columns().len();
+    Some(CompiledNode {
+        dataflow: DataflowNode::source(table_id),
+        layout: CompileLayout::table(table, column_count),
+    })
+}
+
+fn compile_filtered_source(
+    table: &str,
+    predicate: Option<Expr>,
+    table_ids: &mut HashMap<String, TableId>,
+    table_schemas: &HashMap<String, Table>,
+) -> Option<CompiledNode> {
+    let source = compile_source_node(table, table_ids, table_schemas)?;
+    if let Some(predicate) = predicate {
+        let bound_predicate = bind_expr_to_layout(&predicate, &source.layout);
+        let pred_fn = compile_predicate(&bound_predicate);
+        return Some(CompiledNode {
+            dataflow: DataflowNode::Filter {
+                input:
Box::new(source.dataflow),
+                predicate: pred_fn,
+            },
+            layout: source.layout,
+        });
+    }
+
+    Some(source)
+}
+
+fn lookup_index_columns(
+    table_schemas: &HashMap<String, Table>,
+    table: &str,
+    index_name: &str,
+) -> Option<Vec<IndexedColumnRef>> {
+    let schema = table_schemas.get(table)?;
+    let index = schema
+        .get_index(index_name)
+        .or_else(|| {
+            schema
+                .indices()
+                .iter()
+                .find(|candidate| candidate.normalized_name() == index_name)
+        })
+        .or_else(|| {
+            schema.primary_key().filter(|candidate| {
+                candidate.name() == index_name || candidate.normalized_name() == index_name
+            })
+        })?;
+
+    index
+        .columns()
+        .iter()
+        .map(|column| {
+            Some(IndexedColumnRef {
+                name: column.name.clone(),
+                index: schema.get_column_index(&column.name)?,
+            })
+        })
+        .collect()
+}
+
+fn column_expr(table: &str, column: &IndexedColumnRef) -> Expr {
+    Expr::column(table, column.name.clone(), column.index)
+}
+
+fn combine_with_and(mut predicates: Vec<Expr>) -> Option<Expr> {
+    let first = predicates.pop()?;
+    Some(
+        predicates
+            .into_iter()
+            .fold(first, |combined, predicate| Expr::and(combined, predicate)),
+    )
+}
+
+fn build_scalar_range_predicate(
+    table: &str,
+    column: &IndexedColumnRef,
+    range: &KeyRange,
+) -> Option<Expr> {
+    let expr = column_expr(table, column);
+    match range {
+        KeyRange::All => None,
+        KeyRange::Only(value) => Some(Expr::eq(expr, Expr::Literal(value.clone()))),
+        KeyRange::LowerBound { value, exclusive } => Some(if *exclusive {
+            Expr::gt(expr, Expr::Literal(value.clone()))
+        } else {
+            Expr::ge(expr, Expr::Literal(value.clone()))
+        }),
+        KeyRange::UpperBound { value, exclusive } => Some(if *exclusive {
+            Expr::lt(expr, Expr::Literal(value.clone()))
+        } else {
+            Expr::le(expr, Expr::Literal(value.clone()))
+        }),
+        KeyRange::Bound {
+            lower,
+            upper,
+            lower_exclusive,
+            upper_exclusive,
+        } => combine_with_and(alloc::vec![
+            if *lower_exclusive {
+                Expr::gt(column_expr(table, column), Expr::Literal(lower.clone()))
+            } else {
+                Expr::ge(column_expr(table, column), Expr::Literal(lower.clone()))
+            },
+            if *upper_exclusive {
+                Expr::lt(column_expr(table, column), Expr::Literal(upper.clone()))
+            } else {
+                Expr::le(column_expr(table, column), Expr::Literal(upper.clone()))
+            },
+        ]),
+    }
+}
+
+fn build_composite_only_predicate(
+    table: &str,
+    indexed_columns: &[IndexedColumnRef],
+    values: &[Value],
+) -> Result<Expr, ()> {
+    if indexed_columns.len() != values.len() || indexed_columns.is_empty() {
+        return Err(());
+    }
+
+    combine_with_and(
+        indexed_columns
+            .iter()
+            .zip(values.iter())
+            .map(|(column, value)| {
+                Expr::eq(column_expr(table, column), Expr::Literal(value.clone()))
+            })
+            .collect(),
+    )
+    .ok_or(())
+}
+
+fn build_index_scan_predicate(
+    table: &str,
+    indexed_columns: &[IndexedColumnRef],
+    bounds: &IndexBounds,
+) -> Result<Option<Expr>, ()> {
+    match bounds {
+        IndexBounds::Unbounded => Ok(None),
+        IndexBounds::Scalar(range) => {
+            if indexed_columns.len() != 1 {
+                return Err(());
+            }
+            Ok(build_scalar_range_predicate(
+                table,
+                &indexed_columns[0],
+                range,
+            ))
+        }
+        IndexBounds::Composite(range) => match range {
+            KeyRange::All => Ok(None),
+            KeyRange::Only(values) => {
+                build_composite_only_predicate(table, indexed_columns, values).map(Some)
+            }
+            KeyRange::Bound {
+                lower,
+                upper,
+                lower_exclusive,
+                upper_exclusive,
+            } if lower == upper && !lower_exclusive && !upper_exclusive => {
+                build_composite_only_predicate(table, indexed_columns, lower).map(Some)
+            }
+            _ => Err(()),
+        },
+    }
+}
+
 fn compile_node(
     plan: &PhysicalPlan,
     table_ids: &mut HashMap<String, TableId>,
-    table_column_counts: &HashMap<String, usize>,
+    table_schemas: &HashMap<String, Table>,
 ) -> Option<CompiledNode> {
     match plan {
-        // All scan types map to Source nodes
-        PhysicalPlan::TableScan { table }
-        | PhysicalPlan::IndexScan { table, .. }
-        | PhysicalPlan::IndexGet { table, .. }
-        | PhysicalPlan::IndexInGet { table, .. }
-        | PhysicalPlan::GinIndexScan { table, .. }
-        | PhysicalPlan::GinIndexScanMulti { table, .. } => {
-            let table_id = get_or_assign_table_id(table, table_ids);
-            let column_count = table_column_counts.get(table).copied()?;
-            Some(CompiledNode {
-                dataflow: DataflowNode::source(table_id),
-                layout: CompileLayout::table(table, column_count),
-            })
+        PhysicalPlan::TableScan { table } => compile_source_node(table, table_ids, table_schemas),
+
+        PhysicalPlan::IndexScan {
+            table,
+            index,
+            bounds,
+            limit,
+            offset,
+            reverse,
+        } => {
+            if *reverse || limit.is_some() || offset.unwrap_or(0) > 0 {
+                return None;
+            }
+            let indexed_columns = lookup_index_columns(table_schemas, table, index)?;
+            let predicate = build_index_scan_predicate(table, &indexed_columns, bounds).ok()?;
+            compile_filtered_source(table, predicate, table_ids, table_schemas)
+        }
+
+        PhysicalPlan::IndexGet {
+            table,
+            index,
+            key,
+            limit,
+        } => {
+            if limit.is_some() {
+                return None;
+            }
+            let indexed_columns = lookup_index_columns(table_schemas, table, index)?;
+            if indexed_columns.len() != 1 {
+                return None;
+            }
+            let predicate = Some(Expr::eq(
+                column_expr(table, &indexed_columns[0]),
+                Expr::Literal(key.clone()),
+            ));
+            compile_filtered_source(table, predicate, table_ids, table_schemas)
+        }
+
+        PhysicalPlan::IndexInGet { table, index, keys } => {
+            let indexed_columns = lookup_index_columns(table_schemas, table, index)?;
+            if indexed_columns.len() != 1 {
+                return None;
+            }
+            let predicate = Some(Expr::In {
+                expr: Box::new(column_expr(table, &indexed_columns[0])),
+                list: keys.iter().cloned().map(Expr::Literal).collect(),
+            });
+            compile_filtered_source(table, predicate, table_ids, table_schemas)
+        }
+
+        PhysicalPlan::GinIndexScan { table, recheck, .. }
+        | PhysicalPlan::GinIndexScanMulti { table, recheck, ..
} => { + compile_filtered_source(table, Some(recheck.clone()?), table_ids, table_schemas) } PhysicalPlan::Filter { input, predicate } => { - let input_node = compile_node(input, table_ids, table_column_counts)?; + let input_node = compile_node(input, table_ids, table_schemas)?; let bound_predicate = bind_expr_to_layout(predicate, &input_node.layout); let pred_fn = compile_predicate(&bound_predicate); Some(CompiledNode { @@ -175,7 +397,7 @@ fn compile_node( } PhysicalPlan::Project { input, columns } => { - let input_node = compile_node(input, table_ids, table_column_counts)?; + let input_node = compile_node(input, table_ids, table_schemas)?; let bound_columns: Vec = columns .iter() .map(|expr| bind_expr_to_layout(expr, &input_node.layout)) @@ -231,8 +453,8 @@ fn compile_node( join_type, output_tables, } => { - let left_node = compile_node(left, table_ids, table_column_counts)?; - let right_node = compile_node(right, table_ids, table_column_counts)?; + let left_node = compile_node(left, table_ids, table_schemas)?; + let right_node = compile_node(right, table_ids, table_schemas)?; let ivm_join_type = convert_join_type(join_type); let (left_key, right_key) = extract_join_keys(condition, &left_node.layout, &right_node.layout); @@ -256,9 +478,9 @@ fn compile_node( output_tables, .. 
} => {
-            let outer_node = compile_node(outer, table_ids, table_column_counts)?;
+            let outer_node = compile_node(outer, table_ids, table_schemas)?;
             let inner_table_id = get_or_assign_table_id(inner_table, table_ids);
-            let inner_column_count = table_column_counts.get(inner_table).copied()?;
+            let inner_column_count = table_schemas.get(inner_table)?.columns().len();
             let inner_layout = CompileLayout::table(inner_table, inner_column_count);
             let inner_node = CompiledNode {
                 dataflow: DataflowNode::source(inner_table_id),
@@ -286,8 +508,8 @@ fn compile_node(
         }
 
         PhysicalPlan::CrossProduct { left, right } => {
-            let left_node = compile_node(left, table_ids, table_column_counts)?;
-            let right_node = compile_node(right, table_ids, table_column_counts)?;
+            let left_node = compile_node(left, table_ids, table_schemas)?;
+            let right_node = compile_node(right, table_ids, table_schemas)?;
             let raw_layout = CompileLayout::combined(&left_node.layout, &right_node.layout);
             // Cross product = join with constant key (everything matches)
             Some(CompiledNode {
@@ -307,7 +529,7 @@ fn compile_node(
             group_by,
             aggregates,
         } => {
-            let input_node = compile_node(input, table_ids, table_column_counts)?;
+            let input_node = compile_node(input, table_ids, table_schemas)?;
             let bound_group_by: Vec<Expr> = group_by
                 .iter()
                 .map(|expr| bind_expr_to_layout(expr, &input_node.layout))
@@ -348,7 +570,7 @@ fn compile_node(
             })
         }
 
-        PhysicalPlan::NoOp { input } => compile_node(input, table_ids, table_column_counts),
+        PhysicalPlan::NoOp { input } => compile_node(input, table_ids, table_schemas),
 
         PhysicalPlan::Empty => Some(CompiledNode {
             dataflow: DataflowNode::source(u32::MAX),
             layout: CompileLayout {
@@ -762,12 +984,19 @@ fn convert_aggregate_func(func: &AggregateFunc) -> AggregateType {
 #[cfg(test)]
 mod tests {
     use super::*;
+    use cynos_core::{schema::TableBuilder, DataType};
     use cynos_query::ast::Expr;
 
-    fn column_counts(entries: &[(&str, usize)]) -> HashMap<String, usize> {
+    fn table_schemas(entries: &[(&str, &[&str])]) -> HashMap<String, Table> {
         entries
             .iter()
-            .map(|(table, count)| ((*table).into(), *count))
+            .map(|(table, columns)| {
+                let mut builder = TableBuilder::new(*table).unwrap();
+                for column in *columns {
+                    builder = builder.add_column(*column, DataType::String).unwrap();
+                }
+                ((*table).into(), builder.build().unwrap())
+            })
             .collect()
     }
 
@@ -776,9 +1005,9 @@ mod tests {
         let plan = PhysicalPlan::table_scan("users");
         let mut table_ids = HashMap::new();
         table_ids.insert("users".into(), 1u32);
-        let table_column_counts = column_counts(&[("users", 2)]);
+        let table_schemas = table_schemas(&[("users", &["id", "name"])]);
 
-        let result = compile_to_dataflow(&plan, &table_ids, &table_column_counts).unwrap();
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
         assert!(matches!(
             result.dataflow,
             DataflowNode::Source { table_id: 1 }
@@ -793,10 +1022,83 @@ mod tests {
         );
         let mut table_ids = HashMap::new();
         table_ids.insert("users".into(), 1u32);
-        let table_column_counts = column_counts(&[("users", 2)]);
+        let table_schemas = table_schemas(&[("users", &["id", "age"])]);
+
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
+        assert!(matches!(result.dataflow, DataflowNode::Filter { ..
}));
+    }
+
+    #[test]
+    fn test_compile_index_get_lowers_to_filtered_source() {
+        use cynos_incremental::{Delta, MaterializedView};
+
+        let users = TableBuilder::new("users")
+            .unwrap()
+            .add_column("id", DataType::Int64)
+            .unwrap()
+            .add_column("name", DataType::String)
+            .unwrap()
+            .add_primary_key(&["id"], false)
+            .unwrap()
+            .build()
+            .unwrap();
+        let pk_name = users.primary_key().unwrap().name().to_string();
+        let mut table_schemas = HashMap::new();
+        table_schemas.insert("users".into(), users);
+
+        let plan = PhysicalPlan::index_get("users", pk_name, Value::Int64(1));
+        let mut table_ids = HashMap::new();
+        table_ids.insert("users".into(), 1u32);
-        let result = compile_to_dataflow(&plan, &table_ids, &table_column_counts).unwrap();
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
         assert!(matches!(result.dataflow, DataflowNode::Filter { .. }));
+
+        let mut view = MaterializedView::new(result.dataflow);
+        view.on_table_change(
+            1,
+            vec![
+                Delta::insert(Row::new(
+                    1,
+                    vec![Value::Int64(1), Value::String("Alice".into())],
+                )),
+                Delta::insert(Row::new(
+                    2,
+                    vec![Value::Int64(2), Value::String("Bob".into())],
+                )),
+            ],
+        );
+
+        let rows = view.result();
+        assert_eq!(rows.len(), 1);
+        assert_eq!(rows[0].get(0), Some(&Value::Int64(1)));
+    }
+
+    #[test]
+    fn test_compile_reverse_index_scan_is_not_incrementalizable() {
+        let users = TableBuilder::new("users")
+            .unwrap()
+            .add_column("id", DataType::Int64)
+            .unwrap()
+            .add_primary_key(&["id"], false)
+            .unwrap()
+            .build()
+            .unwrap();
+        let pk_name = users.primary_key().unwrap().name().to_string();
+        let mut table_schemas = HashMap::new();
+        table_schemas.insert("users".into(), users);
+
+        let plan = PhysicalPlan::IndexScan {
+            table: "users".into(),
+            index: pk_name,
+            bounds: IndexBounds::Unbounded,
+            limit: None,
+            offset: None,
+            reverse: true,
+        };
+        let mut table_ids = HashMap::new();
+        table_ids.insert("users".into(), 1u32);
+
+        assert!(compile_to_dataflow(&plan, &table_ids, &table_schemas).is_none());
+    }
 
     #[test]
@@ -809,8 +1111,8 @@ mod tests {
             )],
         );
         let table_ids = HashMap::new();
-        let table_column_counts = column_counts(&[("users", 1)]);
-        assert!(compile_to_dataflow(&plan, &table_ids, &table_column_counts).is_none());
+        let table_schemas = table_schemas(&[("users", &["id"])]);
+        assert!(compile_to_dataflow(&plan, &table_ids, &table_schemas).is_none());
     }
 
     #[test]
@@ -828,9 +1130,12 @@ mod tests {
         let mut table_ids = HashMap::new();
         table_ids.insert("employees".into(), 1u32);
         table_ids.insert("departments".into(), 2u32);
-        let table_column_counts = column_counts(&[("employees", 3), ("departments", 2)]);
+        let table_schemas = table_schemas(&[
+            ("employees", &["id", "name", "dept_id"]),
+            ("departments", &["id", "name"]),
+        ]);
 
-        let result = compile_to_dataflow(&plan, &table_ids, &table_column_counts).unwrap();
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
         match &result.dataflow {
             DataflowNode::Join { join_type, .. } => {
                 assert_eq!(*join_type, IvmJoinType::LeftOuter);
@@ -856,9 +1161,12 @@ mod tests {
         let mut table_ids = HashMap::new();
         table_ids.insert("employees".into(), 1u32);
         table_ids.insert("departments".into(), 2u32);
-        let table_column_counts = column_counts(&[("employees", 2), ("departments", 2)]);
+        let table_schemas = table_schemas(&[
+            ("employees", &["id", "dept_id"]),
+            ("departments", &["id", "name"]),
+        ]);
 
-        let result = compile_to_dataflow(&plan, &table_ids, &table_column_counts).unwrap();
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
         match &result.dataflow {
             DataflowNode::Project { input, columns } => {
                 assert_eq!(columns, &[2, 3, 0, 1]);
@@ -883,9 +1191,9 @@ mod tests {
         );
         let mut table_ids = HashMap::new();
         table_ids.insert("orders".into(), 1u32);
-        let table_column_counts = column_counts(&[("orders", 3)]);
+        let table_schemas = table_schemas(&[("orders", &["customer_id", "id", "amount"])]);
 
-        let result = compile_to_dataflow(&plan, &table_ids, &table_column_counts).unwrap();
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
         match &result.dataflow {
             DataflowNode::Aggregate {
                 group_by,
@@ -1007,9 +1315,9 @@ mod tests {
         );
         let mut table_ids = HashMap::new();
         table_ids.insert("users".into(), 1u32);
-        let table_column_counts = column_counts(&[("users", 2)]);
+        let table_schemas = table_schemas(&[("users", &["id", "name"])]);
 
-        let result = compile_to_dataflow(&plan, &table_ids, &table_column_counts).unwrap();
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
         let mut view = MaterializedView::new(result.dataflow);
 
         // Insert rows: id=1 (match), id=2 (no match), id=3 (match)
@@ -1065,9 +1373,12 @@ mod tests {
         let mut table_ids = HashMap::new();
         table_ids.insert("employees".into(), 1u32);
         table_ids.insert("departments".into(), 2u32);
-        let table_column_counts = column_counts(&[("employees", 2), ("departments", 2)]);
+        let table_schemas = table_schemas(&[
+            ("employees", &["id", "dept_id"]),
+            ("departments", &["id", "name"]),
+        ]);
 
-        let result = compile_to_dataflow(&plan, &table_ids, &table_column_counts).unwrap();
+        let result = compile_to_dataflow(&plan, &table_ids, &table_schemas).unwrap();
         let mut view = MaterializedView::new(result.dataflow);
 
         view.on_table_change(
diff --git a/crates/database/src/expr.rs b/crates/database/src/expr.rs
index 09806a8..21faa74 100644
--- a/crates/database/src/expr.rs
+++ b/crates/database/src/expr.rs
@@ -576,8 +576,8 @@ impl Expr {
             }
             ExprInner::Between { column, low, high } => {
                 let lookup_key = column_lookup_key(column);
-                let (table, idx, dt) = get_column_info(&lookup_key)
-                    .unwrap_or((String::new(), 0, DataType::Float64));
+                let (table, idx, dt) =
+                    get_column_info(&lookup_key).unwrap_or((String::new(), 0, DataType::Float64));
                 let col_expr = AstExpr::column(&table, &column.name, idx);
                 let low_val = js_to_value(low, dt).unwrap_or(Value::Null);
                 let high_val = js_to_value(high, dt).unwrap_or(Value::Null);
@@ -589,8 +589,8 @@ impl Expr {
             }
             ExprInner::NotBetween { column, low, high } => {
                 let lookup_key = column_lookup_key(column);
-                let (table, idx, dt) = get_column_info(&lookup_key)
-                    .unwrap_or((String::new(), 0, DataType::Float64));
+                let (table, idx, dt) =
+                    get_column_info(&lookup_key).unwrap_or((String::new(), 0, DataType::Float64));
                 let col_expr = AstExpr::column(&table, &column.name, idx);
                 let low_val = js_to_value(low, dt).unwrap_or(Value::Null);
                 let high_val = js_to_value(high, dt).unwrap_or(Value::Null);
diff --git a/crates/database/src/lib.rs b/crates/database/src/lib.rs
index 5b3b5ec..22c50b8 100644
--- a/crates/database/src/lib.rs
+++ b/crates/database/src/lib.rs
@@ -42,6 +42,7 @@ pub mod convert;
 pub mod database;
 pub mod dataflow_compiler;
 pub mod expr;
+pub mod live_runtime;
 pub mod query_builder;
 pub mod query_engine;
 pub mod reactive_bridge;
@@ -55,7 +56,9 @@ pub use expr::{Column, Expr};
 pub use query_builder::{
     DeleteBuilder, InsertBuilder, PreparedSelectQuery, SelectBuilder, UpdateBuilder,
 };
-pub use reactive_bridge::{JsChangesStream, JsGraphqlSubscription, JsIvmObservableQuery, JsObservableQuery};
+pub use reactive_bridge::{
+    JsChangesStream, JsGraphqlSubscription, JsIvmObservableQuery, JsObservableQuery,
+};
 pub use table::{ForeignKeyOptions, JsTable, JsTableBuilder};
 pub use transaction::JsTransaction;
diff --git a/crates/database/src/live_runtime.rs b/crates/database/src/live_runtime.rs
new file mode 100644
index 0000000..0be72e8
--- /dev/null
+++ b/crates/database/src/live_runtime.rs
@@ -0,0 +1,699 @@
+use crate::binary_protocol::SchemaLayout;
+use crate::query_engine::{CompiledPhysicalPlan, QueryResultSummary};
+use crate::reactive_bridge::{
+    GraphqlDeltaObservable, GraphqlSubscriptionObservable, JsGraphqlSubscription,
+    JsIvmObservableQuery, JsObservableQuery, ReQueryObservable,
+};
+use alloc::rc::Rc;
+use alloc::string::String;
+use alloc::vec::Vec;
+use core::cell::RefCell;
+use cynos_core::schema::Table;
+use cynos_core::Row;
+use cynos_gql::{bind::BoundRootField, GraphqlCatalog};
+use cynos_incremental::{DataflowNode, Delta, TableId};
+use cynos_reactive::ObservableQuery;
+use cynos_storage::TableCache;
+use hashbrown::{HashMap, HashSet};
+#[cfg(target_arch = "wasm32")]
+use wasm_bindgen::prelude::{Closure, JsValue};
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub(crate) enum LiveEngineKind {
+    Snapshot,
+    Delta,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub(crate) enum LiveOutputKind {
+    RowsSnapshot,
+    RowsDelta,
+    GraphqlSnapshot,
+    GraphqlDelta,
+}
+
+#[derive(Clone, Debug, Default, PartialEq, Eq)]
+pub(crate) struct LiveDependencySet {
+    pub tables: Vec<TableId>,
+    pub root_tables: Vec<TableId>,
+}
+
+impl LiveDependencySet {
+    pub fn new(mut tables: Vec<TableId>, mut root_tables: Vec<TableId>) -> Self {
+        tables.sort_unstable();
+        tables.dedup();
+        root_tables.sort_unstable();
+        root_tables.dedup();
+        Self {
+            tables,
+            root_tables,
+        }
+    }
+
+    pub fn snapshot(tables: Vec<TableId>) -> Self {
+        Self::new(tables, Vec::new())
+    }
+
+    pub fn graphql(tables: Vec<TableId>, root_tables: Vec<TableId>) -> Self {
+        Self::new(tables, root_tables)
+    }
+}
+
+#[derive(Clone, Debug)]
+pub(crate) enum RowsProjection {
+    Full { schema: Table },
+    Projection { schema: Table, columns: Vec<usize> },
+}
+
+impl RowsProjection {
+    fn into_snapshot_js(
+        self,
+        inner: Rc<RefCell<ReQueryObservable>>,
+        binary_layout: SchemaLayout,
+    ) -> JsObservableQuery {
+        match self {
+            Self::Full { schema } => JsObservableQuery::new(inner, schema, binary_layout),
+            Self::Projection { schema, columns } => {
+                JsObservableQuery::new_with_projection(inner, schema, columns, binary_layout)
+            }
+        }
+    }
+
+    fn into_delta_js(
+        self,
+        inner: Rc<RefCell<ObservableQuery>>,
+        binary_layout: SchemaLayout,
+    ) -> JsIvmObservableQuery {
+        match self {
+            Self::Full { schema } => JsIvmObservableQuery::new(inner, schema, binary_layout),
+            Self::Projection { schema, columns } => {
+                JsIvmObservableQuery::new_with_projection(inner, schema, columns, binary_layout)
+            }
+        }
+    }
+}
+
+pub(crate) struct SnapshotKernelPlan {
+    pub compiled_plan: CompiledPhysicalPlan,
+    pub initial_rows: Vec<Rc<Row>>,
+    pub initial_summary: QueryResultSummary,
+}
+
+pub(crate) struct DeltaKernelPlan {
+    pub dataflow: DataflowNode,
+    pub initial_rows: Vec<Row>,
+}
+
+pub(crate) enum KernelPlan {
+    Snapshot(SnapshotKernelPlan),
+    Delta(DeltaKernelPlan),
+}
+
+pub(crate) struct RowsSnapshotAdapterPlan {
+    pub projection: RowsProjection,
+    pub binary_layout: SchemaLayout,
+}
+
+pub(crate) struct RowsDeltaAdapterPlan {
+    pub projection: RowsProjection,
+    pub binary_layout: SchemaLayout,
+}
+
+pub(crate) struct GraphqlSnapshotAdapterPlan {
+    pub catalog: GraphqlCatalog,
+    pub field: BoundRootField,
+    pub dependency_table_bindings: Vec<(TableId, String)>,
+}
+
+pub(crate) struct GraphqlDeltaAdapterPlan {
+    pub catalog: GraphqlCatalog,
+    pub field: BoundRootField,
+    pub dependency_table_bindings: Vec<(TableId, String)>,
+}
+
+pub(crate) enum AdapterPlan {
+    RowsSnapshot(RowsSnapshotAdapterPlan),
+    RowsDelta(RowsDeltaAdapterPlan),
+    GraphqlSnapshot(GraphqlSnapshotAdapterPlan),
+    GraphqlDelta(GraphqlDeltaAdapterPlan),
+}
+
+pub(crate) struct LivePlanDescriptor {
+    pub engine: LiveEngineKind,
+    #[allow(dead_code)]
+    pub output: LiveOutputKind,
+    pub dependencies: LiveDependencySet,
+}
+
+pub(crate) struct LivePlan {
+    pub descriptor: LivePlanDescriptor,
+    pub kernel: KernelPlan,
+    pub adapter: AdapterPlan,
+}
+
+impl LivePlan {
+    pub fn rows_snapshot(
+        dependencies: LiveDependencySet,
+        compiled_plan: CompiledPhysicalPlan,
+        initial_rows: Vec<Rc<Row>>,
+        initial_summary: QueryResultSummary,
+        projection: RowsProjection,
+        binary_layout: SchemaLayout,
+    ) -> Self {
+        Self {
+            descriptor: LivePlanDescriptor {
+                engine: LiveEngineKind::Snapshot,
+                output: LiveOutputKind::RowsSnapshot,
+                dependencies,
+            },
+            kernel: KernelPlan::Snapshot(SnapshotKernelPlan {
+                compiled_plan,
+                initial_rows,
+                initial_summary,
+            }),
+            adapter: AdapterPlan::RowsSnapshot(RowsSnapshotAdapterPlan {
+                projection,
+                binary_layout,
+            }),
+        }
+    }
+
+    pub fn rows_delta(
+        dependencies: LiveDependencySet,
+        dataflow: DataflowNode,
+        initial_rows: Vec<Row>,
+        projection: RowsProjection,
+        binary_layout: SchemaLayout,
+    ) -> Self {
+        Self {
+            descriptor: LivePlanDescriptor {
+                engine: LiveEngineKind::Delta,
+                output: LiveOutputKind::RowsDelta,
+                dependencies,
+            },
+            kernel: KernelPlan::Delta(DeltaKernelPlan {
+                dataflow,
+                initial_rows,
+            }),
+            adapter: AdapterPlan::RowsDelta(RowsDeltaAdapterPlan {
+                projection,
+                binary_layout,
+            }),
+        }
+    }
+
+    pub fn graphql_snapshot(
+        dependencies: LiveDependencySet,
+        compiled_plan: CompiledPhysicalPlan,
+        initial_rows: Vec<Rc<Row>>,
+        initial_summary: QueryResultSummary,
+        catalog: GraphqlCatalog,
+        field: BoundRootField,
+        dependency_table_bindings: Vec<(TableId, String)>,
+    ) -> Self {
+        Self {
+            descriptor: LivePlanDescriptor {
+                engine: LiveEngineKind::Snapshot,
+                output: LiveOutputKind::GraphqlSnapshot,
+                dependencies,
+            },
+            kernel: KernelPlan::Snapshot(SnapshotKernelPlan {
+                compiled_plan,
+                initial_rows,
+                initial_summary,
+            }),
+            adapter: AdapterPlan::GraphqlSnapshot(GraphqlSnapshotAdapterPlan {
+                catalog,
+                field,
+                dependency_table_bindings,
+            }),
+        }
+    }
+
+    pub fn graphql_delta(
+        dependencies: LiveDependencySet,
+        dataflow: DataflowNode,
+        initial_rows: Vec<Row>,
+        catalog: GraphqlCatalog,
+        field: BoundRootField,
+        dependency_table_bindings: Vec<(TableId, String)>,
+    ) -> Self {
+        Self {
+            descriptor: LivePlanDescriptor {
+                engine: LiveEngineKind::Delta,
+                output: LiveOutputKind::GraphqlDelta,
+                dependencies,
+            },
+            kernel: KernelPlan::Delta(DeltaKernelPlan {
+                dataflow,
+                initial_rows,
+            }),
+            adapter: AdapterPlan::GraphqlDelta(GraphqlDeltaAdapterPlan {
+                catalog,
+                field,
+                dependency_table_bindings,
+            }),
+        }
+    }
+
+    pub fn materialize_rows_snapshot(
+        self,
+        cache: Rc<RefCell<TableCache>>,
+        registry: Rc<RefCell<LiveRegistry>>,
+    ) -> JsObservableQuery {
+        let dependencies = self.descriptor.dependencies;
+        let kernel = match self.kernel {
+            KernelPlan::Snapshot(plan) => plan,
+            KernelPlan::Delta(_) => {
+                unreachable!("rows snapshot live plans must use snapshot kernel")
+            }
+        };
+        let adapter = match self.adapter {
+            AdapterPlan::RowsSnapshot(plan) => plan,
+            AdapterPlan::RowsDelta(_)
+            | AdapterPlan::GraphqlSnapshot(_)
+            | AdapterPlan::GraphqlDelta(_) => {
+                unreachable!("rows snapshot live plans must use rows snapshot adapters")
+            }
+        };
+
+        let observable = Rc::new(RefCell::new(ReQueryObservable::new_with_summary(
+            kernel.compiled_plan,
+            cache,
+            kernel.initial_rows,
+            kernel.initial_summary,
+        )));
+        registry.borrow_mut().register_snapshot(
+            SnapshotSubscription::Rows(observable.clone()),
+            &dependencies,
+        );
+        adapter
+            .projection
+            .into_snapshot_js(observable, adapter.binary_layout)
+    }
+
+    pub fn materialize_rows_delta(
+        self,
+        registry: Rc<RefCell<LiveRegistry>>,
+    ) -> JsIvmObservableQuery {
+        let dependencies = self.descriptor.dependencies;
+        let kernel = match self.kernel {
+            KernelPlan::Delta(plan) => plan,
+            KernelPlan::Snapshot(_) => unreachable!("rows delta live plans must use delta kernel"),
+        };
+        let adapter = match self.adapter {
+            AdapterPlan::RowsDelta(plan) => plan,
+            AdapterPlan::RowsSnapshot(_)
+            | AdapterPlan::GraphqlSnapshot(_)
+            | AdapterPlan::GraphqlDelta(_) => {
+                unreachable!("rows delta live plans must use rows delta adapters")
+            }
+        };
+
+        let observable = Rc::new(RefCell::new(ObservableQuery::with_initial(
+            kernel.dataflow,
+            kernel.initial_rows,
+        )));
+        registry
+            .borrow_mut()
+            .register_delta(DeltaSubscription::Rows(observable.clone()), &dependencies);
+        adapter
+            .projection
+            .into_delta_js(observable, adapter.binary_layout)
+    }
+
+    pub fn materialize_graphql_snapshot(
+        self,
+        cache: Rc<RefCell<TableCache>>,
+        registry: Rc<RefCell<LiveRegistry>>,
+    ) -> JsGraphqlSubscription {
+        let dependencies = self.descriptor.dependencies;
+        let kernel = match self.kernel {
+            KernelPlan::Snapshot(plan) => plan,
+            KernelPlan::Delta(_) => {
+                unreachable!("GraphQL snapshot live plans must use snapshot kernel")
+            }
+        };
+        let adapter = match self.adapter {
+            AdapterPlan::GraphqlSnapshot(plan) => plan,
+            AdapterPlan::RowsSnapshot(_)
+            | AdapterPlan::RowsDelta(_)
+            | AdapterPlan::GraphqlDelta(_) => {
+                unreachable!("GraphQL snapshot live plans must use GraphQL snapshot adapters")
+            }
+        };
+
+        let root_table_ids = dependencies.root_tables.iter().copied().collect();
+        let observable = Rc::new(RefCell::new(GraphqlSubscriptionObservable::new(
+            kernel.compiled_plan,
+            cache,
+            adapter.catalog,
+            adapter.field,
+            adapter.dependency_table_bindings,
+            root_table_ids,
+            kernel.initial_rows,
+            kernel.initial_summary,
+        )));
+        registry.borrow_mut().register_snapshot(
+            SnapshotSubscription::Graphql(observable.clone()),
+            &dependencies,
+        );
+        JsGraphqlSubscription::new_snapshot(observable)
+    }
+
+    pub fn materialize_graphql_delta(
+        self,
+        cache: Rc<RefCell<TableCache>>,
+        registry: Rc<RefCell<LiveRegistry>>,
+    ) -> JsGraphqlSubscription {
+        let dependencies = self.descriptor.dependencies;
+        let kernel = match self.kernel {
+            KernelPlan::Delta(plan) => plan,
+            KernelPlan::Snapshot(_) => {
+                unreachable!("GraphQL delta live plans must use delta kernel")
+            }
+        };
+        let adapter = match self.adapter {
+            AdapterPlan::GraphqlDelta(plan) => plan,
+            AdapterPlan::RowsSnapshot(_)
+            | AdapterPlan::RowsDelta(_)
+            | AdapterPlan::GraphqlSnapshot(_) => {
+                unreachable!("GraphQL delta live plans must use GraphQL delta adapters")
+            }
+        };
+
+        let observable = Rc::new(RefCell::new(GraphqlDeltaObservable::new(
+            kernel.dataflow,
+            cache,
+            adapter.catalog,
+            adapter.field,
+            adapter.dependency_table_bindings,
+            kernel.initial_rows,
+        )));
+        registry.borrow_mut().register_delta(
+            DeltaSubscription::Graphql(observable.clone()),
+            &dependencies,
+        );
+        JsGraphqlSubscription::new_delta(observable)
+    }
+}
+
+#[derive(Clone)]
+pub(crate) enum SnapshotSubscription {
+    Rows(Rc<RefCell<ReQueryObservable>>),
+    Graphql(Rc<RefCell<GraphqlSubscriptionObservable>>),
+}
+
+impl SnapshotSubscription {
+    fn subscription_count(&self) -> usize {
+        match self {
+            Self::Rows(query) => query.borrow().subscription_count(),
+            Self::Graphql(query) => query.borrow().subscription_count(),
+        }
+    }
+}
+
+#[derive(Clone)]
+pub(crate) enum DeltaSubscription {
+    Rows(Rc<RefCell<ObservableQuery>>),
+    Graphql(Rc<RefCell<GraphqlDeltaObservable>>),
+}
+
+impl DeltaSubscription {
+    fn subscription_count(&self) -> usize {
+        match self {
+            Self::Rows(query) => query.borrow().subscription_count(),
+            Self::Graphql(query) => query.borrow().subscription_count(),
+        }
+    }
+
+    fn on_table_change(&self, table_id: TableId, deltas: Vec<Delta<Row>>) {
+        match self {
+            Self::Rows(query) => query.borrow_mut().on_table_change(table_id, deltas),
+            Self::Graphql(query) => query.borrow_mut().on_table_change(table_id, deltas),
+        }
+    }
+}
+
+pub(crate) struct LiveRegistry {
+    snapshot_queries: HashMap<TableId, Vec<SnapshotSubscription>>,
+    delta_queries: HashMap<TableId, Vec<DeltaSubscription>>,
+    pending_changes: Rc<RefCell<HashMap<TableId, HashSet<u64>>>>,
+    pending_deltas: Rc<RefCell<HashMap<TableId, Vec<Delta<Row>>>>>,
+    flush_scheduled: Rc<RefCell<bool>>,
+    self_ref: Option<Rc<RefCell<LiveRegistry>>>,
+    #[cfg(target_arch = "wasm32")]
+    flush_closure: Option<Closure<dyn FnMut(JsValue)>>,
+}
+
+impl LiveRegistry {
+    pub fn new() -> Self {
+        Self {
+            snapshot_queries: HashMap::new(),
+            delta_queries: HashMap::new(),
+            pending_changes: Rc::new(RefCell::new(HashMap::new())),
+            pending_deltas: Rc::new(RefCell::new(HashMap::new())),
+            flush_scheduled: Rc::new(RefCell::new(false)),
+            self_ref: None,
+            #[cfg(target_arch = "wasm32")]
+            flush_closure: None,
+        }
+    }
+
+    pub fn set_self_ref(&mut self, self_ref: Rc<RefCell<LiveRegistry>>) {
+        self.self_ref = Some(self_ref);
+    }
+
+    pub fn register_snapshot(
+        &mut self,
+        query: SnapshotSubscription,
+        dependencies: &LiveDependencySet,
+    ) {
+        for &table_id in &dependencies.tables {
+            self.snapshot_queries
+                .entry(table_id)
+                .or_insert_with(Vec::new)
+                .push(query.clone());
+        }
+    }
+
+    pub fn register_delta(&mut self, query: DeltaSubscription, dependencies: &LiveDependencySet) {
+        for &table_id in &dependencies.tables {
+            self.delta_queries
+                .entry(table_id)
+                .or_insert_with(Vec::new)
+                .push(query.clone());
+        }
+    }
+
+    fn flush_snapshot_lane(&self, changes: HashMap<TableId, HashSet<u64>>) {
+        let mut merged_rows: HashMap<usize, (Rc<RefCell<ReQueryObservable>>, HashSet<u64>)> =
+            HashMap::new();
+        let mut merged_graphql: HashMap<
+            usize,
+            (
+                Rc<RefCell<GraphqlSubscriptionObservable>>,
+                HashMap<TableId, HashSet<u64>>,
+            ),
+        > = HashMap::new();
+
+        for (table_id, changed_ids) in changes {
+            if let Some(queries) = self.snapshot_queries.get(&table_id) {
+                for query in queries {
+                    match query {
+                        SnapshotSubscription::Rows(query) => {
+                            let entry = merged_rows
+                                .entry(Rc::as_ptr(query) as usize)
+                                .or_insert_with(|| (query.clone(), HashSet::new()));
+                            entry.1.extend(changed_ids.iter().copied());
+                        }
+                        SnapshotSubscription::Graphql(query) => {
+                            let entry = merged_graphql
+                                .entry(Rc::as_ptr(query) as usize)
+                                .or_insert_with(|| (query.clone(), HashMap::new()));
+                            entry.1.insert(table_id, changed_ids.clone());
+                        }
+                    }
+                }
+            }
+        }
+
+        for (_, (query, changed_ids)) in merged_rows {
+            query.borrow_mut().on_change(&changed_ids);
+        }
+
+        for (_, (query, changes)) in merged_graphql {
+            query.borrow_mut().on_change(&changes);
+        }
+    }
+
+    pub fn on_table_change(&mut self, table_id: TableId, changed_ids: &HashSet<u64>) {
+        {
+            let mut pending = self.pending_changes.borrow_mut();
+            pending
+                .entry(table_id)
+                .or_insert_with(HashSet::new)
+                .extend(changed_ids.iter().copied());
+        }
+
+        let mut scheduled = self.flush_scheduled.borrow_mut();
+        if !*scheduled {
+            *scheduled = true;
+            drop(scheduled);
+            self.schedule_flush();
+        }
+    }
+
+    pub fn on_table_change_delta(
+        &mut self,
+        table_id: TableId,
+        deltas: Vec<Delta<Row>>,
+        changed_ids: &HashSet<u64>,
+    ) {
+        {
+            let mut pending = self.pending_deltas.borrow_mut();
+            pending
+                .entry(table_id)
+                .or_insert_with(Vec::new)
+                .extend(deltas);
+        }
+
+        {
+            let mut pending = self.pending_changes.borrow_mut();
+            pending
+                .entry(table_id)
+                .or_insert_with(HashSet::new)
+                .extend(changed_ids.iter().copied());
+        }
+
+        let mut scheduled = self.flush_scheduled.borrow_mut();
+        if !*scheduled {
+            *scheduled = true;
+            drop(scheduled);
+            self.schedule_flush();
+        }
+    }
+
+    fn flush_delta_lane(&self, delta_changes: &HashMap<TableId, Vec<Delta<Row>>>) {
+        for (table_id, deltas) in delta_changes {
+            if let Some(queries) = self.delta_queries.get(table_id) {
+                for query in queries {
+                    query.on_table_change(*table_id, deltas.clone());
+                }
+            }
+        }
+    }
+
+    fn schedule_flush(&mut self) {
+        #[cfg(target_arch = "wasm32")]
+        {
+            if self.flush_closure.is_none() {
+                if let Some(ref self_ref) = self.self_ref {
+                    let self_ref_clone = self_ref.clone();
+                    let pending_changes = self.pending_changes.clone();
+                    let pending_deltas = self.pending_deltas.clone();
+                    let flush_scheduled = self.flush_scheduled.clone();
+
+                    self.flush_closure = Some(Closure::new(move |_: JsValue| {
+                        *flush_scheduled.borrow_mut() = false;
+
+                        let delta_changes: HashMap<TableId, Vec<Delta<Row>>> =
+                            pending_deltas.borrow_mut().drain().collect();
+                        let changes: HashMap<TableId, HashSet<u64>> =
+                            pending_changes.borrow_mut().drain().collect();
+
+                        {
+                            let registry = self_ref_clone.borrow();
+                            registry.flush_delta_lane(&delta_changes);
+                            registry.flush_snapshot_lane(changes);
+                        }
+
+                        {
+                            let mut registry = self_ref_clone.borrow_mut();
+                            registry.gc_dead_queries();
+                        }
+                    }));
+                }
+            }
+
+            if let Some(ref closure) = self.flush_closure {
+                let promise = js_sys::Promise::resolve(&JsValue::UNDEFINED);
+                let _ = promise.then(closure);
+            }
+        }
+
+        #[cfg(not(target_arch = "wasm32"))]
+        {
+            self.flush_sync();
+        }
+    }
+
+    #[cfg(not(target_arch = "wasm32"))]
+    fn flush_sync(&mut self) {
+        *self.flush_scheduled.borrow_mut() = false;
+
+        let delta_changes: HashMap<TableId, Vec<Delta<Row>>> =
+            self.pending_deltas.borrow_mut().drain().collect();
+        self.flush_delta_lane(&delta_changes);
+
+        let changes: HashMap<TableId, HashSet<u64>> =
+            self.pending_changes.borrow_mut().drain().collect();
+        self.flush_snapshot_lane(changes);
+
+        self.gc_dead_queries();
+    }
+
+    #[allow(dead_code)]
+    pub fn flush(&mut self) {
+        *self.flush_scheduled.borrow_mut() = false;
+
+        let delta_changes: HashMap<TableId, Vec<Delta<Row>>> =
+            self.pending_deltas.borrow_mut().drain().collect();
+        self.flush_delta_lane(&delta_changes);
+
+        let changes: HashMap<TableId, HashSet<u64>> =
+            self.pending_changes.borrow_mut().drain().collect();
+        self.flush_snapshot_lane(changes);
+
+        self.gc_dead_queries();
+    }
+
+    fn gc_dead_queries(&mut self) {
+        for queries in self.snapshot_queries.values_mut() {
+            queries.retain(|query| query.subscription_count() > 0);
+        }
+        self.snapshot_queries
+            .retain(|_, queries| !queries.is_empty());
+
+        for queries in self.delta_queries.values_mut() {
+            queries.retain(|query| query.subscription_count() > 0);
+        }
+        self.delta_queries.retain(|_, queries| !queries.is_empty());
+    }
+
+    #[allow(dead_code)]
+    pub fn query_count(&self) -> usize {
+        let snapshot_count: usize = self
+            .snapshot_queries
+            .values()
+            .map(|queries| queries.len())
+            .sum();
+        let delta_count: usize = self
+            .delta_queries
+            .values()
+            .map(|queries| queries.len())
+            .sum();
+        snapshot_count + delta_count
+    }
+
+    #[allow(dead_code)]
+    pub fn has_pending_changes(&self) -> bool {
+        !self.pending_changes.borrow().is_empty() || !self.pending_deltas.borrow().is_empty()
+    }
+}
+
+impl Default for LiveRegistry {
+    fn default() -> Self {
+        Self::new()
+    }
+}
diff --git a/crates/database/src/query_builder.rs b/crates/database/src/query_builder.rs
index 08b95c8..ef0a2a3 100644
--- a/crates/database/src/query_builder.rs
+++ b/crates/database/src/query_builder.rs
@@ -7,14 +7,13 @@
 use crate::binary_protocol::{SchemaLayout, SchemaLayoutCache};
 use crate::convert::{js_array_to_rows, js_to_value, projected_rows_to_js_array, rows_to_js_array};
 use crate::dataflow_compiler::compile_to_dataflow;
 use crate::expr::{Expr, ExprInner};
+use crate::live_runtime::{LiveDependencySet, LivePlan, LiveRegistry, RowsProjection};
 use crate::query_engine::{
     compile_cached_plan, compile_plan, execute_compiled_physical_plan,
     execute_compiled_physical_plan_with_summary, execute_physical_plan, execute_plan,
     explain_plan, CompiledPhysicalPlan,
 };
-use crate::reactive_bridge::{
-    JsChangesStream, JsIvmObservableQuery, JsObservableQuery, QueryRegistry, ReQueryObservable,
-};
+use crate::reactive_bridge::{JsChangesStream, JsIvmObservableQuery, JsObservableQuery};
 use crate::JsSortOrder;
 use alloc::boxed::Box;
 use alloc::rc::Rc;
@@ -27,7 +26,7 @@
 use cynos_incremental::Delta;
 use cynos_query::ast::{AggregateFunc, SortOrder};
 use cynos_query::plan_cache::{compute_plan_fingerprint, PlanCache};
 use cynos_query::planner::LogicalPlan;
-use cynos_reactive::{ObservableQuery, TableId};
+use cynos_reactive::TableId;
 use cynos_storage::TableCache;
 use wasm_bindgen::prelude::*;
@@ -35,7 +34,7 @@
 #[wasm_bindgen]
 pub struct SelectBuilder {
     cache: Rc<RefCell<TableCache>>,
-    query_registry: Rc<RefCell<QueryRegistry>>,
+    query_registry: Rc<RefCell<LiveRegistry>>,
     table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
     schema_layout_cache: Rc<RefCell<SchemaLayoutCache>>,
     plan_cache: Rc<RefCell<PlanCache>>,
@@ -188,7 +187,7 @@ enum JoinType {
 impl SelectBuilder {
     pub(crate) fn new(
         cache: Rc<RefCell<TableCache>>,
-        query_registry: Rc<RefCell<QueryRegistry>>,
+        query_registry: Rc<RefCell<LiveRegistry>>,
         table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
         schema_layout_cache: Rc<RefCell<SchemaLayoutCache>>,
         plan_cache: Rc<RefCell<PlanCache>>,
@@ -1463,52 +1462,49 @@ impl SelectBuilder {
         let initial_output = execute_compiled_physical_plan_with_summary(&cache, &compiled_plan)
             .map_err(|e| JsValue::from_str(&alloc::format!("Query execution error: {:?}", e)))?;
 
+        let dependencies = {
+            let table_id_map = self.table_id_map.borrow();
+            let table_ids = logical_plan
+                .collect_tables()
+                .into_iter()
+                .map(|table| {
+                    table_id_map.get(&table).copied().ok_or_else(|| {
+                        JsValue::from_str(&alloc::format!("Table ID not found: {}", table))
+                    })
+                })
+                .collect::<Result<Vec<_>, _>>()?;
+            LiveDependencySet::snapshot(table_ids)
+        };
+
+        let projection = if self.frozen_base.is_some()
+            || !self.aggregates.is_empty()
+            || !self.group_by_cols.is_empty()
+        {
+            RowsProjection::Projection {
+                schema: output.schema,
+                columns: output_columns,
+            }
+        } else if let Some(cols) = self.parse_columns() {
+            RowsProjection::Projection {
+                schema,
+                columns: cols,
+            }
+        } else {
+            RowsProjection::Full { schema }
+        };
+
+        drop(cache); // Release borrow
-        // Create re-query observable with cached compiled plan
-        let observable = ReQueryObservable::new_with_summary(
+        let live_plan = LivePlan::rows_snapshot(
+            dependencies,
             compiled_plan,
-            cache_ref.clone(),
             initial_output.rows,
             initial_output.summary,
+            projection,
+            binary_layout,
         );
-        let observable_rc = Rc::new(RefCell::new(observable));
-        {
-            let table_id_map = self.table_id_map.borrow();
-            let mut registry = self.query_registry.borrow_mut();
-            for table in logical_plan.collect_tables() {
-                let table_id = table_id_map.get(&table).copied().ok_or_else(|| {
-                    JsValue::from_str(&alloc::format!("Table ID not found: {}", table))
-                })?;
-                registry.register(observable_rc.clone(), table_id);
-            }
-        }
-
-        if self.frozen_base.is_some() {
-            Ok(JsObservableQuery::new_with_projection(
-                observable_rc,
-                output.schema,
-                output_columns,
-                binary_layout,
-            ))
-        } else if !self.aggregates.is_empty() || !self.group_by_cols.is_empty() {
-            Ok(JsObservableQuery::new_with_projection(
-                observable_rc,
-                output.schema,
-                output_columns,
-                binary_layout,
-            ))
-        } else if let Some(cols) = self.parse_columns() {
-            Ok(JsObservableQuery::new_with_projection(
-                observable_rc,
-                schema,
-                cols,
-                binary_layout,
-            ))
-        } else {
-            Ok(JsObservableQuery::new(observable_rc, schema, binary_layout))
-        }
+        Ok(live_plan.materialize_rows_snapshot(cache_ref.clone(), self.query_registry.clone()))
     }
 
     /// Creates a changes stream (initial + incremental).
@@ -1560,18 +1556,18 @@ impl SelectBuilder {
             SchemaLayout::from_schemas(&schemas)
         };
         let physical_plan = compile_plan(&cache, table_name, logical_plan);
-        let mut table_column_counts = hashbrown::HashMap::new();
-        table_column_counts.insert(table_name.clone(), store.schema().columns().len());
+        let mut table_schemas = hashbrown::HashMap::new();
+        table_schemas.insert(table_name.clone(), store.schema().clone());
         for join in &self.joins {
             let join_store = cache.get_table(&join.table).ok_or_else(|| {
                 JsValue::from_str(&alloc::format!("Join table not found: {}", join.table))
            })?;
-            table_column_counts.insert(join.table.clone(), join_store.schema().columns().len());
+            table_schemas.insert(join.table.clone(), join_store.schema().clone());
         }
 
         // Compile physical plan to dataflow — errors if not incrementalizable
         let table_id_map = self.table_id_map.borrow();
-        let compile_result = compile_to_dataflow(&physical_plan, &table_id_map, &table_column_counts)
+        let compile_result = compile_to_dataflow(&physical_plan, &table_id_map, &table_schemas)
             .ok_or_else(|| JsValue::from_str(
                 "Query is not incrementalizable (contains ORDER BY, LIMIT, or other non-streamable operators). Use observe() instead."
             ))?;
@@ -1580,45 +1576,38 @@ impl SelectBuilder {
         let initial_rows = execute_physical_plan(&cache, &physical_plan)
             .map_err(|e| JsValue::from_str(&alloc::format!("Query execution error: {:?}", e)))?;
 
+        let dependencies =
+            LiveDependencySet::snapshot(compile_result.table_ids.values().copied().collect());
         drop(cache);
         drop(table_id_map);
 
-        // Convert Rc<Row> → Row for ObservableQuery
         let initial_owned: Vec<Row> = initial_rows.iter().map(|rc| (**rc).clone()).collect();
-
-        // Create IVM observable with dataflow and initial result
-        let observable = ObservableQuery::with_initial(compile_result.dataflow, initial_owned);
-        let observable_rc = Rc::new(RefCell::new(observable));
-
-        // Register with query registry for IVM delta propagation
-        self.query_registry
-            .borrow_mut()
-            .register_ivm(observable_rc.clone());
-
-        if self.frozen_base.is_some()
+        let projection = if self.frozen_base.is_some()
            || !self.aggregates.is_empty()
            || !self.group_by_cols.is_empty()
         {
-            Ok(JsIvmObservableQuery::new_with_projection(
-                observable_rc,
-                output.schema,
-                output_columns,
-                binary_layout,
-            ))
+            RowsProjection::Projection {
+                schema: output.schema,
+                columns: output_columns,
+            }
         } else if let Some(cols) = self.parse_columns() {
-            Ok(JsIvmObservableQuery::new_with_projection(
-                observable_rc,
+            RowsProjection::Projection {
                 schema,
-                cols,
-                binary_layout,
-            ))
+                columns: cols,
+            }
         } else {
-            Ok(JsIvmObservableQuery::new(
-                observable_rc,
-                schema,
-                binary_layout,
-            ))
-        }
+            RowsProjection::Full { schema }
+        };
+
+        let live_plan = LivePlan::rows_delta(
+            dependencies,
+            compile_result.dataflow,
+            initial_owned,
+            projection,
+            binary_layout,
+        );
+
+        Ok(live_plan.materialize_rows_delta(self.query_registry.clone()))
     }
 
     /// Gets the schema layout for binary decoding.
@@ -1716,7 +1705,7 @@ impl PreparedSelectQuery {
 #[wasm_bindgen]
 pub struct InsertBuilder {
     cache: Rc<RefCell<TableCache>>,
-    query_registry: Rc<RefCell<QueryRegistry>>,
+    query_registry: Rc<RefCell<LiveRegistry>>,
     table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
     table_name: String,
     values_data: Option<JsValue>,
@@ -1725,7 +1714,7 @@ pub struct InsertBuilder {
 impl InsertBuilder {
     pub(crate) fn new(
         cache: Rc<RefCell<TableCache>>,
-        query_registry: Rc<RefCell<QueryRegistry>>,
+        query_registry: Rc<RefCell<LiveRegistry>>,
         table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
         table: &str,
     ) -> Self {
@@ -1789,7 +1778,7 @@ impl InsertBuilder {
             drop(cache); // Release borrow before notifying
             self.query_registry
                 .borrow_mut()
-                .on_table_change_ivm(table_id, deltas, &inserted_ids);
+                .on_table_change_delta(table_id, deltas, &inserted_ids);
         }
 
         Ok(JsValue::from_f64(row_count as f64))
@@ -1800,7 +1789,7 @@
 #[wasm_bindgen]
 pub struct UpdateBuilder {
     cache: Rc<RefCell<TableCache>>,
-    query_registry: Rc<RefCell<QueryRegistry>>,
+    query_registry: Rc<RefCell<LiveRegistry>>,
     table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
     table_name: String,
     set_values: Vec<(String, JsValue)>,
@@ -1810,7 +1799,7 @@ pub struct UpdateBuilder {
 impl UpdateBuilder {
     pub(crate) fn new(
         cache: Rc<RefCell<TableCache>>,
-        query_registry: Rc<RefCell<QueryRegistry>>,
+        query_registry: Rc<RefCell<LiveRegistry>>,
         table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
         table: &str,
     ) -> Self {
@@ -1951,7 +1940,7 @@ impl UpdateBuilder {
             drop(cache);
             self.query_registry
                 .borrow_mut()
-                .on_table_change_ivm(table_id, deltas, &updated_ids);
+                .on_table_change_delta(table_id, deltas, &updated_ids);
         }
 
         Ok(JsValue::from_f64(update_count as f64))
@@ -1962,7 +1951,7 @@
 #[wasm_bindgen]
 pub struct DeleteBuilder {
     cache: Rc<RefCell<TableCache>>,
-    query_registry: Rc<RefCell<QueryRegistry>>,
+    query_registry: Rc<RefCell<LiveRegistry>>,
     table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
     table_name: String,
     where_clause: Option<Expr>,
@@ -1971,7 +1960,7 @@ pub struct DeleteBuilder {
 impl DeleteBuilder {
     pub(crate) fn new(
         cache: Rc<RefCell<TableCache>>,
-        query_registry: Rc<RefCell<QueryRegistry>>,
+        query_registry: Rc<RefCell<LiveRegistry>>,
         table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
         table: &str,
     ) -> Self {
@@ -2036,7 +2025,7 @@ impl DeleteBuilder {
         // Notify query registry
         if let Some(table_id) = self.table_id_map.borrow().get(&self.table_name).copied() {
-            self.query_registry.borrow_mut().on_table_change_ivm(
+            self.query_registry.borrow_mut().on_table_change_delta(
                 table_id,
                 deltas,
                 &deleted_ids,
@@ -2097,7 +2086,7 @@ impl DeleteBuilder {
         if let Some(table_id) = self.table_id_map.borrow().get(&self.table_name).copied() {
             self.query_registry
                 .borrow_mut()
-                .on_table_change_ivm(table_id, deltas, &deleted_ids);
+                .on_table_change_delta(table_id, deltas, &deleted_ids);
         }
 
         Ok(JsValue::from_f64(delete_count as f64))
@@ -2654,7 +2643,7 @@ mod tests {
     struct TestSelectContext {
         cache: Rc<RefCell<TableCache>>,
-        query_registry: Rc<RefCell<QueryRegistry>>,
+        query_registry: Rc<RefCell<LiveRegistry>>,
         table_id_map: Rc<RefCell<HashMap<String, TableId>>>,
         schema_layout_cache: Rc<RefCell<SchemaLayoutCache>>,
         plan_cache: Rc<RefCell<PlanCache>>,
@@ -2733,7 +2722,7 @@ mod tests {
         }
 
         let cache = Rc::new(RefCell::new(cache));
-        let query_registry = Rc::new(RefCell::new(QueryRegistry::new()));
+        let query_registry = Rc::new(RefCell::new(LiveRegistry::new()));
         query_registry
             .borrow_mut()
             .set_self_ref(query_registry.clone());
@@ -2799,7 +2788,7 @@ mod tests {
         }
 
         let cache = Rc::new(RefCell::new(cache));
-        let query_registry = Rc::new(RefCell::new(QueryRegistry::new()));
+        let query_registry = Rc::new(RefCell::new(LiveRegistry::new()));
         query_registry
             .borrow_mut()
             .set_self_ref(query_registry.clone());
diff --git a/crates/database/src/reactive_bridge.rs b/crates/database/src/reactive_bridge.rs
index 4f68412..07216ef 100644
--- a/crates/database/src/reactive_bridge.rs
+++ b/crates/database/src/reactive_bridge.rs
@@ -22,13 +22,240 @@
 use alloc::string::String;
 use alloc::vec::Vec;
 use core::cell::RefCell;
 use cynos_core::schema::Table;
-use cynos_core::Row;
-use cynos_incremental::{Delta, TableId};
+use cynos_core::{Row, Value};
+use cynos_incremental::{DataflowNode, Delta, MaterializedView, TableId};
 use cynos_reactive::ObservableQuery;
 use cynos_storage::TableCache;
 use hashbrown::{HashMap, HashSet};
 use wasm_bindgen::prelude::*;
 
+fn collect_changed_rows(
+    cache: &Rc<RefCell<TableCache>>,
+    compiled_plan: &CompiledPhysicalPlan,
+    changed_ids: &HashSet<u64>,
+) -> Option<Vec<(u64, Option<Rc<Row>>)>> {
+    let table_name = compiled_plan.reactive_patch_table()?;
+    let cache = cache.borrow();
+    let store = cache.get_table(table_name)?;
+    let mut changed_rows = Vec::with_capacity(changed_ids.len());
+    for &row_id in changed_ids {
+        changed_rows.push((row_id, store.get(row_id)));
+    }
+    Some(changed_rows)
+}
+
+fn query_results_equal(
+    old_summary: &QueryResultSummary,
+    new_summary: &QueryResultSummary,
+    old: &[Rc<Row>],
+    new: &[Rc<Row>],
+) -> bool {
+    if old_summary != new_summary || old.len() != new.len() {
+        return false;
+    }
+
+    old.iter().zip(new.iter()).all(|(old_row, new_row)| {
+        Rc::ptr_eq(old_row, new_row)
+            || (old_row.id() == new_row.id()
+                && old_row.version() == new_row.version()
+                && old_row.values() == new_row.values())
+    })
+}
+
+#[derive(Default)]
+struct GraphqlSubscribers {
+    callbacks: Vec<(usize, Box<dyn Fn(&cynos_gql::GraphqlResponse)>)>,
+    keepalive_ids: HashSet<usize>,
+    next_sub_id: usize,
+}
+
+impl GraphqlSubscribers {
+    fn add_keepalive(&mut self) -> usize {
+        let id = self.next_sub_id;
+        self.next_sub_id += 1;
+        self.keepalive_ids.insert(id);
+        id
+    }
+
+    fn add_callback<F>(&mut self, callback: F) -> usize
+    where
+        F: Fn(&cynos_gql::GraphqlResponse) + 'static,
+    {
+        let id = self.next_sub_id;
+        self.next_sub_id += 1;
+        self.callbacks.push((id, Box::new(callback)));
+        id
+    }
+
+    fn remove(&mut self, id: usize) -> bool {
+        if self.keepalive_ids.remove(&id) {
+            return true;
+        }
+
+        let len_before = self.callbacks.len();
+        self.callbacks.retain(|(sub_id, _)| *sub_id != id);
+        self.callbacks.len() < len_before
+    }
+
+    fn total_count(&self) -> usize {
+        self.keepalive_ids.len() + self.callbacks.len()
+    }
+
+    fn callback_count(&self) -> usize {
+        self.callbacks.len()
+    }
+
+    fn emit(&self, response: &cynos_gql::GraphqlResponse) {
+        for (_, callback) in &self.callbacks {
+            callback(response);
+        }
+    }
+}
+
+fn build_graphql_response(
+    cache: &TableCache,
+    catalog: &cynos_gql::GraphqlCatalog,
+    field: &cynos_gql::bind::BoundRootField,
+    rows: &[Rc<Row>],
+) -> Result {
+    let root_field = cynos_gql::execute::render_root_field_rows(cache, catalog, field, rows)?;
+    Ok(cynos_gql::GraphqlResponse::new(
+        cynos_gql::ResponseValue::object(alloc::vec![root_field]),
+    ))
+}
+
+fn build_graphql_response_batched(
+    cache: &TableCache,
+    catalog: &cynos_gql::GraphqlCatalog,
+    field: &cynos_gql::bind::BoundRootField,
+    plan: &cynos_gql::GraphqlBatchPlan,
+    state: &mut cynos_gql::GraphqlBatchState,
+    rows: &[Rc<Row>],
+) -> Result {
+    cynos_gql::batch_render::render_graphql_response(cache, catalog, field, plan, state, rows)
+}
+
+fn build_graphql_response_from_owned_rows(
+    cache: &TableCache,
+    catalog: &cynos_gql::GraphqlCatalog,
+    field: &cynos_gql::bind::BoundRootField,
+    rows: &[Row],
+) -> Result {
+    let rows: Vec<Rc<Row>> = rows.iter().cloned().map(Rc::new).collect();
+    build_graphql_response(cache, catalog, field, &rows)
+}
+
+fn build_graphql_response_from_owned_rows_batched(
+    cache: &TableCache,
+    catalog: &cynos_gql::GraphqlCatalog,
+    field: &cynos_gql::bind::BoundRootField,
+    plan: &cynos_gql::GraphqlBatchPlan,
+    state: &mut cynos_gql::GraphqlBatchState,
+    rows: &[Row],
+) -> Result {
+    let rows: Vec<Rc<Row>> = rows.iter().cloned().map(Rc::new).collect();
+    build_graphql_response_batched(cache, catalog, field, plan, state, &rows)
+}
+
+fn root_field_has_relations(field: &cynos_gql::bind::BoundRootField) -> bool {
+    match &field.kind {
+        cynos_gql::bind::BoundRootFieldKind::Typename => false,
+        cynos_gql::bind::BoundRootFieldKind::Collection { selection, .. }
+        | cynos_gql::bind::BoundRootFieldKind::ByPk { selection, .. }
+        | cynos_gql::bind::BoundRootFieldKind::Insert { selection, .. }
+        | cynos_gql::bind::BoundRootFieldKind::Update { selection, .. }
+        | cynos_gql::bind::BoundRootFieldKind::Delete { selection, ..
} => { + selection_has_relations(selection) + } + } +} + +fn selection_has_relations(selection: &cynos_gql::bind::BoundSelectionSet) -> bool { + selection.fields.iter().any(field_has_relations) +} + +fn field_has_relations(field: &cynos_gql::bind::BoundField) -> bool { + matches!( + field, + cynos_gql::bind::BoundField::ForwardRelation { .. } + | cynos_gql::bind::BoundField::ReverseRelation { .. } + ) +} + +fn build_snapshot_batch_invalidation( + table_names: &HashMap, + changes: &HashMap>, + root_changed: bool, +) -> Result { + let mut changed_tables = Vec::with_capacity(changes.len()); + let mut dirty_table_rows = HashMap::new(); + for table_id in changes.keys() { + let Some(table_name) = table_names.get(table_id) else { + return Err(()); + }; + changed_tables.push(table_name.clone()); + if let Some(changed_ids) = changes.get(table_id) { + dirty_table_rows.insert(table_name.clone(), changed_ids.clone()); + } + } + + Ok(cynos_gql::GraphqlInvalidation { + root_changed, + changed_tables, + dirty_edge_keys: HashMap::new(), + dirty_table_rows, + }) +} + +fn build_delta_batch_invalidation( + plan: &cynos_gql::GraphqlBatchPlan, + table_names: &HashMap, + table_id: TableId, + deltas: &[Delta], + root_changed: bool, +) -> Result { + let Some(table_name) = table_names.get(&table_id) else { + return Err(()); + }; + let dirty_row_ids: HashSet = deltas.iter().map(|delta| delta.data.id()).collect(); + + let mut invalidation = cynos_gql::GraphqlInvalidation { + root_changed, + changed_tables: alloc::vec![table_name.clone()], + dirty_edge_keys: HashMap::new(), + dirty_table_rows: HashMap::from([(table_name.clone(), dirty_row_ids)]), + }; + + for edge_id in plan.edges_for_table(table_name) { + let edge = plan.edge(*edge_id); + let key_column_index = match edge.kind { + cynos_gql::render_plan::RelationEdgeKind::Forward => edge.relation.parent_column_index, + cynos_gql::render_plan::RelationEdgeKind::Reverse => edge.relation.child_column_index, + }; + + let mut dirty_keys = 
HashSet::::new(); + for delta in deltas { + let Some(value) = delta.data.get(key_column_index).cloned() else { + continue; + }; + if value.is_null() { + continue; + } + dirty_keys.insert(value); + } + + if !dirty_keys.is_empty() { + invalidation.dirty_edge_keys.insert(*edge_id, dirty_keys); + } + } + + Ok(invalidation) +} + +fn graphql_response_to_js_value(response: &cynos_gql::GraphqlResponse) -> JsValue { + gql_response_to_js(response).unwrap_or(JsValue::NULL) +} + /// A re-query based observable that re-executes the query on each change. /// This leverages the query optimizer and indexes for optimal performance. /// The physical plan and lowered execution artifact are cached to avoid repeated @@ -125,7 +352,9 @@ impl ReQueryObservable { return; } - if let Some(changed_rows) = self.collect_fast_path_rows(changed_ids) { + if let Some(changed_rows) = + collect_changed_rows(&self.cache, &self.compiled_plan, changed_ids) + { match self .compiled_plan .apply_reactive_patch(&mut self.result, &changed_rows) @@ -148,7 +377,7 @@ impl ReQueryObservable { match execute_compiled_physical_plan_with_summary(&cache, &self.compiled_plan) { Ok(output) => { // Only notify if result changed - if !Self::results_equal( + if !query_results_equal( &self.result_summary, &output.summary, &self.result, @@ -167,322 +396,463 @@ impl ReQueryObservable { } } } +} - fn collect_fast_path_rows( - &self, - changed_ids: &HashSet, - ) -> Option>)>> { - let table_name = self.compiled_plan.reactive_patch_table()?; - let cache = self.cache.borrow(); - let store = cache.get_table(table_name)?; - let mut changed_rows = Vec::with_capacity(changed_ids.len()); - for &row_id in changed_ids { - changed_rows.push((row_id, store.get(row_id))); - } - Some(changed_rows) - } - - /// Compares two result sets using a precomputed summary captured during execution. - /// This keeps the unchanged path O(1) after the query has already been re-executed. 
- fn results_equal( - old_summary: &QueryResultSummary, - new_summary: &QueryResultSummary, - old: &[Rc], - new: &[Rc], - ) -> bool { - if old_summary != new_summary || old.len() != new.len() { - return false; +pub struct GraphqlSubscriptionObservable { + compiled_plan: CompiledPhysicalPlan, + cache: Rc>, + catalog: cynos_gql::GraphqlCatalog, + field: cynos_gql::bind::BoundRootField, + batch_plan: Option, + batch_state: cynos_gql::GraphqlBatchState, + dependency_table_names: HashMap, + root_table_ids: HashSet, + root_rows: Vec>, + root_summary: QueryResultSummary, + response: Option, + response_dirty: bool, + subscribers: GraphqlSubscribers, +} + +impl GraphqlSubscriptionObservable { + pub fn new( + compiled_plan: CompiledPhysicalPlan, + cache: Rc>, + catalog: cynos_gql::GraphqlCatalog, + field: cynos_gql::bind::BoundRootField, + dependency_table_bindings: Vec<(TableId, String)>, + root_table_ids: HashSet, + initial_rows: Vec>, + initial_summary: QueryResultSummary, + ) -> Self { + Self { + compiled_plan, + cache, + batch_plan: cynos_gql::compile_batch_plan(&catalog, &field) + .ok() + .filter(|plan| plan.has_relations()), + batch_state: cynos_gql::GraphqlBatchState::default(), + dependency_table_names: dependency_table_bindings.into_iter().collect(), + catalog, + field, + root_table_ids, + root_rows: initial_rows, + root_summary: initial_summary, + response: None, + response_dirty: true, + subscribers: GraphqlSubscribers::default(), } + } - old.iter().zip(new.iter()).all(|(old_row, new_row)| { - Rc::ptr_eq(old_row, new_row) - || (old_row.id() == new_row.id() - && old_row.version() == new_row.version() - && old_row.values() == new_row.values()) - }) + pub fn attach_keepalive(&mut self) -> usize { + self.subscribers.add_keepalive() } -} -/// Registry for tracking re-query observables and routing table changes. -/// Supports batching of changes to avoid redundant re-queries during rapid updates. 
-pub struct QueryRegistry { - /// Map from table ID to list of queries that depend on it - queries: HashMap>>>, - /// Map from table ID to IVM-based queries - ivm_queries: HashMap>>>, - /// Pending changes to be flushed (table_id -> accumulated changed_ids) - pending_changes: Rc>>>, - /// Pending IVM deltas (table_id -> accumulated deltas) - pending_ivm_deltas: Rc>>>>, - /// Whether a flush is already scheduled - flush_scheduled: Rc>, - /// Self reference for scheduling flush callback - self_ref: Option>>, - /// Reusable flush closure to avoid Closure::once + forget() leak per DML - #[cfg(target_arch = "wasm32")] - flush_closure: Option>, -} + pub fn response_js_value(&mut self) -> JsValue { + if self.response.is_some() && !self.response_dirty { + return graphql_response_to_js_value(self.response.as_ref().unwrap()); + } -impl QueryRegistry { - /// Creates a new query registry. - pub fn new() -> Self { - Self { - queries: HashMap::new(), - ivm_queries: HashMap::new(), - pending_changes: Rc::new(RefCell::new(HashMap::new())), - pending_ivm_deltas: Rc::new(RefCell::new(HashMap::new())), - flush_scheduled: Rc::new(RefCell::new(false)), - self_ref: None, - #[cfg(target_arch = "wasm32")] - flush_closure: None, + if self.subscribers.callback_count() == 0 { + return self.render_response_js_value(); + } + + match self.current_response() { + Some(response) => graphql_response_to_js_value(response), + None => JsValue::NULL, } } - /// Sets the self reference for scheduling flush callbacks. - /// Must be called after wrapping in Rc>. - pub fn set_self_ref(&mut self, self_ref: Rc>) { - self.self_ref = Some(self_ref); + pub fn subscribe( + &mut self, + callback: F, + ) -> usize { + self.subscribers.add_callback(callback) } - /// Registers a re-query observable with its dependent table. 
- pub fn register(&mut self, query: Rc>, table_id: TableId) { - self.queries - .entry(table_id) - .or_insert_with(Vec::new) - .push(query); + pub fn unsubscribe(&mut self, id: usize) -> bool { + self.subscribers.remove(id) } - /// Registers an IVM-based observable query. - /// The query's dependencies are automatically extracted from its dataflow. - pub fn register_ivm(&mut self, query: Rc>) { - let deps: Vec = query.borrow().dependencies().to_vec(); - for table_id in deps { - self.ivm_queries - .entry(table_id) - .or_insert_with(Vec::new) - .push(query.clone()); - } + pub fn subscription_count(&self) -> usize { + self.subscribers.total_count() } - fn flush_requery_changes(&self, changes: HashMap>) { - let mut merged: HashMap>, HashSet)> = - HashMap::new(); + pub fn listener_count(&self) -> usize { + self.subscribers.callback_count() + } + pub fn on_change(&mut self, changes: &HashMap>) { + if self.subscribers.total_count() == 0 { + return; + } + + let mut root_changed_ids = HashSet::new(); + let mut saw_nested_change = false; for (table_id, changed_ids) in changes { - if let Some(queries) = self.queries.get(&table_id) { - for query in queries { - let key = Rc::as_ptr(query) as usize; - let entry = merged - .entry(key) - .or_insert_with(|| (query.clone(), HashSet::new())); - entry.1.extend(changed_ids.iter().copied()); + if self.root_table_ids.contains(table_id) { + root_changed_ids.extend(changed_ids.iter().copied()); + } else { + saw_nested_change = true; + } + } + + let mut root_changed = false; + if !root_changed_ids.is_empty() { + root_changed = match self.refresh_root_rows(&root_changed_ids) { + Some(changed) => changed, + None => return, + }; + } + + if !root_changed && !saw_nested_change { + return; + } + + if let Some(plan) = self.batch_plan.as_ref() { + match build_snapshot_batch_invalidation( + &self.dependency_table_names, + changes, + root_changed, + ) { + Ok(invalidation) => self.batch_state.apply_invalidation(plan, &invalidation), + Err(()) => { + 
self.batch_state = cynos_gql::GraphqlBatchState::default(); } } } + self.response_dirty = true; + if self.subscribers.callback_count() == 0 { + return; + } - for (_, (query, changed_ids)) in merged { - query.borrow_mut().on_change(&changed_ids); + if let Some(changed) = self.materialize_response_if_dirty() { + if changed { + if let Some(response) = self.response.as_ref() { + self.subscribers.emit(response); + } + } } } - /// Handles table changes by batching and scheduling a flush. - /// Multiple rapid changes are coalesced into a single re-query/propagation. - pub fn on_table_change(&mut self, table_id: TableId, changed_ids: &HashSet) { - // Accumulate changes for re-query observables + fn refresh_root_rows(&mut self, changed_ids: &HashSet) -> Option { + if let Some(changed_rows) = + collect_changed_rows(&self.cache, &self.compiled_plan, changed_ids) { - let mut pending = self.pending_changes.borrow_mut(); - pending - .entry(table_id) - .or_insert_with(HashSet::new) - .extend(changed_ids.iter().copied()); + match self + .compiled_plan + .apply_reactive_patch(&mut self.root_rows, &changed_rows) + { + Some(true) => { + self.root_summary = QueryResultSummary::from_rows(&self.root_rows); + return Some(true); + } + Some(false) => return Some(false), + None => {} + } } - // Schedule flush if not already scheduled - let mut scheduled = self.flush_scheduled.borrow_mut(); - if !*scheduled { - *scheduled = true; - drop(scheduled); - self.schedule_flush(); + let cache = self.cache.borrow(); + let output = + execute_compiled_physical_plan_with_summary(&cache, &self.compiled_plan).ok()?; + if query_results_equal( + &self.root_summary, + &output.summary, + &self.root_rows, + &output.rows, + ) { + return Some(false); } + + self.root_rows = output.rows; + self.root_summary = output.summary; + Some(true) } - /// Handles table changes with IVM deltas. - /// This is the new DBSP-based path that propagates deltas incrementally. 
- pub fn on_table_change_ivm( - &mut self, - table_id: TableId, - deltas: Vec>, - changed_ids: &HashSet, - ) { - // Accumulate IVM deltas - { - let mut pending = self.pending_ivm_deltas.borrow_mut(); - pending - .entry(table_id) - .or_insert_with(Vec::new) - .extend(deltas); + fn materialize_response_if_dirty(&mut self) -> Option { + if !self.response_dirty && self.response.is_some() { + return Some(false); } - // Also accumulate for re-query observables - { - let mut pending = self.pending_changes.borrow_mut(); - pending - .entry(table_id) - .or_insert_with(HashSet::new) - .extend(changed_ids.iter().copied()); + let cache = self.cache.borrow(); + let response = match self.batch_plan.as_ref() { + Some(plan) => build_graphql_response_batched( + &cache, + &self.catalog, + &self.field, + plan, + &mut self.batch_state, + &self.root_rows, + ) + .ok()?, + None => { + build_graphql_response(&cache, &self.catalog, &self.field, &self.root_rows).ok()? + } + }; + let changed = self + .response + .as_ref() + .map_or(true, |current| *current != response); + if changed { + self.response = Some(response); } + self.response_dirty = false; + Some(changed) + } + + fn current_response(&mut self) -> Option<&cynos_gql::GraphqlResponse> { + self.materialize_response_if_dirty()?; + self.response.as_ref() + } - let mut scheduled = self.flush_scheduled.borrow_mut(); - if !*scheduled { - *scheduled = true; - drop(scheduled); - self.schedule_flush(); + fn render_response_js_value(&mut self) -> JsValue { + let cache = self.cache.borrow(); + let response = match self.batch_plan.as_ref() { + Some(plan) => build_graphql_response_batched( + &cache, + &self.catalog, + &self.field, + plan, + &mut self.batch_state, + &self.root_rows, + ), + None => build_graphql_response(&cache, &self.catalog, &self.field, &self.root_rows), + }; + match response { + Ok(response) => graphql_response_to_js_value(&response), + Err(_) => JsValue::NULL, } } +} - /// Schedules a flush to run after the current microtask. 
- fn schedule_flush(&mut self) { - #[cfg(target_arch = "wasm32")] - { - // Lazily create the reusable flush closure once - if self.flush_closure.is_none() { - if let Some(ref self_ref) = self.self_ref { - let self_ref_clone = self_ref.clone(); - let pending_changes = self.pending_changes.clone(); - let pending_ivm_deltas = self.pending_ivm_deltas.clone(); - let flush_scheduled = self.flush_scheduled.clone(); - - self.flush_closure = Some(Closure::new(move |_: JsValue| { - *flush_scheduled.borrow_mut() = false; - - // Flush IVM deltas first (O(delta) path) - let ivm_changes: HashMap>> = - pending_ivm_deltas.borrow_mut().drain().collect(); - { - let registry = self_ref_clone.borrow(); - for (table_id, deltas) in &ivm_changes { - if let Some(queries) = registry.ivm_queries.get(table_id) { - for query in queries { - query - .borrow_mut() - .on_table_change(*table_id, deltas.clone()); - } - } - } - } - - // Then flush re-query changes (O(result_set) path) - let changes: HashMap> = - pending_changes.borrow_mut().drain().collect(); - { - let registry = self_ref_clone.borrow(); - registry.flush_requery_changes(changes); - } - - // GC: remove queries with no subscribers to prevent memory leaks - { - let mut registry = self_ref_clone.borrow_mut(); - registry.gc_dead_queries(); - } - })); - } - } +pub struct GraphqlDeltaObservable { + view: MaterializedView, + cache: Rc>, + catalog: cynos_gql::GraphqlCatalog, + field: cynos_gql::bind::BoundRootField, + batch_plan: Option, + batch_state: cynos_gql::GraphqlBatchState, + dependency_table_names: HashMap, + has_nested_relations: bool, + response: Option, + response_dirty: bool, + subscribers: GraphqlSubscribers, +} - if let Some(ref closure) = self.flush_closure { - let promise = js_sys::Promise::resolve(&JsValue::UNDEFINED); - let _ = promise.then(closure); - } +impl GraphqlDeltaObservable { + pub fn new( + dataflow: DataflowNode, + cache: Rc>, + catalog: cynos_gql::GraphqlCatalog, + field: cynos_gql::bind::BoundRootField, + 
dependency_table_bindings: Vec<(TableId, String)>, + initial_rows: Vec, + ) -> Self { + Self { + view: MaterializedView::with_initial(dataflow, initial_rows), + cache, + batch_plan: cynos_gql::compile_batch_plan(&catalog, &field) + .ok() + .filter(|plan| plan.has_relations()), + batch_state: cynos_gql::GraphqlBatchState::default(), + dependency_table_names: dependency_table_bindings.into_iter().collect(), + catalog, + has_nested_relations: root_field_has_relations(&field), + field, + response: None, + response_dirty: true, + subscribers: GraphqlSubscribers::default(), } + } - #[cfg(not(target_arch = "wasm32"))] - { - // In non-WASM environment, flush immediately (for testing) - self.flush_sync(); + pub fn attach_keepalive(&mut self) -> usize { + self.subscribers.add_keepalive() + } + + pub fn response_js_value(&mut self) -> JsValue { + if self.response.is_some() && !self.response_dirty { + return graphql_response_to_js_value(self.response.as_ref().unwrap()); } + + if self.subscribers.callback_count() == 0 { + return self.render_response_js_value(); + } + + match self.current_response() { + Some(response) => graphql_response_to_js_value(response), + None => JsValue::NULL, + } + } + + pub fn dependencies(&self) -> &[TableId] { + self.view.dependencies() + } + + pub fn subscribe( + &mut self, + callback: F, + ) -> usize { + self.subscribers.add_callback(callback) + } + + pub fn unsubscribe(&mut self, id: usize) -> bool { + self.subscribers.remove(id) } - /// Synchronous flush for testing in non-WASM environment. 
- #[cfg(not(target_arch = "wasm32"))] - fn flush_sync(&mut self) { - *self.flush_scheduled.borrow_mut() = false; - - // Flush IVM deltas - let ivm_changes: HashMap>> = - self.pending_ivm_deltas.borrow_mut().drain().collect(); - for (table_id, deltas) in &ivm_changes { - if let Some(queries) = self.ivm_queries.get(table_id) { - for query in queries { - query - .borrow_mut() - .on_table_change(*table_id, deltas.clone()); + pub fn subscription_count(&self) -> usize { + self.subscribers.total_count() + } + + pub fn listener_count(&self) -> usize { + self.subscribers.callback_count() + } + + pub fn on_table_change(&mut self, table_id: TableId, deltas: Vec>) { + if self.subscribers.total_count() == 0 { + return; + } + + let batch_invalidation = self.batch_plan.as_ref().map(|plan| { + build_delta_batch_invalidation( + plan, + &self.dependency_table_names, + table_id, + &deltas, + false, + ) + }); + let output_deltas = self.view.on_table_change(table_id, deltas); + if output_deltas.is_empty() && !self.has_nested_relations { + return; + } + + if let Some(plan) = self.batch_plan.as_ref() { + match batch_invalidation { + Some(Ok(mut invalidation)) => { + invalidation.root_changed = !output_deltas.is_empty(); + self.batch_state.apply_invalidation(plan, &invalidation); + } + Some(Err(())) => { + self.batch_state = cynos_gql::GraphqlBatchState::default(); } + None => {} } } + self.response_dirty = true; + if self.subscribers.callback_count() == 0 { + return; + } - // Flush re-query changes - let changes: HashMap> = - self.pending_changes.borrow_mut().drain().collect(); - self.flush_requery_changes(changes); - - self.gc_dead_queries(); + if let Some(changed) = self.materialize_response_if_dirty() { + if changed { + if let Some(response) = self.response.as_ref() { + self.subscribers.emit(response); + } + } + } } - /// Forces an immediate flush of all pending changes. - /// Useful for testing or when you need synchronous behavior. 
- pub fn flush(&mut self) { - *self.flush_scheduled.borrow_mut() = false; + fn materialize_response_if_dirty(&mut self) -> Option { + if !self.response_dirty && self.response.is_some() { + return Some(false); + } - // Flush IVM deltas - let ivm_changes: HashMap>> = - self.pending_ivm_deltas.borrow_mut().drain().collect(); - for (table_id, deltas) in &ivm_changes { - if let Some(queries) = self.ivm_queries.get(table_id) { - for query in queries { - query - .borrow_mut() - .on_table_change(*table_id, deltas.clone()); - } + let rows = self.view.result(); + let cache = self.cache.borrow(); + let response = match self.batch_plan.as_ref() { + Some(plan) => build_graphql_response_from_owned_rows_batched( + &cache, + &self.catalog, + &self.field, + plan, + &mut self.batch_state, + &rows, + ) + .ok()?, + None => { + build_graphql_response_from_owned_rows(&cache, &self.catalog, &self.field, &rows) + .ok()? } + }; + let changed = self + .response + .as_ref() + .map_or(true, |current| *current != response); + if changed { + self.response = Some(response); } + self.response_dirty = false; + Some(changed) + } - // Flush re-query changes - let changes: HashMap> = - self.pending_changes.borrow_mut().drain().collect(); - self.flush_requery_changes(changes); + fn current_response(&mut self) -> Option<&cynos_gql::GraphqlResponse> { + self.materialize_response_if_dirty()?; + self.response.as_ref() + } - self.gc_dead_queries(); + fn render_response_js_value(&mut self) -> JsValue { + let rows = self.view.result(); + let cache = self.cache.borrow(); + let response = match self.batch_plan.as_ref() { + Some(plan) => build_graphql_response_from_owned_rows_batched( + &cache, + &self.catalog, + &self.field, + plan, + &mut self.batch_state, + &rows, + ), + None => { + build_graphql_response_from_owned_rows(&cache, &self.catalog, &self.field, &rows) + } + }; + match response { + Ok(response) => graphql_response_to_js_value(&response), + Err(_) => JsValue::NULL, + } } +} + +#[derive(Clone)] 
+enum GraphqlSubscriptionInner { + Snapshot(Rc>), + Delta(Rc>), +} - /// Removes queries with no active subscribers from the registry. - /// Called after each flush to prevent memory leaks from abandoned queries. - fn gc_dead_queries(&mut self) { - for queries in self.ivm_queries.values_mut() { - queries.retain(|q| q.borrow().subscription_count() > 0); +impl GraphqlSubscriptionInner { + fn attach_keepalive(&self) -> usize { + match self { + Self::Snapshot(inner) => inner.borrow_mut().attach_keepalive(), + Self::Delta(inner) => inner.borrow_mut().attach_keepalive(), } - self.ivm_queries.retain(|_, v| !v.is_empty()); + } - for queries in self.queries.values_mut() { - queries.retain(|q| q.borrow().subscription_count() > 0); + fn response_js_value(&self) -> JsValue { + match self { + Self::Snapshot(inner) => inner.borrow_mut().response_js_value(), + Self::Delta(inner) => inner.borrow_mut().response_js_value(), } - self.queries.retain(|_, v| !v.is_empty()); } - /// Returns the number of registered queries (both re-query and IVM). - pub fn query_count(&self) -> usize { - let requery_count: usize = self.queries.values().map(|v| v.len()).sum(); - let ivm_count: usize = self.ivm_queries.values().map(|v| v.len()).sum(); - requery_count + ivm_count + fn subscribe(&self, callback: F) -> usize { + match self { + Self::Snapshot(inner) => inner.borrow_mut().subscribe(callback), + Self::Delta(inner) => inner.borrow_mut().subscribe(callback), + } } - /// Returns whether there are pending changes waiting to be flushed. 
- pub fn has_pending_changes(&self) -> bool { - !self.pending_changes.borrow().is_empty() + fn unsubscribe(&self, id: usize) -> bool { + match self { + Self::Snapshot(inner) => inner.borrow_mut().unsubscribe(id), + Self::Delta(inner) => inner.borrow_mut().unsubscribe(id), + } } -} -impl Default for QueryRegistry { - fn default() -> Self { - Self::new() + fn listener_count(&self) -> usize { + match self { + Self::Snapshot(inner) => inner.borrow().listener_count(), + Self::Delta(inner) => inner.borrow().listener_count(), + } } } @@ -823,51 +1193,34 @@ fn ivm_full_rows_to_js_array(rows: &[Row], schema: &Table) -> JsValue { /// /// The callback receives a standard GraphQL payload object with a single `data` /// property. The payload is emitted immediately on subscribe and again whenever -/// the root query result changes. +/// the rendered GraphQL response changes. #[wasm_bindgen] pub struct JsGraphqlSubscription { - inner: Rc>, - cache: Rc>, - catalog: cynos_gql::GraphqlCatalog, - field: cynos_gql::bind::BoundRootField, + inner: GraphqlSubscriptionInner, keepalive_sub_id: usize, } impl JsGraphqlSubscription { - pub(crate) fn new( - inner: Rc>, - cache: Rc>, - catalog: cynos_gql::GraphqlCatalog, - field: cynos_gql::bind::BoundRootField, - ) -> Self { - let keepalive_sub_id = inner.borrow_mut().subscribe(|_| {}); + pub(crate) fn new_snapshot(inner: Rc>) -> Self { + Self::new(GraphqlSubscriptionInner::Snapshot(inner)) + } + + pub(crate) fn new_delta(inner: Rc>) -> Self { + Self::new(GraphqlSubscriptionInner::Delta(inner)) + } + + fn new(inner: GraphqlSubscriptionInner) -> Self { + let keepalive_sub_id = inner.attach_keepalive(); Self { inner, - cache, - catalog, - field, keepalive_sub_id, } } - - fn payload_for_rows(&self, rows: &[Rc]) -> JsValue { - let cache = self.cache.borrow(); - let root_field = - match cynos_gql::execute::render_root_field_rows(&cache, &self.catalog, &self.field, rows) - { - Ok(field) => field, - Err(_) => return JsValue::NULL, - }; - - let 
response = - cynos_gql::GraphqlResponse::new(cynos_gql::ResponseValue::object(alloc::vec![root_field])); - gql_response_to_js(&response).unwrap_or(JsValue::NULL) - } } impl Drop for JsGraphqlSubscription { fn drop(&mut self) { - self.inner.borrow_mut().unsubscribe(self.keepalive_sub_id); + self.inner.unsubscribe(self.keepalive_sub_id); } } @@ -876,35 +1229,19 @@ impl JsGraphqlSubscription { /// Returns the current GraphQL payload. #[wasm_bindgen(js_name = getResult)] pub fn get_result(&self) -> JsValue { - let inner = self.inner.borrow(); - self.payload_for_rows(inner.result()) + self.inner.response_js_value() } /// Subscribes to GraphQL payload changes and emits the initial value immediately. pub fn subscribe(&self, callback: js_sys::Function) -> js_sys::Function { - let initial = { - let inner = self.inner.borrow(); - self.payload_for_rows(inner.result()) - }; - callback.call1(&JsValue::NULL, &initial).ok(); - let inner = self.inner.clone(); - let cache = self.cache.clone(); - let catalog = self.catalog.clone(); - let field = self.field.clone(); - let sub_id = inner.borrow_mut().subscribe(move |rows| { - let cache = cache.borrow(); - let payload = match cynos_gql::execute::render_root_field_rows(&cache, &catalog, &field, rows) { - Ok(root_field) => { - let response = cynos_gql::GraphqlResponse::new(cynos_gql::ResponseValue::object( - alloc::vec![root_field], - )); - gql_response_to_js(&response).unwrap_or(JsValue::NULL) - } - Err(_) => JsValue::NULL, - }; + let initial_callback = callback.clone(); + let sub_id = inner.subscribe(move |response| { + let payload = graphql_response_to_js_value(response); callback.call1(&JsValue::NULL, &payload).ok(); }); + let initial = inner.response_js_value(); + initial_callback.call1(&JsValue::NULL, &initial).ok(); let called = Rc::new(RefCell::new(false)); let called_c = called.clone(); @@ -912,7 +1249,7 @@ impl JsGraphqlSubscription { let mut c = called_c.borrow_mut(); if !*c { *c = true; - 
inner.borrow_mut().unsubscribe(sub_id); + inner.unsubscribe(sub_id); } }) as Box); unsubscribe.into_js_value().unchecked_into() @@ -921,7 +1258,7 @@ impl JsGraphqlSubscription { /// Returns the number of active subscriptions. #[wasm_bindgen(js_name = subscriptionCount)] pub fn subscription_count(&self) -> usize { - self.inner.borrow().subscription_count().saturating_sub(1) + self.inner.listener_count() } } @@ -1055,6 +1392,7 @@ fn projected_rows_to_js_array(rows: &[Rc], column_names: &[String]) -> JsVa #[cfg(test)] mod tests { use super::*; + use crate::live_runtime::LiveRegistry; use cynos_core::schema::TableBuilder; use cynos_core::{DataType, Value}; use cynos_query::ast::Expr; @@ -1091,8 +1429,8 @@ mod tests { } #[wasm_bindgen_test] - fn test_query_registry_new() { - let registry = QueryRegistry::new(); + fn test_live_registry_new() { + let registry = LiveRegistry::new(); assert_eq!(registry.query_count(), 0); } @@ -1172,7 +1510,7 @@ mod tests { let old_summary = QueryResultSummary::from_rows(&old_rows); let new_summary = QueryResultSummary::from_rows(&new_rows); assert!( - !ReQueryObservable::results_equal(&old_summary, &new_summary, &old_rows, &new_rows), + !query_results_equal(&old_summary, &new_summary, &old_rows, &new_rows), "Projected value changed from Alice to Alicia, so live query comparison should detect a change", ); } @@ -1196,12 +1534,7 @@ mod tests { }; assert!( - !ReQueryObservable::results_equal( - &colliding_summary, - &colliding_summary, - &old_rows, - &new_rows, - ), + !query_results_equal(&colliding_summary, &colliding_summary, &old_rows, &new_rows,), "Row comparison must remain deterministic even if two summaries collide", ); } diff --git a/crates/database/src/transaction.rs b/crates/database/src/transaction.rs index 445738d..f725916 100644 --- a/crates/database/src/transaction.rs +++ b/crates/database/src/transaction.rs @@ -4,8 +4,8 @@ use crate::convert::{js_array_to_rows, js_to_value}; use crate::expr::Expr; +use 
crate::live_runtime::LiveRegistry; use crate::query_builder::evaluate_predicate; -use crate::reactive_bridge::QueryRegistry; use alloc::rc::Rc; use alloc::string::{String, ToString}; use alloc::vec::Vec; @@ -20,7 +20,7 @@ use wasm_bindgen::prelude::*; #[wasm_bindgen] pub struct JsTransaction { cache: Rc>, - query_registry: Rc>, + query_registry: Rc>, table_id_map: Rc>>, inner: Option, /// Pending changes: (table_id, changed_row_ids) @@ -30,7 +30,7 @@ pub struct JsTransaction { impl JsTransaction { pub(crate) fn new( cache: Rc>, - query_registry: Rc>, + query_registry: Rc>, table_id_map: Rc>>, ) -> Self { Self { diff --git a/crates/gql/live-batching-design.md b/crates/gql/live-batching-design.md new file mode 100644 index 0000000..92c8693 --- /dev/null +++ b/crates/gql/live-batching-design.md @@ -0,0 +1,945 @@ +# GraphQL Live Batching Design + +Status: proposed +Owner: cynos-gql / cynos-database +Scope: `crates/gql`, `crates/database`, `crates/storage` + +## 1. Context + +Cynos already has one important batching layer in the live runtime: + +- table changes are collected in `LiveRegistry` +- pending changes and pending deltas are coalesced before flush +- the same observable is invoked once per flush rather than once per row change + +That part lives in `crates/database/src/live_runtime.rs` and is the correct control-plane shape for: + +- `observe()` +- `changes()` +- `trace()` +- `subscribeGraphql()` + +The remaining performance problem is not event batching. It is GraphQL payload assembly. 
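That control-plane batching can be pictured as a tiny coalescing buffer: row-level changes accumulate per table and each listener fires once per flush. The sketch below is a simplified model only; `PendingChanges` and its methods are illustrative names, not the actual `LiveRegistry` API.

```rust
use std::collections::{HashMap, HashSet};

// Simplified model of flush coalescing: duplicate row changes are absorbed,
// and one flush hands the whole batch to the notification layer at once.
struct PendingChanges {
    changed_rows: HashMap<String, HashSet<u64>>, // table name -> changed row ids
}

impl PendingChanges {
    fn new() -> Self {
        Self { changed_rows: HashMap::new() }
    }

    fn record(&mut self, table: &str, row_id: u64) {
        self.changed_rows
            .entry(table.to_string())
            .or_default()
            .insert(row_id);
    }

    /// Drain all pending changes; the caller notifies each observable once.
    fn flush(&mut self) -> HashMap<String, HashSet<u64>> {
        std::mem::take(&mut self.changed_rows)
    }
}

fn main() {
    let mut pending = PendingChanges::new();
    pending.record("users", 1);
    pending.record("users", 1); // duplicate change is absorbed
    pending.record("users", 2);
    let flushed = pending.flush();
    assert_eq!(flushed["users"].len(), 2);
    assert!(pending.flush().is_empty()); // flush drained the batch
}
```

The point of the model is the shape of the contract, not the implementation: observables see one coalesced batch per flush, never one callback per row change.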
+ +Today GraphQL live subscriptions already reuse the existing root-plan machinery: + +- root fields are lowered through `crates/gql/src/plan.rs` +- `subscribeGraphql()` chooses `Snapshot` or `Delta` backend in `crates/database/src/database.rs` +- `GraphqlSubscriptionObservable` and `GraphqlDeltaObservable` live on top of the shared runtime in `crates/database/src/reactive_bridge.rs` + +However, once root rows are available, GraphQL payload rendering still follows a row-by-row recursive execution shape: + +- `render_root_field_rows()` +- `render_row_list()` +- `execute_row_selection()` +- `execute_forward_relation()` +- `execute_reverse_relation()` + +That shape is correct, but it is not set-oriented: + +- one parent row can trigger one relation lookup +- nested reverse relations can perform repeated index probes +- nested forward relations can repeatedly fetch the same parent row +- if an index is missing, the fallback can degenerate into repeated scans + +This is a classic in-memory N+1 shape. It does not create extra network round-trips, but it still creates unnecessary CPU work, allocations, and repeated index traversal. + +The long-term architectural direction remains unchanged: + +- exactly two live execution kernels + - `SnapshotKernel` + - `DeltaKernel` +- one shared runtime control plane +- GraphQL remains an adapter/compiler layer +- GraphQL live should be able to choose `Snapshot` or `Delta` per query shape + +This design adds the missing batching layer inside the GraphQL adapter without changing those principles. + +## 2. 
Problem Statement + +We need a GraphQL batching mechanism that: + +- removes row-by-row nested relation fetching from GraphQL live payload assembly +- preserves the current live runtime abstraction and does not create a fourth runtime lane +- keeps `observe()` / `changes()` performance intact +- keeps `trace()` / DBSP-IVM performance intact +- works for both GraphQL snapshot-backed and delta-backed subscriptions +- supports multi-level nested relations +- preserves existing GraphQL semantics for: + - nested filters + - nested ordering + - nested limit/offset in snapshot mode + - directive-pruned selections +- allows the GraphQL layer to keep acting as a bridge above live query kernels instead of owning an independent execution model + +The most important constraint is: + +- batching must live in the GraphQL adapter layer, not in the SQL live kernels + +## 3. Goals + +### 3.1 Primary goals + +- Introduce a set-oriented GraphQL relation rendering path for live subscriptions. +- Eliminate N+1-style relation fetch patterns inside a single subscription render. +- Reuse the existing root query planner path and backend selector. +- Keep GraphQL live output as full payload snapshots for JS consumers. +- Make nested relation invalidation explicit and efficient. + +### 3.2 Secondary goals + +- Reuse planner/index capabilities for nested GraphQL relation fetches when possible. +- Minimize repeated row-to-GraphQL materialization work with per-subscription caches. +- Make batching behavior explicit and testable instead of implicit recursion. +- Provide a path for future subtree-delta or shared-subscription optimizations. + +## 4. 
Non-Goals + +This design does not require the first implementation to deliver all of the following: + +- a JS-visible GraphQL delta protocol (`path/op/value`) +- cross-subscription batching across unrelated GraphQL documents +- multi-root GraphQL subscriptions +- fragment execution support +- a global shared live-plan cache for identical GraphQL subscriptions + +The first complete implementation may still emit: + +- full GraphQL payload snapshots externally +- one subscription-local batching state per GraphQL subscription + +## 5. Current Execution Shape + +### 5.1 What already works well + +Current live GraphQL creation already has the right high-level structure: + +- `crates/database/src/database.rs` + - binds GraphQL documents + - builds the planner-backed root field plan + - collects dependency tables + - selects `Snapshot` vs `Delta` +- `crates/database/src/live_runtime.rs` + - registers GraphQL observables into the shared live registry + - keeps one control-plane batching mechanism for all live APIs +- `crates/database/src/reactive_bridge.rs` + - keeps GraphQL snapshot and delta adapters on top of the shared control plane + +This should remain intact. + +### 5.2 Where the N+1 shape comes from + +The current GraphQL render path is still row-recursive: + +- `render_row_list()` walks rows one by one +- `execute_row_selection()` walks fields one by one +- each relation field resolves itself independently + +Specifically: + +- forward relation execution fetches the target row for one source row at a time +- reverse relation execution fetches child rows for one parent row at a time +- reverse relation filtering, ordering, and pagination are applied after the fetch + +The fallback helpers amplify this cost: + +- `fetch_rows_by_column()` +- `fetch_rows_by_index_or_scan()` +- `apply_collection_query()` + +As a result: + +- `posts { author { ... } }` can repeatedly fetch the same author +- `users { posts { ... 
} }` can repeatedly probe the same foreign-key index +- `users { posts { comments { ... } } }` multiplies this cost by depth + +### 5.3 Why the current delta path still needs batching + +`GraphqlDeltaObservable` already benefits from incremental maintenance of root rows. That is valuable and should not be regressed. + +But today the delta path still does this after root-row maintenance: + +- mark payload dirty +- re-render the GraphQL payload tree from current rows + +So the current delta advantage is mostly: + +- better root-row maintenance + +not yet: + +- better nested payload maintenance + +This design addresses the missing nested layer. + +## 6. Design Principles + +1. Keep the two-kernel architecture intact + - `SnapshotKernel` remains cached-plan/requery/patch + - `DeltaKernel` remains DBSP-style dataflow/IVM + +2. Put batching in the GraphQL adapter + - no GraphQL-specific branching in SQL hot loops + - no GraphQL-specific state inside `ObservableQuery` + +3. Batch by relation edge and by frontier + - resolve one edge for many parents at once + - move level-by-level through the selection tree + +4. Planner first, storage fallback second + - use planner-backed nested fetches when semantics allow + - fall back to storage/index probing only when necessary + +5. Preserve API semantics + - same JS-facing payload shape + - same subscription lifecycle + - notify only when the final payload actually changes + +6. Keep GraphQL bridge-only + - GraphQL compiles plans + - GraphQL renders payloads + - live kernels remain owned by lower layers + +## 7. Proposed Architecture + +The new batching layer introduces three main pieces. + +### 7.1 `GraphqlBatchPlan` + +A compiled, immutable description of how to render a bound GraphQL root field in a set-oriented way. 
+ +Responsibilities: + +- describe the selection tree as render nodes and relation edges +- precompute relation fetch strategy hints +- precompute dependency edges by table +- define cache and invalidation boundaries + +Location: + +- new module in `crates/gql`, for example `crates/gql/src/render_plan.rs` + +### 7.2 `GraphqlBatchRenderer` + +A runtime renderer that takes: + +- root rows +- a `GraphqlBatchPlan` +- subscription-local state +- invalidation information + +and produces: + +- a full `GraphqlResponse` + +Responsibilities: + +- perform relation fetches in batches +- bucket results by relation key +- reuse cached buckets and cached row materializations +- walk nested selections level-by-level instead of row-by-row + +Location: + +- new module in `crates/gql`, for example `crates/gql/src/batch_render.rs` + +### 7.3 `GraphqlBatchState` + +A mutable, subscription-local state object held by `GraphqlSubscriptionObservable` and `GraphqlDeltaObservable`. + +Responsibilities: + +- cache rendered node values +- cache relation buckets +- remember parent membership for buckets +- track dirty edges and dirty keys +- retain the previous rendered response for equality checks + +Location: + +- new module in `crates/gql` or `crates/database`, depending on whether ownership remains purely GraphQL-side +- preferred: state type lives in `crates/gql`, adapter holds it from `crates/database` + +## 8. Data Structures + +### 8.1 Batch plan + +The batch plan should represent the bound selection tree explicitly. 
```rust
pub struct GraphqlBatchPlan {
    pub root_node: NodeId,
    pub nodes: Vec<RenderNodePlan>,
    pub edges: Vec<RelationEdgePlan>,
    pub table_to_edges: HashMap<TableId, Vec<EdgeId>>,
}

pub struct RenderNodePlan {
    pub id: NodeId,
    pub table_name: String,
    pub scalar_fields: Vec<ScalarFieldPlan>,
    pub relation_edges: Vec<EdgeId>,
}

pub struct RelationEdgePlan {
    pub id: EdgeId,
    pub parent_node: NodeId,
    pub child_node: NodeId,
    pub relation: RelationMeta,
    pub query: Option<BoundCollectionQuery>,
    pub strategy: RelationFetchStrategy,
    pub cardinality: RelationCardinality,
}
```

This plan is compiled once at subscription creation time from:

- `GraphqlCatalog`
- `BoundRootField`
- `BoundSelectionSet`
- table-id metadata from `cynos-database`

### 8.2 Fetch strategies

```rust
pub enum RelationFetchStrategy {
    PlannerBatch,
    IndexedProbeBatch,
    ScanAndBucket,
}
```

Meaning:

- `PlannerBatch`
  - build one planner-backed fetch over a set of relation keys
- `IndexedProbeBatch`
  - deduplicate relation keys and probe by unique key
- `ScanAndBucket`
  - one scan filtered by a key set, then bucketize

### 8.3 Batch state

```rust
pub struct GraphqlBatchState {
    pub response: Option<GraphqlResponse>,
    pub row_value_cache: HashMap<(NodeId, u64, u64), ResponseValue>,
    pub edge_bucket_cache: HashMap<(EdgeId, RelationKey), BucketRows>,
    pub edge_parent_membership: HashMap<(EdgeId, RelationKey), SmallVec<[ParentSlot; 4]>>,
    pub dirty_edges: HashSet<EdgeId>,
    pub dirty_keys: HashMap<EdgeId, HashSet<RelationKey>>,
}
```

Key ideas:

- `row_value_cache`
  - key includes row id and row version
  - repeated references to the same row can reuse the same rendered scalar object
- `edge_bucket_cache`
  - one bucket per relation key per edge
  - forward relation buckets hold at most one row
  - reverse relation buckets hold lists of rows
- `edge_parent_membership`
  - maps one edge key to the parent slots currently depending on it
  - lets invalidation target only affected subtrees

### 8.4 Invalidation envelope

The GraphQL adapter should consume a normalized
invalidation description instead of ad hoc booleans.

```rust
pub struct GraphqlInvalidation {
    pub root_changed: bool,
    pub changed_tables: SmallVec<[TableId; 4]>,
    pub dirty_edges: SmallVec<[EdgeId; 8]>,
    pub dirty_keys: HashMap<EdgeId, HashSet<RelationKey>>,
}
```

The snapshot and delta adapters can both produce this shape, with different precision levels.

## 9. Compile-Time Strategy Selection

### 9.1 Root plan stays unchanged

The current root path remains the same:

- root field is lowered to a planner-backed logical plan
- GraphQL live still selects snapshot or delta backend in `crates/database/src/database.rs`

This design does not replace the current root-plan path.

### 9.2 Relation-edge strategy rules

Each relation edge in the nested selection tree chooses a fetch strategy once at compile time.

#### `PlannerBatch`

Preferred when:

- the edge query can be expressed as one set-oriented planner query
- the relation filter can be combined with `relation_key IN (...)`
- nested ordering should reuse planner/index selection

Typical use:

- reverse relations with filter and ordering
- forward relations when planner-backed fetch is simpler than repeated probes

#### `IndexedProbeBatch`

Preferred when:

- key lookups are highly selective
- the relation has a suitable single-column index
- per-key probing is cheaper than a broad batch query

Important detail:

- this is still batched by unique keys
- it is not one probe per parent row

#### `ScanAndBucket`

Used when:

- no useful index exists
- planner batching is unavailable
- a scan over the child table plus key filtering is cheaper or simpler than repeated probes

This is a correctness-preserving fallback and should remain available.

### 9.3 Strategy heuristics

The initial selector should be conservative and explicit.
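One way to keep the selector conservative and explicit is a small pure function over precomputed edge metadata, with no runtime statistics involved. The sketch below uses assumed inputs; `EdgeMeta` and its fields are illustrative, not the real cynos-gql types.

```rust
// Illustrative compile-time metadata; the real compiler would read this from
// the GraphQL catalog and index metadata. All names here are assumptions.
#[derive(Debug, PartialEq)]
enum RelationFetchStrategy {
    PlannerBatch,
    IndexedProbeBatch,
    ScanAndBucket,
}

struct EdgeMeta {
    is_forward: bool,           // forward (child -> parent) vs reverse relation
    has_unique_key_index: bool, // single-column unique index on the lookup key
    planner_batch_supported: bool,
}

// Strategy is chosen once per edge at plan-compile time, never per row.
fn select_strategy(edge: &EdgeMeta) -> RelationFetchStrategy {
    if edge.is_forward && edge.has_unique_key_index {
        // Highly selective keyed lookup: probe once per unique key.
        RelationFetchStrategy::IndexedProbeBatch
    } else if edge.planner_batch_supported {
        // One set-oriented query over all relation keys.
        RelationFetchStrategy::PlannerBatch
    } else {
        // Correctness-preserving fallback: one scan, then bucketize.
        RelationFetchStrategy::ScanAndBucket
    }
}

fn main() {
    let forward = EdgeMeta { is_forward: true, has_unique_key_index: true, planner_batch_supported: true };
    assert_eq!(select_strategy(&forward), RelationFetchStrategy::IndexedProbeBatch);

    let reverse = EdgeMeta { is_forward: false, has_unique_key_index: false, planner_batch_supported: true };
    assert_eq!(select_strategy(&reverse), RelationFetchStrategy::PlannerBatch);

    let no_index = EdgeMeta { is_forward: false, has_unique_key_index: false, planner_batch_supported: false };
    assert_eq!(select_strategy(&no_index), RelationFetchStrategy::ScanAndBucket);
}
```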
+ +Recommended heuristics: + +- forward relation + - if parent column is the target table primary key or a unique single-column index exists: + - prefer `IndexedProbeBatch` + - otherwise: + - use `PlannerBatch` if supported + - else `ScanAndBucket` + +- reverse relation without `limit/offset` + - prefer `PlannerBatch` + +- reverse relation with `limit/offset` + - still batch fetch by relation key + - if planner can apply filter and order but not per-parent limit: + - fetch filtered and ordered rows once + - bucketize by relation key + - apply limit/offset per bucket after bucketization + - if planner path is unavailable: + - use `IndexedProbeBatch` or `ScanAndBucket` + +This rule preserves semantics while still removing the row-by-row execution shape. + +## 10. Runtime Pipeline + +### 10.1 Subscription creation + +At GraphQL subscription creation time: + +1. bind the GraphQL document +2. compile the root field plan as today +3. compile a `GraphqlBatchPlan` +4. create the live plan: + - `SnapshotKernel` or `DeltaKernel` +5. initialize `GraphqlBatchState` +6. render the initial response through the batch renderer + +This changes the adapter payload path, not the kernel selection path. + +### 10.2 Snapshot-backed subscription flow + +`GraphqlSubscriptionObservable` continues to: + +- patch or re-execute root rows on root-table changes +- mark payloads dirty when nested tables change + +With batching, the adapter then: + +- builds a `GraphqlInvalidation` +- reuses cached buckets for untouched edges +- re-fetches only dirty relation edges +- re-renders only affected node values when possible +- recomputes the final payload +- emits only if the final payload changed + +This keeps the current snapshot semantics but removes nested row-by-row relation execution. 
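The table-to-edge step in that snapshot flow amounts to a precomputed map lookup at flush time. A minimal sketch, assuming illustrative types rather than the real adapter API:

```rust
use std::collections::{HashMap, HashSet};

type EdgeId = u32;

// Sketch of coarse snapshot-mode invalidation: changed table names are
// translated into dirty relation edges via a map precomputed at subscription
// creation time. `BatchPlan` here is illustrative, not the actual type.
struct BatchPlan {
    table_to_edges: HashMap<String, Vec<EdgeId>>,
}

fn dirty_edges(plan: &BatchPlan, changed_tables: &[String]) -> HashSet<EdgeId> {
    let mut dirty = HashSet::new();
    for table in changed_tables {
        if let Some(edges) = plan.table_to_edges.get(table) {
            dirty.extend(edges.iter().copied());
        }
    }
    dirty
}

fn main() {
    // Edge 0 is users -> posts; edge 1 is posts -> comments.
    let mut table_to_edges = HashMap::new();
    table_to_edges.insert("posts".to_string(), vec![0, 1]);
    table_to_edges.insert("comments".to_string(), vec![1]);
    let plan = BatchPlan { table_to_edges };

    // A comments-only change dirties only the posts -> comments edge;
    // the users -> posts buckets stay cached.
    let dirty = dirty_edges(&plan, &["comments".to_string()]);
    assert_eq!(dirty, HashSet::from([1]));

    // An undepended-on table dirties nothing.
    assert!(dirty_edges(&plan, &["tags".to_string()]).is_empty());
}
```

This is deliberately coarse: the whole edge is marked dirty because snapshot mode may not know which relation keys changed.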
+ +### 10.3 Delta-backed subscription flow + +`GraphqlDeltaObservable` continues to: + +- push deltas into `MaterializedView` +- maintain root rows incrementally + +With batching, the adapter additionally: + +- converts table deltas into edge/key invalidation +- invalidates only the touched buckets when possible +- re-renders only affected subtrees +- leaves untouched relation buckets and untouched rendered nodes intact + +This gives the delta path a second incremental layer: + +- incremental root rows +- batched, key-targeted nested rendering + +## 11. Batching Algorithm + +### 11.1 Frontier-based rendering + +The renderer must traverse the selection tree by frontier, not by row recursion. + +High-level algorithm: + +1. start with the root frontier of root rows +2. render scalar fields for all rows in the frontier +3. group relation work by edge +4. for each edge: + - collect unique relation keys from all parent rows in the frontier + - remove already-cached and not-dirty keys + - fetch missing buckets in one batched operation + - bucketize results by relation key +5. attach child rows to parent slots +6. build the next frontier from unique child rows +7. 
repeat until the deepest nested level is processed + +Pseudo-code: + +```rust +for frontier in frontier_queue { + render_scalars(frontier.rows, frontier.node_id, state); + + for edge_id in plan.nodes[frontier.node_id].relation_edges.iter().copied() { + let keys = collect_unique_keys(frontier.rows, edge_id); + let fetch_keys = subtract_cached_clean_keys(keys, state, invalidation); + let new_buckets = fetch_many(cache, plan.edge(edge_id), &fetch_keys); + state.merge_buckets(edge_id, new_buckets); + let next_rows = attach_children(frontier.rows, edge_id, state); + frontier_queue.push(next_rows); + } +} +``` + +### 11.2 Why this removes N+1 + +For a query like: + +```graphql +subscription { + users { + id + posts { + id + comments { + id + } + } + } +} +``` + +the current shape is approximately: + +- one `posts` fetch per `user` +- one `comments` fetch per `post` + +The new shape becomes: + +- one batched `posts` fetch for all visible `user.id` +- one batched `comments` fetch for all visible `post.id` + +The cost grows with: + +- number of relation edges +- number of unique relation keys +- number of matched rows + +not with: + +- number of parent rows multiplied by depth + +## 12. Fetch Strategy Details + +### 12.1 Planner-backed relation batching + +`PlannerBatch` is the preferred path for nested reverse relations because it lets Cynos reuse: + +- filter planning +- index selection +- sort planning +- execution artifact reuse + +Conceptually, the relation fetch becomes: + +- `child.relation_column IN (:keys)` +- plus the original nested GraphQL filter +- plus planner-backed ordering when applicable + +That means GraphQL nested relations can continue to benefit from the existing planner instead of reimplementing query logic inside the adapter. 
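The bucketization step that follows such an IN-key fetch can be sketched with plain vectors standing in for planner results. All names here are illustrative; the point is one deduplicated fetch plus one grouping pass, instead of one fetch per parent row.

```rust
use std::collections::{HashMap, HashSet};

// Toy child row: (row_id, relation_key). In the real system this would be a
// planner result stream filtered by `relation_column IN (:keys)`.
type ChildRow = (u64, u64);

// One batched fetch for all unique parent keys, then one pass to group
// children by relation key.
fn fetch_and_bucketize(
    child_table: &[ChildRow],
    parent_keys: &[u64],
) -> HashMap<u64, Vec<u64>> {
    // Deduplicate relation keys first: repeated parents cost nothing extra.
    let key_set: HashSet<u64> = parent_keys.iter().copied().collect();
    let mut buckets: HashMap<u64, Vec<u64>> = HashMap::new();
    for &(row_id, key) in child_table {
        if key_set.contains(&key) {
            buckets.entry(key).or_default().push(row_id);
        }
    }
    buckets
}

fn main() {
    // posts as (post_id, author_id)
    let posts = [(10, 1), (11, 1), (12, 2), (13, 3)];
    // Author 1 appears twice in the frontier but is fetched only once.
    let buckets = fetch_and_bucketize(&posts, &[1, 2, 1]);
    assert_eq!(buckets[&1], vec![10, 11]);
    assert_eq!(buckets[&2], vec![12]);
    assert!(!buckets.contains_key(&3)); // author 3 not in the visible frontier
}
```

Attaching each bucket back to its parent slots is then a lookup per parent, so total work scales with unique keys and matched rows rather than with parent count times depth.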
+ +Recommended implementation pieces: + +- add a new planner helper in `crates/gql/src/plan.rs` +- for example: + - `build_relation_batch_plan(...)` +- allow the nested relation query builder to synthesize: + - a relation-key `IN` predicate + - combined with the user-specified nested filter + +The fetched row stream is then bucketized by relation key. + +### 12.2 Indexed probe batching + +For very selective keyed lookups, one planner batch may not be the best trade-off. + +In those cases: + +- deduplicate relation keys first +- probe once per unique key, not once per parent row +- prefer `visit_index_scan_with_options()` from `crates/storage/src/row_store.rs` + +Benefits: + +- avoids repeated work for repeated keys +- avoids materializing intermediate vectors when the visitor API is enough +- retains storage-level limit/offset/reverse support where useful + +### 12.3 Scan-and-bucket fallback + +When no useful index exists, the adapter should still avoid row-by-row scans. + +Instead: + +- build a `HashSet` of relation keys +- scan the child table once +- keep rows whose relation column is in the key set +- bucketize them + +This is the correct fallback because it replaces: + +- many probes or many scans + +with: + +- one scan plus bucketization + +## 13. Invalidation Model + +### 13.1 Snapshot path invalidation + +Snapshot-backed GraphQL subscriptions do not always have old/new row values for every nested-table change. Their invalidation is therefore coarser. + +Recommended behavior: + +- root-table changes + - keep existing patch-or-requery logic for root rows +- non-root dependency changes + - map changed tables to dependent relation edges + - mark those edges dirty + - keep root rows intact unless the root plan itself changed + +This already avoids the worst current behavior: + +- full root requery for pure nested relation churn + +### 13.2 Delta path invalidation + +Delta-backed GraphQL subscriptions receive concrete row deltas. 
They can therefore invalidate with much higher precision. + +For each changed row: + +- extract old and new relation keys for affected edges +- dirty both old and new buckets +- invalidate row render cache entries tied to the changed row id/version + +Examples: + +- reverse relation `users.posts` + - a post insert/update/delete dirties `author_id` buckets +- forward relation `posts.author` + - a user update dirties the `id` bucket + - a post update changing `author_id` dirties both the old and new author buckets + +This is what allows the delta path to become truly relation-aware instead of merely root-aware. + +### 13.3 Table-to-edge indexing + +`GraphqlBatchPlan` should precompute: + +- which relation edges depend on which tables + +This lets the adapters translate: + +- changed table ids + +into: + +- candidate dirty edges + +without walking the whole selection tree on every flush. + +## 14. Caching Model + +### 14.1 Row render cache + +Cache rendered values by: + +- node id +- row id +- row version + +This is especially important for: + +- shared forward relations +- multi-level graphs where the same row appears from multiple parents + +### 14.2 Relation bucket cache + +Cache the result of one edge for one relation key. + +Examples: + +- `(posts.author, author_id=2) -> Some(user#2)` +- `(users.posts, user_id=1) -> [post#10, post#11]` + +This ensures repeated keys are resolved once per subscription state. + +### 14.3 Parent membership cache + +Track which parent slots depend on which relation-key buckets. + +This is necessary for: + +- targeted subtree invalidation +- efficient recomposition after key-local changes + +Without it, the renderer would still have to re-walk too much of the tree after every nested update. + +## 15. 
Semantics + +### 15.1 External API semantics + +This design preserves: + +- `subscribeGraphql()` output shape +- `PreparedGraphqlQuery.subscribe()` output shape +- full payload snapshots returned by `get_result()` +- callback emission only when final payload changes + +### 15.2 GraphQL query semantics + +The batched renderer must preserve the semantics of: + +- nested scalar selections +- nested forward and reverse relations +- nested filters +- nested ordering +- nested limit/offset in snapshot mode +- `@include` / `@skip` pruned selections after binding + +Directive handling remains naturally compatible because batching happens after binding, when the active selection tree is already known. + +### 15.3 Delta capability semantics + +This design does not change the current delta eligibility gate in `crates/gql/src/bind.rs`. + +That means: + +- delta-backed GraphQL remains restricted to query shapes already deemed delta-capable +- batching improves rendering work inside that supported subset +- snapshot batching still handles the broader GraphQL surface + +## 16. 
Module Ownership and Code Placement + +Recommended file additions: + +- `crates/gql/src/render_plan.rs` + - compile `BoundRootField` into `GraphqlBatchPlan` +- `crates/gql/src/batch_fetch.rs` + - planner-backed and storage-backed batched relation fetch helpers +- `crates/gql/src/batch_render.rs` + - frontier renderer, bucketization, caches, invalidation application + +Recommended integration points: + +- `crates/gql/src/lib.rs` + - export the new plan and renderer types +- `crates/database/src/database.rs` + - compile `GraphqlBatchPlan` alongside the existing root plan +- `crates/database/src/live_runtime.rs` + - extend GraphQL adapter plans to carry batch-plan metadata +- `crates/database/src/reactive_bridge.rs` + - replace recursive payload rendering with `GraphqlBatchRenderer` + +No changes should be required to: + +- `observe()` external API +- `changes()` external API +- `trace()` external API +- `ObservableQuery` +- `MaterializedView` + +## 17. Performance Contract + +The following contracts must hold after the refactor. + +### 17.1 SQL live paths + +- `observe()` must not regress beyond noise +- `changes()` must not regress beyond noise +- `trace()` must not regress beyond noise + +Reason: + +- GraphQL batching must stay entirely outside SQL live hot loops + +### 17.2 GraphQL live paths + +- nested relation renders should scale with: + - number of unique relation keys + - number of matched rows + - number of dirty edges/keys + +not with: + +- number of parent rows times depth + +### 17.3 Snapshot-backed GraphQL + +Pure nested relation churn should no longer imply: + +- row-by-row relation re-fetching +- repeated fetches for repeated keys + +### 17.4 Delta-backed GraphQL + +Delta-backed GraphQL should retain: + +- incremental root row maintenance + +and additionally gain: + +- relation-key-local nested invalidation +- relation bucket reuse + +## 18. 
Testing and Verification + +### 18.1 Correctness tests + +Add or extend tests for: + +- forward relation batching +- reverse relation batching +- multi-level nested relations +- repeated keys on forward relations +- repeated keys on reverse relations +- nested filter correctness +- nested order-by correctness +- nested limit/offset correctness in snapshot mode +- delta invalidation for old/new relation keys + +Recommended locations: + +- `crates/gql` unit tests for plan compilation and bucketization +- `crates/database/src/database.rs` tests for live subscription correctness +- `js/packages/core/tests/graphql.test.ts` for wasm-facing end-to-end cases + +### 18.2 Performance tests + +Add dedicated GraphQL batching benchmarks that compare: + +- old recursive render shape vs batched render shape +- snapshot-backed nested subscriptions +- delta-backed nested subscriptions +- repeated-key forward relation graphs +- large-fanout reverse relation graphs +- multi-level relation graphs + +Recommended benchmark scenarios: + +- `users -> posts` +- `posts -> author` +- `users -> posts -> comments` +- many posts sharing the same author +- many comments sharing the same post + +### 18.3 Regression suite + +The following must continue to pass: + +- existing Rust unit tests +- existing wasm/browser tests +- existing performance suite in `cynos-perf` + +## 19. 
Risks and Mitigations + +### Risk 1: batching becomes a second query engine + +Mitigation: + +- use planner-backed nested fetches wherever possible +- keep batching focused on relation-key set execution and bucketization +- do not duplicate optimizer logic in the GraphQL adapter + +### Risk 2: cache invalidation complexity grows too fast + +Mitigation: + +- compile explicit `table_to_edges` mappings +- keep the invalidation envelope formal and typed +- allow snapshot mode to use coarse invalidation when precise key invalidation is not available + +### Risk 3: GraphQL code leaks into SQL hot paths + +Mitigation: + +- keep new batching state and rendering code entirely in GraphQL adapter modules +- keep `LiveRegistry`, `ObservableQuery`, and `MaterializedView` free of GraphQL-specific logic + +### Risk 4: per-parent limit/offset semantics are accidentally changed + +Mitigation: + +- treat per-parent limit/offset as bucket post-processing semantics +- add explicit tests for nested order + limit + offset combinations + +## 20. Acceptance Criteria + +This design is considered implemented when all of the following are true: + +- GraphQL live no longer resolves nested relations row-by-row in the general case +- one subscription render batches relation resolution by edge and by unique relation key +- snapshot-backed GraphQL subscriptions reuse batched relation fetches +- delta-backed GraphQL subscriptions reuse batched relation fetches and key-local invalidation +- existing live runtime abstractions remain intact +- `observe()` / `changes()` / `trace()` do not regress +- existing unit, wasm, and performance suites continue to pass + +## 21. Summary + +The correct place to solve GraphQL live N+1 in Cynos is not the live runtime kernel. It is the GraphQL adapter layer. + +The live runtime already batches change delivery correctly. 
What is missing is a second batching layer:

- relation batching during GraphQL payload assembly

This design introduces that layer by:

- compiling a `GraphqlBatchPlan`
- rendering with a frontier-based `GraphqlBatchRenderer`
- caching relation buckets and rendered node values in `GraphqlBatchState`
- using coarse invalidation for snapshot mode and key-targeted invalidation for delta mode

The result keeps Cynos aligned with the intended architecture:

- two live kernels
- one shared runtime
- GraphQL as an upper-layer bridge
- better nested live performance without sacrificing current SQL live behavior
diff --git a/crates/gql/src/batch_render.rs b/crates/gql/src/batch_render.rs
new file mode 100644
index 0000000..ea3101d
--- /dev/null
+++ b/crates/gql/src/batch_render.rs
@@ -0,0 +1,1130 @@
+use alloc::rc::Rc;
+use alloc::string::String;
+use alloc::vec::Vec;
+
+use cynos_core::{Row, Value};
+use cynos_index::KeyRange;
+use cynos_storage::{RowStore, TableCache};
+use hashbrown::{HashMap, HashSet};
+
+use crate::bind::{
+    BoundCollectionQuery, BoundFilter, BoundRootField, BoundRootFieldKind, ColumnPredicate,
+    PredicateOp,
+};
+use crate::catalog::{GraphqlCatalog, TableMeta};
+use crate::error::{GqlError, GqlErrorKind, GqlResult};
+use crate::execute::apply_collection_query;
+use crate::plan::{build_table_query_plan, execute_logical_plan};
+use crate::render_plan::{
+    EdgeId, GraphqlBatchPlan, NodeId, RelationEdgeKind, RelationEdgePlan, RelationFetchStrategy,
+    RenderFieldKind,
+};
+use crate::response::{GraphqlResponse, ResponseField, ResponseValue};
+
+#[derive(Clone, Debug, Default)]
+pub struct GraphqlInvalidation {
+    pub root_changed: bool,
+    pub changed_tables: Vec<String>,
+    pub dirty_edge_keys: HashMap<EdgeId, HashSet<Value>>,
+    pub dirty_table_rows: HashMap<String, HashSet<u64>>,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
+struct RowCacheKey {
+    node_id: NodeId,
+    row_id: u64,
+    row_version: u64,
+}
+
+impl RowCacheKey {
+    fn new(node_id: NodeId, row: &Rc<Row>) -> Self {
+        Self {
+            node_id,
+            row_id: row.id(),
+            row_version: row.version(),
+        }
+    }
+}
+
+#[derive(Clone, Debug, Default)]
+pub struct GraphqlBatchState {
+    row_cache: HashMap<RowCacheKey, ResponseValue>,
+    row_sources: HashMap<RowCacheKey, Rc<Row>>,
+    row_dependencies: HashMap<RowCacheKey, Vec<(EdgeId, Value)>>,
+    node_row_index: HashMap<NodeId, HashMap<u64, HashSet<RowCacheKey>>>,
+    edge_bucket_cache: HashMap<EdgeId, HashMap<Value, Vec<Rc<Row>>>>,
+    edge_parent_membership: HashMap<EdgeId, HashMap<Value, HashSet<RowCacheKey>>>,
+}
+
+impl GraphqlBatchState {
+    pub fn apply_invalidation(
+        &mut self,
+        plan: &GraphqlBatchPlan,
+        invalidation: &GraphqlInvalidation,
+    ) {
+        let changed_tables: HashSet<String> =
+            invalidation.changed_tables.iter().cloned().collect();
+        let mut pending = Vec::new();
+        let mut seen = HashSet::new();
+
+        if invalidation.root_changed {
+            self.collect_node_rows(plan.root_node(), &mut pending);
+        }
+
+        for (table_name, row_ids) in &invalidation.dirty_table_rows {
+            for &node_id in plan.nodes_for_table(table_name) {
+                for row_id in row_ids {
+                    self.collect_row_id_entries(node_id, *row_id, &mut pending);
+                }
+            }
+        }
+
+        let mut targeted_edges = HashSet::new();
+        for edge in plan.edges() {
+            let keys = invalidation.dirty_edge_keys.get(&edge.id);
+            let edge_changed = changed_tables.contains(&edge.direct_table);
+            if !edge_changed && keys.is_none() {
+                continue;
+            }
+            targeted_edges.insert(edge.id);
+            if let Some(keys) = keys {
+                self.collect_edge_parent_rows(edge.id, keys, &mut pending);
+                if let Some(edge_cache) = self.edge_bucket_cache.get_mut(&edge.id) {
+                    for key in keys {
+                        edge_cache.remove(key);
+                    }
+                }
+            } else {
+                self.collect_all_edge_parent_rows(edge.id, &mut pending);
+                self.edge_bucket_cache.remove(&edge.id);
+            }
+        }
+
+        for (edge_id, keys) in &invalidation.dirty_edge_keys {
+            if targeted_edges.contains(edge_id) {
+                continue;
+            }
+            self.collect_edge_parent_rows(*edge_id, keys, &mut pending);
+            if let Some(edge_cache) = self.edge_bucket_cache.get_mut(edge_id) {
+                for key in keys {
+                    edge_cache.remove(key);
+                }
+            }
+        }
+
+        while let Some(row_key) = pending.pop() {
+            if !seen.insert(row_key) {
+                continue;
+            }
+            let parents = self.parent_rows_for_row(plan, row_key);
+            self.remove_row_entry(row_key);
+            pending.extend(parents);
+        }
+    }
+
+    fn remember_row(&mut self, row_key: RowCacheKey, row: &Rc<Row>) {
+        self.row_sources.insert(row_key, row.clone());
+        self.node_row_index
+            .entry(row_key.node_id)
+            .or_insert_with(HashMap::new)
+            .entry(row_key.row_id)
+            .or_insert_with(HashSet::new)
+            .insert(row_key);
+    }
+
+    fn register_parent_membership(&mut self, row_key: RowCacheKey, edge_id: EdgeId, key: Value) {
+        let dependencies = self
+            .row_dependencies
+            .entry(row_key)
+            .or_insert_with(Vec::new);
+        if !dependencies
+            .iter()
+            .any(|(dep_edge_id, dep_key)| *dep_edge_id == edge_id && *dep_key == key)
+        {
+            dependencies.push((edge_id, key.clone()));
+        }
+        self.edge_parent_membership
+            .entry(edge_id)
+            .or_insert_with(HashMap::new)
+            .entry(key)
+            .or_insert_with(HashSet::new)
+            .insert(row_key);
+    }
+
+    fn collect_node_rows(&self, node_id: NodeId, pending: &mut Vec<RowCacheKey>) {
+        if let Some(node_rows) = self.node_row_index.get(&node_id) {
+            for row_keys in node_rows.values() {
+                pending.extend(row_keys.iter().copied());
+            }
+        }
+    }
+
+    fn collect_row_id_entries(&self, node_id: NodeId, row_id: u64, pending: &mut Vec<RowCacheKey>) {
+        if let Some(row_keys) = self
+            .node_row_index
+            .get(&node_id)
+            .and_then(|rows| rows.get(&row_id))
+        {
+            pending.extend(row_keys.iter().copied());
+        }
+    }
+
+    fn collect_edge_parent_rows(
+        &self,
+        edge_id: EdgeId,
+        keys: &HashSet<Value>,
+        pending: &mut Vec<RowCacheKey>,
+    ) {
+        let Some(edge_membership) = self.edge_parent_membership.get(&edge_id) else {
+            return;
+        };
+        for key in keys {
+            if let Some(parent_rows) = edge_membership.get(key) {
+                pending.extend(parent_rows.iter().copied());
+            }
+        }
+    }
+
+    fn collect_all_edge_parent_rows(&self, edge_id: EdgeId, pending: &mut Vec<RowCacheKey>) {
+        let Some(edge_membership) = self.edge_parent_membership.get(&edge_id) else {
+            return;
+        };
+        for parent_rows in edge_membership.values() {
+            pending.extend(parent_rows.iter().copied());
+        }
+    }
+
+    fn parent_rows_for_row(
+        &self,
+        plan: &GraphqlBatchPlan,
+        row_key: RowCacheKey,
+    ) -> Vec<RowCacheKey> {
+        let Some(row) = self.row_sources.get(&row_key) else {
+            return Vec::new();
+        };
+
+        let mut parents = HashSet::new();
+        for &edge_id in plan.incoming_edges(row_key.node_id) {
+            let edge = plan.edge(edge_id);
+            let Some(key) = row.get(edge_target_column_index(edge)).cloned() else {
+                continue;
+            };
+            if key.is_null() {
+                continue;
+            }
+            if let Some(edge_membership) = self.edge_parent_membership.get(&edge_id) {
+                if let Some(parent_rows) = edge_membership.get(&key) {
+                    parents.extend(parent_rows.iter().copied());
+                }
+            }
+        }
+
+        parents.into_iter().collect()
+    }
+
+    fn remove_row_entry(&mut self, row_key: RowCacheKey) {
+        self.row_cache.remove(&row_key);
+
+        if let Some(dependencies) = self.row_dependencies.remove(&row_key) {
+            for (edge_id, key) in dependencies {
+                let mut remove_edge_membership = false;
+                if let Some(edge_membership) = self.edge_parent_membership.get_mut(&edge_id) {
+                    if let Some(parent_rows) = edge_membership.get_mut(&key) {
+                        parent_rows.remove(&row_key);
+                        if parent_rows.is_empty() {
+                            edge_membership.remove(&key);
+                        }
+                    }
+                    remove_edge_membership = edge_membership.is_empty();
+                }
+                if remove_edge_membership {
+                    self.edge_parent_membership.remove(&edge_id);
+                }
+            }
+        }
+
+        self.row_sources.remove(&row_key);
+
+        if let Some(node_rows) = self.node_row_index.get_mut(&row_key.node_id) {
+            if let Some(row_versions) = node_rows.get_mut(&row_key.row_id) {
+                row_versions.remove(&row_key);
+                if row_versions.is_empty() {
+                    node_rows.remove(&row_key.row_id);
+                }
+            }
+            if node_rows.is_empty() {
+                self.node_row_index.remove(&row_key.node_id);
+            }
+        }
+    }
+}
+
+pub fn render_graphql_response(
+    cache: &TableCache,
+    catalog: &GraphqlCatalog,
+    field: &BoundRootField,
+    plan: &GraphqlBatchPlan,
+    state: &mut GraphqlBatchState,
+    rows: &[Rc<Row>],
+) -> GqlResult<GraphqlResponse> {
+    let field = render_root_field(cache, catalog, field, plan, state, rows)?;
+    Ok(GraphqlResponse::new(ResponseValue::object(alloc::vec![
+        field
+    ])))
+}
+
+fn render_root_field(
+    cache: &TableCache,
+    catalog: &GraphqlCatalog,
+    field: &BoundRootField,
+    plan: &GraphqlBatchPlan,
+    state: &mut GraphqlBatchState,
+    rows: &[Rc<Row>],
+) -> GqlResult<ResponseField> {
+    let value = match &field.kind {
+        BoundRootFieldKind::Collection { .. }
+        | BoundRootFieldKind::Insert { .. }
+        | BoundRootFieldKind::Update { .. }
+        | BoundRootFieldKind::Delete { .. } => ResponseValue::list(render_node_list(
+            cache,
+            catalog,
+            plan,
+            state,
+            plan.root_node(),
+            rows,
+        )?),
+        BoundRootFieldKind::ByPk { .. } => match rows.first() {
+            Some(row) => {
+                prefetch_node_edges(
+                    cache,
+                    catalog,
+                    plan,
+                    state,
+                    plan.root_node(),
+                    core::slice::from_ref(row),
+                )?;
+                render_node_object(cache, catalog, plan, state, plan.root_node(), row)?
+            }
+            None => ResponseValue::Null,
+        },
+        BoundRootFieldKind::Typename => {
+            return Err(GqlError::new(
+                GqlErrorKind::Unsupported,
+                "typename root fields do not accept row rendering",
+            ));
+        }
+    };
+
+    Ok(ResponseField::new(field.response_key.clone(), value))
+}
+
+fn render_node_list(
+    cache: &TableCache,
+    catalog: &GraphqlCatalog,
+    plan: &GraphqlBatchPlan,
+    state: &mut GraphqlBatchState,
+    node_id: NodeId,
+    rows: &[Rc<Row>],
+) -> GqlResult<Vec<ResponseValue>> {
+    if rows.is_empty() {
+        return Ok(Vec::new());
+    }
+
+    prefetch_node_edges(cache, catalog, plan, state, node_id, rows)?;
+
+    let mut values = Vec::with_capacity(rows.len());
+    for row in rows {
+        values.push(render_node_object(
+            cache, catalog, plan, state, node_id, row,
+        )?);
+    }
+    Ok(values)
+}
+
+fn render_node_object(
+    cache: &TableCache,
+    catalog: &GraphqlCatalog,
+    plan: &GraphqlBatchPlan,
+    state: &mut GraphqlBatchState,
+    node_id: NodeId,
+    row: &Rc<Row>,
+) -> GqlResult<ResponseValue> {
+    let row_key = RowCacheKey::new(node_id, row);
+    if let Some(cached) = state.row_cache.get(&row_key) {
+        return Ok(cached.clone());
+    }
+
+    state.remember_row(row_key, row);
+
+    let node = plan.node(node_id);
+    let mut fields = Vec::with_capacity(node.fields.len());
+    for field in &node.fields {
+        let value = match &field.kind {
+            RenderFieldKind::Typename { value } => {
+                ResponseValue::Scalar(Value::String(value.clone()))
+            }
+            RenderFieldKind::Column { column_index } => row
+                .get(*column_index)
+                .cloned()
+                .map(ResponseValue::Scalar)
+                .unwrap_or(ResponseValue::Null),
+            RenderFieldKind::ForwardRelation { edge_id } => {
+                render_forward_relation(cache, catalog, plan, state, *edge_id, row_key, row)?
+            }
+            RenderFieldKind::ReverseRelation { edge_id } => {
+                render_reverse_relation(cache, catalog, plan, state, *edge_id, row_key, row)?
+            }
+        };
+        fields.push(ResponseField::new(field.response_key.clone(), value));
+    }
+
+    let value = ResponseValue::object(fields);
+    state.row_cache.insert(row_key, value.clone());
+    Ok(value)
+}
+
+fn render_forward_relation(
+    cache: &TableCache,
+    catalog: &GraphqlCatalog,
+    plan: &GraphqlBatchPlan,
+    state: &mut GraphqlBatchState,
+    edge_id: EdgeId,
+    parent_row_key: RowCacheKey,
+    row: &Rc<Row>,
+) -> GqlResult<ResponseValue> {
+    let edge = plan.edge(edge_id);
+    let Some(key) = row.get(edge.relation.child_column_index).cloned() else {
+        return Ok(ResponseValue::Null);
+    };
+    if key.is_null() {
+        return Ok(ResponseValue::Null);
+    }
+
+    state.register_parent_membership(parent_row_key, edge_id, key.clone());
+
+    let child_row = state
+        .edge_bucket_cache
+        .get(&edge_id)
+        .and_then(|buckets| buckets.get(&key))
+        .and_then(|rows| rows.first())
+        .cloned();
+
+    match child_row {
+        Some(child_row) => {
+            prefetch_node_edges(
+                cache,
+                catalog,
+                plan,
+                state,
+                edge.child_node,
+                core::slice::from_ref(&child_row),
+            )?;
+            render_node_object(cache, catalog, plan, state, edge.child_node, &child_row)
+        }
+        None => Ok(ResponseValue::Null),
+    }
+}
+
+fn render_reverse_relation(
+    cache: &TableCache,
+    catalog: &GraphqlCatalog,
+    plan: &GraphqlBatchPlan,
+    state: &mut GraphqlBatchState,
+    edge_id: EdgeId,
+    parent_row_key: RowCacheKey,
+    row: &Rc<Row>,
+) -> GqlResult<ResponseValue> {
+    let edge = plan.edge(edge_id);
+    let Some(key) = row.get(edge.relation.parent_column_index).cloned() else {
+        return Ok(ResponseValue::list(Vec::new()));
+    };
+    if key.is_null() {
+        return Ok(ResponseValue::list(Vec::new()));
+    }
+
+    state.register_parent_membership(parent_row_key, edge_id, key.clone());
+
+    let child_rows = state
+        .edge_bucket_cache
+        .get(&edge_id)
+        .and_then(|buckets| buckets.get(&key))
+        .cloned()
+        .unwrap_or_default();
+    let items = render_node_list(cache, catalog, plan, state, edge.child_node, &child_rows)?;
+    Ok(ResponseValue::list(items))
+}
+
+fn prefetch_node_edges(
+    cache: &TableCache,
+    catalog: &GraphqlCatalog,
+    plan: &GraphqlBatchPlan,
+    state: &mut GraphqlBatchState,
+    node_id: NodeId,
+    rows: &[Rc<Row>],
+) -> GqlResult<()> {
+    if rows.is_empty() {
+        return Ok(());
+    }
+
+    for field in &plan.node(node_id).fields {
+        let edge_id = match field.kind {
+            RenderFieldKind::ForwardRelation { edge_id }
+            | RenderFieldKind::ReverseRelation { edge_id } => edge_id,
+            RenderFieldKind::Typename { .. } | RenderFieldKind::Column { ..
} => continue, + }; + + let edge = plan.edge(edge_id); + let keys = collect_edge_keys(edge, rows); + if keys.is_empty() { + continue; + } + + let missing_keys = { + let edge_cache = state + .edge_bucket_cache + .entry(edge_id) + .or_insert_with(HashMap::new); + keys.into_iter() + .filter(|key| !edge_cache.contains_key(key)) + .collect::>() + }; + + if missing_keys.is_empty() { + continue; + } + + let fetched = fetch_edge_buckets(cache, catalog, edge, &missing_keys)?; + let edge_cache = state + .edge_bucket_cache + .entry(edge_id) + .or_insert_with(HashMap::new); + for key in &missing_keys { + let rows = fetched.get(key).cloned().unwrap_or_default(); + edge_cache.insert(key.clone(), rows); + } + } + + Ok(()) +} + +fn collect_edge_keys(edge: &RelationEdgePlan, rows: &[Rc]) -> HashSet { + let mut keys = HashSet::new(); + for row in rows { + let value = match edge.kind { + RelationEdgeKind::Forward => row.get(edge.relation.child_column_index), + RelationEdgeKind::Reverse => row.get(edge.relation.parent_column_index), + }; + if let Some(value) = value.cloned() { + if !value.is_null() { + keys.insert(value); + } + } + } + keys +} + +fn fetch_edge_buckets( + cache: &TableCache, + catalog: &GraphqlCatalog, + edge: &RelationEdgePlan, + keys: &HashSet, +) -> GqlResult>>> { + if keys.is_empty() { + return Ok(HashMap::new()); + } + + let mut buckets = match edge.strategy { + RelationFetchStrategy::PlannerBatch => planner_batch_fetch(cache, catalog, edge, keys) + .or_else(|_| scan_and_bucket_fetch(cache, edge, keys)), + RelationFetchStrategy::IndexedProbeBatch => indexed_probe_fetch(cache, edge, keys) + .or_else(|_| planner_batch_fetch(cache, catalog, edge, keys)) + .or_else(|_| scan_and_bucket_fetch(cache, edge, keys)), + RelationFetchStrategy::ScanAndBucket => scan_and_bucket_fetch(cache, edge, keys), + }?; + + for key in keys { + buckets.entry(key.clone()).or_insert_with(Vec::new); + } + Ok(buckets) +} + +fn planner_batch_fetch( + cache: &TableCache, + catalog: 
&GraphqlCatalog, + edge: &RelationEdgePlan, + keys: &HashSet, +) -> GqlResult>>> { + let table_name = edge_target_table(edge); + let table = catalog.table(table_name).ok_or_else(|| { + GqlError::new( + GqlErrorKind::Binding, + alloc::format!("table `{}` is not available", table_name), + ) + })?; + let query = build_batch_query(table, edge, keys)?; + let plan = build_table_query_plan(table_name, table, &query)?; + let rows = execute_logical_plan(cache, table_name, plan)?; + + let mut buckets = bucket_rows(rows, edge_target_column_index(edge)); + if let Some(query) = edge.query.as_ref() { + apply_bucket_window(&mut buckets, query); + } + Ok(buckets) +} + +fn indexed_probe_fetch( + cache: &TableCache, + edge: &RelationEdgePlan, + keys: &HashSet, +) -> GqlResult>>> { + let table_name = edge_target_table(edge); + let store = cache.get_table(table_name).ok_or_else(|| { + GqlError::new( + GqlErrorKind::Execution, + alloc::format!("table `{}` was not found", table_name), + ) + })?; + + let mut buckets = HashMap::new(); + match edge.kind { + RelationEdgeKind::Forward => { + let pk_compatible = store.schema().primary_key().is_some_and(|pk| { + pk.columns().len() == 1 && pk.columns()[0].name == edge.relation.parent_column + }); + let index_name = find_single_column_index_name(store, &edge.relation.parent_column); + for key in keys { + let rows = if pk_compatible { + store.get_by_pk_values(core::slice::from_ref(key)) + } else if let Some(index_name) = index_name { + fetch_rows_by_known_index_or_scan( + store, + index_name, + &edge.relation.parent_column, + key, + ) + } else { + return Err(GqlError::new( + GqlErrorKind::Unsupported, + "indexed probe fetch requires a primary-key or single-column index", + )); + }; + buckets.insert(key.clone(), rows); + } + } + RelationEdgeKind::Reverse => { + let query = edge.query.as_ref().ok_or_else(|| { + GqlError::new( + GqlErrorKind::Unsupported, + "reverse indexed probe fetch requires a bound collection query", + ) + })?; + let index_name 
= if store.schema().get_index(&edge.relation.fk_name).is_some() { + Some(edge.relation.fk_name.as_str()) + } else { + find_single_column_index_name(store, &edge.relation.child_column) + }; + let Some(index_name) = index_name else { + return Err(GqlError::new( + GqlErrorKind::Unsupported, + "reverse indexed probe fetch requires an index on the relation key", + )); + }; + + for key in keys { + let rows = fetch_rows_by_known_index_or_scan( + store, + index_name, + &edge.relation.child_column, + key, + ); + buckets.insert(key.clone(), apply_collection_query(rows, query)); + } + } + } + Ok(buckets) +} + +fn scan_and_bucket_fetch( + cache: &TableCache, + edge: &RelationEdgePlan, + keys: &HashSet, +) -> GqlResult>>> { + let table_name = edge_target_table(edge); + let store = cache.get_table(table_name).ok_or_else(|| { + GqlError::new( + GqlErrorKind::Execution, + alloc::format!("table `{}` was not found", table_name), + ) + })?; + let key_column_index = edge_target_column_index(edge); + + let mut buckets: HashMap>> = HashMap::new(); + for row in store.scan() { + let Some(value) = row.get(key_column_index).cloned() else { + continue; + }; + if value.is_null() || !keys.contains(&value) { + continue; + } + buckets.entry(value).or_insert_with(Vec::new).push(row); + } + + if let Some(query) = edge.query.as_ref() { + for rows in buckets.values_mut() { + let materialized = apply_collection_query(core::mem::take(rows), query); + *rows = materialized; + } + } + + Ok(buckets) +} + +fn build_batch_query( + table: &TableMeta, + edge: &RelationEdgePlan, + keys: &HashSet, +) -> GqlResult { + let key_filter = relation_key_filter(table, edge, keys)?; + match edge.kind { + RelationEdgeKind::Forward => Ok(BoundCollectionQuery { + filter: Some(key_filter), + order_by: Vec::new(), + limit: None, + offset: 0, + }), + RelationEdgeKind::Reverse => { + let mut query = edge.query.clone().ok_or_else(|| { + GqlError::new( + GqlErrorKind::Unsupported, + "reverse relation batch query requires a bound 
collection query", + ) + })?; + query.filter = Some(match query.filter.take() { + Some(existing) => BoundFilter::And(alloc::vec![key_filter, existing]), + None => key_filter, + }); + query.limit = None; + query.offset = 0; + Ok(query) + } + } +} + +fn relation_key_filter( + table: &TableMeta, + edge: &RelationEdgePlan, + keys: &HashSet, +) -> GqlResult { + let column_index = edge_target_column_index(edge); + let column = table.column_by_index(column_index).ok_or_else(|| { + GqlError::new( + GqlErrorKind::Binding, + alloc::format!( + "column index {} was not found on `{}`", + column_index, + table.table_name + ), + ) + })?; + + let mut key_values: Vec<_> = keys.iter().cloned().collect(); + key_values.sort(); + Ok(BoundFilter::Column(ColumnPredicate { + column_index, + data_type: column.data_type, + ops: alloc::vec![PredicateOp::In(key_values)], + })) +} + +fn apply_bucket_window(buckets: &mut HashMap>>, query: &BoundCollectionQuery) { + if query.limit.is_none() && query.offset == 0 { + return; + } + + for rows in buckets.values_mut() { + let start = core::cmp::min(query.offset, rows.len()); + let end = match query.limit { + Some(limit) => start.saturating_add(limit).min(rows.len()), + None => rows.len(), + }; + *rows = rows[start..end].to_vec(); + } +} + +fn bucket_rows(rows: Vec>, key_column_index: usize) -> HashMap>> { + let mut buckets: HashMap>> = HashMap::new(); + for row in rows { + let Some(key) = row.get(key_column_index).cloned() else { + continue; + }; + if key.is_null() { + continue; + } + buckets.entry(key).or_insert_with(Vec::new).push(row); + } + buckets +} + +fn edge_target_table(edge: &RelationEdgePlan) -> &str { + match edge.kind { + RelationEdgeKind::Forward => &edge.relation.parent_table, + RelationEdgeKind::Reverse => &edge.relation.child_table, + } +} + +fn edge_target_column_index(edge: &RelationEdgePlan) -> usize { + match edge.kind { + RelationEdgeKind::Forward => edge.relation.parent_column_index, + RelationEdgeKind::Reverse => 
edge.relation.child_column_index, + } +} + +fn fetch_rows_by_known_index_or_scan( + store: &RowStore, + index_name: &str, + column_name: &str, + value: &Value, +) -> Vec> { + if store.schema().get_index(index_name).is_some() { + return store.index_scan(index_name, Some(&KeyRange::only(value.clone()))); + } + + let Some(column_index) = store.schema().get_column_index(column_name) else { + return Vec::new(); + }; + store + .scan() + .filter(|row| { + row.get(column_index) + .map(|candidate| candidate.sql_eq(value)) + .unwrap_or(false) + }) + .collect() +} + +fn find_single_column_index_name<'a>(store: &'a RowStore, column_name: &str) -> Option<&'a str> { + store + .schema() + .indices() + .iter() + .find(|index| index.columns().len() == 1 && index.columns()[0].name == column_name) + .map(|index| index.name()) +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::plan::build_root_field_plan; + use crate::query::{execute_query, PreparedQuery}; + use cynos_core::schema::TableBuilder; + use cynos_core::DataType; + use hashbrown::{HashMap, HashSet}; + + fn build_cache() -> TableCache { + let mut cache = TableCache::new(); + + let users = TableBuilder::new("users") + .unwrap() + .add_column("id", DataType::Int64) + .unwrap() + .add_column("name", DataType::String) + .unwrap() + .add_primary_key(&["id"], false) + .unwrap() + .build() + .unwrap(); + let posts = TableBuilder::new("posts") + .unwrap() + .add_column("id", DataType::Int64) + .unwrap() + .add_column("author_id", DataType::Int64) + .unwrap() + .add_column("title", DataType::String) + .unwrap() + .add_primary_key(&["id"], false) + .unwrap() + .add_foreign_key_with_graphql_names( + "fk_posts_author", + "author_id", + "users", + "id", + Some("author"), + Some("posts"), + ) + .unwrap() + .build() + .unwrap(); + let comments = TableBuilder::new("comments") + .unwrap() + .add_column("id", DataType::Int64) + .unwrap() + .add_column("post_id", DataType::Int64) + .unwrap() + .add_column("body", DataType::String) + 
.unwrap() + .add_primary_key(&["id"], false) + .unwrap() + .add_foreign_key_with_graphql_names( + "fk_comments_post", + "post_id", + "posts", + "id", + Some("post"), + Some("comments"), + ) + .unwrap() + .build() + .unwrap(); + + cache.create_table(users).unwrap(); + cache.create_table(posts).unwrap(); + cache.create_table(comments).unwrap(); + + cache + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 1, + alloc::vec![Value::Int64(1), Value::String("Alice".into())], + )) + .unwrap(); + cache + .get_table_mut("users") + .unwrap() + .insert(Row::new( + 2, + alloc::vec![Value::Int64(2), Value::String("Bob".into())], + )) + .unwrap(); + + cache + .get_table_mut("posts") + .unwrap() + .insert(Row::new( + 10, + alloc::vec![ + Value::Int64(10), + Value::Int64(1), + Value::String("Hello".into()), + ], + )) + .unwrap(); + cache + .get_table_mut("posts") + .unwrap() + .insert(Row::new( + 11, + alloc::vec![ + Value::Int64(11), + Value::Int64(1), + Value::String("Rust".into()), + ], + )) + .unwrap(); + cache + .get_table_mut("posts") + .unwrap() + .insert(Row::new( + 12, + alloc::vec![ + Value::Int64(12), + Value::Int64(2), + Value::String("DB".into()) + ], + )) + .unwrap(); + + cache + .get_table_mut("comments") + .unwrap() + .insert(Row::new( + 100, + alloc::vec![ + Value::Int64(100), + Value::Int64(10), + Value::String("first".into()), + ], + )) + .unwrap(); + cache + .get_table_mut("comments") + .unwrap() + .insert(Row::new( + 101, + alloc::vec![ + Value::Int64(101), + Value::Int64(11), + Value::String("second".into()), + ], + )) + .unwrap(); + cache + .get_table_mut("comments") + .unwrap() + .insert(Row::new( + 102, + alloc::vec![ + Value::Int64(102), + Value::Int64(11), + Value::String("third".into()), + ], + )) + .unwrap(); + + cache + } + + fn execute_with_batch( + cache: &TableCache, + catalog: &GraphqlCatalog, + query: &str, + ) -> GraphqlResponse { + let prepared = PreparedQuery::parse(query).unwrap(); + let bound = prepared.bind(catalog, None).unwrap(); + 
let field = bound.fields.into_iter().next().unwrap(); + let root_plan = build_root_field_plan(catalog, &field).unwrap(); + let rows = + execute_logical_plan(cache, &root_plan.table_name, root_plan.logical_plan).unwrap(); + let plan = crate::compile_batch_plan(catalog, &field).unwrap(); + let mut state = GraphqlBatchState::default(); + render_graphql_response(cache, catalog, &field, &plan, &mut state, &rows).unwrap() + } + + fn prepare_batch_execution( + cache: &TableCache, + catalog: &GraphqlCatalog, + query: &str, + ) -> ( + crate::bind::BoundRootField, + GraphqlBatchPlan, + Vec>, + GraphqlBatchState, + ) { + let prepared = PreparedQuery::parse(query).unwrap(); + let bound = prepared.bind(catalog, None).unwrap(); + let field = bound.fields.into_iter().next().unwrap(); + let root_plan = build_root_field_plan(catalog, &field).unwrap(); + let rows = + execute_logical_plan(cache, &root_plan.table_name, root_plan.logical_plan).unwrap(); + let plan = crate::compile_batch_plan(catalog, &field).unwrap(); + let mut state = GraphqlBatchState::default(); + render_graphql_response(cache, catalog, &field, &plan, &mut state, &rows).unwrap(); + (field, plan, rows, state) + } + + #[test] + fn batch_renderer_matches_recursive_execution_for_reverse_relation_order_limit() { + let cache = build_cache(); + let catalog = GraphqlCatalog::from_table_cache(&cache); + let query = "{ users(orderBy: [{ field: ID, direction: ASC }]) { id name posts(orderBy: [{ field: ID, direction: DESC }], limit: 1) { id title } } }"; + + let expected = execute_query(&cache, &catalog, query, None, None).unwrap(); + let actual = execute_with_batch(&cache, &catalog, query); + + assert_eq!(actual, expected); + } + + #[test] + fn batch_renderer_matches_recursive_execution_for_multilevel_relations() { + let cache = build_cache(); + let catalog = GraphqlCatalog::from_table_cache(&cache); + let query = "{ posts(orderBy: [{ field: ID, direction: ASC }]) { id title author { id name posts(where: { id: { gte: 11 } }, 
orderBy: [{ field: ID, direction: ASC }]) { id title comments(orderBy: [{ field: ID, direction: ASC }]) { id body } } } } }"; + + let expected = execute_query(&cache, &catalog, query, None, None).unwrap(); + let actual = execute_with_batch(&cache, &catalog, query); + + assert_eq!(actual, expected); + } + + #[test] + fn batch_invalidation_keeps_unrelated_roots_for_nested_comment_updates() { + let cache = build_cache(); + let catalog = GraphqlCatalog::from_table_cache(&cache); + let query = "{ posts(orderBy: [{ field: ID, direction: ASC }]) { id title author { id name posts(orderBy: [{ field: ID, direction: ASC }]) { id title comments(orderBy: [{ field: ID, direction: ASC }]) { id body } } } } }"; + + let (_field, plan, rows, mut state) = prepare_batch_execution(&cache, &catalog, query); + let comments_edge_id = plan + .edges() + .iter() + .find(|edge| edge.direct_table == "comments") + .map(|edge| edge.id) + .unwrap(); + + state.apply_invalidation( + &plan, + &GraphqlInvalidation { + root_changed: false, + changed_tables: alloc::vec!["comments".into()], + dirty_edge_keys: HashMap::from([( + comments_edge_id, + HashSet::from([Value::Int64(11)]), + )]), + dirty_table_rows: HashMap::from([("comments".into(), HashSet::from([101_u64]))]), + }, + ); + + assert!(state + .edge_bucket_cache + .get(&comments_edge_id) + .is_some_and(|buckets| buckets.contains_key(&Value::Int64(10)))); + assert!(state + .edge_bucket_cache + .get(&comments_edge_id) + .is_none_or(|buckets| !buckets.contains_key(&Value::Int64(11)))); + + let root_node = plan.root_node(); + let root_post_10 = RowCacheKey::new(root_node, &rows[0]); + let root_post_11 = RowCacheKey::new(root_node, &rows[1]); + let root_post_12 = RowCacheKey::new(root_node, &rows[2]); + assert!(!state.row_cache.contains_key(&root_post_10)); + assert!(!state.row_cache.contains_key(&root_post_11)); + assert!(state.row_cache.contains_key(&root_post_12)); + } + + #[test] + fn 
batch_invalidation_keeps_unrelated_roots_for_forward_relation_updates() { + let cache = build_cache(); + let catalog = GraphqlCatalog::from_table_cache(&cache); + let query = "{ posts(orderBy: [{ field: ID, direction: ASC }]) { id title author { id name posts(orderBy: [{ field: ID, direction: ASC }]) { id title } } } }"; + + let (_field, plan, rows, mut state) = prepare_batch_execution(&cache, &catalog, query); + let author_edge_id = plan + .edges() + .iter() + .find(|edge| edge.kind == RelationEdgeKind::Forward && edge.relation.name == "author") + .map(|edge| edge.id) + .unwrap(); + let user_node_id = *plan.nodes_for_table("users").first().unwrap(); + + state.apply_invalidation( + &plan, + &GraphqlInvalidation { + root_changed: false, + changed_tables: alloc::vec!["users".into()], + dirty_edge_keys: HashMap::from([( + author_edge_id, + HashSet::from([Value::Int64(2)]), + )]), + dirty_table_rows: HashMap::from([("users".into(), HashSet::from([2_u64]))]), + }, + ); + + let root_node = plan.root_node(); + let root_post_10 = RowCacheKey::new(root_node, &rows[0]); + let root_post_11 = RowCacheKey::new(root_node, &rows[1]); + let root_post_12 = RowCacheKey::new(root_node, &rows[2]); + assert!(state.row_cache.contains_key(&root_post_10)); + assert!(state.row_cache.contains_key(&root_post_11)); + assert!(!state.row_cache.contains_key(&root_post_12)); + + let cached_user_1 = state + .row_cache + .keys() + .any(|key| key.node_id == user_node_id && key.row_id == 1); + let cached_user_2 = state + .row_cache + .keys() + .any(|key| key.node_id == user_node_id && key.row_id == 2); + assert!(cached_user_1); + assert!(!cached_user_2); + } +} diff --git a/crates/gql/src/bind.rs b/crates/gql/src/bind.rs index bd6d502..83b47f2 100644 --- a/crates/gql/src/bind.rs +++ b/crates/gql/src/bind.rs @@ -153,6 +153,115 @@ pub struct JsonPredicate { pub type VariableValues = BTreeMap; +pub fn collect_dependency_tables(field: &BoundRootField) -> Vec { + let mut tables = 
hashbrown::HashSet::new(); + collect_root_field_dependency_tables(field, &mut tables); + let mut tables: Vec<_> = tables.into_iter().collect(); + tables.sort(); + tables +} + +pub fn is_delta_capable_root_field(field: &BoundRootField) -> bool { + match &field.kind { + BoundRootFieldKind::Collection { + query, selection, .. + } => { + query.order_by.is_empty() + && query.limit.is_none() + && query.offset == 0 + && is_delta_capable_selection(selection) + } + BoundRootFieldKind::ByPk { selection, .. } => is_delta_capable_selection(selection), + BoundRootFieldKind::Typename + | BoundRootFieldKind::Insert { .. } + | BoundRootFieldKind::Update { .. } + | BoundRootFieldKind::Delete { .. } => false, + } +} + +fn is_delta_capable_selection(selection: &BoundSelectionSet) -> bool { + selection.fields.iter().all(is_delta_capable_field) +} + +fn is_delta_capable_field(field: &BoundField) -> bool { + match field { + BoundField::Typename { .. } | BoundField::Column { .. } => true, + BoundField::ForwardRelation { selection, .. } => is_delta_capable_selection(selection), + BoundField::ReverseRelation { + query, selection, .. + } => { + query.order_by.is_empty() + && query.limit.is_none() + && query.offset == 0 + && is_delta_capable_selection(selection) + } + } +} + +fn collect_root_field_dependency_tables( + field: &BoundRootField, + tables: &mut hashbrown::HashSet, +) { + match &field.kind { + BoundRootFieldKind::Typename => {} + BoundRootFieldKind::Collection { + table_name, + selection, + .. + } + | BoundRootFieldKind::ByPk { + table_name, + selection, + .. + } + | BoundRootFieldKind::Insert { + table_name, + selection, + .. + } + | BoundRootFieldKind::Update { + table_name, + selection, + .. + } + | BoundRootFieldKind::Delete { + table_name, + selection, + .. 
+ } => { + tables.insert(table_name.clone()); + collect_selection_dependency_tables(selection, tables); + } + } +} + +fn collect_selection_dependency_tables( + selection: &BoundSelectionSet, + tables: &mut hashbrown::HashSet, +) { + for field in &selection.fields { + match field { + BoundField::Typename { .. } | BoundField::Column { .. } => {} + BoundField::ForwardRelation { + relation, + selection, + .. + } => { + tables.insert(relation.parent_table.clone()); + collect_selection_dependency_tables(selection, tables); + } + BoundField::ReverseRelation { + relation, + selection, + .. + } => { + tables.insert(relation.child_table.clone()); + collect_selection_dependency_tables(selection, tables); + } + } + } +} + pub fn bind_document( document: &Document, catalog: &GraphqlCatalog, diff --git a/crates/gql/src/cache.rs b/crates/gql/src/cache.rs index c7b1cce..2e7b5bb 100644 --- a/crates/gql/src/cache.rs +++ b/crates/gql/src/cache.rs @@ -47,7 +47,9 @@ impl SchemaCache { self.sdl = Some(sdl); } - self.schema.clone().unwrap_or_else(|| GraphqlSchema::from_table_cache(cache)) + self.schema + .clone() + .unwrap_or_else(|| GraphqlSchema::from_table_cache(cache)) } pub fn sdl(&mut self, epoch: u64, cache: &TableCache) -> String { diff --git a/crates/gql/src/catalog.rs b/crates/gql/src/catalog.rs index b16f304..a80a658 100644 --- a/crates/gql/src/catalog.rs +++ b/crates/gql/src/catalog.rs @@ -146,7 +146,11 @@ impl GraphqlCatalog { } pub fn mutation_field(&self, field_name: &str) -> Option<&RootFieldMeta> { - lookup_root_field(&self.mutation_fields, &self.mutation_field_lookup, field_name) + lookup_root_field( + &self.mutation_fields, + &self.mutation_field_lookup, + field_name, + ) } pub fn subscription_field(&self, field_name: &str) -> Option<&RootFieldMeta> { @@ -397,7 +401,9 @@ fn build_reverse_relations(tables: &[Table]) -> BTreeMap let mut map: BTreeMap> = BTreeMap::new(); for table in tables { for fk in table.constraints().get_foreign_keys() { - 
-            map.entry(fk.parent_table.clone()).or_default().push(fk.clone());
+            map.entry(fk.parent_table.clone())
+                .or_default()
+                .push(fk.clone());
         }
     }
     map
diff --git a/crates/gql/src/execute.rs b/crates/gql/src/execute.rs
index dea8b03..d48e2fd 100644
--- a/crates/gql/src/execute.rs
+++ b/crates/gql/src/execute.rs
@@ -53,7 +53,9 @@ pub fn execute_bound_operation(
     let mut fields = Vec::with_capacity(operation.fields.len());
     for field in &operation.fields {
-        fields.push(execute_root_field_readonly(cache, catalog, operation, field)?);
+        fields.push(execute_root_field_readonly(
+            cache, catalog, operation, field,
+        )?);
     }
     Ok(GraphqlResponse::new(ResponseValue::object(fields)))
 }
@@ -86,12 +88,26 @@ pub fn render_root_field_rows(
     rows: &[Rc<Row>],
 ) -> GqlResult<ResponseValue> {
     let value = match &field.kind {
-        BoundRootFieldKind::Collection { table_name, selection, .. }
-        | BoundRootFieldKind::Insert { table_name, selection, .. }
-        | BoundRootFieldKind::Update { table_name, selection, .. }
-        | BoundRootFieldKind::Delete { table_name, selection, .. } => {
-            render_row_list(cache, catalog, table_name, rows, selection)?
-        }
+        BoundRootFieldKind::Collection {
+            table_name,
+            selection,
+            ..
+        }
+        | BoundRootFieldKind::Insert {
+            table_name,
+            selection,
+            ..
+        }
+        | BoundRootFieldKind::Update {
+            table_name,
+            selection,
+            ..
+        }
+        | BoundRootFieldKind::Delete {
+            table_name,
+            selection,
+            ..
+        } => render_row_list(cache, catalog, table_name, rows, selection)?,
         BoundRootFieldKind::ByPk {
             table_name,
             selection,
@@ -262,10 +278,13 @@
     }
     let response = render_root_field_rows(cache, catalog, field, &inserted_rows)?;
-    Ok((response, vec![TableChange {
-        table_name: table_name.to_string(),
-        row_changes,
-    }]))
+    Ok((
+        response,
+        vec![TableChange {
+            table_name: table_name.to_string(),
+            row_changes,
+        }],
+    ))
 }
 
 fn execute_update_field(
@@ -323,10 +342,13 @@
     }
     let response = render_root_field_rows(cache, catalog, field, &updated_rows)?;
-    Ok((response, vec![TableChange {
-        table_name: table_name.to_string(),
-        row_changes,
-    }]))
+    Ok((
+        response,
+        vec![TableChange {
+            table_name: table_name.to_string(),
+            row_changes,
+        }],
+    ))
 }
 
 fn execute_delete_field(
@@ -358,10 +380,13 @@
         .iter()
         .map(|row| RowChange::Delete((**row).clone()))
         .collect();
-    Ok((response, vec![TableChange {
-        table_name: table_name.to_string(),
-        row_changes,
-    }]))
+    Ok((
+        response,
+        vec![TableChange {
+            table_name: table_name.to_string(),
+            row_changes,
+        }],
+    ))
 }
 
 fn select_collection_rows(
@@ -410,7 +435,9 @@ fn render_row_list(
 ) -> GqlResult<ResponseValue> {
     let mut values = Vec::with_capacity(rows.len());
     for row in rows {
-        values.push(execute_row_selection(cache, catalog, table_name, row, selection)?);
+        values.push(execute_row_selection(
+            cache, catalog, table_name, row, selection,
+        )?);
     }
     Ok(ResponseValue::list(values))
 }
@@ -432,7 +459,9 @@ fn execute_row_selection(
     let mut fields = Vec::with_capacity(selection.fields.len());
     for field in &selection.fields {
         let value = match field {
-            BoundField::Typename { value, .. } => ResponseValue::Scalar(Value::String(value.clone())),
+            BoundField::Typename { value, .. } => {
+                ResponseValue::Scalar(Value::String(value.clone()))
+            }
             BoundField::Column {
                 column_index,
                 ..
             } => row
                 .get(*column_index)
                 .cloned()
@@ -517,13 +546,23 @@ fn execute_reverse_relation(
         )
     })?;
 
-    let mut rows =
-        fetch_rows_by_index_or_scan(store, &relation.fk_name, &relation.child_column, &source_value);
+    let mut rows = fetch_rows_by_index_or_scan(
+        store,
+        &relation.fk_name,
+        &relation.child_column,
+        &source_value,
+    );
     rows = apply_collection_query(rows, query);
 
     let mut values = Vec::with_capacity(rows.len());
     for row in rows {
-        values.push(execute_row_selection(cache, catalog, &relation.child_table, &row, selection)?);
+        values.push(execute_row_selection(
+            cache,
+            catalog,
+            &relation.child_table,
+            &row,
+            selection,
+        )?);
     }
     Ok(ResponseValue::list(values))
 }
@@ -539,7 +578,11 @@ fn fetch_rows_by_column(store: &RowStore, column_name: &str, value: &Value) -> Vec<Rc<Row>> {
     store
         .scan()
-        .filter(|row| row.get(column_index).map(|candidate| candidate.sql_eq(value)).unwrap_or(false))
+        .filter(|row| {
+            row.get(column_index)
+                .map(|candidate| candidate.sql_eq(value))
+                .unwrap_or(false)
+        })
         .collect()
 }
 
@@ -549,7 +592,10 @@ fn fetch_rows_by_index_or_scan(
     column_name: &str,
     value: &Value,
 ) -> Vec<Rc<Row>> {
-    let rows = store.index_scan(index_name, Some(&cynos_index::KeyRange::only(value.clone())));
+    let rows = store.index_scan(
+        index_name,
+        Some(&cynos_index::KeyRange::only(value.clone())),
+    );
     if !rows.is_empty() || store.schema().get_column_index(column_name).is_none() {
         return rows;
     }
@@ -559,7 +605,11 @@
     };
     store
         .scan()
-        .filter(|row| row.get(column_index).map(|candidate| candidate.sql_eq(value)).unwrap_or(false))
+        .filter(|row| {
+            row.get(column_index)
+                .map(|candidate| candidate.sql_eq(value))
+                .unwrap_or(false)
+        })
         .collect()
 }
 
@@ -572,7 +622,10 @@ fn find_single_column_index_name<'a>(store: &'a RowStore, column_name: &str) ->
         .map(|index| index.name())
 }
 
-fn apply_collection_query(mut rows: Vec<Rc<Row>>, query: &BoundCollectionQuery) -> Vec<Rc<Row>> {
+pub(crate) fn apply_collection_query(
+    mut rows: Vec<Rc<Row>>,
+    query: &BoundCollectionQuery,
+) -> Vec<Rc<Row>> {
     if let Some(filter) = &query.filter {
         rows.retain(|row| matches_filter(row, filter));
     }
@@ -628,7 +681,9 @@ fn matches_column_predicate(row: &Row, predicate: &ColumnPredicate) -> bool {
             PredicateOp::Between(lower, upper) => value >= lower && value <= upper,
             PredicateOp::Like(pattern) => match value {
                 Value::String(value) => like(value, pattern),
-                Value::Bytes(value) => core::str::from_utf8(value).map(|value| like(value, pattern)).unwrap_or(false),
+                Value::Bytes(value) => core::str::from_utf8(value)
+                    .map(|value| like(value, pattern))
+                    .unwrap_or(false),
                 _ => false,
             },
             PredicateOp::Json(predicate) => matches_json_predicate(value, predicate),
diff --git a/crates/gql/src/lib.rs b/crates/gql/src/lib.rs
index f965225..3ee0908 100644
--- a/crates/gql/src/lib.rs
+++ b/crates/gql/src/lib.rs
@@ -3,18 +3,21 @@ extern crate alloc;
 
 pub mod ast;
+pub mod batch_render;
 pub mod bind;
 pub mod cache;
 pub mod catalog;
 pub mod error;
 pub mod execute;
-pub mod plan;
 pub mod parser;
+pub mod plan;
 pub mod query;
+pub mod render_plan;
 pub mod response;
 pub mod schema;
 
 pub use ast::{Document, InputValue, OperationDefinition, OperationType, SelectionSet};
+pub use batch_render::{GraphqlBatchState, GraphqlInvalidation};
 pub use bind::{BoundOperation, VariableValues};
 pub use cache::SchemaCache;
 pub use catalog::GraphqlCatalog;
@@ -22,5 +25,6 @@ pub use error::{GqlError, GqlErrorKind, GqlResult};
 pub use execute::{OperationOutcome, RowChange, TableChange};
 pub use plan::{build_root_field_plan, RootFieldPlan};
 pub use query::{execute_operation, execute_query, PreparedQuery};
+pub use render_plan::{compile_batch_plan, GraphqlBatchPlan};
 pub use response::{GraphqlResponse, ResponseField, ResponseValue};
 pub use schema::{render_schema_sdl, GraphqlSchema};
diff --git a/crates/gql/src/render_plan.rs b/crates/gql/src/render_plan.rs
new file mode 100644
index 0000000..31fdbaf
--- /dev/null
+++ b/crates/gql/src/render_plan.rs
@@ -0,0 +1,332 @@
+use alloc::string::String;
+use alloc::vec;
+use alloc::vec::Vec;
+
+use hashbrown::{HashMap, HashSet};
+
+use crate::bind::{
+    BoundCollectionQuery, BoundField, BoundRootField, BoundRootFieldKind, BoundSelectionSet,
+};
+use crate::catalog::{GraphqlCatalog, RelationMeta, TableMeta};
+use crate::error::{GqlError, GqlErrorKind, GqlResult};
+
+pub type NodeId = usize;
+pub type EdgeId = usize;
+
+#[derive(Clone, Debug)]
+pub struct GraphqlBatchPlan {
+    root_node: NodeId,
+    nodes: Vec<RenderNodePlan>,
+    edges: Vec<RelationEdgePlan>,
+    table_node_lookup: HashMap<String, Vec<NodeId>>,
+    table_edge_lookup: HashMap<String, Vec<EdgeId>>,
+    incoming_edges: Vec<Vec<EdgeId>>,
+    has_relations: bool,
+}
+
+impl GraphqlBatchPlan {
+    pub fn root_node(&self) -> NodeId {
+        self.root_node
+    }
+
+    pub fn nodes(&self) -> &[RenderNodePlan] {
+        &self.nodes
+    }
+
+    pub fn edges(&self) -> &[RelationEdgePlan] {
+        &self.edges
+    }
+
+    pub fn nodes_for_table(&self, table_name: &str) -> &[NodeId] {
+        self.table_node_lookup
+            .get(table_name)
+            .map(Vec::as_slice)
+            .unwrap_or(&[])
+    }
+
+    pub fn edges_for_table(&self, table_name: &str) -> &[EdgeId] {
+        self.table_edge_lookup
+            .get(table_name)
+            .map(Vec::as_slice)
+            .unwrap_or(&[])
+    }
+
+    pub fn incoming_edges(&self, node_id: NodeId) -> &[EdgeId] {
+        self.incoming_edges
+            .get(node_id)
+            .map(Vec::as_slice)
+            .unwrap_or(&[])
+    }
+
+    pub fn node(&self, node_id: NodeId) -> &RenderNodePlan {
+        &self.nodes[node_id]
+    }
+
+    pub fn edge(&self, edge_id: EdgeId) -> &RelationEdgePlan {
+        &self.edges[edge_id]
+    }
+
+    pub fn has_relations(&self) -> bool {
+        self.has_relations
+    }
+}
+
+#[derive(Clone, Debug)]
+pub struct RenderNodePlan {
+    pub id: NodeId,
+    pub table_name: String,
+    pub fields: Vec<RenderFieldPlan>,
+    pub dependency_tables: Vec<String>,
+}
+
+#[derive(Clone, Debug)]
+pub struct RenderFieldPlan {
+    pub response_key: String,
+    pub kind: RenderFieldKind,
+}
+
+#[derive(Clone, Debug)]
+pub enum RenderFieldKind {
+    Typename { value: String },
+    Column { column_index: usize },
+    ForwardRelation { edge_id: EdgeId },
+    ReverseRelation { edge_id: EdgeId },
+}
+
+#[derive(Clone, Debug)]
+pub struct RelationEdgePlan {
+    pub id: EdgeId,
+    pub parent_node: NodeId,
+    pub kind: RelationEdgeKind,
+    pub relation: RelationMeta,
+    pub child_node: NodeId,
+    pub query: Option<BoundCollectionQuery>,
+    pub strategy: RelationFetchStrategy,
+    pub direct_table: String,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum RelationEdgeKind {
+    Forward,
+    Reverse,
+}
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq)]
+pub enum RelationFetchStrategy {
+    PlannerBatch,
+    IndexedProbeBatch,
+    ScanAndBucket,
+}
+
+pub fn compile_batch_plan(
+    catalog: &GraphqlCatalog,
+    field: &BoundRootField,
+) -> GqlResult<GraphqlBatchPlan> {
+    let (table_name, selection) = match &field.kind {
+        BoundRootFieldKind::Collection {
+            table_name,
+            selection,
+            ..
+        }
+        | BoundRootFieldKind::ByPk {
+            table_name,
+            selection,
+            ..
+        }
+        | BoundRootFieldKind::Insert {
+            table_name,
+            selection,
+            ..
+        }
+        | BoundRootFieldKind::Update {
+            table_name,
+            selection,
+            ..
+        }
+        | BoundRootFieldKind::Delete {
+            table_name,
+            selection,
+            ..
+        } => (table_name.as_str(), selection),
+        BoundRootFieldKind::Typename => {
+            return Err(GqlError::new(
+                GqlErrorKind::Unsupported,
+                "typename root fields do not use batch render plans",
+            ));
+        }
+    };
+
+    let mut builder = BatchPlanBuilder {
+        catalog,
+        nodes: Vec::new(),
+        edges: Vec::new(),
+        has_relations: false,
+    };
+    let root_node = builder.compile_node(table_name, selection)?;
+    let mut table_node_lookup: HashMap<String, Vec<NodeId>> = HashMap::new();
+    for node in &builder.nodes {
+        table_node_lookup
+            .entry(node.table_name.clone())
+            .or_insert_with(Vec::new)
+            .push(node.id);
+    }
+    let mut table_edge_lookup: HashMap<String, Vec<EdgeId>> = HashMap::new();
+    let mut incoming_edges = vec![Vec::new(); builder.nodes.len()];
+    for edge in &builder.edges {
+        table_edge_lookup
+            .entry(edge.direct_table.clone())
+            .or_insert_with(Vec::new)
+            .push(edge.id);
+        incoming_edges[edge.child_node].push(edge.id);
+    }
+    Ok(GraphqlBatchPlan {
+        root_node,
+        nodes: builder.nodes,
+        edges: builder.edges,
+        table_node_lookup,
+        table_edge_lookup,
+        incoming_edges,
+        has_relations: builder.has_relations,
+    })
+}
+
+struct BatchPlanBuilder<'a> {
+    catalog: &'a GraphqlCatalog,
+    nodes: Vec<RenderNodePlan>,
+    edges: Vec<RelationEdgePlan>,
+    has_relations: bool,
+}
+
+impl<'a> BatchPlanBuilder<'a> {
+    fn compile_node(
+        &mut self,
+        table_name: &str,
+        selection: &BoundSelectionSet,
+    ) -> GqlResult<NodeId> {
+        let node_id = self.nodes.len();
+        self.nodes.push(RenderNodePlan {
+            id: node_id,
+            table_name: table_name.into(),
+            fields: Vec::new(),
+            dependency_tables: Vec::new(),
+        });
+
+        let mut fields = Vec::with_capacity(selection.fields.len());
+        let mut dependencies = HashSet::new();
+        dependencies.insert(table_name.into());
+
+        for field in &selection.fields {
+            match field {
+                BoundField::Typename {
+                    response_key,
+                    value,
+                } => fields.push(RenderFieldPlan {
+                    response_key: response_key.clone(),
+                    kind: RenderFieldKind::Typename {
+                        value: value.clone(),
+                    },
+                }),
+                BoundField::Column {
+                    response_key,
+                    column_index,
+                } => fields.push(RenderFieldPlan {
+                    response_key: response_key.clone(),
+                    kind: RenderFieldKind::Column {
+                        column_index: *column_index,
+                    },
+                }),
+                BoundField::ForwardRelation {
+                    response_key,
+                    relation,
+                    selection,
+                } => {
+                    self.has_relations = true;
+                    let child_node = self.compile_node(&relation.parent_table, selection)?;
+                    dependencies.extend(self.nodes[child_node].dependency_tables.iter().cloned());
+                    let edge_id = self.edges.len();
+                    self.edges.push(RelationEdgePlan {
+                        id: edge_id,
+                        parent_node: node_id,
+                        kind: RelationEdgeKind::Forward,
+                        relation: relation.clone(),
+                        child_node,
+                        query: None,
+                        strategy: choose_forward_strategy(self.catalog, relation),
+                        direct_table: relation.parent_table.clone(),
+                    });
+                    fields.push(RenderFieldPlan {
+                        response_key: response_key.clone(),
+                        kind: RenderFieldKind::ForwardRelation { edge_id },
+                    });
+                }
+                BoundField::ReverseRelation {
+                    response_key,
+                    relation,
+                    query,
+                    selection,
+                } => {
+                    self.has_relations = true;
+                    let child_node = self.compile_node(&relation.child_table, selection)?;
+                    dependencies.extend(self.nodes[child_node].dependency_tables.iter().cloned());
+                    let edge_id = self.edges.len();
+                    self.edges.push(RelationEdgePlan {
+                        id: edge_id,
+                        parent_node: node_id,
+                        kind: RelationEdgeKind::Reverse,
+                        relation: relation.clone(),
+                        child_node,
+                        query: Some(query.clone()),
+                        strategy: choose_reverse_strategy(query),
+                        direct_table: relation.child_table.clone(),
+                    });
+                    fields.push(RenderFieldPlan {
+                        response_key: response_key.clone(),
+                        kind: RenderFieldKind::ReverseRelation { edge_id },
+                    });
+                }
+            }
+        }
+
+        let mut dependency_tables: Vec<_> = dependencies.into_iter().collect();
+        dependency_tables.sort();
+        self.nodes[node_id] = RenderNodePlan {
+            id: node_id,
+            table_name: table_name.into(),
+            fields,
+            dependency_tables,
+        };
+        Ok(node_id)
+    }
+}
+
+fn choose_forward_strategy(
+    catalog: &GraphqlCatalog,
+    relation: &RelationMeta,
+) -> RelationFetchStrategy {
+    let Some(parent_table) = catalog.table(&relation.parent_table) else {
+        return RelationFetchStrategy::PlannerBatch;
+    };
+    if is_single_column_primary_key(parent_table, &relation.parent_column) {
+        RelationFetchStrategy::IndexedProbeBatch
+    } else {
+        RelationFetchStrategy::PlannerBatch
+    }
+}
+
+fn choose_reverse_strategy(query: &BoundCollectionQuery) -> RelationFetchStrategy {
+    if query.filter.is_none()
+        && query.order_by.is_empty()
+        && query.limit.is_none()
+        && query.offset == 0
+    {
+        RelationFetchStrategy::IndexedProbeBatch
+    } else {
+        RelationFetchStrategy::PlannerBatch
+    }
+}
+
+fn is_single_column_primary_key(table: &TableMeta, column_name: &str) -> bool {
+    table
+        .primary_key()
+        .is_some_and(|pk| pk.columns.len() == 1 && pk.columns[0].name == column_name)
+}
diff --git a/crates/gql/src/schema.rs b/crates/gql/src/schema.rs
index aaed727..3cf9da6 100644
--- a/crates/gql/src/schema.rs
+++ b/crates/gql/src/schema.rs
@@ -363,10 +363,7 @@ fn build_query_fields(tables: &[Table], type_names: &BTreeMap<String, String>) -> Vec<FieldDef> {
     fields
 }
 
-fn build_mutation_fields(
-    tables: &[Table],
-    type_names: &BTreeMap<String, String>,
-) -> Vec<FieldDef> {
+fn build_mutation_fields(tables: &[Table], type_names: &BTreeMap<String, String>) -> Vec<FieldDef> {
     let mut fields = Vec::new();
     for table in tables {
         let table_name = table.name().to_string();
@@ -379,7 +376,10 @@
             name: format!("insert{}", type_name),
             args: vec![InputValueDef {
                 name: "input".to_string(),
-                ty: TypeRef::list(TypeRef::named(format!("{}InsertInput", type_name), true), true),
+                ty: TypeRef::list(
+                    TypeRef::named(format!("{}InsertInput", type_name), true),
+                    true,
+                ),
             }],
             ty: TypeRef::list(TypeRef::named(type_name.clone(), true), true),
         });
@@ -422,7 +422,10 @@ fn collection_arguments(type_name: &str) -> Vec<InputValueDef> {
         },
         InputValueDef {
             name: "orderBy".to_string(),
-            ty: TypeRef::list(TypeRef::named(format!("{}OrderByInput", type_name), true), false),
+            ty: TypeRef::list(
+                TypeRef::named(format!("{}OrderByInput", type_name), true),
+                false,
+            ),
         },
         InputValueDef {
             name: "limit".to_string(),
@@ -440,11 +443,17 @@ fn build_where_input(table: &Table) -> InputObjectTypeDef {
     let mut fields = vec![
         InputValueDef {
             name: "AND".to_string(),
-            ty: TypeRef::list(TypeRef::named(format!("{}WhereInput", type_name), true), false),
+            ty: TypeRef::list(
+                TypeRef::named(format!("{}WhereInput", type_name), true),
+                false,
+            ),
         },
         InputValueDef {
             name: "OR".to_string(),
-            ty: TypeRef::list(TypeRef::named(format!("{}WhereInput", type_name), true), false),
+            ty: TypeRef::list(
+                TypeRef::named(format!("{}WhereInput", type_name), true),
+                false,
+            ),
         },
     ];
 
@@ -571,7 +580,9 @@ fn build_reverse_relations(tables: &[Table]) -> BTreeMap<String, Vec<ForeignKey>> {
     let mut map: BTreeMap<String, Vec<ForeignKey>> = BTreeMap::new();
     for table in tables {
         for fk in table.constraints().get_foreign_keys() {
-            map.entry(fk.parent_table.clone()).or_default().push(fk.clone());
+            map.entry(fk.parent_table.clone())
+                .or_default()
+                .push(fk.clone());
         }
     }
     map
diff --git a/js/packages/core/src/wasm.d.ts b/js/packages/core/src/wasm.d.ts
index 224f829..2e8f943 100644
--- a/js/packages/core/src/wasm.d.ts
+++ b/js/packages/core/src/wasm.d.ts
@@ -380,7 +380,7 @@
  *
  * The callback receives a standard GraphQL payload object with a single `data`
  * property. The payload is emitted immediately on subscribe and again whenever
- * the root query result changes.
+ * the rendered GraphQL response changes.
 */
 export class JsGraphqlSubscription {
   private constructor();
@@ -882,197 +882,197 @@ export type InitInput = RequestInfo | URL | Response | BufferSource | WebAssembly.Module;
 
 export interface InitOutput {
   readonly memory: WebAssembly.Memory;
+  readonly __wbg_column_free: (a: number, b: number) => void;
+  readonly __wbg_columnoptions_free: (a: number, b: number) => void;
   readonly __wbg_database_free: (a: number, b: number) => void;
+  readonly __wbg_deletebuilder_free: (a: number, b: number) => void;
+  readonly __wbg_expr_free: (a: number, b: number) => void;
+  readonly __wbg_foreignkeyoptions_free: (a: number, b: number) => void;
+  readonly __wbg_get_columnoptions_auto_increment: (a: number) => number;
+  readonly __wbg_get_columnoptions_nullable: (a: number) => number;
+  readonly __wbg_get_columnoptions_primary_key: (a: number) => number;
+  readonly __wbg_get_columnoptions_unique: (a: number) => number;
+  readonly __wbg_insertbuilder_free: (a: number, b: number) => void;
+  readonly __wbg_jschangesstream_free: (a: number, b: number) => void;
+  readonly __wbg_jsgraphqlsubscription_free: (a: number, b: number) => void;
+  readonly __wbg_jsivmobservablequery_free: (a: number, b: number) => void;
+  readonly __wbg_jsobservablequery_free: (a: number, b: number) => void;
+  readonly __wbg_jsonbcolumn_free: (a: number, b: number) => void;
+  readonly __wbg_jstable_free: (a: number, b: number) => void;
+  readonly __wbg_jstablebuilder_free: (a: number, b: number) => void;
+  readonly __wbg_jstransaction_free: (a: number, b: number) => void;
   readonly __wbg_preparedgraphqlquery_free: (a: number, b: number) => void;
-  readonly database_new: (a: number, b: number) => number;
-  readonly database_create: (a: number, b: number) => number;
-  readonly database_name: (a: number, b: number) => void;
-  readonly database_createTable: (a: number, b: number, c: number) => number;
-  readonly database_registerTable: (a: number, b: number, c: number) => void;
-  readonly database_table: (a: number, b: number, c: number) => number;
-  readonly database_dropTable: (a: number, b: number, c: number, d: number) => void;
-  readonly database_tableNames: (a: number) => number;
-  readonly database_tableCount: (a: number) => number;
-  readonly database_select: (a: number, b: number) => number;
-  readonly database_insert: (a: number, b: number, c: number) => number;
-  readonly database_update: (a: number, b: number, c: number) => number;
-  readonly database_delete: (a: number, b: number, c: number) => number;
-  readonly database_transaction: (a: number) => number;
-  readonly database_clear: (a: number) => void;
-  readonly database_clearTable: (a: number, b: number, c: number, d: number) => void;
-  readonly database_totalRowCount: (a: number) => number;
-  readonly database_hasTable: (a: number, b: number, c: number) => number;
-  readonly database_graphqlSchema: (a: number, b: number) => void;
-  readonly database_graphql: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
-  readonly database_subscribeGraphql: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
-  readonly database_prepareGraphql: (a: number, b: number, c: number, d: number, e: number, f: number) => void;
-  readonly preparedgraphqlquery_exec: (a: number, b: number, c: number) => void;
-  readonly preparedgraphqlquery_subscribe: (a: number, b: number, c: number) => void;
-  readonly __wbg_column_free: (a: number, b: number) => void;
-  readonly column_new: (a: number, b: number, c: number, d: number) => number;
-  readonly column_with_index: (a: number, b: number) => number;
-  readonly column_name: (a: number, b: number) => void;
-  readonly column_tableName: (a: number, b: number) => void;
+  readonly __wbg_preparedselectquery_free: (a: number, b: number) => void;
+  readonly __wbg_selectbuilder_free: (a: number, b: number) => void;
+  readonly __wbg_set_columnoptions_auto_increment: (a: number, b: number) => void;
+  readonly __wbg_set_columnoptions_nullable: (a: number, b: number) => void;
+  readonly __wbg_set_columnoptions_primary_key: (a: number, b: number) => void;
+  readonly __wbg_set_columnoptions_unique: (a: number, b: number) => void;
+  readonly __wbg_updatebuilder_free: (a: number, b: number) => void;
+  readonly col: (a: number, b: number) => number;
+  readonly column_between: (a: number, b: number, c: number) => number;
   readonly column_eq: (a: number, b: number) => number;
-  readonly column_ne: (a: number, b: number) => number;
+  readonly column_get: (a: number, b: number, c: number) => number;
   readonly column_gt: (a: number, b: number) => number;
   readonly column_gte: (a: number, b: number) => number;
+  readonly column_in: (a: number, b: number) => number;
+  readonly column_isNotNull: (a: number) => number;
+  readonly column_isNull: (a: number) => number;
+  readonly column_like: (a: number, b: number, c: number) => number;
   readonly column_lt: (a: number, b: number) => number;
   readonly column_lte: (a: number, b: number) => number;
-  readonly column_between: (a: number, b: number, c: number) => number;
+  readonly column_match: (a: number, b: number, c: number) => number;
+  readonly column_name: (a: number, b: number) => void;
+  readonly column_ne: (a: number, b: number) => number;
+  readonly column_new: (a: number, b: number, c: number, d: number) => number;
   readonly column_notBetween: (a: number, b: number, c: number) => number;
-  readonly column_in: (a: number, b: number) => number;
   readonly column_notIn: (a: number, b: number) => number;
-  readonly column_like: (a: number, b: number, c: number) => number;
   readonly column_notLike: (a: number, b: number, c: number) => number;
-  readonly column_match: (a: number, b: number, c: number) => number;
   readonly column_notMatch: (a: number, b: number, c: number) => number;
-  readonly column_isNull: (a: number) => number;
-  readonly column_isNotNull: (a: number) => number;
-  readonly column_get: (a: number, b: number, c: number) => number;
-  readonly __wbg_jsonbcolumn_free: (a: number, b: number) => void;
-  readonly jsonbcolumn_eq: (a: number, b: number) => number;
-  readonly jsonbcolumn_contains: (a: number, b: number) => number;
-  readonly jsonbcolumn_exists: (a: number) => number;
-  readonly __wbg_expr_free: (a: number, b: number) => void;
+  readonly column_tableName: (a: number, b: number) => void;
+  readonly column_with_index: (a: number, b: number) => number;
+  readonly columnoptions_new: () => number;
+  readonly columnoptions_primaryKey: (a: number, b: number) => number;
+  readonly columnoptions_setAutoIncrement: (a: number, b: number) => number;
+  readonly columnoptions_setNullable: (a: number, b: number) => number;
+  readonly columnoptions_setUnique: (a: number, b: number) => number;
+  readonly database_clear: (a: number) => void;
+  readonly database_clearTable: (a: number, b: number, c: number, d: number) => void;
+  readonly database_create: (a: number, b: number) => number;
+  readonly database_createTable: (a: number, b: number, c: number) => number;
+  readonly database_delete: (a: number, b: number, c: number) => number;
+  readonly database_dropTable: (a: number, b: number, c: number, d: number) => void;
+  readonly database_graphql: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
+  readonly database_graphqlSchema: (a: number, b: number) => void;
+  readonly database_hasTable: (a: number, b: number, c: number) => number;
+  readonly database_insert: (a: number, b: number, c: number) => number;
+  readonly database_name: (a: number, b: number) => void;
+  readonly database_new: (a: number, b: number) => number;
+  readonly database_prepareGraphql: (a: number, b: number, c: number, d: number, e: number, f: number) => void;
+  readonly database_registerTable: (a: number, b: number, c: number) => void;
+  readonly database_select: (a: number, b: number) => number;
+  readonly database_subscribeGraphql: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;
+  readonly database_table: (a: number, b: number, c: number) => number;
+  readonly database_tableCount: (a: number) => number;
+  readonly database_tableNames: (a: number) => number;
+  readonly database_totalRowCount: (a: number) => number;
+  readonly database_transaction: (a: number) => number;
+  readonly database_update: (a: number, b: number, c: number) => number;
+  readonly deletebuilder_exec: (a: number) => number;
+  readonly deletebuilder_where: (a: number, b: number) => number;
   readonly expr_and: (a: number, b: number) => number;
-  readonly expr_or: (a: number, b: number) => number;
   readonly expr_not: (a: number) => number;
-  readonly __wbg_selectbuilder_free: (a: number, b: number) => void;
-  readonly __wbg_preparedselectquery_free: (a: number, b: number) => void;
-  readonly selectbuilder_from: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_where: (a: number, b: number) => number;
-  readonly selectbuilder_orderBy: (a: number, b: number, c: number, d: number) => number;
-  readonly selectbuilder_limit: (a: number, b: number) => number;
-  readonly selectbuilder_offset: (a: number, b: number) => number;
-  readonly selectbuilder_union: (a: number, b: number, c: number) => void;
-  readonly selectbuilder_unionAll: (a: number, b: number, c: number) => void;
-  readonly selectbuilder_innerJoin: (a: number, b: number, c: number, d: number) => number;
-  readonly selectbuilder_leftJoin: (a: number, b: number, c: number, d: number) => number;
-  readonly selectbuilder_groupBy: (a: number, b: number) => number;
-  readonly selectbuilder_count: (a: number) => number;
-  readonly selectbuilder_countCol: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_sum: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_avg: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_min: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_max: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_stddev: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_geomean: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_distinct: (a: number, b: number, c: number) => number;
-  readonly selectbuilder_exec: (a: number) => number;
-  readonly selectbuilder_prepare: (a: number, b: number) => void;
-  readonly selectbuilder_explain: (a: number, b: number) => void;
-  readonly selectbuilder_observe: (a: number, b: number) => void;
-  readonly selectbuilder_changes: (a: number, b: number) => void;
-  readonly selectbuilder_trace: (a: number, b: number) => void;
-  readonly selectbuilder_getSchemaLayout: (a: number, b: number) => void;
-  readonly selectbuilder_execBinary: (a: number) => number;
-  readonly preparedselectquery_exec: (a: number) => number;
-  readonly preparedselectquery_execBinary: (a: number) => number;
-  readonly preparedselectquery_getSchemaLayout: (a: number) => number;
-  readonly __wbg_insertbuilder_free: (a: number, b: number) => void;
-  readonly insertbuilder_values: (a: number, b: number) => number;
+  readonly expr_or: (a: number, b: number) => number;
+  readonly foreignkeyoptions_fieldName: (a: number, b: number, c: number) => number;
+  readonly foreignkeyoptions_new: () => number;
+  readonly foreignkeyoptions_reverseFieldName: (a: number, b: number, c: number) => number;
+  readonly init: () => void;
   readonly insertbuilder_exec: (a: number) => number;
-  readonly __wbg_updatebuilder_free: (a: number, b: number) => void;
-  readonly updatebuilder_set: (a: number, b: number, c: number) => number;
-  readonly updatebuilder_where: (a: number, b: number) => number;
-  readonly updatebuilder_exec: (a: number) => number;
-  readonly __wbg_deletebuilder_free: (a: number, b: number) => void;
-  readonly deletebuilder_where: (a: number, b: number) => number;
-  readonly deletebuilder_exec: (a: number) => number;
-  readonly __wbg_jsobservablequery_free: (a: number, b: number) => void;
-  readonly jsobservablequery_subscribe: (a: number, b: number) => number;
-  readonly jsobservablequery_getResult: (a: number) => number;
-  readonly jsobservablequery_getResultBinary: (a: number) => number;
-  readonly jsobservablequery_getSchemaLayout: (a: number) => number;
-  readonly jsobservablequery_length: (a: number) => number;
-  readonly jsobservablequery_isEmpty: (a: number) => number;
-  readonly jsobservablequery_subscriptionCount: (a: number) => number;
-  readonly __wbg_jsivmobservablequery_free: (a: number, b: number) => void;
-  readonly jsivmobservablequery_subscribe: (a: number, b: number) => number;
+  readonly insertbuilder_values: (a: number, b: number) => number;
+  readonly jschangesstream_getResult: (a: number) => number;
+  readonly jschangesstream_getResultBinary: (a: number) => number;
+  readonly jschangesstream_getSchemaLayout: (a: number) => number;
+  readonly jschangesstream_subscribe: (a: number, b: number) => number;
+  readonly jsgraphqlsubscription_getResult: (a: number) => number;
+  readonly jsgraphqlsubscription_subscribe: (a: number, b: number) => number;
+  readonly jsgraphqlsubscription_subscriptionCount: (a: number) => number;
   readonly jsivmobservablequery_getResult: (a: number) => number;
   readonly jsivmobservablequery_getResultBinary: (a: number) => number;
   readonly jsivmobservablequery_getSchemaLayout: (a: number) => number;
-  readonly jsivmobservablequery_length: (a: number) => number;
   readonly jsivmobservablequery_isEmpty: (a: number) => number;
+  readonly jsivmobservablequery_length: (a: number) => number;
+  readonly jsivmobservablequery_subscribe: (a: number, b: number) => number;
   readonly jsivmobservablequery_subscriptionCount: (a: number) => number;
-  readonly __wbg_jsgraphqlsubscription_free: (a: number, b: number) => void;
-  readonly jsgraphqlsubscription_getResult: (a: number) => number;
-  readonly jsgraphqlsubscription_subscribe: (a: number, b: number) => number;
-  readonly jsgraphqlsubscription_subscriptionCount: (a: number) => number;
-  readonly __wbg_jschangesstream_free: (a: number, b: number) => void;
-  readonly jschangesstream_subscribe: (a: number, b: number) => number;
-  readonly jschangesstream_getResult: (a: number) => number;
-  readonly jschangesstream_getResultBinary: (a: number) => number;
-  readonly jschangesstream_getSchemaLayout: (a: number) => number;
-  readonly __wbg_columnoptions_free: (a: number, b: number) => void;
-  readonly __wbg_get_columnoptions_primary_key: (a: number) => number;
-  readonly __wbg_set_columnoptions_primary_key: (a: number, b: number) => void;
-  readonly __wbg_get_columnoptions_nullable: (a: number) => number;
-  readonly __wbg_set_columnoptions_nullable: (a: number, b: number) => void;
-  readonly __wbg_get_columnoptions_unique: (a: number) => number;
-  readonly __wbg_set_columnoptions_unique: (a: number, b: number) => void;
-  readonly __wbg_get_columnoptions_auto_increment: (a: number) => number;
-  readonly __wbg_set_columnoptions_auto_increment: (a: number, b: number) => void;
-  readonly __wbg_foreignkeyoptions_free: (a: number, b: number) => void;
-  readonly columnoptions_new: () => number;
-  readonly columnoptions_primaryKey: (a: number, b: number) => number;
-  readonly columnoptions_setNullable: (a: number, b: number) => number;
-  readonly columnoptions_setUnique: (a: number, b: number) => number;
-  readonly columnoptions_setAutoIncrement: (a: number, b: number) => number;
-  readonly foreignkeyoptions_new: () => number;
-  readonly foreignkeyoptions_fieldName: (a: number, b: number, c: number) => number;
-  readonly foreignkeyoptions_reverseFieldName: (a: number, b: number, c: number) => number;
-  readonly __wbg_jstablebuilder_free: (a: number, b: number) => void;
-  readonly jstablebuilder_new: (a: number, b: number) => number;
-  readonly jstablebuilder_build: (a: number, b: number) => void;
-  readonly jstablebuilder_column: (a: number, b: number, c: number, d: number, e: number) => number;
-  readonly jstablebuilder_primaryKey: (a: number, b: number) => number;
-  readonly jstablebuilder_index: (a: number, b: number, c: number, d: number) => number;
-  readonly jstablebuilder_uniqueIndex: (a: number, b: number, c: number, d: number) => number;
-  readonly jstablebuilder_jsonbIndex: (a: number, b: number, c: number, d: number) => number;
-  readonly jstablebuilder_foreignKey: (a: number, b: number, c: number, d: number, e: number, f: number, g: number, h: number, i: number, j: number) => number;
-  readonly jstablebuilder_name: (a: number, b: number) => void;
-  readonly __wbg_jstable_free: (a: number, b: number) => void;
-  readonly jstable_name: (a: number, b: number) => void;
+  readonly jsobservablequery_getResult: (a: number) => number;
+  readonly jsobservablequery_getResultBinary: (a: number) => number;
+  readonly jsobservablequery_getSchemaLayout: (a: number) => number;
+  readonly jsobservablequery_isEmpty: (a: number) => number;
+  readonly jsobservablequery_length: (a: number) => number;
+  readonly jsobservablequery_subscribe: (a: number, b: number) => number;
+  readonly jsobservablequery_subscriptionCount: (a: number) => number;
+  readonly jsonbcolumn_contains: (a: number, b: number) => number;
+  readonly jsonbcolumn_eq: (a: number, b: number) => number;
+  readonly jsonbcolumn_exists: (a: number) => number;
   readonly jstable_col: (a: number, b: number, c: number) => number;
-  readonly jstable_columnNames: (a: number) => number;
   readonly jstable_columnCount: (a: number) => number;
+  readonly jstable_columnNames: (a: number) => number;
   readonly jstable_getColumnType: (a: number, b: number, c: number) => number;
   readonly jstable_isColumnNullable: (a: number, b: number, c: number) => number;
+  readonly jstable_name: (a: number, b: number) => void;
   readonly jstable_primaryKeyColumns: (a: number) => number;
-  readonly __wbg_jstransaction_free: (a: number, b: number) => void;
-  readonly jstransaction_insert: (a: number, b: number, c: number, d: number, e: number) => void;
-  readonly jstransaction_update: (a: number, b: number, c: number, d: number, e: number, f: number) => void;
-  readonly jstransaction_delete: (a: number, b: number, c: number, d: number, e: number) => void;
+  readonly jstablebuilder_build: (a: number, b: number) => void;
+  readonly jstablebuilder_column: (a: number, b: number, c: number, d: number, e: number) => number;
+  readonly jstablebuilder_foreignKey: (a: number, b: number, c: number, d: number, e: number, f: number, g: number, h: number, i: number, j: number) => number;
+  readonly jstablebuilder_index: (a: number, b: number, c: number, d: number) => number;
+  readonly jstablebuilder_jsonbIndex: (a: number, b: number, c: number, d: number) => number;
+  readonly jstablebuilder_name: (a: number, b: number) => void;
+  readonly jstablebuilder_new: (a: number, b: number) => number;
+  readonly jstablebuilder_primaryKey: (a: number, b: number) => number;
+  readonly jstablebuilder_uniqueIndex: (a: number, b: number, c: number, d: number) => number;
+  readonly jstransaction_active: (a: number) => number;
   readonly jstransaction_commit: (a: number, b: number) => void;
+  readonly jstransaction_delete: (a: number, b: number, c: number, d: number, e: number) => void;
+  readonly jstransaction_insert: (a: number, b: number, c: number, d: number, e: number) => void;
   readonly jstransaction_rollback: (a: number, b: number) => void;
-  readonly jstransaction_active: (a: number) => number;
   readonly jstransaction_state: (a: number, b: number) => void;
-  readonly init: () => void;
-  readonly col: (a: number, b: number) => number;
+  readonly jstransaction_update: (a: number, b: number, c: number, d: number, e: number, f: number) => void;
+  readonly preparedgraphqlquery_exec: (a: number, b: number, c: number) => void;
+  readonly preparedgraphqlquery_subscribe: (a: number, b: number, c: number) => void;
+  readonly preparedselectquery_exec: (a: number) => number;
+  readonly preparedselectquery_execBinary: (a: number) => number;
+  readonly preparedselectquery_getSchemaLayout: (a: number) => number;
+  readonly selectbuilder_avg: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_changes: (a: number, b: number) => void;
+  readonly selectbuilder_count: (a: number) => number;
+  readonly selectbuilder_countCol: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_distinct: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_exec: (a: number) => number;
+  readonly selectbuilder_execBinary: (a: number) => number;
+  readonly selectbuilder_explain: (a: number, b: number) => void;
+  readonly selectbuilder_from: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_geomean: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_getSchemaLayout: (a: number, b: number) => void;
+  readonly selectbuilder_groupBy: (a: number, b: number) => number;
+  readonly selectbuilder_innerJoin: (a: number, b: number, c: number, d: number) => number;
+  readonly selectbuilder_leftJoin: (a: number, b: number, c: number, d: number) => number;
+  readonly selectbuilder_limit: (a: number, b: number) => number;
+  readonly selectbuilder_max: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_min: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_observe: (a: number, b: number) => void;
+  readonly selectbuilder_offset: (a: number, b: number) => number;
+  readonly selectbuilder_orderBy: (a: number, b: number, c: number, d: number) => number;
+  readonly selectbuilder_prepare: (a: number, b: number) => void;
+  readonly selectbuilder_stddev: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_sum: (a: number, b: number, c: number) => number;
+  readonly selectbuilder_trace: (a: number, b: number) => void;
+  readonly selectbuilder_union: (a: number, b: number, c: number) => void;
+  readonly selectbuilder_unionAll: (a: number, b: number, c: number) => void;
+  readonly selectbuilder_where: (a: number, b:
number) => number; + readonly updatebuilder_exec: (a: number) => number; + readonly updatebuilder_set: (a: number, b: number, c: number) => number; + readonly updatebuilder_where: (a: number, b: number) => number; readonly column_new_simple: (a: number, b: number) => number; + readonly __wbg_binaryresult_free: (a: number, b: number) => void; readonly __wbg_schemalayout_free: (a: number, b: number) => void; + readonly binaryresult_asView: (a: number) => number; + readonly binaryresult_free: (a: number) => void; + readonly binaryresult_isEmpty: (a: number) => number; + readonly binaryresult_len: (a: number) => number; + readonly binaryresult_ptr: (a: number) => number; + readonly binaryresult_toUint8Array: (a: number) => number; readonly schemalayout_columnCount: (a: number) => number; - readonly schemalayout_columnName: (a: number, b: number, c: number) => void; - readonly schemalayout_columnType: (a: number, b: number) => number; - readonly schemalayout_columnOffset: (a: number, b: number) => number; readonly schemalayout_columnFixedSize: (a: number, b: number) => number; + readonly schemalayout_columnName: (a: number, b: number, c: number) => void; readonly schemalayout_columnNullable: (a: number, b: number) => number; - readonly schemalayout_rowStride: (a: number) => number; + readonly schemalayout_columnOffset: (a: number, b: number) => number; + readonly schemalayout_columnType: (a: number, b: number) => number; readonly schemalayout_nullMaskSize: (a: number) => number; - readonly __wbg_binaryresult_free: (a: number, b: number) => void; - readonly binaryresult_ptr: (a: number) => number; - readonly binaryresult_len: (a: number) => number; - readonly binaryresult_isEmpty: (a: number) => number; - readonly binaryresult_toUint8Array: (a: number) => number; - readonly binaryresult_asView: (a: number) => number; - readonly binaryresult_free: (a: number) => void; - readonly __wasm_bindgen_func_elem_66: (a: number, b: number) => void; - readonly 
__wasm_bindgen_func_elem_2236: (a: number, b: number) => void; - readonly __wasm_bindgen_func_elem_6293: (a: number, b: number, c: number, d: number) => void; - readonly __wasm_bindgen_func_elem_2243: (a: number, b: number, c: number) => void; - readonly __wasm_bindgen_func_elem_987: (a: number, b: number) => void; + readonly schemalayout_rowStride: (a: number) => number; + readonly __wasm_bindgen_func_elem_80: (a: number, b: number) => void; + readonly __wasm_bindgen_func_elem_2475: (a: number, b: number) => void; + readonly __wasm_bindgen_func_elem_6812: (a: number, b: number, c: number, d: number) => void; + readonly __wasm_bindgen_func_elem_2482: (a: number, b: number, c: number) => void; + readonly __wasm_bindgen_func_elem_1114: (a: number, b: number) => void; readonly __wbindgen_export: (a: number, b: number) => number; readonly __wbindgen_export2: (a: number, b: number, c: number, d: number) => number; readonly __wbindgen_export3: (a: number) => void; diff --git a/js/packages/core/src/wasm.js b/js/packages/core/src/wasm.js index 8fe2ce9..7c7458c 100644 --- a/js/packages/core/src/wasm.js +++ b/js/packages/core/src/wasm.js @@ -1128,7 +1128,7 @@ export const JsDataType = Object.freeze({ * * The callback receives a standard GraphQL payload object with a single `data` * property. The payload is emitted immediately on subscribe and again whenever - * the root query result changes. + * the rendered GraphQL response changes. 
  */
 export class JsGraphqlSubscription {
     static __wrap(ptr) {
@@ -2744,7 +2744,7 @@ function __wbg_get_imports() {
             const a = state0.a;
             state0.a = 0;
             try {
-                return __wasm_bindgen_func_elem_6293(a, state0.b, arg0, arg1);
+                return __wasm_bindgen_func_elem_6812(a, state0.b, arg0, arg1);
             } finally {
                 state0.a = a;
             }
@@ -2845,12 +2845,12 @@ function __wbg_get_imports() {
         }, arguments);
     },
     __wbindgen_cast_0000000000000001: function(arg0, arg1) {
         // Cast intrinsic for `Closure(Closure { dtor_idx: 1, function: Function { arguments: [], shim_idx: 2, ret: Unit, inner_ret: Some(Unit) }, mutable: true }) -> Externref`.
-        const ret = makeMutClosure(arg0, arg1, wasm.__wasm_bindgen_func_elem_66, __wasm_bindgen_func_elem_987);
+        const ret = makeMutClosure(arg0, arg1, wasm.__wasm_bindgen_func_elem_80, __wasm_bindgen_func_elem_1114);
         return addHeapObject(ret);
     },
     __wbindgen_cast_0000000000000002: function(arg0, arg1) {
-        // Cast intrinsic for `Closure(Closure { dtor_idx: 220, function: Function { arguments: [Externref], shim_idx: 221, ret: Unit, inner_ret: Some(Unit) }, mutable: true }) -> Externref`.
-        const ret = makeMutClosure(arg0, arg1, wasm.__wasm_bindgen_func_elem_2236, __wasm_bindgen_func_elem_2243);
+        // Cast intrinsic for `Closure(Closure { dtor_idx: 236, function: Function { arguments: [Externref], shim_idx: 237, ret: Unit, inner_ret: Some(Unit) }, mutable: true }) -> Externref`.
+        const ret = makeMutClosure(arg0, arg1, wasm.__wasm_bindgen_func_elem_2475, __wasm_bindgen_func_elem_2482);
         return addHeapObject(ret);
     },
     __wbindgen_cast_0000000000000003: function(arg0) {
@@ -2881,16 +2881,16 @@ function __wbg_get_imports() {
     };
 }

-function __wasm_bindgen_func_elem_987(arg0, arg1) {
-    wasm.__wasm_bindgen_func_elem_987(arg0, arg1);
+function __wasm_bindgen_func_elem_1114(arg0, arg1) {
+    wasm.__wasm_bindgen_func_elem_1114(arg0, arg1);
 }

-function __wasm_bindgen_func_elem_2243(arg0, arg1, arg2) {
-    wasm.__wasm_bindgen_func_elem_2243(arg0, arg1, addHeapObject(arg2));
+function __wasm_bindgen_func_elem_2482(arg0, arg1, arg2) {
+    wasm.__wasm_bindgen_func_elem_2482(arg0, arg1, addHeapObject(arg2));
 }

-function __wasm_bindgen_func_elem_6293(arg0, arg1, arg2, arg3) {
-    wasm.__wasm_bindgen_func_elem_6293(arg0, arg1, addHeapObject(arg2), addHeapObject(arg3));
+function __wasm_bindgen_func_elem_6812(arg0, arg1, arg2, arg3) {
+    wasm.__wasm_bindgen_func_elem_6812(arg0, arg1, addHeapObject(arg2), addHeapObject(arg3));
 }

 const BinaryResultFinalization = (typeof FinalizationRegistry === 'undefined')
diff --git a/js/packages/core/tests/graphql.test.ts b/js/packages/core/tests/graphql.test.ts
index 08258fb..af50499 100644
--- a/js/packages/core/tests/graphql.test.ts
+++ b/js/packages/core/tests/graphql.test.ts
@@ -50,6 +50,10 @@ function dataOf<T>(payload: { data: T }): T {
   return payload.data;
 }

+function sortById<T extends { id: number }>(rows: T[]): T[] {
+  return [...rows].sort((left, right) => left.id - right.id);
+}
+
 describe('GraphQL', () => {
   it('exposes schema, root planner filters, and nested relations', async () => {
     const db = createGraphqlDb();
@@ -162,6 +166,259 @@ describe('GraphQL', () => {
     expect(subscription.subscriptionCount()).toBe(0);
   });

+  it('uses the unified delta-backed live plan for scalar root subscriptions', async () => {
+    const db = createGraphqlDb();
+    const subscription = db.subscribeGraphql(`
+      subscription UserCard {
+        usersByPk(pk: { id: 1 }) {
+          id
+          name
+        }
+      }
+    `);
+
+    expect(dataOf(subscription.getResult())).toEqual({ usersByPk: null });
+
+    db.graphql(`
+      mutation {
+        insertUsers(input: [{ id: 1, name: "Alice" }]) {
+          id
+          name
+        }
+      }
+    `);
+
+    await flushGraphqlReactivity();
+    expect(dataOf(subscription.getResult())).toEqual({
+      usersByPk: { id: 1, name: 'Alice' },
+    });
+
+    db.graphql(`
+      mutation {
+        updateUsers(where: { id: { eq: 1 } }, set: { name: "Alicia" }) {
+          id
+          name
+        }
+      }
+    `);
+
+    await flushGraphqlReactivity();
+    expect(dataOf(subscription.getResult())).toEqual({
+      usersByPk: { id: 1, name: 'Alicia' },
+    });
+  });
+
+  it('uses the delta-backed live plan for multi-level nested subscriptions without sorting', async () => {
+    const db = createGraphqlDb();
+
+    await db.insert('users').values([
+      { id: 2, name: 'Bob' },
+    ]).exec();
+    await db.insert('posts').values([
+      { id: 10, author_id: 2, title: 'First' },
+    ]).exec();
+
+    const subscription = db.subscribeGraphql(`
+      subscription PostAuthorGraph {
+        postsByPk(pk: { id: 10 }) {
+          id
+          title
+          author {
+            id
+            name
+            posts {
+              id
+              title
+            }
+          }
+        }
+      }
+    `);
+
+    expect(dataOf(subscription.getResult())).toEqual({
+      postsByPk: {
+        id: 10,
+        title: 'First',
+        author: {
+          id: 2,
+          name: 'Bob',
+          posts: [{ id: 10, title: 'First' }],
+        },
+      },
+    });
+
+    const seen: Array<{
+      data: {
+        postsByPk: {
+          id: number;
+          title: string;
+          author: {
+            id: number;
+            name: string;
+            posts: Array<{ id: number; title: string }>;
+          };
+        };
+      };
+    }> = [];
+    const unsubscribe = subscription.subscribe((payload) => {
+      seen.push(payload);
+    });
+
+    db.graphql(`
+      mutation {
+        insertPosts(input: [{ id: 11, author_id: 2, title: "Second" }]) {
+          id
+        }
+      }
+    `);
+
+    await flushGraphqlReactivity();
+
+    const current = dataOf(subscription.getResult());
+    expect(current.postsByPk.id).toBe(10);
+    expect(current.postsByPk.author.name).toBe('Bob');
+    expect(sortById(current.postsByPk.author.posts)).toEqual([
+      { id: 10, title: 'First' },
+      { id: 11, title: 'Second' },
+    ]);
+    expect(sortById(dataOf(seen.at(-1)!).postsByPk.author.posts)).toEqual([
+      { id: 10, title: 'First' },
+      { id: 11, title: 'Second' },
+    ]);
+
+    unsubscribe();
+  });
+
+  it('pushes nested relation updates for GraphQL subscriptions', async () => {
+    const db = createGraphqlDb();
+
+    await db.insert('users').values([
+      { id: 1, name: 'Alice' },
+    ]).exec();
+
+    const subscription = db.subscribeGraphql(`
+      subscription WatchUsersWithPosts {
+        users(orderBy: [{ field: ID, direction: ASC }]) {
+          id
+          name
+          posts(orderBy: [{ field: ID, direction: ASC }]) {
+            id
+            title
+          }
+        }
+      }
+    `);
+
+    expect(dataOf(subscription.getResult())).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [] }],
+    });
+
+    const seen: Array<{
+      data: {
+        users: Array<{
+          id: number;
+          name: string;
+          posts: Array<{ id: number; title: string }>;
+        }>;
+      };
+    }> = [];
+    const unsubscribe = subscription.subscribe((payload) => {
+      seen.push(payload);
+    });
+
+    expect(dataOf(seen[0]!)).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [] }],
+    });
+
+    db.graphql(`
+      mutation {
+        insertPosts(input: [{ id: 10, author_id: 1, title: "Hello" }]) {
+          id
+          title
+        }
+      }
+    `);
+
+    await flushGraphqlReactivity();
+    expect(dataOf(subscription.getResult())).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [{ id: 10, title: 'Hello' }] }],
+    });
+    expect(dataOf(seen.at(-1)!)).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [{ id: 10, title: 'Hello' }] }],
+    });
+
+    db.graphql(`
+      mutation {
+        updatePosts(where: { id: { eq: 10 } }, set: { title: "Updated" }) {
+          id
+          title
+        }
+      }
+    `);
+
+    await flushGraphqlReactivity();
+    expect(dataOf(subscription.getResult())).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [{ id: 10, title: 'Updated' }] }],
+    });
+    expect(dataOf(seen.at(-1)!)).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [{ id: 10, title: 'Updated' }] }],
+    });
+
+    unsubscribe();
+  });
+
+  it('keeps getResult current even without external GraphQL subscribers', async () => {
+    const db = createGraphqlDb();
+
+    await db.insert('users').values([
+      { id: 1, name: 'Alice' },
+    ]).exec();
+
+    const subscription = db.subscribeGraphql(`
+      subscription WatchUsersWithPosts {
+        users(orderBy: [{ field: ID, direction: ASC }]) {
+          id
+          name
+          posts(orderBy: [{ field: ID, direction: ASC }]) {
+            id
+            title
+          }
+        }
+      }
+    `);
+
+    expect(subscription.subscriptionCount()).toBe(0);
+    expect(dataOf(subscription.getResult())).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [] }],
+    });
+
+    db.graphql(`
+      mutation {
+        insertPosts(input: [{ id: 10, author_id: 1, title: "Hello" }]) {
+          id
+        }
+      }
+    `);
+
+    await flushGraphqlReactivity();
+    expect(dataOf(subscription.getResult())).toEqual({
+      users: [{ id: 1, name: 'Alice', posts: [{ id: 10, title: 'Hello' }] }],
+    });
+
+    db.graphql(`
+      mutation {
+        updateUsers(where: { id: { eq: 1 } }, set: { name: "Alicia" }) {
+          id
+        }
+      }
+    `);
+
+    await flushGraphqlReactivity();
+    expect(dataOf(subscription.getResult())).toEqual({
+      users: [{ id: 1, name: 'Alicia', posts: [{ id: 10, title: 'Hello' }] }],
+    });
+  });
+
   it('supports prepared GraphQL mutation and subscription documents', async () => {
     const db = createGraphqlDb();
     const preparedSubscription = db.prepareGraphql(