From 61f530b58022bfea2c1d71fcbac905732d7b7598 Mon Sep 17 00:00:00 2001 From: Stephen Collins Date: Thu, 14 Aug 2025 09:18:31 -0500 Subject: [PATCH 1/2] v0.6.0 --- CHANGELOG.md | 26 ++ CONTEXT_REFACTOR_TASKS.md | 612 -------------------------------------- TASKS.md | 512 ------------------------------- intent_kit/__init__.py | 2 +- pyproject.toml | 2 +- uv.lock | 2 +- 6 files changed, 29 insertions(+), 1127 deletions(-) delete mode 100644 CONTEXT_REFACTOR_TASKS.md delete mode 100644 TASKS.md diff --git a/CHANGELOG.md b/CHANGELOG.md index 0c0f329..38305ff 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,32 @@ All notable changes to this project will be documented in this file. +## [v0.6.0] - 2025-08-14 + +### Added +- **DAG Architecture** - Complete refactor from tree-based to directed acyclic graph (DAG) patterns for enhanced flexibility and node reuse +- **Node Reuse Capabilities** - Nodes can now be shared across multiple execution paths, enabling more efficient and modular intent processing +- **Enhanced Context System** - Simplified context management with improved debugging and execution tracing +- **Structured Logging** - Comprehensive logging system with structured output for better observability +- **Flattened Examples Directory** - Reorganized examples for better discoverability and maintenance + +### Changed +- **Core Architecture Pattern** - Migrated from hierarchical tree structures to flexible DAG patterns, allowing nodes to have multiple parents and children +- **Node Class Construction** - Simplified node creation and configuration with cleaner, more maintainable patterns +- **Context Implementation** - Streamlined context handling with reduced complexity and improved performance +- **Evaluation Framework** - Updated evaluation system to work with new DAG architecture +- **Documentation Updates** - Comprehensive documentation refresh to reflect DAG-based patterns and capabilities + +### Removed +- **Excessive Logging** - Removed verbose logging 
in favor of structured, targeted logging +- **Tree-Specific Constraints** - Eliminated hierarchical limitations that prevented node reuse and complex routing patterns + +### Breaking Changes +- **Architecture Pattern** - Complete shift from tree-based to DAG-based execution patterns +- **Node Relationships** - Nodes can now have multiple parents and children, breaking previous tree-only constraints +- **Context API** - Simplified context interface with updated methods and properties +- **Graph Construction** - Updated graph building patterns to support DAG structures + ## [v0.5.0] - 2025-08-03 ### Added diff --git a/CONTEXT_REFACTOR_TASKS.md b/CONTEXT_REFACTOR_TASKS.md deleted file mode 100644 index 7f70f62..0000000 --- a/CONTEXT_REFACTOR_TASKS.md +++ /dev/null @@ -1,612 +0,0 @@ -Here's a **TASKS.md** that blends the Context refactor/move plan with the merge-policy + patch protocol feedback we discussed. -It's structured with markdown checkboxes so you can drop it directly into your repo and feed it to an LLM coding assistant. - ---- - -# Context Refactor & Relocation Tasks - -## Overview - -This document outlines the refactor to move the context system into `intent_kit/core/context/` with a new protocol-based architecture that supports deterministic merging, stable fingerprinting, and backwards compatibility. - ---- - -# Answers (decisions) - -1. **Implementation order** - -* **Stage 0 (must first):** `protocols.py` (ContextProtocol/ContextPatch/MergePolicyName), `default.py` with **KV only** (get/set/keys/snapshot) + stubbed `apply_patch`/`fingerprint`, `__init__.py`, deprecated re-export. -* **Stage 1:** Wire traversal to type `ctx: ContextProtocol` (no behavior change), keep existing ctx usage working. -* **Stage 2:** Implement merge policies (`policies.py`) + real `apply_patch` in `DefaultContext` (LWW default). -* **Stage 3:** Implement `fingerprint` + glob include (basic `*`), exclude `tmp.*` and `private.*` by default. 
-* **Stage 4 (incremental):** Convert 1–2 core nodes to emit `ctx_patch`; keep direct `ctx.set` allowed. -* **Stage 5:** Tests (policies, conflicts, fingerprint, fan-in determinism, adapter). - -2. **Current context usage** - -* **Yes**—do a *quick* usage scan first (30–60 min scope): where `ctx.set`, `ctx.keys`, `ctx.logger`, and any `ctx.get_history`/ops/errors are used. This ensures the Stage 0 interface doesn't break anything and tells you which namespaces to reserve. - -3. **Reduce policy / registry** - -* **Defer.** Ship with `last_write_wins`, `first_write_wins`, `append_list`, `merge_dict`. Implement `reduce` as a **NotImplemented** path that raises a clear error with guidance ("register a reducer in v2"). Add the registry hook later. - -4. **Glob patterns for fingerprint** - -* **Support simple shell-style globs** in Stage 3: `*` and `?` with `fnmatch`. That covers `user.*`, `shared.*`, and `node..*`. No need for brace sets or character classes yet. -* Default `include` if `None`: `["user.*", "shared.*"]`. -* Always exclude prefixes: `tmp.*`, `private.*`. - -5. **Error handling** - -* **Use existing `ContextConflictError` if present;** otherwise define it in `core.exceptions` or locally as a fallback in `policies.py/default.py` (as in the skeleton). When you wire traversal, import from the shared exceptions module to keep one canonical type. - -6. **Testing strategy** - -* **Add the scaffold in the same PR** (light but real). - - * Unit tests for `policies.py` and `DefaultContext.apply_patch` - * Fingerprint stability tests - * Fan-in merge determinism test (simulate two patches, stable order) - * Adapter hydration test -* Don't block on integration tests for nodes yet—add those when you convert the first node to patches. - ---- - -# Execution Plan (checklist) - -## Stage 0 — Protocol + Minimal DefaultContext - -* [x] Add `core/context/protocols.py` (exact skeleton already provided). 
-* [x] Add `core/context/default.py` with KV + `snapshot` + stub `apply_patch`/`fingerprint`. -* [x] Add `core/context/__init__.py` and `adapters.py` (DictBackedContext). -* [x] ~~Add deprecation re-export `intent_kit/context/__init__.py`.~~ (Removed old context entirely - no backwards compatibility) -* [x] Quick repo scan to confirm only `get/set/keys/logger` are needed immediately. - -**DoD:** Project imports resolve; traversal still runs with old behavior. ✅ **COMPLETED** - -## Stage 1 — Type Traversal Against Protocol - -* [x] Change traversal signature/uses to `ctx: ContextProtocol`. -* [x] Keep existing memoization and `ctx.set` calls intact (no behavior change). -* [x] CI green. - -**DoD:** No runtime behavior changes; types enforce the new surface. ✅ **COMPLETED** - -## Stage 2 — Merge Policies + Patch Application - -* [x] Implement `policies.py`: `last_write_wins`, `first_write_wins`, `append_list`, `merge_dict`. -* [x] In `default.apply_patch`: - - * [x] Enforce `private.*` write protection. - * [x] Per-key policy map; default to LWW. - * [x] Deterministic loop over keys; wrap unexpected errors as `ContextConflictError`. - * [ ] (Optional) record per-key provenance in a private metadata map for future observability. - -**DoD:** Patches merge deterministically; conflicts raise `ContextConflictError`. ✅ **COMPLETED** - -## Stage 3 — Fingerprint - -* [x] Implement `_select_keys_for_fingerprint` with `fnmatch` globs. -* [x] Default includes: `["user.*", "shared.*"]`. -* [x] Exclude `tmp.*`, `private.*`. -* [x] `canonical_fingerprint` returns canonical JSON; leave hashing for later. -* [x] **BONUS:** Implemented glob pattern matching with `fnmatch` for flexible key selection. - -**DoD:** Fingerprint stable across key order; unaffected by `tmp.*`/`private.*`. ✅ **COMPLETED** - -## Stage 4 — Node Pilot to Patches - -* [x] Update `classifier` and `extractor` to return `ctx_patch` (keep existing direct `set` as fallback). 
-* [x] In traversal: if `result.ctx_patch`, set `provenance` if missing, then `ctx.apply_patch`. - -**DoD:** Mixed mode works; patches preferred. ✅ **COMPLETED** - -## Stage 5 — Tests - -* [x] `tests/context/test_policies.py` - * [x] LWW/FWW basic - * [x] append_list (list vs non-list) - * [x] merge_dict (dict vs non-dict → conflict) -* [x] `tests/context/test_default_context.py` - * [x] apply_patch write protect `private.*` - * [x] per-key policy overrides - * [x] deterministic application order -* [x] `tests/context/test_fingerprint.py` - * [x] glob include works (`user.*`, `shared.*`) - * [x] `tmp.*` changes don't affect fingerprint -* [x] `tests/context/test_adapters.py` - * [x] DictBackedContext hydrates existing mapping - -**DoD:** All tests pass locally and in CI; coverage for policies + fingerprint. ✅ **COMPLETED** - ---- - -# Non-goals (explicit) - -* No reducer registry in this PR (raise with helpful message). -* No deep-merge semantics for nested dicts (shallow `merge_dict` only). -* No strict enforcement of ContextDependencies yet (warning-level only later). - ---- - -# Acceptance Criteria (engineer-facing) - -* ✅ `intent_kit.core.context` is the **only** import path used by traversal and nodes. -* ✅ Traversal compiles against `ContextProtocol` and applies patches if present. -* ✅ Fan-in merges are deterministic and policy-driven; unreconcilable merges raise `ContextConflictError`. -* ✅ Fingerprint is stable and excludes ephemeral/private keys. -* ~~Back-compat re-export exists and warns.~~ (Removed - no backwards compatibility) - ---- - -# Ready-to-Drop-In File Skeletons - -Here are **ready-to-drop-in file skeletons** for `core/context/` (plus the deprecation shim). They compile, have clear TODOs, and keep imports clean so your LLM assistant can fill in logic without guessing. 
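
As a sanity check on the Stage 2/Stage 4 semantics above, here is a minimal, self-contained sketch of the patch-application loop — a standalone toy that mirrors the skeletons, not the shipped module:

```python
from typing import Any

class ContextConflictError(RuntimeError):
    pass

def apply_merge(policy: str, existing: Any, incoming: Any, key: str) -> Any:
    # Toy subset of the merge policies: LWW default plus append_list.
    if policy == "last_write_wins":
        return incoming
    if policy == "append_list":
        base = [] if existing is None else existing
        if not isinstance(base, list):
            raise ContextConflictError(f"append_list expects list at {key}")
        return [*base, *(incoming if isinstance(incoming, list) else [incoming])]
    raise ContextConflictError(f"Unknown merge policy: {policy}")

def apply_patch(store: dict, patch: dict) -> None:
    # Deterministic per-key loop: protected-namespace check, then policy dispatch.
    policies = patch.get("policy", {})
    for key, incoming in patch.get("data", {}).items():
        if key.startswith("private."):
            raise ContextConflictError(f"Write to protected namespace: {key}")
        store[key] = apply_merge(policies.get(key, "last_write_wins"),
                                 store.get(key), incoming, key)

ctx: dict = {}
apply_patch(ctx, {
    "data": {"user.name": "Ada", "shared.log": "step-1"},
    "policy": {"shared.log": "append_list"},
    "provenance": "node:classifier",
})
apply_patch(ctx, {
    "data": {"shared.log": "step-2"},
    "policy": {"shared.log": "append_list"},
    "provenance": "node:extractor",
})
```

Two patches from different provenances land deterministically: `user.name` takes the last write, while `shared.log` accumulates `["step-1", "step-2"]`.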
- ---- - -# 📁 Proposed File Tree - -``` -intent_kit/ - core/ - context/ - __init__.py - protocols.py - default.py - policies.py - fingerprint.py - adapters.py - context/ - __init__.py # (deprecated re-export) -``` - ---- - -# intent\_kit/core/context/**init**.py - -```python -""" -Core Context public API. - -Re-export the protocol, default implementation, and key types from submodules. -""" - -from .protocols import ( - ContextProtocol, - ContextPatch, - MergePolicyName, - LoggerLike, -) - -from .default import DefaultContext -from .adapters import DictBackedContext - -__all__ = [ - "ContextProtocol", - "ContextPatch", - "MergePolicyName", - "LoggerLike", - "DefaultContext", - "DictBackedContext", -] -``` - ---- - -# intent\_kit/core/context/protocols.py - -```python -from __future__ import annotations - -from typing import Any, Iterable, Mapping, Optional, Protocol, TypedDict, Literal - - -MergePolicyName = Literal[ - "last_write_wins", - "first_write_wins", - "append_list", - "merge_dict", - "reduce", -] - - -class ContextPatch(TypedDict, total=False): - """ - Patch contract applied by traversal after node execution. - - data: dotted-key map of values to set/merge - policy: per-key merge policies (optional; default policy applies otherwise) - provenance: node id or source identifier for auditability - tags: optional set of tags (e.g., {"affects_memo"}) - """ - data: Mapping[str, Any] - policy: Mapping[str, MergePolicyName] - provenance: str - tags: set[str] - - -class LoggerLike(Protocol): - def info(self, msg: str, *args: Any, **kwargs: Any) -> None: ... - def warning(self, msg: str, *args: Any, **kwargs: Any) -> None: ... - def error(self, msg: str, *args: Any, **kwargs: Any) -> None: ... - def debug(self, msg: str, *args: Any, **kwargs: Any) -> None: ... - - -class ContextProtocol(Protocol): - """ - Minimal, enforceable context surface used by traversal and nodes. 
- - Implementations should: - - store values using dotted keys (recommended), - - support deterministic merging (apply_patch), - - provide stable memoization (fingerprint). - """ - - # Core KV - def get(self, key: str, default: Any = None) -> Any: ... - def set(self, key: str, value: Any, modified_by: Optional[str] = None) -> None: ... - def has(self, key: str) -> bool: ... - def keys(self) -> Iterable[str]: ... - - # Patching & snapshots - def snapshot(self) -> Mapping[str, Any]: ... - def apply_patch(self, patch: ContextPatch) -> None: ... - def merge_from(self, other: Mapping[str, Any]) -> None: ... - - # Deterministic fingerprint for memoization - def fingerprint(self, include: Optional[Iterable[str]] = None) -> str: ... - - # Telemetry (optional but expected) - @property - def logger(self) -> LoggerLike: ... - - # Hooks (no-op allowed) - def add_error(self, *, where: str, err: str, meta: Optional[Mapping[str, Any]] = None) -> None: ... - def track_operation(self, *, name: str, status: str, meta: Optional[Mapping[str, Any]] = None) -> None: ... -``` - ---- - -# intent\_kit/core/context/default.py - -```python -from __future__ import annotations - -import json -import logging -from typing import Any, Dict, Iterable, Mapping, Optional - -from .protocols import ContextProtocol, ContextPatch, MergePolicyName, LoggerLike -from .fingerprint import canonical_fingerprint # TODO: implement in fingerprint.py -from .policies import apply_merge # TODO: implement in policies.py - -# Try to use the shared exceptions if present. -try: - from intent_kit.core.exceptions import ContextConflictError -except Exception: # pragma: no cover - class ContextConflictError(RuntimeError): - """Fallback if shared exception isn't available during early refactor.""" - - -DEFAULT_EXCLUDED_FP_PREFIXES = ("tmp.", "private.") - - -class DefaultContext(ContextProtocol): - """ - Reference dotted-key context with deterministic merge + memoization. 
- - Storage model: - - _data: Dict[str, Any] with dotted keys - - _logger: LoggerLike - """ - - def __init__(self, *, logger: Optional[LoggerLike] = None) -> None: - self._data: Dict[str, Any] = {} - self._logger: LoggerLike = logger or logging.getLogger("intent_kit") - - # ---------- Core KV ---------- - def get(self, key: str, default: Any = None) -> Any: - return self._data.get(key, default) - - def set(self, key: str, value: Any, modified_by: Optional[str] = None) -> None: - # TODO: optionally record provenance/modified_by - self._data[key] = value - - def has(self, key: str) -> bool: - return key in self._data - - def keys(self) -> Iterable[str]: - # Returning a stable view helps reproducibility - return sorted(self._data.keys()) - - # ---------- Patching & snapshots ---------- - def snapshot(self) -> Mapping[str, Any]: - # Shallow copy is enough for deterministic reads/merges - return dict(self._data) - - def apply_patch(self, patch: ContextPatch) -> None: - """ - Deterministically apply a patch according to per-key or default policy. 
- - TODO: - - Respect per-key policies (patch.get("policy", {})) - - Default policy: last_write_wins - - Disallow writes to "private.*" - - Raise ContextConflictError on irreconcilable merges - - Track provenance on write - """ - data = patch.get("data", {}) - policies = patch.get("policy", {}) - provenance = patch.get("provenance", "unknown") - - for key, incoming in data.items(): - if key.startswith("private."): - raise ContextConflictError(f"Write to protected namespace: {key}") - - policy: MergePolicyName = policies.get(key, "last_write_wins") - existing = self._data.get(key, None) - - try: - merged = apply_merge(policy=policy, existing=existing, incoming=incoming, key=key) - except ContextConflictError: - raise - except Exception as e: # wrap unexpected policy errors - raise ContextConflictError(f"Merge failed for {key}: {e}") from e - - self._data[key] = merged - # TODO: optionally track provenance per key, e.g., self._meta[key] = provenance - - # TODO: handle patch.tags (e.g., mark keys affecting memoization) - - def merge_from(self, other: Mapping[str, Any]) -> None: - """ - Merge values from another mapping using last_write_wins semantics. - - NOTE: This is a coarse merge; use apply_patch for policy-aware merging. - """ - for k, v in other.items(): - if k.startswith("private."): - continue - self._data[k] = v - - # ---------- Fingerprint ---------- - def fingerprint(self, include: Optional[Iterable[str]] = None) -> str: - """ - Return a stable, canonical fingerprint string for memoization. 
- - TODO: - - Expand glob patterns in `include` (e.g., "user.*", "shared.*") - - Exclude DEFAULT_EXCLUDED_FP_PREFIXES by default - - Canonicalize via `canonical_fingerprint` - """ - selected = _select_keys_for_fingerprint( - data=self._data, - include=include, - exclude_prefixes=DEFAULT_EXCLUDED_FP_PREFIXES, - ) - return canonical_fingerprint(selected) - - # ---------- Telemetry ---------- - @property - def logger(self) -> LoggerLike: - return self._logger - - def add_error(self, *, where: str, err: str, meta: Optional[Mapping[str, Any]] = None) -> None: - # TODO: integrate with error tracking (StackContext/Langfuse/etc.) - self._logger.error("CTX error at %s: %s | meta=%s", where, err, meta) - - def track_operation(self, *, name: str, status: str, meta: Optional[Mapping[str, Any]] = None) -> None: - # TODO: integrate with operation tracking - self._logger.debug("CTX op %s status=%s meta=%s", name, status, meta) - - -def _select_keys_for_fingerprint( - data: Mapping[str, Any], - include: Optional[Iterable[str]], - exclude_prefixes: Iterable[str], -) -> Dict[str, Any]: - """ - Build a dict of keys → values to feed into the fingerprint. - - TODO: - - Implement glob expansion for `include` - - If include is None, use a conservative default (e.g., only 'user.*' & 'shared.*') - """ - if include: - # TODO: glob match keys against patterns in include - # Placeholder: naive exact match - keys = sorted({k for k in data.keys() if k in include}) - else: - # Default conservative subset - keys = sorted([k for k in data.keys() if k.startswith(("user.", "shared."))]) - - # Exclude protected/ephemeral prefixes - filtered = [k for k in keys if not k.startswith(tuple(exclude_prefixes))] - return {k: data[k] for k in filtered} -``` - ---- - -# intent\_kit/core/context/policies.py - -```python -from __future__ import annotations -from typing import Any - -# Try to use the shared exceptions if present. 
-try: - from intent_kit.core.exceptions import ContextConflictError -except Exception: # pragma: no cover - class ContextConflictError(RuntimeError): - """Fallback if shared exception isn't available during early refactor.""" - - -def apply_merge(*, policy: str, existing: Any, incoming: Any, key: str) -> Any: - """ - Route to a concrete merge policy implementation. - - Supported (initial set): - - last_write_wins (default) - - first_write_wins - - append_list - - merge_dict (shallow) - - reduce (requires registered reducer) - """ - if policy == "last_write_wins": - return _last_write_wins(existing, incoming) - if policy == "first_write_wins": - return _first_write_wins(existing, incoming) - if policy == "append_list": - return _append_list(existing, incoming, key) - if policy == "merge_dict": - return _merge_dict(existing, incoming, key) - if policy == "reduce": - # TODO: wire a reducer registry; for now fail explicitly - raise ContextConflictError(f"Reducer not registered for key: {key}") - - raise ContextConflictError(f"Unknown merge policy: {policy}") - - -def _last_write_wins(existing: Any, incoming: Any) -> Any: - return incoming - - -def _first_write_wins(existing: Any, incoming: Any) -> Any: - return existing if existing is not None else incoming - - -def _append_list(existing: Any, incoming: Any, key: str) -> Any: - if existing is None: - existing = [] - if not isinstance(existing, list): - raise ContextConflictError(f"append_list expects list at {key}; got {type(existing).__name__}") - return [*existing, incoming] if not isinstance(incoming, list) else [*existing, *incoming] - - -def _merge_dict(existing: Any, incoming: Any, key: str) -> Any: - if existing is None: - existing = {} - if not isinstance(existing, dict) or not isinstance(incoming, dict): - raise ContextConflictError(f"merge_dict expects dicts at {key}") - out = dict(existing) - out.update(incoming) - return out -``` - ---- - -# intent\_kit/core/context/fingerprint.py - -```python -from 
__future__ import annotations -import json -from typing import Any, Mapping - - -def canonical_fingerprint(selected: Mapping[str, Any]) -> str: - """ - Produce a deterministic fingerprint string from selected key/values. - - TODO: - - Consider stable float formatting if needed - - Consider hashing (e.g., blake2b) over the JSON string if shorter keys are desired - """ - # Canonical JSON: sort keys, no whitespace churn - return json.dumps(selected, sort_keys=True, separators=(",", ":")) -``` - ---- - -# intent\_kit/core/context/adapters.py - -```python -from __future__ import annotations - -import logging -from typing import Any, Mapping, Optional - -from .default import DefaultContext -from .protocols import LoggerLike - - -class DictBackedContext(DefaultContext): - """ - Adapter that hydrates from an existing dict-like context once, - then behaves like DefaultContext. - - This is intended as a back-compat shim during migration. - """ - - def __init__(self, backing: Mapping[str, Any], *, logger: Optional[LoggerLike] = None) -> None: - super().__init__(logger=logger or logging.getLogger("intent_kit")) - # Single hydration step - for k, v in backing.items(): - if isinstance(k, str): - self._data[k] = v -``` - ---- - -# intent\_kit/context/**init**.py (Deprecated Re-Export) - -```python -""" -DEPRECATED: intent_kit.context - -Use: `from intent_kit.core.context import ...` - -This module re-exports the core.context API for a transition period. 
-""" - -from warnings import warn - -warn( - "intent_kit.context is deprecated; use intent_kit.core.context", - DeprecationWarning, - stacklevel=2, -) - -# Re-export from the new location -from intent_kit.core.context import ( - ContextProtocol, - ContextPatch, - MergePolicyName, - LoggerLike, - DefaultContext, - DictBackedContext, -) - -__all__ = [ - "ContextProtocol", - "ContextPatch", - "MergePolicyName", - "LoggerLike", - "DefaultContext", - "DictBackedContext", -] -``` - ---- - -## Notes for your LLM Coding Assistant - -* **Open TODOs:** - - * Implement glob expansion + exclusions in `_select_keys_for_fingerprint` (default.py). - * Flesh out `canonical_fingerprint` if you want a hashed output. - * Add a reducer registry for `reduce` in `policies.py` when needed. - * Optional provenance/meta tracking on writes in `DefaultContext.apply_patch`. - -* **Strict Mode (optional next PR):** - - * Block writes outside node-declared `ContextDependencies.outputs`. - * Record per-key provenance to aid audit trails. - -* **Traversal touch points (separate PR):** - - * Type `ctx: ContextProtocol`. - * Use `ctx.apply_patch(result.ctx_patch)` if present. - * Swap memoization to `ctx.fingerprint(include=dag.stable_context_keys)`. - -If you want, I can also generate a tiny **unit test scaffold** (pytest) for merge policies and fingerprint stability to go with this. diff --git a/TASKS.md b/TASKS.md deleted file mode 100644 index 716db1f..0000000 --- a/TASKS.md +++ /dev/null @@ -1,512 +0,0 @@ -# TASKS.md — Refactor **intent-kit** from Trees to DAGs (pre-v1, no back-compat) - -## ✅ Completed Milestones: 0, 1, 2, 3, 4, 5, 6, 7 ✅ - -## Ground rules - -* No `parent`/`children` in any code or JSON. -* Edges are first-class; labels optional (`null` means default/fall-through). -* Multiple entrypoints supported. -* Deterministic traversal; hard fail on cycles. -* Fan-out and fan-in are supported. -* Context propagation via immutable patches with deterministic merging. 
-* Tight tests, clear docs, observable execution. - ---- - -## Deliverables - -* ✅ `intent_kit/core`: new DAG primitives, traversal, validation, loader. -* ✅ Nodes updated to return `ExecutionResult(next_edges=[...])`. -* ✅ JSON schema switched to `{entrypoints, nodes, edges}`. -* ✅ Example graphs + README snippets. -* ✅ Pytest suite: traversal, validation, fan-out/fan-in, remediation. -* ✅ Logging/metrics for per-edge hops. - ---- - -## Milestone 0 — Repo hygiene ✅ - -* [x] Create feature branch: `feature/dag-core`. -* [x] Enable `pytest -q` in CI (or keep existing). -* [x] Add `ruff`/`black` config (if not present). -* [x] Protect branch with required checks. - -**Done when:** CI runs on branch and fails if tests fail or lints fail. - ---- - -## Milestone 1 — Core DAG types ✅ - -**Files:** `intent_kit/core/graph.py` - -* [x] Define `GraphNode` dataclass: - - * `id: str`, `type: str`, `config: dict = {}`. -* [x] Define `IntentDAG` dataclass: - - * `nodes: dict[str, GraphNode]` - * `adj: dict[str, dict[str|None, set[str]]]` (outgoing) - * `rev: dict[str, set[str]]` (incoming) - * `entrypoints: list[str]` -* [x] Provide helper methods: - - * [x] `add_node(id, type, **config) -> GraphNode` - * [x] `add_edge(src, dst, label: str|None) -> None` - * [x] `freeze() -> None` (optionally make sets immutable to catch mutation bugs) - -**Acceptance:** - -* [x] Type hints pass; basic import sanity test runs. -* [x] Adding nodes/edges produces expected `adj/rev`. - ---- - -## Milestone 2 — Node execution interface ✅ - -**Files:** `intent_kit/core/node_iface.py` - -* [x] Define `ExecutionResult`: - - * `data: Any = None` - * `next_edges: list[str]|None = None` - * `terminate: bool = False` - * `metrics: dict = {}` - * `context_patch: dict = {}` - * [x] Provide `merge_metrics(other: dict)`. 
-* [x] Define `NodeProtocol` protocol/ABC: - - * `execute(user_input: str, ctx: "Context") -> ExecutionResult` - -**Acceptance:** - -* [x] Stub implementation compiles; example node can return `next_edges`. - ---- - -## Milestone 3 — DAG loader (JSON → `IntentDAG`) ✅ - -**Files:** `intent_kit/core/loader.py` - -* [x] Define JSON contract: - -```json -{ - "entrypoints": ["rootA"], - "nodes": { - "rootA": {"type": "classifier", "config": {}}, - "wx": {"type": "action", "config": {}} - }, - "edges": [ - {"from": "rootA", "to": "wx", "label": "weather"} - ] -} -``` - -* [x] Implement `load_dag(obj: dict) -> IntentDAG`. -* [x] Validate presence/shape of `entrypoints`, `nodes`, `edges` (but leave cycle checks to validator). -* [x] Factory hook: `resolve_impl(node: GraphNode) -> NodeProtocol` (DI point; wire later). - -**Acceptance:** - -* [x] Loading a minimal JSON yields `IntentDAG` with correct adjacency. - ---- - -## Milestone 4 — Validation (strict) ✅ - -**Files:** `intent_kit/core/validate.py` - -* [x] `validate_ids(dag)` — all ref’d ids exist. -* [x] `validate_acyclic(dag)` — DFS/Kahn; raise `CycleError` with path. -* [x] `validate_entrypoints(dag)` — non-empty list; every entrypoint exists. -* [x] `validate_reachability(dag)` — compute reachable from entrypoints; list unreachable. -* [x] `validate_labels(dag, producer_labels: dict[node_id, set[label]])` (optional lint): - - * If a node emits labels (declared by node type), ensure those labels exist on `adj[src]`. - * Classifiers must emit explicit labels (no default `null`). - * Reserved labels: `"error"` for error routing, `"done"` for terminal convenience. -* [x] `validate(dag)` orchestrator; returns issues or raises. - -**Acceptance:** - -* [x] Unit tests for: good graph, cycle, bad id, no entrypoints, unreachable node. 
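
The `validate_acyclic` check above can be sketched with Kahn's algorithm — illustrative only; the real version walks the labeled `dag.adj`/`dag.rev` structures and reports the offending cycle path:

```python
from collections import deque

class CycleError(RuntimeError):
    pass

def validate_acyclic(adj: dict) -> None:
    """Raise CycleError if the (label-flattened) adjacency contains a cycle."""
    indeg = {n: 0 for n in adj}
    for succs in adj.values():
        for s in succs:
            indeg[s] = indeg.get(s, 0) + 1
    # Kahn's algorithm: repeatedly remove zero-in-degree nodes.
    queue = deque(sorted(n for n, d in indeg.items() if d == 0))
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for s in sorted(adj.get(node, ())):
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    if visited != len(indeg):
        stuck = sorted(n for n, d in indeg.items() if d > 0)
        raise CycleError(f"cycle involves: {stuck}")

validate_acyclic({"a": {"b"}, "b": {"c"}, "c": set()})  # valid DAG: passes silently
try:
    validate_acyclic({"a": {"b"}, "b": {"a"}})
    cycle_caught = False
except CycleError:
    cycle_caught = True
```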
- ---- - -## Milestone 5 — Traversal engine ✅ - -**Files:** `intent_kit/core/traversal.py` - -* [x] `run_dag(dag: IntentDAG, ctx, user_input: str) -> tuple[ExecutionResult, dict]` - - * Worklist (BFS) starting from `entrypoints`. - * Track `seen_steps: set[tuple[node_id, label]]` to avoid re-enqueue of same labeled hop. - * Aggregate `metrics` across node results. - * Respect `terminate=True` (stop entire traversal). - * If `next_edges` empty or `None`, do not enqueue children. - * **Context merging**: Apply `context_patch` from each node, merge deterministically (last-writer-wins by BFS order). - * **Error handling**: Catch `NodeError`, apply error context patch, route via `"error"` edge if exists, else stop. - * **Memoization**: Optional per-node memoization using `(node_id, context_hash, input_hash)` key. -* [x] Deterministic behavior: - - * Stable queue order by insertion (entrypoints order preserved). -* [x] Hard caps: - - * [x] `max_steps` (configurable; default e.g., 1000). - * [x] `max_fanout_per_node` (default e.g., 16). - * On exceed → raise `TraversalLimitError`. - -**Acceptance:** - -* [x] Tests: linear path, fan-out, fan-in, early terminate, limits enforced. - ---- - -## Milestone 6 — Implementation resolver (DI) ✅ - -**Files:** `intent_kit/core/registry.py` - -* [x] `NodeRegistry` mapping `type` → class implementing `NodeProtocol`. -* [x] `resolve_impl(node: GraphNode) -> NodeProtocol` using registry with fallback error. -* [x] Decorator `@register_node("type")`. - -**Acceptance:** - -* [x] Register two demo nodes; traversal uses them successfully. - ---- - -## Milestone 7 — Update built-in nodes to DAG contract ✅ - -**Files:** `intent_kit/nodes/**` - -* [x] Replace any tree-era returns with `ExecutionResult(next_edges=[...], context_patch={...})`. -* [x] Ensure classifiers return explicit label(s) (strings) that match outgoing edge labels (no default `null`). 
-* [x] Ensure actions set `terminate=True` when they represent terminal states (if applicable). -* [x] Ensure remediation nodes expose `"resume"` (or chosen label) if intended. -* [x] Add `context_merge_decl` and `memoize` config options where appropriate. - -**Acceptance:** - -* [x] All built-in nodes compile and pass minimal smoke tests with the new interface. -* [x] Created new DAG nodes (`DAGActionNode`, `DAGClassifierNode`) that implement NodeProtocol directly. -* [x] Removed all tree-era concepts (children, parent) from DAG nodes. -* [x] Factory functions registered with NodeRegistry for DAG node types. - ---- - -## Milestone 8 — Logging & metrics - -**Files:** `intent_kit/runtime/logging.py`, `intent_kit/runtime/metrics.py` - -* [ ] Per-hop log record: `{from, label, to, node_type, duration_ms, tokens, cost, success, error?, context_patch?}`. -* [ ] Execution trace collector: ordered list of hops with context merge history. -* [ ] Aggregation utilities: sum tokens/cost, count node invocations, context conflict detection. -* [ ] Hook traversal to emit logs; allow injection of logger for tests. - -**Acceptance:** - -* [ ] Running an example produces a readable trace; metrics totals are correct. - ---- - -## Milestone 9 — Example graphs - -**Files:** `intent_kit/examples/*.json` - -* [ ] `demo_weather_payment.json` — classifier routes to two actions, then joins to summarize. -* [ ] `demo_shared_remediation.json` — two actions share a remediation node with context merging. -* [ ] `demo_multiple_entrypoints.json` — chat vs API entrypoints converge to router with fan-in. -* [ ] `demo_fanout_fanin.json` — branch to A/B then converge with context patch merging. - -**Acceptance:** - -* [ ] `pytest` examples test loads + validates + traverses; traces show expected order. 
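
For orientation, a classifier-fans-out-then-joins demo graph could look like this under the Milestone 3 JSON contract (node ids and labels here are illustrative, not the committed fixtures):

```json
{
  "entrypoints": ["root"],
  "nodes": {
    "root": {"type": "classifier", "config": {}},
    "wx": {"type": "action", "config": {}},
    "pay": {"type": "action", "config": {}},
    "summary": {"type": "action", "config": {}}
  },
  "edges": [
    {"from": "root", "to": "wx", "label": "weather"},
    {"from": "root", "to": "pay", "label": "payment"},
    {"from": "wx", "to": "summary", "label": null},
    {"from": "pay", "to": "summary", "label": null}
  ]
}
```

Note the classifier's outgoing edges carry explicit labels (per the Milestone 4 lint), while the fan-in edges into `summary` use the default `null` fall-through.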
- ---- - -## Milestone 10 — Pytest suite - -**Files:** `tests/test_loader.py`, `tests/test_validate.py`, `tests/test_traversal.py`, `tests/test_nodes.py` - -**Loader** - -* [ ] Loads minimal JSON, complex JSON. -* [ ] Errors when missing keys or bad shapes. - -**Validate** - -* [ ] Detects cycles with explicit cycle path in message. -* [ ] Detects unreachable nodes. -* [ ] Fails when entrypoints missing. -* [ ] Passes on valid graphs. - -**Traversal** - -* [ ] Linear path executes all nodes once. -* [ ] Fan-out executes both branches; fan-in merges without duplicates. -* [ ] Early terminate stops processing. -* [ ] Limits (max\_steps, max\_fanout) trigger exceptions. -* [ ] Deterministic order across runs. -* [ ] Context patches merge correctly in fan-in scenarios. -* [ ] Error routing via `"error"` edges works as expected. -* [ ] Memoization prevents duplicate node executions. - -**Nodes** - -* [ ] Classifier emits correct labels. -* [ ] Remediation path taken on simulated error. -* [ ] Context patches are applied and merged correctly. -* [ ] Memoization works for repeated node executions. - -**Acceptance:** - -* [ ] `pytest -q` green; coverage for `core` ≥ 85%. - ---- - -## Milestone 11 — Developer ergonomics - -**Files:** `intent_kit/core/builder.py` - -* [ ] Fluent builder for programmatic graphs: - - * `g = GraphBuilder().entrypoints("root").node("root","classifier").edge("root","wx","weather")...` -* [ ] `GraphBuilder.build() -> IntentDAG` + `validate(dag)`. - -**Acceptance:** - -* [ ] Example using builder matches JSON example behavior. - ---- - -## Milestone 12 — CLI (optional but useful) - -**Files:** `intent_kit/cli.py` - -* [ ] `intent-kit validate FILE.json` -* [ ] `intent-kit run FILE.json --input "..." --trace` -* [ ] `--max-steps`, `--fanout-cap` flags. -* [ ] Exit codes: 0 success, non-zero on validation/traversal errors. - -**Acceptance:** - -* [ ] Manual runs show trace and metrics. CI smoke test executes CLI on example. 
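
The Milestone 11 builder surface could be as small as this sketch — a toy stand-in that emits the loader's `{entrypoints, nodes, edges}` shape; the real `GraphBuilder` would return a validated `IntentDAG`:

```python
from typing import Optional

class GraphBuilder:
    """Toy fluent builder emitting the {entrypoints, nodes, edges} contract."""

    def __init__(self) -> None:
        self._entrypoints: list = []
        self._nodes: dict = {}
        self._edges: list = []

    def entrypoints(self, *ids: str) -> "GraphBuilder":
        self._entrypoints.extend(ids)
        return self

    def node(self, node_id: str, node_type: str, **config) -> "GraphBuilder":
        self._nodes[node_id] = {"type": node_type, "config": config}
        return self

    def edge(self, src: str, dst: str, label: Optional[str] = None) -> "GraphBuilder":
        self._edges.append({"from": src, "to": dst, "label": label})
        return self

    def build(self) -> dict:
        return {"entrypoints": self._entrypoints,
                "nodes": self._nodes,
                "edges": self._edges}

graph = (GraphBuilder()
         .entrypoints("root")
         .node("root", "classifier")
         .node("wx", "action")
         .edge("root", "wx", "weather")
         .build())
```

Each method returning `self` is what makes the chained style in the milestone description work.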
- ---- - -## Milestone 13 — Documentation updates - -**Files:** `README.md`, `docs/dag.md` - -* [ ] **README**: - - * Replace tree language with DAG concepts. - * Show JSON schema (`entrypoints`, `nodes`, `edges`) with context merging examples. - * 30-second demo snippet with fan-in/fan-out patterns. -* [ ] **docs/dag.md**: - - * Why DAG vs Tree. - * Patterns: shared remediation, fan-out/fan-in, multiple entrypoints, terminate-and-restart (clarify) without cycles. - * Context merging strategies and conflict resolution. - * Error handling and routing patterns. - * ASCII diagrams. - -**Acceptance:** - -* [ ] Docs build; internal links valid; examples runnable. - ---- - -## Milestone 14 — Removal of legacy code - -* [ ] Delete `parent`/`children` fields and all tree traversal code. -* [ ] Remove/rename any “Tree\*” modules. -* [ ] Update imports throughout. - -**Acceptance:** - -* [ ] Ripgrep for `children`, `parent`, `Tree` returns nothing meaningful. -* [ ] All tests still green. - ---- - -## Milestone 15 — Final polish - -* [ ] Add type guards and defensive errors with actionable messages. -* [ ] Ensure exceptions include node ids and labels for debugging. -* [ ] Ensure logs redact sensitive data if any. -* [ ] Pin dependencies; bump version `0.x` with CHANGELOG. - -**Acceptance:** - -* [ ] Dry run with examples yields clean, readable traces; no TODOs in code. 
- ---- - -## Reference interfaces (copy/paste) - -```python -# intent_kit/core/graph.py -from dataclasses import dataclass, field -from typing import Dict, Set, Optional - -EdgeLabel = Optional[str] - -@dataclass -class GraphNode: - id: str - type: str - config: dict = field(default_factory=dict) - -@dataclass -class IntentDAG: - nodes: Dict[str, GraphNode] = field(default_factory=dict) - adj: Dict[str, Dict[EdgeLabel, Set[str]]] = field(default_factory=dict) - rev: Dict[str, Set[str]] = field(default_factory=dict) - entrypoints: list[str] = field(default_factory=list) -``` - -```python -# intent_kit/core/node_iface.py -from typing import Any, Optional, List, Dict, Protocol - -class ExecutionResult: - def __init__(self, data: Any=None, next_edges: Optional[List[str]]=None, - terminate: bool=False, metrics: Optional[Dict]=None, context_patch: Optional[Dict]=None): - self.data = data - self.next_edges = next_edges - self.terminate = terminate - self.metrics = metrics or {} - self.context_patch = context_patch or {} - -class NodeProtocol(Protocol): - def execute(self, user_input: str, ctx: "Context") -> ExecutionResult: ... -``` - -```python -# intent_kit/core/traversal.py -from collections import deque -from time import perf_counter - -class TraversalLimitError(RuntimeError): ... -class NodeError(RuntimeError): ... -class TraversalError(RuntimeError): ... -class ContextConflictError(RuntimeError): ... 
- -def run_dag(dag, ctx, user_input, max_steps=1000, max_fanout_per_node=16, resolve_impl=None): - q = deque(dag.entrypoints) - seen = set() # (node_id, label) - steps = 0 - last = None - totals = {} - context_patches = {} # node_id -> merged context patch - - while q: - nid = q.popleft() - steps += 1 - if steps > max_steps: - raise TraversalLimitError("Exceeded max_steps") - - node = dag.nodes[nid] - impl = resolve_impl(node) - - # Apply merged context patch for this node - if nid in context_patches: - ctx.update(context_patches[nid]) - - t0 = perf_counter() - try: - res = impl.execute(user_input, ctx) - except NodeError as e: - # Error handling: apply error context, route via "error" edge if exists - error_patch = {"last_error": str(e), "error_node": nid} - if "error" in dag.adj.get(nid, {}): - # Route to error handler - for error_target in dag.adj[nid]["error"]: - step = (error_target, "error") - if step not in seen: - seen.add(step) - q.append(error_target) - context_patches[error_target] = error_patch - else: - # Stop traversal - raise TraversalError(f"Node {nid} failed: {e}") - continue - - dt = (perf_counter() - t0) * 1000 - - # metrics/log - m = res.metrics or {} - for k,v in m.items(): totals[k] = totals.get(k, 0) + v - ctx.logger.info({"node": nid, "type": node.type, "duration_ms": round(dt,2), "context_patch": res.context_patch}) - - last = res - if res.terminate: - break - - labels = res.next_edges or [] - if not labels: - continue - - fanout_count = 0 - for lab in labels: - for nxt in dag.adj.get(nid, {}).get(lab, set()): - step = (nxt, lab) - if step not in seen: - seen.add(step) - q.append(nxt) - fanout_count += 1 - if fanout_count > max_fanout_per_node: - raise TraversalLimitError("Exceeded max_fanout_per_node") - - # Merge context patches for downstream nodes - if res.context_patch: - if nxt not in context_patches: - context_patches[nxt] = {} - context_patches[nxt].update(res.context_patch) - - return last, totals -``` - ---- - -## Progress Summary 
- -### ✅ Completed Milestones (0-7) -- **Milestone 0**: Repo hygiene (branch, CI, linting) ✅ -- **Milestone 1**: Core DAG types (GraphNode, IntentDAG, helper methods) ✅ -- **Milestone 2**: Node execution interface (ExecutionResult, NodeProtocol protocol) ✅ -- **Milestone 3**: DAG loader (JSON → IntentDAG, validation) ✅ -- **Milestone 4**: Validation (cycle detection, reachability, labels) ✅ -- **Milestone 5**: Traversal engine (BFS, context merging, error handling) ✅ -- **Milestone 6**: Implementation resolver (DI) ✅ -- **Milestone 7**: Update built-in nodes to DAG contract ✅ - -### 📊 Test Coverage -- **Total Tests**: 111 tests across all core modules -- **Adapter Tests**: 16 comprehensive tests covering all scenarios -- **All Tests Passing**: ✅ - -### 🎯 Next Up -- **Milestone 8**: Logging & metrics - ---- - -## Quick smoke command (after wiring examples) - -* [ ] `pytest -q` -* [ ] `python -m intent_kit.cli validate intent_kit/examples/demo_weather_payment.json` -* [ ] `python -m intent_kit.cli run intent_kit/examples/demo_weather_payment.json --input "what's the weather?" --trace` - ---- - -## Review checklist (pre-merge) - -* [ ] No references to `parent`, `children`, or `Tree*`. -* [ ] All examples validate and run. -* [ ] Deterministic traversal order proven by test (seeded). -* [ ] Cycle detection test shows readable path. -* [ ] Docs match code; code samples compile. -* [ ] CI green. 
\ No newline at end of file diff --git a/intent_kit/__init__.py b/intent_kit/__init__.py index 7d10abe..9f95d07 100644 --- a/intent_kit/__init__.py +++ b/intent_kit/__init__.py @@ -19,7 +19,7 @@ # run_dag moved to DAGBuilder.run() -__version__ = "0.1.0" +__version__ = "0.6.0" __all__ = [ "IntentDAG", diff --git a/pyproject.toml b/pyproject.toml index ea944bc..6e966b0 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta" [project] name = "intentkit-py" -version = "0.5.0" +version = "0.6.0" description = "An open-source Python library for building intent classification and execution systems that work with any AI backend." authors = [ {name = "Stephen Collins", email = "stephen@stephencollins.tech"} diff --git a/uv.lock b/uv.lock index dbf726b..4e0bcb5 100644 --- a/uv.lock +++ b/uv.lock @@ -659,7 +659,7 @@ wheels = [ [[package]] name = "intentkit-py" -version = "0.5.0" +version = "0.6.0" source = { editable = "." } [package.optional-dependencies] From 22dc81cf03a23bdf2c41d718c80252b88db9f203 Mon Sep 17 00:00:00 2001 From: Stephen Collins Date: Thu, 14 Aug 2025 09:22:28 -0500 Subject: [PATCH 2/2] update README.md --- README.md | 209 +++++++++++++++++++++++++++++++++++------------------- 1 file changed, 138 insertions(+), 71 deletions(-) diff --git a/README.md b/README.md index 2393029..fedde11 100644 --- a/README.md +++ b/README.md @@ -18,13 +18,14 @@ Build reliable, auditable AI applications that understand user intent and take i ## What is Intent Kit? -Intent Kit helps you build AI-powered applications that understand what users want and take the right actions. Think of it as a smart router that can: +Intent Kit helps you build AI-powered applications that understand what users want and take the right actions. 
Built on a flexible DAG (Directed Acyclic Graph) architecture, it provides:
 
-- **Understand user requests** using any AI model (OpenAI, Anthropic, Google, or your own)
-- **Extract important details** like names, dates, and preferences automatically
-- **Take actions** like sending messages, making calculations, or calling APIs
-- **Handle complex requests** that involve multiple steps
-- **Keep track of conversations** so your app remembers context
+- **Smart Intent Understanding** using any AI model (OpenAI, Anthropic, Google, or your own)
+- **Automatic Parameter Extraction** for names, dates, preferences, and more
+- **Flexible Action Execution** like sending messages, making calculations, or calling APIs
+- **Complex Multi-Step Workflows** with reusable nodes and flexible routing
+- **Context-Aware Conversations** that remember user preferences and conversation history
+- **Node Reuse & Modularity** - Share nodes across different execution paths
 
 The best part? You stay in complete control. You define exactly what your app can do and how it should respond.
 
@@ -32,6 +33,9 @@ The best part? You stay in complete control. You define exactly what your app ca
 
 ## Why Intent Kit?
 
+### **Flexible & Scalable**
+DAG-based architecture allows complex workflows with node reuse, fan-out/fan-in patterns, and multiple entry points.
+
 ### **Reliable & Auditable**
 Every decision is traceable. Test your workflows thoroughly and deploy with confidence knowing exactly how your AI will behave.
 
@@ -64,32 +68,67 @@ pip install 'intentkit-py[anthropic]'  # Anthropic
 pip install 'intentkit-py[all]'        # All providers
 ```
 
-### 2. Build Your First Workflow
+### 2. 
Build Your First DAG Workflow ```python -from intent_kit.nodes.actions import ActionNode -from intent_kit import IntentGraphBuilder, llm_classifier +from intent_kit import DAGBuilder, run_dag +from intent_kit.core.context import DefaultContext # Define actions your app can take -greet = ActionNode( - name="greet", - action=lambda name: f"Hello {name}!", - param_schema={"name": str}, - description="Greet the user by name" -) - -# Create a classifier to understand requests -classifier = llm_classifier( - name="main", - description="Route to appropriate action", - children=[greet], - llm_config={"provider": "openai", "model": "gpt-3.5-turbo"} -) +def greet(name: str) -> str: + return f"Hello {name}!" -# Build and test your workflow -graph = IntentGraphBuilder().root(classifier).build() -result = graph.route("Hello Alice") -print(result.output) # → "Hello Alice!" +def get_weather(city: str) -> str: + return f"Weather in {city} is sunny" + +# Create DAG +builder = DAGBuilder() + +# Set default LLM configuration +builder.with_default_llm_config({ + "provider": "openai", + "model": "gpt-3.5-turbo" +}) + +# Add classifier node +builder.add_node("classifier", "classifier", + output_labels=["greet", "weather"], + description="Route to appropriate action") + +# Add extractors +builder.add_node("extract_name", "extractor", + param_schema={"name": str}, + description="Extract name from greeting", + output_key="extracted_params") + +builder.add_node("extract_city", "extractor", + param_schema={"city": str}, + description="Extract city from weather request", + output_key="extracted_params") + +# Add actions +builder.add_node("greet_action", "action", + function=greet, + param_schema={"name": str}, + description="Greet the user") + +builder.add_node("weather_action", "action", + function=get_weather, + param_schema={"city": str}, + description="Get weather information") + +# Add edges +builder.add_edge("classifier", "extract_name", "greet") +builder.add_edge("classifier", 
"extract_city", "weather") +builder.add_edge("extract_name", "greet_action") +builder.add_edge("extract_city", "weather_action") + +# Build and test your DAG +dag = builder.build() +context = DefaultContext() + +result, final_context = run_dag(dag, "Hello Alice", context) +print(result.data) # → "Hello Alice!" ``` ### 3. Using JSON Configuration @@ -97,7 +136,7 @@ print(result.output) # → "Hello Alice!" For more complex workflows, use JSON configuration: ```python -from intent_kit import IntentGraphBuilder +from intent_kit import DAGBuilder # Define your functions def greet(name, context=None): @@ -114,70 +153,83 @@ function_registry = { "calculate": calculate, } -# Define your graph in JSON -graph_config = { - "root": "main_classifier", +# Define your DAG in JSON +dag_config = { + "entrypoints": ["main_classifier"], "nodes": { "main_classifier": { - "id": "main_classifier", "type": "classifier", - "classifier_type": "llm", - "name": "main_classifier", - "description": "Main intent classifier", - "llm_config": { - "provider": "openai", - "model": "gpt-3.5-turbo", - }, - "children": ["greet_action", "calculate_action"], + "config": { + "description": "Main intent classifier", + "llm_config": { + "provider": "openai", + "model": "gpt-3.5-turbo", + }, + "output_labels": ["greet", "calculate"] + } }, "greet_action": { - "id": "greet_action", "type": "action", - "name": "greet_action", - "description": "Greet the user", - "function": "greet", - "param_schema": {"name": "str"}, + "config": { + "function": "greet", + "param_schema": {"name": "str"}, + "description": "Greet the user" + } }, "calculate_action": { - "id": "calculate_action", "type": "action", - "name": "calculate_action", - "description": "Perform a calculation", - "function": "calculate", - "param_schema": {"operation": "str", "a": "float", "b": "float"}, + "config": { + "function": "calculate", + "param_schema": {"operation": "str", "a": "float", "b": "float"}, + "description": "Perform a calculation" + } 
}, }, + "edges": [ + {"from": "main_classifier", "to": "greet_action", "label": "greet"}, + {"from": "main_classifier", "to": "calculate_action", "label": "calculate"} + ] } -# Build your graph -graph = ( - IntentGraphBuilder() - .with_json(graph_config) +# Build your DAG +dag = ( + DAGBuilder() + .with_json(dag_config) .with_functions(function_registry) .build() ) # Test it! -result = graph.route("Hello Alice") -print(result.output) # → "Hello Alice!" +context = DefaultContext() +result, final_context = run_dag(dag, "Hello Alice", context) +print(result.data) # → "Hello Alice!" ``` --- ## How It Works -Intent Kit uses a simple but powerful pattern: +Intent Kit uses a powerful DAG (Directed Acyclic Graph) pattern: -1. **Actions** - Define what your app can do (send messages, make API calls, etc.) -2. **Classifiers** - Understand what the user wants using AI or rules -3. **Graphs** - Connect everything together into a workflow -4. **Context** - Remember conversations and user preferences +1. **Nodes** - Define decision points, extractors, or actions +2. **Edges** - Connect nodes with optional labels for flexible routing +3. **Entrypoints** - Starting nodes for user input +4. **Context** - Remember conversations and user preferences across nodes The magic happens when a user sends a message: -- The classifier figures out what they want -- Intent Kit extracts the important details (names, locations, etc.) -- The right action runs with those details -- You get back a response +- The classifier figures out what they want and routes to appropriate nodes +- Extractors pull out important details (names, locations, etc.) 
+- Actions execute with those details +- Context flows through the DAG, enabling complex multi-step workflows +- You get back a response with full execution trace + +### DAG Benefits + +- **Node Reuse** - Share nodes across different execution paths +- **Flexible Routing** - Support fan-out, fan-in, and complex patterns +- **Multiple Entry Points** - Handle different types of input +- **Deterministic Execution** - Predictable, testable behavior +- **Context Propagation** - State flows through the entire workflow --- @@ -194,7 +246,7 @@ from intent_kit.evals import run_eval, load_dataset dataset = load_dataset("tests/greeting_tests.yaml") # Test your workflow -result = run_eval(dataset, graph) +result = run_eval(dataset, dag) print(f"Accuracy: {result.accuracy():.1%}") result.save_report("test_results.md") @@ -215,6 +267,12 @@ This means you can deploy with confidence, knowing your AI workflows work reliab ## Key Features +### **Flexible DAG Architecture** +- Node reuse across different execution paths +- Support for fan-out, fan-in, and complex routing patterns +- Multiple entry points for different input types +- Deterministic execution with full traceability + ### **Reliable & Auditable** - Every decision is traceable and testable - Comprehensive testing framework @@ -227,14 +285,16 @@ This means you can deploy with confidence, knowing your AI workflows work reliab - Handles complex, multi-step requests ### **Multi-Step Workflows** -- Chain actions together +- Chain actions together with flexible routing - Handle "do X and Y" requests - Remember context across conversations +- Support for complex branching and merging ### **Debugging & Transparency** - Track how decisions are made - Debug complex flows with full transparency - Audit decision paths when needed +- Context propagation tracking ### **Developer Friendly** - Simple, clear API @@ -258,16 +318,19 @@ This means you can deploy with confidence, knowing your AI workflows work reliab ## Common Use Cases ### 
**Chatbots & Virtual Assistants**
-Build intelligent bots that understand natural language and take appropriate actions.
+Build context-aware bots that understand natural language and take appropriate actions.
 
 ### **Task Automation**
-Automate complex workflows that require understanding user intent.
+Automate complex workflows that require understanding user intent and multi-step processing.
 
 ### **Data Processing**
-Route and process information based on what users are asking for.
+Route and process information based on user requests, using flexible DAG patterns.
 
 ### **Decision Systems**
-Create systems that make smart decisions based on user requests.
+Create systems that make smart decisions based on user requests, with full audit trails.
+
+### **Multi-Modal Workflows**
+Handle complex scenarios that combine multiple classifiers, extractors, and actions.
 
 ---
 
@@ -297,6 +360,10 @@ pip install 'intentkit-py[dev]'
 ```
 
 ```
 intent-kit/
 ├── intent_kit/          # Main library code
+│   ├── core/            # DAG engine, traversal, validation
+│   ├── nodes/           # Node implementations
+│   ├── services/        # AI services and utilities
+│   └── utils/           # Helper utilities
 ├── examples/            # Working examples
 ├── docs/                # Documentation
 ├── tests/               # Test suite