Two REST bugs reported in issue #27 against @fozikio/cortex-engine@1.1.0:

1. GET `/api/concepts` → 404

   Route was never registered. The dashboard SPA uses 'concepts' as its vocabulary for memory nodes. Added an `/api/concepts` route that reads all memories from the store and maps them to `{ id, name, definition, category, confidence, salience, access_count, tags, fsrs, created_at, updated_at }` alongside a total count.

2. GET `/api/coherence` → 500

   Route called `invokeTool(engine, 'validate', {})`, but `validate` requires a `prediction_id` argument; it confirms individual FSRS predictions, not graph coherence. Fixed to call `graph_report` instead and derive a coherence score: `(total_memories - orphaned_concepts) / total_memories`. Returns `{ score, total_memories, orphaned_concepts, total_edges }`. This also fixes the downstream SPA crash (`toLocaleString` on `undefined`) caused by the error response landing where a number was expected.

https://claude.ai/code/session_012qB5PebzhifzEkoLfVKYfy
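The derived score can be sketched as a small pure function. This is a hedged illustration only: the `GraphReport` field names mirror the response keys described above, but the interface itself and the empty-graph guard are assumptions, not taken from the actual `graph_report` output.

```typescript
// Hedged sketch: derive the /api/coherence score from graph_report totals.
// GraphReport is a hypothetical shape based on the keys listed above.
interface GraphReport {
  total_memories: number;
  orphaned_concepts: number;
  total_edges: number;
}

function coherenceScore(report: GraphReport): number {
  // Guard against 0/0 → NaN on an empty graph (assumed convention: score 1).
  if (report.total_memories === 0) return 1;
  return (report.total_memories - report.orphaned_concepts) / report.total_memories;
}

console.log(coherenceScore({ total_memories: 10, orphaned_concepts: 2, total_edges: 14 })); // → 0.8
```

A numeric score is always produced, which is what keeps the SPA's `toLocaleString` call from crashing on an error payload.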
Pull request overview
Fixes two REST API regressions affecting the bundled dashboard: missing /api/concepts and a broken /api/coherence handler that was causing 500s.
Changes:
- Update `/api/coherence` to use the `graph_report` tool and return a derived coherence score plus graph totals.
- Add an `/api/concepts` endpoint that exposes memories under the dashboard's "concepts" vocabulary with pagination and a `total` count.
```typescript
// Concepts (memories exposed under the dashboard's preferred name)
route('GET', '/api/concepts', async (req, res, _params, engine) => {
  const url = new URL(req.url!, `http://localhost`);
  const limit = parseInt(url.searchParams.get('limit') ?? '100', 10);
  const offset = parseInt(url.searchParams.get('offset') ?? '0', 10);
  const store = engine.ctx.namespaces.getStore();
  const allMemories = await store.getAllMemories();

  const concepts = allMemories
    .slice(offset, offset + limit)
    .map(m => ({
      id: m.id,
      name: m.name,
      definition: m.definition,
      category: m.category,
      confidence: m.confidence,
      salience: m.salience,
      access_count: m.access_count,
      tags: m.tags ?? [],
      fsrs: m.fsrs,
      created_at: m.created_at?.toISOString?.() ?? new Date().toISOString(),
      updated_at: m.updated_at?.toISOString?.() ?? new Date().toISOString(),
    }));

  json(res, { concepts, total: allMemories.length });
```
This endpoint loads the entire memory table via getAllMemories() and then paginates in-memory with .slice(...). CortexStore.getAllMemories() is explicitly documented as “use with caution” for batch operations, so this can become a scalability bottleneck as memory count grows. If possible, add a store-level paginated fetch (e.g., getMemories({ limit, offset })) and use that here (and in /api/memories) so REST pagination avoids full-table reads.
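One hedged shape for that store-level method: `getMemories` and `countMemories` are the hypothetical names from this comment, not existing `CortexStore` methods, and the in-memory backing here stands in for whatever storage the real store uses (a SQL-backed store would push this down to `LIMIT`/`OFFSET`).

```typescript
// Hypothetical store-level pagination so REST handlers never materialize
// the whole memory table. Names follow the comment above; the real
// CortexStore backing is not shown in this diff.
interface Memory { id: string; name: string; }

class PaginatedStore {
  constructor(private rows: Memory[]) {}

  // A SQL-backed implementation would translate this to LIMIT ? OFFSET ?.
  async getMemories({ limit, offset }: { limit: number; offset: number }): Promise<Memory[]> {
    return this.rows.slice(offset, offset + limit);
  }

  async countMemories(): Promise<number> {
    return this.rows.length;
  }
}
```

Because the suggested `getConceptPage` helper feature-detects `getMemories` and `countMemories`, a store exposing this pair would be picked up without further REST changes.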
Suggested change:

```typescript
async function getConceptPage(
  store: {
    getAllMemories: () => Promise<any[]>;
    getMemories?: (options: { limit: number; offset: number }) => Promise<any[] | { memories?: any[]; items?: any[]; total?: number; count?: number }>;
    countMemories?: () => Promise<number>;
    getMemoryCount?: () => Promise<number>;
  },
  limit: number,
  offset: number,
): Promise<{ memories: any[]; total: number }> {
  if (typeof store.getMemories === 'function') {
    const paginated = await store.getMemories({ limit, offset });
    if (Array.isArray(paginated)) {
      if (typeof store.countMemories === 'function') {
        return { memories: paginated, total: await store.countMemories() };
      }
      if (typeof store.getMemoryCount === 'function') {
        return { memories: paginated, total: await store.getMemoryCount() };
      }
      return { memories: paginated, total: offset + paginated.length };
    }
    const memories = Array.isArray(paginated.memories)
      ? paginated.memories
      : Array.isArray(paginated.items)
        ? paginated.items
        : [];
    if (typeof paginated.total === 'number') {
      return { memories, total: paginated.total };
    }
    if (typeof paginated.count === 'number') {
      return { memories, total: paginated.count };
    }
    if (typeof store.countMemories === 'function') {
      return { memories, total: await store.countMemories() };
    }
    if (typeof store.getMemoryCount === 'function') {
      return { memories, total: await store.getMemoryCount() };
    }
    return { memories, total: offset + memories.length };
  }
  const allMemories = await store.getAllMemories();
  return {
    memories: allMemories.slice(offset, offset + limit),
    total: allMemories.length,
  };
}

// Concepts (memories exposed under the dashboard's preferred name)
route('GET', '/api/concepts', async (req, res, _params, engine) => {
  const url = new URL(req.url!, `http://localhost`);
  const limit = parseInt(url.searchParams.get('limit') ?? '100', 10);
  const offset = parseInt(url.searchParams.get('offset') ?? '0', 10);
  const store = engine.ctx.namespaces.getStore();
  const { memories, total } = await getConceptPage(store, limit, offset);
  const concepts = memories.map(m => ({
    id: m.id,
    name: m.name,
    definition: m.definition,
    category: m.category,
    confidence: m.confidence,
    salience: m.salience,
    access_count: m.access_count,
    tags: m.tags ?? [],
    fsrs: m.fsrs,
    created_at: m.created_at?.toISOString?.() ?? new Date().toISOString(),
    updated_at: m.updated_at?.toISOString?.() ?? new Date().toISOString(),
  }));
  json(res, { concepts, total });
```
```typescript
// Concepts (memories exposed under the dashboard's preferred name)
route('GET', '/api/concepts', async (req, res, _params, engine) => {
  const url = new URL(req.url!, `http://localhost`);
  const limit = parseInt(url.searchParams.get('limit') ?? '100', 10);
  const offset = parseInt(url.searchParams.get('offset') ?? '0', 10);
```
limit/offset are parsed with parseInt(...) but never validated. If the client sends ?limit= / ?limit=abc / negative values, parseInt yields NaN/negative and Array.slice(offset, offset + limit) will behave unexpectedly (e.g., empty results or slicing from the end). Consider validating these params (finite integers, offset >= 0, 0 < limit <= max) and returning a 400 on invalid input (or coercing to safe defaults).
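A quick illustration of the failure modes described here, using plain arrays in place of the store's memories:

```typescript
// Why unvalidated parseInt values make Array.slice misbehave.
const memories = ['a', 'b', 'c', 'd', 'e'];

// ?limit= (empty value): parseInt('', 10) is NaN; NaN as a slice end coerces to 0.
const badLimit = parseInt('', 10);
console.log(memories.slice(0, 0 + badLimit)); // → []

// ?offset=-2: a negative start makes slice count from the END of the array.
const badOffset = parseInt('-2', 10);
console.log(memories.slice(badOffset, badOffset + 100)); // → ['d', 'e']
```

Either way the client silently gets a wrong page instead of an error, which is why a 400 on invalid input is the safer contract.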
Suggested change:

```typescript
function parsePaginationParam(
  value: string | null,
  defaultValue: number,
  name: string,
  options: { min: number; max?: number },
): number {
  if (value === null) {
    return defaultValue;
  }
  if (!/^-?\d+$/.test(value)) {
    throw new Error(`Invalid ${name}: must be an integer`);
  }
  const parsed = Number(value);
  if (!Number.isSafeInteger(parsed)) {
    throw new Error(`Invalid ${name}: must be a safe integer`);
  }
  if (parsed < options.min) {
    throw new Error(`Invalid ${name}: must be >= ${options.min}`);
  }
  if (options.max !== undefined && parsed > options.max) {
    throw new Error(`Invalid ${name}: must be <= ${options.max}`);
  }
  return parsed;
}

// Concepts (memories exposed under the dashboard's preferred name)
route('GET', '/api/concepts', async (req, res, _params, engine) => {
  const url = new URL(req.url!, `http://localhost`);
  let limit: number;
  let offset: number;
  try {
    limit = parsePaginationParam(url.searchParams.get('limit'), 100, 'limit', { min: 1, max: 100 });
    offset = parsePaginationParam(url.searchParams.get('offset'), 0, 'offset', { min: 0 });
  } catch (error) {
    return json(res, { error: error instanceof Error ? error.message : 'Invalid pagination parameters' }, 400);
  }
```