Every frontend developer has been through this:
Console.log Hell
```ts
// Your code is littered with these
console.log('starting request...')
console.log('params:', params)
console.log('response:', response)
console.log('did this run?')
console.log('why didnt this run???')
console.log('fuck')
```

Race Conditions
```ts
// User rapidly switches tabs
fetchTabData('tab1') // Request sent, takes 500ms
fetchTabData('tab2') // Request sent, takes 200ms
fetchTabData('tab3') // Request sent, takes 300ms
// Result: tab2 returns first, then tab3, finally tab1
// User sees tab1 data, but they clicked tab3
```

Request Chaos
```ts
// Simultaneous updates to the same resource
updateUser({ id: 1, name: 'Alice' }) // Request A
updateUser({ id: 1, name: 'Bob' })   // Request B
// Request B reaches server first, then A
// Database has 'Alice', but user expects 'Bob'
```

Zero Observability
- Did the request even fire?
- Where is it stuck?
- Why is this request slower than that one?
- Did the optimistic update roll back?
- How do I even reproduce this bug?
Modern CPUs don't execute instructions one by one. They:
- Analyze dependencies between instructions
- Execute independent operations in parallel
- Queue conflicting operations automatically
- Track every instruction's state throughout its lifecycle
Your frontend should do the same.
FrontCPU brings real CPU out-of-order execution architecture to the frontend:
- Automatic resource conflict detection
- Automatic parallel vs. serial execution decisions
- Complete lifecycle tracking for every operation
- World-class debugging with zero code
Compare the two approaches. With manual logging, a typical update function looks like this:

```ts
async function updateTask(id, data) {
  console.log('[updateTask] start', { id, data })
  try {
    console.log('[updateTask] sending request...')
    const response = await fetch(`/api/tasks/${id}`, {
      method: 'PUT',
      body: JSON.stringify(data)
    })
    console.log('[updateTask] status:', response.status)
    const result = await response.json()
    console.log('[updateTask] result:', result)
    store.tasks[id] = result
    console.log('[updateTask] state updated')
    return result
  } catch (error) {
    console.error('[updateTask] failed:', error)
    throw error
  }
}
```

Console output:
```
[updateTask] start {id: 1, data: {name: 'test'}}
[updateTask] sending request...
[updateTask] status: 200
[updateTask] result: {id: 1, name: 'test'}
[updateTask] state updated
```
Problems:
- Writing tons of log code manually
- No timing information
- No idea where the call originated
- Multiple request logs get mixed together
- Have to remove all logs before production
With FrontCPU, the same operation is declared once:

```ts
// Define instruction - not a single log statement
registerISA({
  'task.update': {
    meta: {
      description: 'Update task',
      resourceIdentifier: (p) => [`task:${p.id}`],
    },
    request: {
      method: 'PUT',
      url: (p) => `/api/tasks/${p.id}`,
      body: (p) => p.data,
    },
    commit: async (result) => {
      store.tasks[result.id] = result
    },
  },
})

// Use it
pipeline.dispatch('task.update', { id: 1, data: { name: 'test' } })
```

The console automatically outputs:
```
[Created] 14:32:15.123 task.update 📍 src/views/TaskList.vue:42
├─ Instruction ID: instr-1760629698922-11
├─ Correlation ID: corr_1760629698922_f_M_sBit6A
└─ Payload: { id: 1, data: { name: 'test' } }

[Success] 14:32:15.456 task.update 333ms 📍 src/views/TaskList.vue:42
├─ 📝 Payload: { id: 1, data: { name: 'test' } }
├─ 📥 Response: { id: 1, name: 'test', updatedAt: '...' }
├─ 💾 WB Stage: commit() executed successfully
└─ Pipeline Stages:
     IF→SCH 0ms
     SCH→EX 2ms
     EX→WB ██████ 331ms
     Total: 333ms
```
With zero debugging code, you automatically get:
- Millisecond-precision timestamps
- Source file and line number (clickable)
- Request payload and response
- Visual timing breakdown for each pipeline stage
- Unique instruction ID and correlation ID for tracing
And when a request fails:

```
[Failed] 14:32:16.789 task.update 1502ms 📍 src/views/TaskList.vue:42
├─ Reason: Request timeout
├─ 📝 Payload: { id: 1, data: { name: 'test' } }
├─ 🔄 Optimistic update rolled back
├─ Pipeline Stages:
│    IF→SCH 0ms
│    SCH→EX 1ms
│    EX→WB ████████████████████ 1501ms
│    Total: 1502ms
├─ 💡 Suggestions:
│    • Request timed out - consider increasing timeout or optimizing backend
│    • Execution took 1502ms (>1s) - check for performance issues
└─ Error Stack: ...
```
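If a specific instruction genuinely needs more time, its budget can be raised where it is declared, via meta.timeout (milliseconds, per the ISA definition reference below). A minimal sketch, with the 15000 value purely illustrative:

```ts
registerISA({
  'task.update': {
    meta: {
      description: 'Update task',
      resourceIdentifier: (p) => [`task:${p.id}`],
      timeout: 15000, // illustrative: allow up to 15s before this instruction is treated as timed out
    },
    request: {
      method: 'PUT',
      url: (p) => `/api/tasks/${p.id}`,
      body: (p) => p.data,
    },
  },
})
```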
With FrontCPU, you only need to do four things:
Whatever framework you use for UI, keep using it. Vue, React, Svelte - all work.
```vue
<template>
  <div v-for="task in tasks" :key="task.id">
    {{ task.name }}
    <button @click="completeTask(task.id)">Complete</button>
  </div>
</template>
```
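The same dispatch call works from any component layer; for example, a rough React equivalent. This is illustrative only: the './pipeline' module and Task type are assumptions, standing in for wherever you create and export your Pipeline instance.

```tsx
import React from 'react'
import { pipeline } from './pipeline' // assumption: your own module that creates and starts the Pipeline

interface Task { id: string; name: string }

export function TaskList({ tasks }: { tasks: Task[] }) {
  return (
    <>
      {tasks.map(task => (
        <div key={task.id}>
          {task.name}
          <button onClick={() => pipeline.dispatch('task.complete', { id: task.id })}>
            Complete
          </button>
        </div>
      ))}
    </>
  )
}
```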
Declare each operation: what it does, how to call the API, how to update state, whether it needs queuing.

```ts
registerISA({
  'task.complete': {
    meta: {
      description: 'Complete a task',
      category: 'task',
      resourceIdentifier: (p) => [`task:${p.id}`], // Resource ID for conflict detection
      schedulingStrategy: 'serial', // Same-task operations run serially
    },
    // Optimistic update: show completion immediately
    optimistic: {
      enabled: true,
      apply: (payload) => {
        const task = store.tasks.find(t => t.id === payload.id)
        const snapshot = { ...task }
        task.completed = true
        return snapshot
      },
      rollback: (snapshot) => {
        Object.assign(store.tasks.find(t => t.id === snapshot.id), snapshot)
      },
    },
    // HTTP request
    request: {
      method: 'POST',
      url: (p) => `/api/tasks/${p.id}/complete`,
    },
    // Update state after server confirms
    commit: async (result, payload) => {
      Object.assign(store.tasks.find(t => t.id === payload.id), result)
    },
  },
})
```

Then dispatch it from your event handler:

```ts
function completeTask(id: string) {
  pipeline.dispatch('task.complete', { id })
}
```

That's it. Everything below is handled automatically (and, as sketched after this list, you can still await the dispatch when the call site needs the result):
- Instruction lifecycle tracking (no manual logs)
- HTTP request execution
- Request deduplication
- Optimistic updates & rollback on failure
- Resource conflict detection
- Concurrency control
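dispatch also returns a promise (see the Quick Start below), so the call site can still consume the result or react to failures explicitly. A minimal sketch, assuming the returned promise resolves with the server result and rejects when the instruction ultimately fails; showToast is a stand-in for whatever notification helper you use:

```ts
async function completeTaskAndReport(id: string) {
  try {
    const updated = await pipeline.dispatch('task.complete', { id })
    showToast(`Task "${updated.name}" completed`)
  } catch {
    // The pipeline rolls back the optimistic update on failure (see the failure log above)
    showToast('Could not complete the task')
  }
}
```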
For SSE, WebSocket, Web Workers, and other external events:
```ts
pipeline.registerInterruptHandler('sse', {
  onMessage: (event) => {
    // Unified handling for all external interrupts
    pipeline.dispatch('data.sync', event.data)
  },
})
```

Different scenarios need different concurrency control:

```ts
// Different tasks can run in parallel
pipeline.dispatch('task.fetch', { id: 1 })
pipeline.dispatch('task.fetch', { id: 2 })
pipeline.dispatch('task.fetch', { id: 3 })
// All three requests fire simultaneously
```
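Each dispatch returns its own promise (see the Quick Start below), so a fan-out like this can still be awaited together with plain Promise.all; a small sketch:

```ts
const [t1, t2, t3] = await Promise.all([
  pipeline.dispatch('task.fetch', { id: 1 }),
  pipeline.dispatch('task.fetch', { id: 2 }),
  pipeline.dispatch('task.fetch', { id: 3 }),
])
```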
meta: { schedulingStrategy: 'serial' }

```ts
// Same-task operations must queue
pipeline.dispatch('task.update', { id: 1, name: 'A' })
pipeline.dispatch('task.update', { id: 1, name: 'B' })
// B waits for A to complete, guaranteeing order
```

meta: { schedulingStrategy: 'latest' }

```ts
// User types quickly
pipeline.dispatch('search', { query: 'h' })
pipeline.dispatch('search', { query: 'he' })
pipeline.dispatch('search', { query: 'hel' })
pipeline.dispatch('search', { query: 'hello' })
// First three auto-cancel, only 'hello' executes
```
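The strategy lives on the instruction definition rather than at the call site. A sketch of how the search instruction above might be declared, assuming (as with serial) that conflicts are keyed on the resourceIdentifier; the endpoint and commit target are illustrative:

```ts
registerISA({
  'search': {
    meta: {
      description: 'Search as the user types',
      resourceIdentifier: () => ['search'],  // all searches share one resource, so they conflict
      schedulingStrategy: 'latest',          // only the newest pending search survives
    },
    request: {
      method: 'GET',
      url: (p) => `/api/search?q=${encodeURIComponent(p.query)}`, // illustrative endpoint
    },
    commit: async (result) => {
      store.searchResults = result // illustrative store field
    },
  },
})
```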
meta: { schedulingStrategy: 'read-write' }

```ts
// Multiple reads can run in parallel
pipeline.dispatch('user.get', { id: 1 })
pipeline.dispatch('user.get', { id: 1 })
// Writes are exclusive
pipeline.dispatch('user.update', { id: 1, name: 'test' })
// Waits for all reads to complete, blocks subsequent reads until write finishes
```

```
┌─────────────────────────────────────────────────────────────────────────────┐
│ │
│ dispatch('task.complete', { id: 1 }) │
│ │ │
│ ▼ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │ IF │ → │ SCH │ → │ EX │ → │ RES │ → │ WB │ │
│ │ Fetch │ │Schedule │ │ Execute │ │Response │ │Writeback│ │
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
│ │ │ │ │ │ │
│ Generate ID Detect conflicts Run request Process result Commit │
│ Capture source Parallel/wait Optimistic Error handling Release │
│ Create aborter Issue instr Timeout Normalize Rollback │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
```

```bash
npm install front-cpu
```

```ts
import { Pipeline, setHttpClient, registerISA } from 'front-cpu'
// 1. Configure HTTP client
setHttpClient({
  get: (url, config) => fetch(url, config).then(r => r.json()),
  post: (url, data, config) =>
    fetch(url, { method: 'POST', body: JSON.stringify(data), ...config })
      .then(r => r.json()),
})

// 2. Define instruction set
registerISA({
  'user.fetch': {
    meta: {
      description: 'Fetch user info',
      resourceIdentifier: (p) => [`user:${p.id}`],
    },
    request: {
      method: 'GET',
      url: (p) => `/api/users/${p.id}`,
    },
    commit: async (result) => {
      store.user = result
    },
  },
})
// 3. Create and start pipeline
const pipeline = new Pipeline({ maxConcurrency: 10 })
pipeline.start()
// 4. Dispatch instructions
const user = await pipeline.dispatch('user.fetch', { id: 1 })
```

The core package excludes debugging code to minimize bundle size. When you need debugging:

```ts
import { Pipeline, cpuConsole, ConsoleLevel } from 'front-cpu/debug'
// Set log level
cpuConsole.setLevel(ConsoleLevel.VERBOSE)
// Filter specific instructions
cpuConsole.setFilter(['task.update', 'task.complete'])
```

Log levels:
- SILENT - No output
- MINIMAL - Success/failure only
- NORMAL - Key stages
- VERBOSE - All details + pipeline timing visualization
- DEBUG - Full payload and call stack
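For example, you might pick the level from the build environment (the NODE_ENV check is an assumption; use whatever flag your bundler provides):

```ts
import { cpuConsole, ConsoleLevel } from 'front-cpu/debug'

// Verbose per-stage timing while developing, only success/failure lines otherwise
cpuConsole.setLevel(
  process.env.NODE_ENV === 'development' ? ConsoleLevel.VERBOSE : ConsoleLevel.MINIMAL
)
```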
Pipeline creation and control:

```ts
const pipeline = new Pipeline({
  maxConcurrency: 10, // Max parallel executions
  tickInterval: 16,   // Scheduler tick interval (ms)
})

pipeline.start() // Start
pipeline.stop()  // Pause
pipeline.reset() // Reset

// Dispatch instruction
const result = await pipeline.dispatch('type', payload, options)

// Cancellation
pipeline.flush('tag') // Cancel by tag
pipeline.getInstructionsByTags(['tag']) // Query by tags
```

Instruction definition options:

```ts
registerISA({
  'instruction.type': {
    meta: {
      description: string,
      category: 'task' | 'debug' | 'schedule' | 'system' | 'area',
      resourceIdentifier: (payload) => string[],
      priority?: number,  // 0-10
      timeout?: number,   // milliseconds
      schedulingStrategy?: 'out-of-order' | 'serial' | 'latest' | 'read-write',
    },
    request?: { method, url, body },      // HTTP request
    execute?: (payload, context) => any,  // Or custom logic
    optimistic?: { enabled, apply, rollback },
    commit?: (result, payload, context, snapshot) => void,
  },
})
```
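Instructions are not limited to HTTP: per the reference above, execute can replace request for arbitrary async work. A minimal sketch, where the instruction name, payload shape, and store field are illustrative:

```ts
registerISA({
  'report.export': {
    meta: {
      description: 'Build a CSV export in the browser',
      resourceIdentifier: () => ['report:export'],
    },
    // Custom logic instead of an HTTP request
    execute: async () => {
      const rows = store.tasks.map(t => `${t.id},${t.name},${t.completed}`)
      return new Blob([rows.join('\n')], { type: 'text/csv' })
    },
    commit: async (result) => {
      store.lastExport = result // illustrative: keep the generated blob for a download button
    },
  },
})
```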
Local development:

```bash
pnpm install   # Install dependencies
pnpm dev       # Start playground (localhost:3000)
pnpm build     # Build
pnpm test      # Test
pnpm bench     # Benchmark
```

License: MIT