Intelligence lives in the loop, not in the model.
- Abstract
- The Problem ITERON Solves
- What ITERON Is NOT
- Core Intelligence Loop
- Architecture Overview
- Why ITERON Is Different From GPT / Gemini
- Long-Horizon Memory
- Example: Learning Over Time
- Demo Instructions
- Use Cases
- Research Positioning
- Core Claim (Defensible)
ITERON is an autonomous reasoning system designed to improve decision quality over time by learning from structured failure.
Unlike chatbots or agent frameworks that rely on large language models (LLMs) to both generate and judge outputs, ITERON strictly separates creativity from intelligence.
LLMs are used only as proposal engines.
Evaluation, reflection, memory, and learning are enforced by a deterministic, model-agnostic architecture.
ITERON persists experience across executions, detects repeated failure patterns, abandons unproductive strategy classes, and converges toward higher-quality decisions under constraints.
Modern LLMs are fluent but not self-correcting.
They:
- Do not remember past failures
- Reproduce the same weak ideas across sessions
- Judge their own outputs
- Optimize for plausibility rather than correctness
As a result, decision quality does not improve over time.
ITERON was built to solve this.
ITERON is not:
- A chatbot
- A prompt-engineering project
- An AutoGPT-style tool runner
- A replacement for GPT, Gemini, or Claude
ITERON does not attempt to sound intelligent.
It attempts to become correct over time.
ITERON implements a recursive intelligence loop:
Generate → Evaluate → Reflect → Improve → Repeat
- Generate: Propose candidate strategies (LLM-backed or rule-based).
- Evaluate: Judge quality using external, deterministic criteria.
- Reflect: Abstract why failures occurred (pattern detection).
- Improve: Modify future behavior based on accumulated failures.
- Repeat: Continue until convergence or stopping conditions are met.
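The loop above can be sketched in a few lines of Python. This is a minimal illustration, not ITERON's actual API: the strategy classes, scoring rule, and failure threshold are all invented here for demonstration.

```python
import random

STRATEGY_CLASSES = ("influencer", "organic", "paid")  # illustrative classes

def generate(memory):
    """Propose a candidate strategy. In ITERON this step is LLM-backed or
    rule-based; here it is a random draw, biased away from strategy
    classes that memory has marked as abandoned."""
    classes = [c for c in STRATEGY_CLASSES if c not in memory["abandoned"]]
    if not classes:                        # forced exploration of a new class
        classes = ["novel"]
    return {"cls": random.choice(classes), "quality": random.random()}

def evaluate(candidate):
    """Deterministic, external scoring -- the LLM never judges itself."""
    return candidate["quality"]

def reflect(candidate, score, memory):
    """Record why a failure occurred, keyed by strategy class."""
    if score < 0.8:
        cls = candidate["cls"]
        memory["failures"][cls] = memory["failures"].get(cls, 0) + 1

def improve(memory):
    """Abandon strategy classes that fail repeatedly."""
    for cls, count in memory["failures"].items():
        if count >= 3:
            memory["abandoned"].add(cls)

def run(max_iters=50):
    memory = {"failures": {}, "abandoned": set()}
    best = None
    for _ in range(max_iters):
        candidate = generate(memory)
        score = evaluate(candidate)
        if best is None or score > best[1]:
            best = (candidate, score)      # best-output selection
        reflect(candidate, score, memory)
        improve(memory)
        if best[1] > 0.99:                 # stopping condition
            break
    return best, memory
```

Note that the LLM-facing step (`generate`) never touches scoring or memory: it only produces candidates, and everything downstream is deterministic.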
Intelligence emerges from recursion and constraint, not from model size.
ITERON is divided into two strictly separated layers: a core intelligence layer and a proposal layer.

The core intelligence layer is responsible for:
- Scoring and evaluation
- Failure detection
- Reflection and abstraction
- Long-horizon memory
- Strategy-class learning
- Best-output selection
- Exploration control
This layer contains all intelligence.
The proposal layer (LLM-backed) is responsible for:
- Idea generation
- Optional critique explanation
LLMs:
- Do NOT score
- Do NOT decide success
- Do NOT store memory
- Do NOT control learning
Models are interchangeable.
Architecture is permanent.
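The boundary between the two layers can be expressed as an interface. This is a sketch with hypothetical names, not ITERON's real module layout: `ProposalEngine`, `RuleBasedEngine`, and `score` are invented here to show the separation.

```python
from typing import Protocol

class ProposalEngine(Protocol):
    """The only surface an LLM touches: it proposes, nothing more."""
    def propose(self, task: str) -> str: ...

class RuleBasedEngine:
    """Any ProposalEngine implementation is acceptable; models are
    interchangeable because nothing downstream depends on which one ran."""
    def propose(self, task: str) -> str:
        return f"baseline strategy for {task}"

def score(proposal: str, constraints: list[str]) -> float:
    """Deterministic evaluation owned by the core layer: fraction of
    required constraint keywords present. The proposal layer never sees
    or influences this function."""
    if not constraints:
        return 0.0
    met = sum(1 for c in constraints if c in proposal)
    return met / len(constraints)
```

Swapping GPT for Gemini here means swapping one `ProposalEngine` implementation for another; `score`, and everything built on it, is untouched.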
| Capability | GPT / Gemini | ITERON |
|---|---|---|
| Stateless | Yes | No |
| Learns from failure | No | Yes |
| Memory across runs | No | Yes |
| Self-critique | Superficial | Structural |
| External objective | No | Yes |
| Improves decision quality over time | No | Yes |
GPT answers questions.
ITERON improves thinking.
ITERON maintains persistent memory across executions, including:
- Canonical failure patterns
- Strategy-class performance statistics
- Decaying trust in unproductive heuristics
This allows the system to:
- Abandon failing strategy classes
- Avoid repeating mistakes
- Converge faster in future runs
Memory biases behavior but never overrides evaluation.
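A decaying-trust store of this kind might look like the following sketch. The file path, decay factor, and JSON shape are illustrative assumptions, not ITERON's actual `core/memory_store.json` format.

```python
import json
import os

MEMORY_PATH = "memory_store.json"   # illustrative path, not ITERON's own

def load_memory(path=MEMORY_PATH):
    """Restore per-strategy-class trust from a previous execution."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"trust": {}}            # trust values live in [0, 1]

def record_failure(memory, cls, decay=0.7):
    """Decay trust in a strategy class after a failure."""
    memory["trust"][cls] = memory["trust"].get(cls, 1.0) * decay

def sampling_weight(memory, cls):
    """Memory biases *which* candidates get tried next; it never changes
    the deterministic score a candidate later receives."""
    return memory["trust"].get(cls, 1.0)

def save_memory(memory, path=MEMORY_PATH):
    """Persist trust across executions, so learning survives restarts."""
    with open(path, "w") as f:
        json.dump(memory, f)
```

Because only `sampling_weight` consults memory, and evaluation never does, a low-trust strategy class can still win if a candidate from it happens to score well.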
Early runs:

- Influencer-based strategies repeated
- Same failures across runs
- Slow or no convergence

After several runs:

- Influencer-based strategies abandoned
- Forced exploration of new strategy classes
- Faster convergence to higher-quality strategies
Learning persists even after restarting the system.
Run the experiments:

```
python -m core_intelligence.experiments.run_basic
python -m core_intelligence.experiments.failure_injection
```

Then:

- Run the experiments multiple times
- Inspect `core/memory_store.json`
- Notice reduced repetition and faster convergence
ITERON is well-suited for:

- Business strategy exploration
- Product go-to-market planning
- Research hypothesis iteration
- Policy and system design
- Long-term decision optimization

It is not intended for:

- Casual Q&A
- Creative writing
- One-shot responses
ITERON draws on:

- Reinforcement learning principles
- Cognitive architecture research
- Early AGI system design

Its contribution is combining:

- Recursive evaluation
- Constraint enforcement
- Failure-driven learning
- Persistent memory

into a single system, independent of model scale.
ITERON's core claim is that:

- Intelligence can be architected
- Models can be replaceable
- Learning can occur without training
- Failure is more valuable than fluency
ITERON is a model-agnostic autonomous reasoning system that improves decision quality over time by learning from structured failure instead of prompts.