Your AI writes the code. But does your brain keep up?
+**When's the last time you solved a problem without asking first?**
+
+
----
+
-AI coding tools let you ship code you don't understand. Not because you're lazy—there's just no friction. The code looks right, you move on, and slowly you stop reasoning from first principles.
+The last time you debugged something without asking first was—
-skill-issue tracks what you actually know. When your agent builds something non-trivial, it fires a challenge grounded in what just happened. You answer, it scores you 0-3, and your knowledge graph updates. Next time, it targets concepts you're weak on.
+You're shipping faster. Or you think you are.
---
-## Install
-
-### Claude Code
-
-Two separate commands (don't combine them):
-
-```
-/plugin marketplace add SnehalRaj/skill-issue-marketplace
-```
-
-```
-/plugin install skill-issue@skill-issue-marketplace
-```
+## The Accumulation
-Open a new session.
+
-### pip (Cursor, Codex, any agent)
+METR ran an RCT on experienced open-source developers: AI users took 19% longer. They believed they were 20% faster.[1] That's a 39-point perception gap, and it persisted even after subjects saw their own data.
-```bash
-pip install skill-issue-cc
-skill-issue init
-```
+GitHub's research didn't measure whether developers *were* more productive—they asked if developers *felt* more productive.[5] The answer was barely. Anthropic measured what you're actually losing: comprehension decays faster than you'd expect, and reviewing AI-generated code gives you "flimsy understanding."[2] The tests pass. You don't notice.
-Paste the output of `skill-issue init --print` into your editor's system prompt.
+It's not just one study. The 2024 DORA report found no productivity signal.[3] Trust in AI tool output dropped from 43% to 33% in a single year. The people using these tools trust them less, not more.[4]
---
-## Knowledge Graph
+## It's Not Just You
-```
-skill-issue graph show --domain machine-learning
+On r/ExperiencedDevs, seniors are watching juniors who say "clean coding is not a real thing" and use AI to write PR review responses without reading them.[8] Another thread: juniors can't explain why AI-generated code is wrong, even when it "works."[9]
-Knowledge Graph: machine-learning
-============================================================
+It's not just juniors. A senior ML engineer with 9 YoE: "Is senior ML engineering just API calls now?"[10] A 40-year veteran documenting his first all-AI project found the ambiguity had shifted somewhere he couldn't follow.[11]
-[GOOD] gradient-descent [####..........................] 0.42 (2)
-[WEAK] bias-variance-tradeoff [##............................] 0.09 (1)
-[GOOD] backpropagation [#############.................] 0.45 (2)
-[WEAK] regularization [..............................] 0.00
-[WEAK] cross-validation [..............................] 0.00
-[WEAK] loss-functions [######........................] 0.21 (1)
-[WEAK] attention-mechanism [..............................] 0.00
+Godot maintainers are drowning in "AI slop" contributions. One said: "I don't know how long we can keep it up."[12]
-Priority Queue (work on these next):
- >> regularization (priority: 0.95 = weight:0.95 x gap:1.00)
- >> cross-validation (priority: 0.95 = weight:0.95 x gap:1.00)
- >> attention-mechanism (priority: 0.95 = weight:0.95 x gap:1.00)
+---
-Total nodes: 12 | Avg mastery: 0.10 | 0 mastered | 10 weak
-```
+
-Each domain has a curated graph of concepts weighted by how often they come up in real work.
+## The Tool
-- **reuse_weight** (0–1): How fundamental. 0.95 means it's everywhere.
-- **mastery** (0–1): Your proven understanding. Updates via EMA after each challenge.
-- **priority** = `weight × (1 - mastery)`. High-weight stuff you haven't proven = top priority.
+skill-issue embeds micro-challenges directly into your agentic workflow. You ship code with Claude, then get quizzed on what just happened. Not trivia. Not LeetCode. The actual concepts in the code you just approved.
-Mastery fades if you don't practice (3-day grace, then 0.02/day). Use it or lose it.
+
+
+Your knowledge lives in a graph. Nodes are concepts—weighted by how often they appear across real codebases. Edges connect prerequisites. Every challenge updates your mastery scores using spaced repetition: get it right, the node strengthens; get it wrong, it surfaces more often. The system knows what you're weak on before you do.
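A minimal sketch of that update rule, assuming an EMA-style blend with a made-up smoothing factor (`ALPHA` here is an illustrative assumption, not skill-issue's actual constant):

```python
# Illustrative sketch of an EMA mastery update; constants are assumed,
# not skill-issue's actual implementation.
ALPHA = 0.3  # assumed smoothing factor

def update_mastery(mastery: float, score: int) -> float:
    """Blend a challenge result (scored 0-3, normalised to 0-1) into mastery."""
    result = score / 3.0
    return (1 - ALPHA) * mastery + ALPHA * result

# Correct answers strengthen the node; misses pull it back down.
m = 0.0
for s in (3, 3, 0):
    m = update_mastery(m, s)
```

Because the update is a moving average, mastery always stays in [0, 1] and recent answers count more than old ones.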
---
-## Onboarding
+## Install
-```
+```bash
+pip install skill-issue-cc
skill-issue init
-
-3 questions to personalise your knowledge graph.
-
-1. What do you mainly build or work on?
- > I train ML models and do some backend API work
-
-2. What languages or tools do you use most?
- > Python, PyTorch, FastAPI, PostgreSQL
-
-3. One concept you know you are shaky on? (optional)
- > always forget when cross-validation goes wrong
-
-Knowledge graphs initialised for: machine-learning, backend-systems, algorithms
```
-Three questions, plain English. It figures out which domains to load.
+That's it.
---
@@ -119,7 +81,7 @@ Three questions, plain English. It figures out which domains to load.
| ⏱️ | **Complexity** | What's the Big-O? Can it be better? |
| 🔗 | **Connect** | How does this relate to X? |
-Challenges are grounded in what was just built. No random trivia.
+Grounded in what was just built. No random trivia.
---
@@ -168,6 +130,8 @@ Add your own in `references/knowledge_graphs/`.
---
+
+
## Progression
```
@@ -187,9 +151,19 @@ Streak bonus tops out at 2.5× for consecutive correct answers.
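One way the capped streak bonus might look in code; only the 2.5× ceiling comes from the line above, and the per-answer growth step is an assumed value:

```python
# Hedged sketch of a capped streak bonus. The 2.5x ceiling is documented;
# the 0.25-per-answer growth step is an illustrative assumption.
def streak_multiplier(streak: int) -> float:
    """Multiplier applied after `streak` consecutive correct answers."""
    return min(1.0 + 0.25 * streak, 2.5)
```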
---
-## Persistent State
+## Knowledge Graph
+
+Each domain has a curated graph of concepts weighted by how often they come up in real work.
+
+- **reuse_weight** (0–1): How fundamental. 0.95 means it's everywhere.
+- **mastery** (0–1): Your proven understanding. Updates via EMA after each challenge.
+- **priority** = `weight × (1 - mastery)`. High-weight stuff you haven't proven = top priority.
-Everything's in `~/.skill-issue/`. Plain JSON/YAML, no database.
+Mastery fades if you don't practice (3-day grace, then 0.02/day). Use it or lose it.
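The bullets above combine mechanically. A sketch of the formulas as documented (the sample node values are made up):

```python
# Priority and decay as described above; sample inputs are illustrative.
def priority(reuse_weight: float, mastery: float) -> float:
    """High-weight concepts you haven't proven rank first."""
    return reuse_weight * (1.0 - mastery)

def decayed_mastery(mastery: float, days_idle: int) -> float:
    """3-day grace period, then mastery decays by 0.02 per idle day."""
    if days_idle <= 3:
        return mastery
    return max(0.0, mastery - 0.02 * (days_idle - 3))

p_fundamental = priority(0.95, 0.0)  # unproven fundamental -> 0.95
p_niche = priority(0.40, 0.50)       # half-learned niche concept -> 0.20
```

An unproven fundamental always outranks a half-learned niche concept, which is exactly the ordering the priority queue shows.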
+
+---
+
+## Persistent State
```
~/.skill-issue/
@@ -201,28 +175,60 @@ Everything's in `~/.skill-issue/`. Plain JSON/YAML, no database.
└── 2026-02-27.json
```
-Version-controllable. Portable. Human-readable.
+Plain JSON/YAML. Version-controllable. No database.
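Since everything is plain files, inspecting state needs nothing beyond the standard library. A sketch, using a temp directory as a stand-in for `~/.skill-issue` (the `sessions/` subdirectory name and the record fields are hypothetical, not the actual schema):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for ~/.skill-issue so the sketch has no side effects.
root = Path(tempfile.mkdtemp()) / ".skill-issue"
sessions = root / "sessions"
sessions.mkdir(parents=True)

# Hypothetical session record; field names are illustrative only.
(sessions / "2026-02-27.json").write_text(
    json.dumps([{"concept": "regularization", "score": 2}])
)

# No database: reading state back is just json.loads over files.
records = [json.loads(p.read_text()) for p in sorted(sessions.glob("*.json"))]
```

The same property makes the directory easy to version-control: `git diff` on the JSON shows exactly which scores changed.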
+
+---
+
+## Further Reading
+
+1. [METR, "AI Experienced OS Devs Study" (2025)](https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf) — RCT: experienced developers 19% slower with AI, believed they were 20% faster.
+
+2. [Anthropic, "The Impact of AI on Developer Productivity" (2026)](https://arxiv.org/abs/2601.20245) — No significant speed-up; AI coding "significantly lowers comprehension of the codebase."
+
+3. DORA Report 2024 — AI tools showed no measurable productivity signal.
+
+4. [LeadDev, "Trust in AI Coding Tools Is Plummeting" (2025)](https://leaddev.com/technical-direction/trust-in-ai-coding-tools-is-plummeting) — 33% trust in 2025, down from 43% in 2024.
+
+5. [LeadDev, "AI Coding Assistants Aren't Really Making Devs Feel More Productive" (2025)](https://leaddev.com/velocity/ai-coding-assistants-arent-really-making-devs-feel-more-productive) — GitHub's research measured feelings, not output—and still found minimal gains.
+
+6. [Claude on Upwork Benchmark (2025)](https://reddit.com/r/MachineLearning) — Resolved 26.2% of real-world software engineering tasks.
+
+7. ["Competence as Tragedy"](https://crowprose.com/blog/competence-as-tragedy) — Personal essay on watching AI make hard-won skills obsolete.
+
+8. [r/ExperiencedDevs, "Junior devs not interested in software engineering" (2025)](https://reddit.com/r/ExperiencedDevs/comments/1mrfgm2) — 1,795 upvotes. Seniors observing juniors who've never heard of Coursera, say "clean coding is not a real thing."
+
+9. [r/ExperiencedDevs, "ChatGPT is producing bad and lazy junior engineers" (2025)](https://reddit.com/r/ExperiencedDevs/comments/1lb5ktb) — 1,449 upvotes. Juniors can't explain why AI code is wrong.
+
+10. [r/MachineLearning, "Is senior ML engineering just API calls now?" (2025)](https://reddit.com/r/MachineLearning/comments/1npdfh1) — 9 YoE engineer feeling career stagnation.
+
+11. ["Vibe Coding as a Coding Veteran"](https://medium.com/gitconnected/vibe-coding-as-a-coding-veteran-cd370fe2be50) — 40-year veteran, PhD in AI, documenting 2 weeks building entirely through AI conversation.
+
+12. [PCGamer, "Godot drowning in 'AI slop' contributions" (2025)](https://pcgamer.com) — Maintainer: "I don't know how long we can keep it up." 2,963 upvotes on r/programming.
+
+13. Ilya Sutskever on the gap between AI benchmarks and economic impact (2025) — Even AI's believers can't explain why gains aren't translating.
---
## Philosophy
-The name's a joke. Claude has skills (literally, `.skill` files). What about yours?
+This is not a productivity tool. Your productivity is fine. It's your brain we're worried about.
-Understanding compounds. A developer who actually gets the code they ship is more effective long-term. One well-timed challenge beats a passive tutorial. Your trophy wall tracks your growth—no leaderboard against others.
+You hired a brilliant assistant who never explains anything. Now you're the person who can't do their job without them. The name is the bit. Claude has skills. The question is whether you still do.
+
+Ilya Sutskever—one of AI's true believers—said he's puzzled why benchmark gains aren't translating to economic impact.[13] A personal essay called it "competence as tragedy": watching your hard-won skills become vestigial.[7]
+
+The tools work. That's the problem. They work well enough that you stop working *at all*.
---
## Contributing
-Knowledge graphs are JSON in `references/knowledge_graphs/`. Scripts are plain Python, zero dependencies.
+Knowledge graphs are JSON in `references/knowledge_graphs/`. PRs welcome.
-See [CONTRIBUTING.md](CONTRIBUTING.md) or open an issue.
+See [CONTRIBUTING.md](CONTRIBUTING.md).
**MIT License**
---
-
- Works with Claude Code · Cursor · Codex · OpenCode · any agent that reads a system prompt
-
+
Works with Claude Code · Cursor · Codex · any agent that reads a system prompt
diff --git a/assets/demo/skill-issue-demo.cast b/assets/demo/skill-issue-demo.cast
new file mode 100644
index 0000000..3f9efe7
--- /dev/null
+++ b/assets/demo/skill-issue-demo.cast
@@ -0,0 +1,197 @@
+{"version": 2, "width": 80, "height": 30, "timestamp": 1772183512, "env": {"SHELL": "/bin/zsh", "TERM": "xterm-256color"}, "title": "skill-issue demo"}
+[0.0, "o", "$ "]
+[0.5, "o", "s"]
+[0.55, "o", "k"]
+[0.6000000000000001, "o", "i"]
+[0.6500000000000001, "o", "l"]
+[0.7000000000000002, "o", "l"]
+[0.7500000000000002, "o", "-"]
+[0.8200000000000003, "o", "i"]
+[0.8700000000000003, "o", "s"]
+[0.9200000000000004, "o", "s"]
+[0.9700000000000004, "o", "u"]
+[1.0200000000000005, "o", "e"]
+[1.0700000000000005, "o", " "]
+[1.1400000000000006, "o", "s"]
+[1.1900000000000006, "o", "t"]
+[1.2400000000000007, "o", "a"]
+[1.2900000000000007, "o", "t"]
+[1.3400000000000007, "o", "s"]
+[1.3900000000000008, "o", "\r\n"]
+[1.6900000000000008, "o", ""]
+[1.6900000000000008, "o", "\r\n"]
+[1.7400000000000009, "o", "\ud83e\udde0 skill-issue \u2014 demo"]
+[1.7400000000000009, "o", "\r\n"]
+[1.790000000000001, "o", "Level: 5 (847 XP)"]
+[1.790000000000001, "o", "\r\n"]
+[1.840000000000001, "o", "Streak: \ud83d\udd25 4 (best: 7)"]
+[1.840000000000001, "o", "\r\n"]
+[1.890000000000001, "o", "Accuracy: 74% (31/42 correct)"]
+[1.890000000000001, "o", "\r\n"]
+[1.940000000000001, "o", ""]
+[1.940000000000001, "o", "\r\n"]
+[1.990000000000001, "o", "Topics:"]
+[1.990000000000001, "o", "\r\n"]
+[2.040000000000001, "o", " machine-learning: 3 (18 attempts)"]
+[2.040000000000001, "o", "\r\n"]
+[2.0900000000000007, "o", " algorithms: 2 (12 attempts)"]
+[2.0900000000000007, "o", "\r\n"]
+[2.1400000000000006, "o", " backend-systems: 2 (12 attempts)"]
+[2.1400000000000006, "o", "\r\n"]
+[2.1900000000000004, "o", ""]
+[2.1900000000000004, "o", "\r\n"]
+[3.24, "o", "\r\n$ "]
+[3.74, "o", "s"]
+[3.79, "o", "k"]
+[3.84, "o", "i"]
+[3.8899999999999997, "o", "l"]
+[3.9399999999999995, "o", "l"]
+[3.9899999999999993, "o", "-"]
+[4.06, "o", "i"]
+[4.109999999999999, "o", "s"]
+[4.159999999999999, "o", "s"]
+[4.209999999999999, "o", "u"]
+[4.259999999999999, "o", "e"]
+[4.309999999999999, "o", " "]
+[4.379999999999999, "o", "g"]
+[4.429999999999999, "o", "r"]
+[4.479999999999999, "o", "a"]
+[4.5299999999999985, "o", "p"]
+[4.579999999999998, "o", "h"]
+[4.629999999999998, "o", " "]
+[4.699999999999998, "o", "s"]
+[4.749999999999998, "o", "h"]
+[4.799999999999998, "o", "o"]
+[4.849999999999998, "o", "w"]
+[4.899999999999998, "o", " "]
+[4.969999999999998, "o", "-"]
+[5.039999999999998, "o", "-"]
+[5.1099999999999985, "o", "d"]
+[5.159999999999998, "o", "o"]
+[5.209999999999998, "o", "m"]
+[5.259999999999998, "o", "a"]
+[5.309999999999998, "o", "i"]
+[5.359999999999998, "o", "n"]
+[5.4099999999999975, "o", " "]
+[5.479999999999998, "o", "m"]
+[5.529999999999998, "o", "a"]
+[5.579999999999997, "o", "c"]
+[5.629999999999997, "o", "h"]
+[5.679999999999997, "o", "i"]
+[5.729999999999997, "o", "n"]
+[5.779999999999997, "o", "e"]
+[5.8299999999999965, "o", "-"]
+[5.899999999999997, "o", "l"]
+[5.949999999999997, "o", "e"]
+[5.9999999999999964, "o", "a"]
+[6.049999999999996, "o", "r"]
+[6.099999999999996, "o", "n"]
+[6.149999999999996, "o", "i"]
+[6.199999999999996, "o", "n"]
+[6.249999999999996, "o", "g"]
+[6.299999999999995, "o", "\r\n"]
+[6.599999999999995, "o", "Knowledge Graph: machine-learning"]
+[6.599999999999995, "o", "\r\n"]
+[6.6299999999999955, "o", "============================================================"]
+[6.6299999999999955, "o", "\r\n"]
+[6.659999999999996, "o", ""]
+[6.659999999999996, "o", "\r\n"]
+[6.689999999999996, "o", "[GOOD] gradient-descent [\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.42 (2)"]
+[6.689999999999996, "o", "\r\n"]
+[6.719999999999996, "o", "[WEAK] bias-variance-tradeoff [\u2588\u2588\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.09 (1)"]
+[6.719999999999996, "o", "\r\n"]
+[6.7499999999999964, "o", "[GOOD] backpropagation [\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.45 (2)"]
+[6.7499999999999964, "o", "\r\n"]
+[6.779999999999997, "o", "[WEAK] regularization [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 (1)"]
+[6.779999999999997, "o", "\r\n"]
+[6.809999999999997, "o", "[WEAK] cross-validation [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 "]
+[6.809999999999997, "o", "\r\n"]
+[6.839999999999997, "o", "[WEAK] loss-functions [\u2588\u2588\u2588\u2588\u2588\u2588\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.21 (1)"]
+[6.839999999999997, "o", "\r\n"]
+[6.869999999999997, "o", "[WEAK] attention-mechanism [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 "]
+[6.869999999999997, "o", "\r\n"]
+[6.899999999999998, "o", "[WEAK] probability-distributi [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 "]
+[6.899999999999998, "o", "\r\n"]
+[6.929999999999998, "o", "[WEAK] embeddings [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 "]
+[6.929999999999998, "o", "\r\n"]
+[6.959999999999998, "o", "[WEAK] hyperparameter-tuning [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 "]
+[6.959999999999998, "o", "\r\n"]
+[6.989999999999998, "o", "[WEAK] decision-trees-ensembl [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 "]
+[6.989999999999998, "o", "\r\n"]
+[7.019999999999999, "o", "[WEAK] generalization-theory [\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591\u2591] 0.00 "]
+[7.019999999999999, "o", "\r\n"]
+[7.049999999999999, "o", ""]
+[7.049999999999999, "o", "\r\n"]
+[7.079999999999999, "o", "Priority Queue (work on these next):"]
+[7.079999999999999, "o", "\r\n"]
+[7.109999999999999, "o", "------------------------------------------------------------"]
+[7.109999999999999, "o", "\r\n"]
+[7.14, "o", " \u25b6 regularization (priority: 0.95 = weight:0.95 \u00d7 gap:1.00)"]
+[7.14, "o", "\r\n"]
+[7.17, "o", " \u25b6 cross-validation (priority: 0.95 = weight:0.95 \u00d7 gap:1.00)"]
+[7.17, "o", "\r\n"]
+[7.2, "o", " \u25b6 attention-mechanism (priority: 0.95 = weight:0.95 \u00d7 gap:1.00)"]
+[7.2, "o", "\r\n"]
+[7.23, "o", " \u25b6 probability-distributions (priority: 0.93 = weight:0.93 \u00d7 gap:1.00)"]
+[7.23, "o", "\r\n"]
+[7.260000000000001, "o", " \u25b6 embeddings (priority: 0.90 = weight:0.90 \u00d7 gap:1.00)"]
+[7.260000000000001, "o", "\r\n"]
+[7.290000000000001, "o", ""]
+[7.290000000000001, "o", "\r\n"]
+[7.320000000000001, "o", "------------------------------------------------------------"]
+[7.320000000000001, "o", "\r\n"]
+[7.350000000000001, "o", "Total nodes: 12 | Avg mastery: 0.10"]
+[7.350000000000001, "o", "\r\n"]
+[7.380000000000002, "o", " 0 mastered | 0 strong | 10 weak"]
+[7.380000000000002, "o", "\r\n"]
+[7.410000000000002, "o", ""]
+[7.410000000000002, "o", "\r\n"]
+[8.940000000000001, "o", "\r\n$ "]
+[9.240000000000002, "o", "#"]
+[9.290000000000003, "o", " "]
+[9.360000000000003, "o", "Y"]
+[9.410000000000004, "o", "o"]
+[9.460000000000004, "o", "u"]
+[9.510000000000005, "o", "r"]
+[9.560000000000006, "o", " "]
+[9.630000000000006, "o", "A"]
+[9.680000000000007, "o", "I"]
+[9.730000000000008, "o", " "]
+[9.800000000000008, "o", "w"]
+[9.850000000000009, "o", "r"]
+[9.90000000000001, "o", "i"]
+[9.95000000000001, "o", "t"]
+[10.00000000000001, "o", "e"]
+[10.050000000000011, "o", "s"]
+[10.100000000000012, "o", " "]
+[10.170000000000012, "o", "c"]
+[10.220000000000013, "o", "o"]
+[10.270000000000014, "o", "d"]
+[10.320000000000014, "o", "e"]
+[10.370000000000015, "o", "."]
+[10.420000000000016, "o", " "]
+[10.490000000000016, "o", "D"]
+[10.540000000000017, "o", "o"]
+[10.590000000000018, "o", "e"]
+[10.640000000000018, "o", "s"]
+[10.690000000000019, "o", " "]
+[10.76000000000002, "o", "y"]
+[10.81000000000002, "o", "o"]
+[10.86000000000002, "o", "u"]
+[10.910000000000021, "o", "r"]
+[10.960000000000022, "o", " "]
+[11.030000000000022, "o", "b"]
+[11.080000000000023, "o", "r"]
+[11.130000000000024, "o", "a"]
+[11.180000000000025, "o", "i"]
+[11.230000000000025, "o", "n"]
+[11.280000000000026, "o", " "]
+[11.350000000000026, "o", "k"]
+[11.400000000000027, "o", "e"]
+[11.450000000000028, "o", "e"]
+[11.500000000000028, "o", "p"]
+[11.55000000000003, "o", " "]
+[11.62000000000003, "o", "u"]
+[11.67000000000003, "o", "p"]
+[11.72000000000003, "o", "?"]
+[11.770000000000032, "o", "\r\n"]
diff --git a/assets/demo/skill-issue-demo.gif b/assets/demo/skill-issue-demo.gif
index 8b9e0b9..9a6d0f4 100644
Binary files a/assets/demo/skill-issue-demo.gif and b/assets/demo/skill-issue-demo.gif differ
diff --git a/assets/logo.svg b/assets/logo.svg
new file mode 100644
index 0000000..8544265
--- /dev/null
+++ b/assets/logo.svg
@@ -0,0 +1,132 @@
+
diff --git a/assets/mascot-challenge.svg b/assets/mascot-challenge.svg
new file mode 100644
index 0000000..4ea8529
--- /dev/null
+++ b/assets/mascot-challenge.svg
@@ -0,0 +1,90 @@
+
diff --git a/assets/mascot-confused.svg b/assets/mascot-confused.svg
new file mode 100644
index 0000000..4ba8418
--- /dev/null
+++ b/assets/mascot-confused.svg
@@ -0,0 +1,82 @@
+
diff --git a/assets/mascot-leveling-up.svg b/assets/mascot-leveling-up.svg
new file mode 100644
index 0000000..c5db00e
--- /dev/null
+++ b/assets/mascot-leveling-up.svg
@@ -0,0 +1,111 @@
+
diff --git a/assets/mascot-typing.svg b/assets/mascot-typing.svg
new file mode 100644
index 0000000..abde0f3
--- /dev/null
+++ b/assets/mascot-typing.svg
@@ -0,0 +1,97 @@
+
diff --git a/assets/mascot.svg b/assets/mascot.svg
new file mode 100644
index 0000000..3180867
--- /dev/null
+++ b/assets/mascot.svg
@@ -0,0 +1,129 @@
+
diff --git a/assets/screenshots/graph-show.svg b/assets/screenshots/graph-show.svg
new file mode 100644
index 0000000..1644ae2
--- /dev/null
+++ b/assets/screenshots/graph-show.svg
@@ -0,0 +1,54 @@
+
\ No newline at end of file
diff --git a/assets/screenshots/graph-weak.svg b/assets/screenshots/graph-weak.svg
new file mode 100644
index 0000000..85c785e
--- /dev/null
+++ b/assets/screenshots/graph-weak.svg
@@ -0,0 +1,53 @@
+
\ No newline at end of file
diff --git a/assets/screenshots/stats.svg b/assets/screenshots/stats.svg
new file mode 100644
index 0000000..d45e246
--- /dev/null
+++ b/assets/screenshots/stats.svg
@@ -0,0 +1,36 @@
+
\ No newline at end of file
diff --git a/scripts/generate_demo.py b/scripts/generate_demo.py
new file mode 100644
index 0000000..780b4ba
--- /dev/null
+++ b/scripts/generate_demo.py
@@ -0,0 +1,125 @@
+#!/usr/bin/env python3
+"""Generate a synthetic asciinema cast file for the demo GIF."""
+
+import json
+import os
+import subprocess
+import time
+
+def create_asciicast(output_path: str):
+ """Create a synthetic asciicast v2 file showing skill-issue in action."""
+
+ # Get real CLI output
+ stats_result = subprocess.run(['skill-issue', 'stats'], capture_output=True, text=True)
+ graph_result = subprocess.run(['skill-issue', 'graph', 'show', '--domain', 'machine-learning'],
+ capture_output=True, text=True)
+
+ # Header
+ header = {
+ "version": 2,
+ "width": 80,
+ "height": 30,
+ "timestamp": int(time.time()),
+ "env": {"SHELL": "/bin/zsh", "TERM": "xterm-256color"},
+ "title": "skill-issue demo"
+ }
+
+ events = []
+ t = 0.0
+
+ def type_text(text: str, base_delay: float = 0.05) -> float:
+ """Simulate typing with realistic delays."""
+ nonlocal t
+ for char in text:
+ events.append([t, "o", char])
+ t += base_delay + (0.02 if char in ' -' else 0)
+ return t
+
+ def output(text: str, delay: float = 0.0) -> float:
+ """Output text immediately."""
+ nonlocal t
+ t += delay
+ events.append([t, "o", text])
+ return t
+
+ def newline():
+ output("\r\n")
+
+ def wait(seconds: float):
+ nonlocal t
+ t += seconds
+
+ # Scene 1: Show prompt and run stats
+ output("$ ")
+ wait(0.5)
+ type_text("skill-issue stats")
+ newline()
+ wait(0.3)
+
+ # Output stats with slight delay between lines
+ for line in stats_result.stdout.split('\n'):
+ output(line)
+ newline()
+ wait(0.05)
+
+ wait(1.0)
+ output("\r\n$ ")
+ wait(0.5)
+
+ # Scene 2: Show knowledge graph
+ type_text("skill-issue graph show --domain machine-learning")
+ newline()
+ wait(0.3)
+
+ for line in graph_result.stdout.split('\n'):
+ output(line)
+ newline()
+ wait(0.03)
+
+ wait(1.5)
+ output("\r\n$ ")
+ wait(0.3)
+ type_text("# Your AI writes code. Does your brain keep up?")
+ newline()
+ wait(2.0)
+
+ # Write the cast file
+ with open(output_path, 'w') as f:
+ f.write(json.dumps(header) + '\n')
+ for event in events:
+ f.write(json.dumps(event) + '\n')
+
+ return output_path
+
+
+def main():
+ script_dir = os.path.dirname(os.path.abspath(__file__))
+ project_dir = os.path.dirname(script_dir)
+ demo_dir = os.path.join(project_dir, 'assets', 'demo')
+ os.makedirs(demo_dir, exist_ok=True)
+
+ cast_path = os.path.join(demo_dir, 'skill-issue-demo.cast')
+ gif_path = os.path.join(demo_dir, 'skill-issue-demo.gif')
+
+ print("Generating asciicast...")
+ create_asciicast(cast_path)
+ print(f"Created: {cast_path}")
+
+ # Try to convert to GIF using agg
+ try:
+ result = subprocess.run(
+ ['agg', '--font-family', 'SF Mono,Monaco,monospace',
+ '--theme', 'dracula', cast_path, gif_path],
+ capture_output=True, text=True
+ )
+ if result.returncode == 0:
+ print(f"Created: {gif_path}")
+ else:
+ print(f"agg failed: {result.stderr}")
+ print("Cast file ready for manual conversion with agg or asciinema-agg")
+ except FileNotFoundError:
+ print("agg not found. Cast file ready for manual conversion.")
+
+
+if __name__ == '__main__':
+ main()
diff --git a/scripts/generate_terminal_svg.py b/scripts/generate_terminal_svg.py
new file mode 100644
index 0000000..7f78af5
--- /dev/null
+++ b/scripts/generate_terminal_svg.py
@@ -0,0 +1,164 @@
+#!/usr/bin/env python3
+"""Generate SVG screenshots of terminal output for README.
+
+Uses the danger palette from terminal_style.py for consistent visual identity.
+"""
+
+import subprocess
+import html
+import os
+import sys
+
+# Add scripts dir to path for import
+sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
+
+from terminal_style import (
+ BACKGROUND,
+ PROMPT,
+ OUTPUT_TEXT,
+ MUTED_TEXT,
+ SCORE_HIGHLIGHT,
+ WEAK_NODE,
+ GOOD_NODE,
+ WARNING,
+ FONT_FAMILY,
+ WINDOW_BUTTON_COLORS,
+ BORDER_RADIUS,
+)
+
+
+def terminal_to_svg(output: str, title: str = "Terminal", width: int = 700) -> str:
+ """Convert terminal output to an SVG image with macOS-style window chrome.
+
+ Uses the sci-fi dystopian danger palette from terminal_style.py.
+ """
+ lines = output.split('\n')
+ line_height = 20
+ padding = 20
+ header_height = 40
+ height = header_height + (len(lines) * line_height) + (padding * 2)
+
+ # Color mapping using danger palette
+ colors = {
+ 'default': OUTPUT_TEXT,
+ 'green': GOOD_NODE,
+ 'yellow': WARNING,
+ 'red': WEAK_NODE,
+ 'blue': '#3b82f6',
+ 'magenta': SCORE_HIGHLIGHT, # Use amber for emphasis
+ 'cyan': '#06b6d4',
+ 'gray': MUTED_TEXT,
+ 'orange': WARNING,
+ 'prompt': PROMPT,
+ }
+
+    svg_lines = [
+        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}" '
+        f'font-family="{FONT_FAMILY}" font-size="14">',
+        f'<rect width="{width}" height="{height}" rx="{BORDER_RADIUS}" fill="{BACKGROUND}"/>',
+    ]
+
+    # Window chrome: macOS-style buttons plus the command as the window title
+    for i, button_color in enumerate(WINDOW_BUTTON_COLORS.values()):
+        svg_lines.append(f'<circle cx="{20 + i * 20}" cy="{header_height // 2}" r="6" fill="{button_color}"/>')
+    svg_lines.append(
+        f'<text x="{width // 2}" y="{header_height // 2 + 5}" text-anchor="middle" '
+        f'fill="{MUTED_TEXT}" font-size="12">{html.escape(title)}</text>'
+    )
+
+    # One <text> element per terminal line, coloured by simple content rules
+    for i, line in enumerate(lines):
+        color = colors['prompt'] if line.startswith('$') else colors['default']
+        if '[WEAK]' in line:
+            color = colors['red']
+        elif '[GOOD]' in line:
+            color = colors['green']
+        y = header_height + padding + (i + 1) * line_height
+        svg_lines.append(
+            f'<text x="{padding}" y="{y}" xml:space="preserve" fill="{color}">{html.escape(line)}</text>'
+        )
+    svg_lines.append('</svg>')
+
+    return '\n'.join(svg_lines)
+
+
+def main():
+ assets_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+ screenshots_dir = os.path.join(assets_dir, 'assets', 'screenshots')
+ os.makedirs(screenshots_dir, exist_ok=True)
+
+ # Generate graph show screenshot
+ result = subprocess.run(
+ ['skill-issue', 'graph', 'show', '--domain', 'machine-learning'],
+ capture_output=True, text=True
+ )
+ graph_svg = terminal_to_svg(result.stdout.strip(), 'skill-issue graph show --domain machine-learning')
+ with open(os.path.join(screenshots_dir, 'graph-show.svg'), 'w') as f:
+ f.write(graph_svg)
+ print(f"Generated: {screenshots_dir}/graph-show.svg")
+
+ # Generate stats screenshot
+ result = subprocess.run(
+ ['skill-issue', 'stats'],
+ capture_output=True, text=True
+ )
+ stats_svg = terminal_to_svg(result.stdout.strip(), 'skill-issue stats', width=500)
+ with open(os.path.join(screenshots_dir, 'stats.svg'), 'w') as f:
+ f.write(stats_svg)
+ print(f"Generated: {screenshots_dir}/stats.svg")
+
+ # Generate weak nodes screenshot
+ result = subprocess.run(
+ ['skill-issue', 'graph', 'weak', '--domain', 'machine-learning'],
+ capture_output=True, text=True
+ )
+ weak_svg = terminal_to_svg(result.stdout.strip(), 'skill-issue graph weak --domain machine-learning', width=600)
+ with open(os.path.join(screenshots_dir, 'graph-weak.svg'), 'w') as f:
+ f.write(weak_svg)
+ print(f"Generated: {screenshots_dir}/graph-weak.svg")
+
+
+if __name__ == '__main__':
+ main()
diff --git a/scripts/terminal_style.py b/scripts/terminal_style.py
new file mode 100644
index 0000000..958b242
--- /dev/null
+++ b/scripts/terminal_style.py
@@ -0,0 +1,196 @@
+"""
+Terminal SVG style constants for skill-issue visual assets.
+
+Sci-fi dystopian palette — danger reds, amber warnings, dark backgrounds.
+The kind of terminal that knows something is wrong.
+"""
+
+# === COLORS ===
+
+# Background — nearly black with a hint of blue
+BACKGROUND = "#0a0a0f"
+
+# Border — subtle, doesn't distract
+BORDER = "#333333"
+BORDER_WIDTH = 1
+
+# Text colors
+PROMPT = "#ff3366" # Danger red — we're past green terminals
+OUTPUT_TEXT = "#e0e0e0" # Light gray, easy to read
+MUTED_TEXT = "#666666" # De-emphasized info
+
+# Semantic highlights
+SCORE_HIGHLIGHT = "#ff9900" # Amber for scores, metrics, numbers
+WEAK_NODE = "#ff3366" # Red for weak/needs-work
+GOOD_NODE = "#00ff88" # Green for strong/mastered
+WARNING = "#ff9900" # Amber warnings
+ERROR = "#ff3366" # Red errors
+
+# Glitch effect colors (for special effects)
+GLITCH_CYAN = "#00ffff"
+GLITCH_MAGENTA = "#ff00ff"
+
+# === FONTS ===
+
+# Font stack — JetBrains Mono preferred, fallbacks for broad support
+FONT_FAMILY = "'JetBrains Mono', 'Fira Code', 'SF Mono', 'Consolas', 'Courier New', monospace"
+FONT_SIZE = 14
+LINE_HEIGHT = 1.4
+
+# === TERMINAL WINDOW ===
+
+# macOS-style window chrome
+WINDOW_CHROME_HEIGHT = 32
+WINDOW_BUTTON_RADIUS = 6
+WINDOW_BUTTON_SPACING = 20
+WINDOW_BUTTON_COLORS = {
+ "close": "#ff5f57",
+ "minimize": "#febc2e",
+ "maximize": "#28c840",
+}
+
+# Padding inside terminal
+TERMINAL_PADDING_X = 16
+TERMINAL_PADDING_Y = 12
+
+# Default terminal dimensions
+DEFAULT_WIDTH = 700
+DEFAULT_HEIGHT = 400
+
+# Corner radius
+BORDER_RADIUS = 8
+
+
+# === SVG GENERATION HELPERS ===
+
+def rgb_to_hex(r: int, g: int, b: int) -> str:
+ """Convert RGB values to hex color string."""
+ return f"#{r:02x}{g:02x}{b:02x}"
+
+
+def generate_terminal_header(width: int = DEFAULT_WIDTH) -> str:
+ """Generate SVG for macOS-style terminal window header."""
+ buttons = []
+ start_x = 16
+ cy = WINDOW_CHROME_HEIGHT // 2
+
+ for i, (name, color) in enumerate(WINDOW_BUTTON_COLORS.items()):
+ cx = start_x + i * WINDOW_BUTTON_SPACING
+        buttons.append(
+            f'<circle cx="{cx}" cy="{cy}" r="{WINDOW_BUTTON_RADIUS}" fill="{color}"><title>{name}</title></circle>'
+        )
+
+ return f"""
+
+
+ {''.join(buttons)}
+ """
+
+
+def generate_terminal_body(
+ content_lines: list[str],
+ width: int = DEFAULT_WIDTH,
+ height: int = DEFAULT_HEIGHT,
+) -> str:
+ """
+ Generate SVG for terminal body with content.
+
+ content_lines: List of (text, color) tuples or plain strings.
+ """
+ y_offset = WINDOW_CHROME_HEIGHT + TERMINAL_PADDING_Y + FONT_SIZE
+
+ lines_svg = []
+ for i, line in enumerate(content_lines):
+ if isinstance(line, tuple):
+ text, color = line
+ else:
+ text = line
+ color = OUTPUT_TEXT
+
+ # Escape XML special characters
+ text = (
+ text.replace("&", "&")
+ .replace("<", "<")
+ .replace(">", ">")
+ .replace('"', """)
+ )
+
+ y = y_offset + i * (FONT_SIZE * LINE_HEIGHT)
+        lines_svg.append(
+            f'<text x="{TERMINAL_PADDING_X}" y="{y}" fill="{color}" '
+            f'font-family="{FONT_FAMILY}" font-size="{FONT_SIZE}" xml:space="preserve">{text}</text>'
+        )
+
+ body_y = WINDOW_CHROME_HEIGHT
+ body_height = height - WINDOW_CHROME_HEIGHT
+
+ return f"""
+
+
+ {''.join(lines_svg)}
+ """
+
+
+def generate_terminal_svg(
+ content_lines: list,
+ width: int = DEFAULT_WIDTH,
+ height: int = DEFAULT_HEIGHT,
+ title: str = "skill-issue",
+) -> str:
+ """
+ Generate complete terminal SVG with header and content.
+
+ Args:
+ content_lines: List of strings or (text, color) tuples
+ width: Terminal width in pixels
+ height: Terminal height in pixels
+ title: Window title (displayed in header)
+
+ Returns:
+ Complete SVG string
+ """
+ header = generate_terminal_header(width)
+ body = generate_terminal_body(content_lines, width, height)
+
+ # Title in header
+ title_svg = (
+        f'<text x="{width // 2}" y="{WINDOW_CHROME_HEIGHT // 2 + 4}" text-anchor="middle" '
+        f'fill="{MUTED_TEXT}" font-size="12" font-family="{FONT_FAMILY}">{title}</text>'
+ )
+
+ # Border
+ border_svg = (
+        f'<rect x="0.5" y="0.5" width="{width - 1}" height="{height - 1}" rx="{BORDER_RADIUS}" '
+        f'fill="none" stroke="{BORDER}" stroke-width="{BORDER_WIDTH}"/>'
+ )
+
+ return f""""""
+
+
+# === EXAMPLE USAGE ===
+
+if __name__ == "__main__":
+ # Demo: generate a sample terminal screenshot
+ sample_content = [
+ ("$ skill-issue graph show", PROMPT),
+ ("", OUTPUT_TEXT),
+ ("Knowledge Graph: machine-learning", OUTPUT_TEXT),
+ ("━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━", MUTED_TEXT),
+ ("", OUTPUT_TEXT),
+ (" backpropagation ████████░░ 82%", GOOD_NODE),
+ (" gradient_descent █████████░ 91%", GOOD_NODE),
+ (" neural_networks ██████░░░░ 64%", WARNING),
+ (" transformers ███░░░░░░░ 31%", WEAK_NODE),
+ (" attention ██░░░░░░░░ 23%", WEAK_NODE),
+ ]
+
+ svg = generate_terminal_svg(sample_content, title="skill-issue — graph show")
+ print(svg)
diff --git a/skill_issue/__pycache__/cli.cpython-314.pyc b/skill_issue/__pycache__/cli.cpython-314.pyc
index dbafd8d..55b74c0 100644
Binary files a/skill_issue/__pycache__/cli.cpython-314.pyc and b/skill_issue/__pycache__/cli.cpython-314.pyc differ
diff --git a/skill_issue_cc.egg-info/PKG-INFO b/skill_issue_cc.egg-info/PKG-INFO
index 275830e..5a696d2 100644
--- a/skill_issue_cc.egg-info/PKG-INFO
+++ b/skill_issue_cc.egg-info/PKG-INFO
@@ -34,40 +34,39 @@ Description-Content-Type: text/markdown
---
-AI coding tools let you ship code you don't understand. Not because you're lazy—there's just no friction. The code looks right, you move on, and slowly you stop reasoning from first principles.
+## The problem nobody's talking about
-skill-issue tracks what you actually know. When your agent builds something non-trivial, it fires a challenge grounded in what just happened. You answer, it scores you 0-3, and your knowledge graph updates. Next time, it targets concepts you're weak on.
+There's a study you should know about.
----
+METR ran a randomized controlled trial in early 2025 — experienced open-source developers, working on their own codebases, real issues. With AI tools, they took **19% longer** to complete tasks than without. But here's the unsettling part: those same developers *believed* AI had made them 20% faster. Even after experiencing the slowdown firsthand, the belief held.
-## Install
+Anthropic's own research found something similar: AI-assisted coding significantly lowers codebase comprehension. Developers who rely more on AI perform worse at debugging, conceptual understanding, and code reading. "Just reviewing the generated code" gives you, at best, a flimsy grasp of what you're shipping.
-### Claude Code
+Trust in AI coding tools is [already falling](https://leaddev.com/technical-direction/trust-in-ai-coding-tools-is-plummeting) — 33% of developers trusted AI output accuracy in 2025, down from 43% the year before.
-Two separate commands (don't combine them):
+This isn't an argument against AI tools. It's a description of a gap that's opening up quietly.
-```
-/plugin marketplace add SnehalRaj/skill-issue-marketplace
-```
+---
-```
-/plugin install skill-issue@skill-issue-marketplace
-```
+## The gym analogy
-Open a new session.
+Everyone goes to the gym. Not because physical labor disappeared — it mostly has — but because staying physically capable requires deliberate effort. You have to maintain it. The infrastructure for that is everywhere: gyms, workout plans, progress tracking, coaches.
-### pip (Cursor, Codex, any agent)
+Coding used to be that workout for your brain. Every bug you debugged, every algorithm you reasoned through, every system you designed from scratch was building something. Judgment. Pattern recognition. The ability to hold complexity in your head and work through it.
-```bash
-pip install skill-issue-cc
-skill-issue init
-```
+AI agents are changing that. Not by making you worse — but by removing the friction that was doing the work. The code appears. It looks right. You move on. And slowly, without noticing, you stop reasoning from first principles.
-Paste the output of `skill-issue init --print` into your editor's system prompt.
+There's no gym for this yet.
---
-## Knowledge Graph
+## What skill-issue does
+
+When your agent builds something non-trivial, it fires a challenge grounded in what just happened. Not trivia. Not a random LeetCode problem. Something directly tied to the code you just shipped.
+
+You answer it. It scores you 0–3. Your knowledge graph updates.
+
+Over time, it shows you exactly where your understanding is solid and where it's quietly fading. The mastery score decays if you don't practice — because real understanding works the same way.
```
skill-issue graph show --domain machine-learning
@@ -91,13 +90,36 @@ Priority Queue (work on these next):
Total nodes: 12 | Avg mastery: 0.10 | 0 mastered | 10 weak
```
-Each domain has a curated graph of concepts weighted by how often they come up in real work.
+Each domain has a curated graph of concepts weighted by how often they come up in real work. High-weight concepts you haven't proven → top priority.
+
+Mastery fades over time (3-day grace, then 0.02/day). Use it or lose it.
+
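+The mechanics above — a 0–3 score folded into mastery via an EMA, a 3-day grace period followed by 0.02/day decay, and priority as weight × (1 − mastery) — can be sketched roughly like this. The EMA smoothing factor `ALPHA` is a hypothetical value for illustration; the other constants come from the docs:
+
+```python
+from datetime import date
+
+GRACE_DAYS = 3        # no decay for the first 3 idle days
+DECAY_PER_DAY = 0.02  # then mastery fades 0.02/day
+ALPHA = 0.3           # EMA smoothing factor (hypothetical value)
+
+
+def update_mastery(mastery: float, score: int) -> float:
+    """Fold a 0-3 challenge score into mastery via an EMA."""
+    observed = score / 3.0  # normalize the score to 0-1
+    return (1 - ALPHA) * mastery + ALPHA * observed
+
+
+def decayed_mastery(mastery: float, last_practiced: date, today: date) -> float:
+    """Apply the 3-day grace period, then linear decay."""
+    idle = (today - last_practiced).days
+    return max(0.0, mastery - DECAY_PER_DAY * max(0, idle - GRACE_DAYS))
+
+
+def priority(weight: float, mastery: float) -> float:
+    """High-weight concepts with low proven mastery come first."""
+    return weight * (1 - mastery)
+```
+
+So a fundamental concept you haven't proven dominates the queue — `priority(0.95, 0.2)` gives 0.76 — while a niche concept you've mastered drops toward zero.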
+---
+
+## Install
+
+### Claude Code
+
+Two separate commands (don't combine them):
+
+```
+/plugin marketplace add SnehalRaj/skill-issue-marketplace
+```
+
+```
+/plugin install skill-issue@skill-issue-marketplace
+```
+
+Open a new session.
+
+### pip (Cursor, Codex, any agent)
-- **reuse_weight** (0–1): How fundamental. 0.95 means it's everywhere.
-- **mastery** (0–1): Your proven understanding. Updates via EMA after each challenge.
-- **priority** = `weight × (1 - mastery)`. High-weight stuff you haven't proven = top priority.
+```bash
+pip install skill-issue-cc
+skill-issue init
+```
-Mastery fades if you don't practice (3-day grace, then 0.02/day). Use it or lose it.
+Paste the output of `skill-issue init --print` into your editor's system prompt.
---
@@ -135,7 +157,7 @@ Three questions, plain English. It figures out which domains to load.
| ⏱️ | **Complexity** | What's the Big-O? Can it be better? |
| 🔗 | **Connect** | How does this relate to X? |
-Challenges are grounded in what was just built. No random trivia.
+Challenges are grounded in what was just built. Not random trivia.
---
@@ -225,7 +247,9 @@ Version-controllable. Portable. Human-readable.
The name's a joke. Claude has skills (literally, `.skill` files). What about yours?
-Understanding compounds. A developer who actually gets the code they ship is more effective long-term. One well-timed challenge beats a passive tutorial. Your trophy wall tracks your growth—no leaderboard against others.
+The METR study found that developers *felt* faster with AI even when they were measurably slower. That perception gap is the real problem — you can't fix what you can't see. skill-issue makes the invisible visible: where your understanding is solid, where it's drifting, and what to work on next.
+
+A developer who actually understands the code they ship is more effective, more resilient, and harder to replace. One well-timed challenge beats an hour of passive tutorials. Your trophy wall tracks real growth — no leaderboard against others, just you versus yesterday's you.
---