I think in systems, write in specs, and ship in prototypes.
I'm building at the intersection of AI product management and developer tooling, focused on the gap between what LLMs can do and what engineers actually need from them.
- AI trust design – 46% of developers distrust AI output (Stack Overflow 2025). How do you build products where the AI generates and the human verifies, without making verification feel like correction?
- LLM context tradeoffs – README-only parsing vs. RAG over embeddings vs. agentic file traversal: when does the cost/quality curve justify the added complexity? I wrote the full comparison in the PRD.
- The repo-to-revenue gap – why Topmate requires an existing audience, why Gumroad has no discoverability, and what a platform looks like that solves all three stages: credibility signal → discovery → transaction.
- Prompt engineering as product spec – prompts aren't engineering implementation details. They're versioned product artifacts with their own regression risk.
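That last point can be made concrete with a small sketch: treating a prompt as a versioned artifact with a CI-style check, so an edit that silently drops a required placeholder fails before it ships. All names and fields here are illustrative, not taken from the PRD:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptArtifact:
    """A prompt treated as a versioned product artifact, not an inline string."""
    name: str
    version: str
    template: str


# Hypothetical summarization prompt, pinned at a version like any other spec.
SUMMARIZE_V1 = PromptArtifact(
    name="summarize_repo",
    version="1.2.0",
    template="Summarize the following README in {max_words} words:\n{readme}",
)


def check_placeholders(prompt: PromptArtifact, required: set) -> bool:
    """Regression guard: every required template field must still be present."""
    return all("{" + field + "}" in prompt.template for field in required)


# Passes today; a later edit that removes {max_words} would fail this check.
assert check_placeholders(SUMMARIZE_V1, {"max_words", "readme"})
```

The same pattern extends naturally to golden-output tests: re-run a pinned prompt version against a fixed input set and diff the results, exactly as you would for any other regression suite.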
- 🧠 Studying AI PM craft by building real products, not completing certificates
- Currently developing ProvenanceAI as a working demonstration of AI product thinking, from market research through to shipped prototype
- 🎯 Targeting AI PM roles where the job is to make powerful models actually useful to real people
- 💬 I write about AI product decisions, LLM architecture tradeoffs, and the design of trustworthy AI systems
- 💬 Always interested in talking to engineers building with LLMs and PMs navigating the "what should the AI actually do?" question
| Document | What It Shows |
|---|---|
| Market Research Report | TAM sizing, competitor gap analysis, user persona with psychographics |
| Stage 2 PRD | Feature specs, AI engine architecture, edge case handling, explicit scope freeze |
| UI/UX Vision Brief | Design principles, screen intent, visual language decisions |
| React Prototype | Working portfolio output page β the product's core value surface |
Building in public. All artifacts are real working documents, not post-hoc writeups.