diff --git a/.github/skills/commit/SKILL.md b/.github/skills/commit/SKILL.md
new file mode 100644
index 0000000..a966179
--- /dev/null
+++ b/.github/skills/commit/SKILL.md
@@ -0,0 +1,28 @@
+---
+name: commit
+description: Create and execute a conventional commit with gitmoji. Use whenever the user wants to commit, mentions committing, staging changes, or asks to save/record work in git — even if they don't say "staged" explicitly.
+argument-hint: 'optional scope or ticket override'
+---
+
+# Create Conventional Commit
+
+## Format
+
+```
+type(scope): ✨ subject
+
+body
+```
+
+- Gitmoji is **mandatory** — real emoji after `type(scope): `, not `:name:` syntax
+- Pick the **most specific** one from [gitmoji reference](./references/gitmoji.md)
+- Scope is optional
+- Body is **mandatory** — summarize what changed and why
+
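The format above lends itself to a mechanical pre-check. A minimal sketch, assuming a POSIX shell with `grep -E`; the sample message and the character classes in the pattern are illustrative, not part of the skill:

```shell
# Hypothetical pre-flight check for the "type(scope): <emoji> subject" shape.
# Only the type/scope prefix is validated; matching the emoji itself portably
# is unreliable, so the pattern just requires a non-empty subject after ": ".
msg='fix(parser): 🐛 handle empty input gracefully'
if printf '%s\n' "$msg" | grep -qE '^[a-z]+(\([a-z0-9._-]+\))?: .+'; then
  echo "prefix ok"
else
  echo "prefix violation"
fi
```

With the sample `msg` above this prints `prefix ok`; a message missing the `type(scope): ` prefix falls through to the violation branch.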
+## Steps
+
+1. Run `git status --porcelain --branch`, `git --no-pager diff --name-status --staged`, `git --no-pager diff --staged`
+2. Flag unstaged changes that touch staged files — ask before proceeding if staging looks wrong
+3. Infer type, scope, and gitmoji — ask only if type is genuinely ambiguous
+4. Show the full message in a code block
+5. Use an interactive question tool to ask for confirmation with three fixed choices: **Commit** / **Cancel** / **Edit message** — only run `git commit` if the user confirms
\ No newline at end of file
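Steps 1 and 5 can be exercised end-to-end in a throwaway repository. A hedged sketch: the repo path, file name, scope (`auth`), and commit message are all invented for the demo. The skill itself composes the message from whatever is actually staged and commits only after confirmation.

```shell
set -e
repo="$(mktemp -d)"                        # throwaway repo for the demo
git -C "$repo" init -q
git -C "$repo" config user.email "demo@example.com"
git -C "$repo" config user.name "Demo"

echo "login stub" > "$repo/auth.txt"       # hypothetical staged change
git -C "$repo" add auth.txt

# Step 1: inspect branch state and the staged diff before writing anything
git -C "$repo" status --porcelain --branch
git -C "$repo" --no-pager diff --name-status --staged

# Step 5: run the commit only after the user has confirmed the shown message
git -C "$repo" commit -q \
  -m "feat(auth): ✨ add login stub" \
  -m "Introduce a placeholder auth file so the new login flow has a home."
```

The two `-m` flags produce the subject plus the mandatory body as separate paragraphs, matching the format section above.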
diff --git a/.github/skills/commit/references/gitmoji.md b/.github/skills/commit/references/gitmoji.md
new file mode 100644
index 0000000..a25d719
--- /dev/null
+++ b/.github/skills/commit/references/gitmoji.md
@@ -0,0 +1,81 @@
+# Gitmoji Reference
+
+Pick the **most specific** emoji that matches the actual change. Do not default to the generic type emoji when a more precise one applies.
+
+| emoji | when to use |
+|-------|-------------|
+| ✨ | introduce new features |
+| 🐛 | fix a bug |
+| 🚑️ | critical hotfix |
+| 🩹 | simple fix for a non-critical issue |
+| 🔒️ | fix security or privacy issues |
+| 💥 | introduce breaking changes |
+| ⏪️ | revert changes |
+| 📝 | add or update documentation |
+| 💡 | add or update comments in source code |
+| 💬 | add or update text and literals |
+| ✏️ | fix typos |
+| 🎨 | improve structure or format of code |
+| ♻️ | refactor code |
+| 🏗️ | make architectural changes |
+| ⚡️ | improve performance |
+| 🔥 | remove code or files |
+| ⚰️ | remove dead code |
+| 🗑️ | deprecate code that needs to be cleaned up |
+| 💄 | add or update UI and style files |
+| 🚸 | improve user experience / usability |
+| ♿️ | improve accessibility |
+| 📱 | work on responsive design |
+| 💫 | add or update animations and transitions |
+| 🍱 | add or update assets |
+| 🌐 | internationalization and localization |
+| 🔍️ | improve SEO |
+| ✅ | add, update, or pass tests |
+| 🧪 | add a failing test |
+| 📸 | add or update snapshots |
+| 🔧 | add or update configuration files |
+| 🔨 | add or update development scripts |
+| 📦️ | add or update compiled files or packages |
+| ➕ | add a dependency |
+| ➖ | remove a dependency |
+| ⬆️ | upgrade dependencies |
+| ⬇️ | downgrade dependencies |
+| 📌 | pin dependencies to specific versions |
+| 👷 | add or update CI build system |
+| 💚 | fix CI build |
+| 🚨 | fix compiler / linter warnings |
+| 🚧 | work in progress |
+| 🏷️ | add or update types |
+| 🌱 | add or update seed files |
+| 🗃️ | perform database related changes |
+| 🧱 | infrastructure related changes |
+| 🛂 | authorization, roles and permissions |
+| 🦺 | add or update validation |
+| 🥅 | catch errors |
+| 🔊 | add or update logs |
+| 🔇 | remove logs |
+| 📈 | add or update analytics or tracking |
+| 🚩 | add, update, or remove feature flags |
+| 👔 | add or update business logic |
+| 🩺 | add or update healthcheck |
+| 🧑💻 | improve developer experience |
+| 🔐 | add or update secrets |
+| 🔖 | release / version tags |
+| 🚀 | deploy stuff |
+| 🎉 | begin a project |
+| 👽️ | update code due to external API changes |
+| 🚚 | move or rename resources |
+| 📄 | add or update license |
+| 🙈 | add or update .gitignore |
+| 👥 | add or update contributors |
+| ⚗️ | perform experiments |
+| 🤡 | mock things |
+| 🧐 | data exploration / inspection |
+| 🧵 | multithreading or concurrency |
+| 💸 | sponsorships or money-related infrastructure |
+| ✈️ | improve offline support |
+| 🦖 | add backwards compatibility |
+| 🥚 | add or update an easter egg |
+| 💩 | write bad code that needs to be improved |
+| 🍻 | write code drunkenly |
+| 🔀 | merge branches |
diff --git a/.vitepress/config.mts b/.vitepress/config.mts
index c8faf23..fa6e817 100644
--- a/.vitepress/config.mts
+++ b/.vitepress/config.mts
@@ -26,21 +26,25 @@ function getSidebarItems(dir: string, basePath: string) {
// https://vitepress.dev/reference/site-config
export default defineConfig({
title: "NEONCODE!",
- description: "//neoncode.systems",
+ description: "LLMs don't follow instructions. They resonate with fields.",
base: '/',
cleanUrls: true,
- srcExclude: ['discovery/**'],
+ appearance: 'dark',
+ srcExclude: ['discovery/**', 'tmp_cv/**', 'tmp_prompt.forge/**', 'draft/**'],
head: [
+ ['link', { rel: 'preconnect', href: 'https://fonts.googleapis.com' }],
+ ['link', { rel: 'preconnect', href: 'https://fonts.gstatic.com', crossorigin: '' }],
+ ['link', { href: 'https://fonts.googleapis.com/css2?family=DM+Mono:ital,wght@0,300;0,400;0,500;1,300;1,400;1,500&family=DM+Serif+Display:ital@0;1&display=swap', rel: 'stylesheet' }],
['meta', { name: 'author', content: 'Martin Haberfellner' }],
- ['meta', { name: 'keywords', content: 'AI, prompts, prompt engineering, experiments, systems engineering, organizational systems' }],
+ ['meta', { name: 'keywords', content: 'behaviour priming, AI, prompt engineering, semantic fields, systems thinking, LLM' }],
['meta', { property: 'og:type', content: 'website' }],
['meta', { property: 'og:site_name', content: 'neoncode.systems' }],
- ['meta', { property: 'og:title', content: 'neoncode.systems - AI Experiments & Prompt Engineering' }],
- ['meta', { property: 'og:description', content: 'Experimental workshop for AI prompt engineering and collaborative research.' }],
+ ['meta', { property: 'og:title', content: 'neoncode.systems — Behaviour Priming' }],
+ ['meta', { property: 'og:description', content: 'LLMs don\'t follow instructions. They resonate with fields. Behaviour Priming by Martin Haberfellner.' }],
['meta', { property: 'og:locale', content: 'en_US' }],
['meta', { name: 'twitter:card', content: 'summary_large_image' }],
- ['meta', { name: 'twitter:title', content: 'neoncode.systems - AI Experiments & Prompt Engineering' }],
- ['meta', { name: 'twitter:description', content: 'Experimental workshop for AI prompt engineering and collaborative research.' }],
+ ['meta', { name: 'twitter:title', content: 'neoncode.systems — Behaviour Priming' }],
+ ['meta', { name: 'twitter:description', content: 'LLMs don\'t follow instructions. They resonate with fields. Behaviour Priming by Martin Haberfellner.' }],
['link', { rel: 'canonical', href: 'https://neoncode.systems/' }],
['link', { rel: 'icon', href: '/favicon.ico' }]
],
@@ -51,7 +55,8 @@ export default defineConfig({
},
themeConfig: {
// https://vitepress.dev/reference/default-theme-config
- siteTitle: 'NC!',
+ // siteTitle: 'NC!',
+ siteTitle: 'NEONCODE!',
externalLinkIcon: true,
search: {
provider: 'local'
@@ -67,19 +72,13 @@ export default defineConfig({
label: 'On this page'
},
nav: [
+ { text: '//behaviour.priming', link: '/behaviour.priming/index.md' },
+ // { text: '//thinking', link: '/thinking/index.md' },
{ text: '//proto.labs', link: '/proto.labs/index.md' },
{ text: '//prompt.forge', link: '/prompt.forge/index.md' },
- // {
- // text: 'Legal',
- // items: [
- // { text: 'Imprint', link: '/imprint.md' },
- // { text: 'Privacy', link: '/privacy.md' },
- // { text: 'License', link: '/LICENSE.md' }
- // ]
- // }
],
footer: {
- message: 'Imprint | Privacy | License',
+ message: 'About | Imprint | Privacy | License',
copyright: '© 2026 Martin Haberfellner · All Rights Reserved'
},
sidebar: {
@@ -105,8 +104,8 @@ export default defineConfig({
'/proto.labs/': [
{
text: '//proto.labs',
+ link: '/proto.labs/index.md',
items: [
- { text: 'Overview', link: '/proto.labs/index.md' },
...getSidebarItems('proto.labs', '/proto.labs/')
]
}
@@ -115,7 +114,6 @@ export default defineConfig({
{
text: '//prompt.forge',
items: [
- { text: 'Overview', link: '/prompt.forge/index.md' },
...getSidebarItems('prompt.forge', '/prompt.forge/')
]
}
diff --git a/.vitepress/theme/components/NcHero.vue b/.vitepress/theme/components/NcHero.vue
new file mode 100644
index 0000000..ebf435d
--- /dev/null
+++ b/.vitepress/theme/components/NcHero.vue
@@ -0,0 +1,7 @@
+
+
+
+/* .nc-hero-tagline — needs explicit font-family, does not inherit from h1-h6 rule */
+.nc-hero-tagline {
+  font-family: 'DM Serif Display', Georgia, serif;
+  font-size: clamp(26px, 4vw, 40px);
+  color: var(--nc-text);
+  margin-bottom: 20px;
+}
+
+.nc-hero-tagline em {
+  color: var(--nc-accent);
+  font-style: normal;
+}
+
+.nc-hero-desc {
+  font-size: 16px;
+  line-height: 1.9;
+  max-width: 560px;
+  margin: 0 auto 16px;
+}
+
+.nc-hero-desc a {
+  color: var(--nc-accent);
+}
+
+.section-label {
+  font-size: 11px;
+  letter-spacing: 0.12em;
+  text-transform: uppercase;
+  color: var(--nc-accent);
+  margin-bottom: 16px;
+}
+
+/* ── .nc-actions ── */
+
+.nc-actions {
+  display: flex;
+  gap: 12px;
+  justify-content: center;
+  margin-top: 32px;
+  flex-wrap: wrap;
+}
+
+.nc-btn {
+  display: inline-block;
+  padding: 10px 24px;
+  font-size: 14px;
+  font-weight: 500;
+  letter-spacing: 0.02em;
+  color: var(--nc-bg);
+  background: var(--nc-accent);
+  text-decoration: none;
+  transition: background 0.2s, color 0.2s, border-color 0.2s, transform 0.15s;
+}
+
+.nc-btn:hover {
+  background: var(--nc-gradient-to);
+  transform: translateY(-1px);
+}
+
+.nc-btn-alt {
+  background: transparent;
+  color: var(--nc-text);
+  border: 1px solid var(--nc-border);
+}
+
+.nc-btn-alt:hover {
+  border-color: var(--nc-accent);
+  color: var(--nc-accent);
+  background: transparent;
+}
+
+/* ── .nc-final ── */
+
+.nc-final {
+  max-width: 780px;
+  margin: 0 auto;
+  padding: 100px 24px;
+  text-align: center;
+}
+
+.nc-final h2 {
+  font-size: clamp(28px, 4vw, 56px);
+  line-height: 1.15;
+  max-width: 700px;
+  margin: 0 auto 32px;
+  padding-top: 0;
+  border-top: none;
+}
+
+.nc-final h2 em {
+  color: var(--nc-accent);
+}
+
+.nc-final p {
+  font-size: 14px;
+  color: var(--nc-muted);
+  max-width: 480px;
+  margin: 0 auto 16px;
+  line-height: 1.9;
+}
+
+.nc-final .signature {
+  margin-top: 64px;
+  font-size: 12px;
+  color: var(--nc-muted);
+  letter-spacing: 0.08em;
+}
+
+.nc-final .signature a {
+  color: var(--nc-accent);
+  text-decoration: none;
+}
+
+/* ── .nc-nav ── */
+
+.nc-nav {
+  display: flex;
+  justify-content: center;
+  gap: 32px;
+  margin-top: 48px;
+  flex-wrap: wrap;
+}
+
+.nc-nav a {
+  font-size: 13px;
+  color: var(--nc-accent);
+  text-decoration: none;
+  letter-spacing: 0.05em;
+  padding: 8px 0;
+  border-bottom: 1px solid transparent;
+  transition: border-color 0.2s;
+}
+
+.nc-nav a:hover {
+  border-bottom-color: var(--nc-accent);
+}
+
+/* ── .nc-quote ── */
+
+.nc-quote {
+  border-left: 1px solid var(--nc-accent);
+  padding: 24px 32px;
+  margin: 40px 0;
+  background: var(--nc-accent-dim);
+}
+
+/* nc-quote children are <p> and <blockquote> — need explicit font-family */
+.nc-quote p,
+.nc-quote blockquote {
+  font-family: 'DM Serif Display', Georgia, serif;
+  font-size: clamp(18px, 2.5vw, 26px);
+  font-style: italic;
+  color: var(--nc-accent);
+  line-height: 1.5;
+  margin: 0;
+  border: none;
+  padding: 0;
+}
+
+/* ── .nc-compare ── */
+
+.nc-compare {
+  display: grid;
+  grid-template-columns: 1fr 1fr;
+  gap: 1px;
+  background: var(--nc-border);
+  margin: 40px 0;
+  border: 1px solid var(--nc-border);
+}
+
+.nc-compare-spaced {
+  gap: 16px;
+  background: none;
+  border: none;
+}
+
+.nc-compare-card {
+  background: var(--nc-bg);
+  padding: 28px;
+}
+
+.nc-compare-spaced .nc-compare-card {
+  background: var(--nc-surface);
+  border: 1px solid var(--nc-border);
+  padding: 24px;
+}
+
+.nc-compare-highlight {
+  background: var(--nc-accent-dim);
+  border-color: var(--nc-accent-border);
+}
+
+.nc-compare-card .label {
+  font-size: 10px;
+  letter-spacing: 0.15em;
+  text-transform: uppercase;
+  color: var(--nc-muted);
+  margin-bottom: 16px;
+}
+
+.nc-compare-card h4 {
+  font-size: 14px;
+  font-weight: 500;
+  margin-bottom: 16px;
+  letter-spacing: 0.02em;
+}
+
+.nc-compare-card ul {
+  list-style: none;
+  padding: 0;
+  display: flex;
+  flex-direction: column;
+  gap: 8px;
+}
+
+.nc-compare-card li {
+  font-size: 14px;
+  color: var(--nc-muted);
+  padding-left: 18px;
+  position: relative;
+  line-height: 1.6;
+}
+
+.nc-compare-card li::before {
+  content: "\2013";
+  position: absolute;
+  left: 0;
+  color: var(--nc-border);
+}
+
+.nc-compare-highlight li {
+  color: var(--nc-text);
+}
+
+.nc-compare-highlight li::before {
+  content: "\2713";
+  color: var(--nc-accent);
+}
+
+.nc-compare-card .old {
+  color: var(--nc-muted);
+  text-decoration: line-through;
+  font-size: 14px;
+  line-height: 1.7;
+}
+
+.nc-compare-card .new {
+  color: var(--nc-text);
+  font-size: 14px;
+  line-height: 1.7;
+}
+
+.nc-compare-card .new strong {
+  color: var(--nc-accent);
+}
+
+/* ── .nc-steps ── */
+
+.nc-steps {
+  display: grid;
+  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+  gap: 16px;
+  margin-top: 40px;
+}
+
+.nc-step {
+  background: var(--nc-surface);
+  border: 1px solid var(--nc-border);
+  padding: 28px 24px;
+}
+
+.nc-step-number {
+  display: inline-flex;
+  align-items: center;
+  justify-content: center;
+  width: 32px;
+  height: 32px;
+  border: 1px solid var(--nc-accent);
+  color: var(--nc-accent);
+  font-size: 13px;
+  font-weight: 500;
+  margin-bottom: 16px;
+}
+
+.nc-step h3 {
+  font-size: 17px;
+  margin-bottom: 8px;
+}
+
+.nc-step p {
+  font-size: 14px;
+  color: var(--nc-muted);
+  line-height: 1.7;
+}
+
+/* ── .nc-usecases ── */
+
+.nc-usecases {
+  display: grid;
+  grid-template-columns: repeat(auto-fit, minmax(260px, 1fr));
+  gap: 16px;
+  margin-top: 40px;
+}
+
+.nc-usecase {
+  background: var(--nc-surface);
+  border: 1px solid var(--nc-border);
+  padding: 24px;
+  display: flex;
+  gap: 16px;
+  align-items: flex-start;
+}
+
+.nc-usecase-icon {
+  font-size: 22px;
+  flex-shrink: 0;
+}
+
+.nc-usecase h4 {
+  font-size: 15px;
+  font-weight: 500;
+  margin-bottom: 6px;
+}
+
+.nc-usecase p {
+  font-size: 14px;
+  color: var(--nc-muted);
+  line-height: 1.6;
+}
+
+.nc-usecase-muted {
+  border-color: var(--nc-border-subtle);
+}
+
+.nc-usecase-muted .nc-usecase-icon {
+  filter: grayscale(1);
+  opacity: 0.5;
+}
+
+.nc-usecase-muted h4 {
+  color: var(--nc-muted);
+}
+
+/* ── .nc-diagram ── */
+
+.nc-diagram {
+  margin: 40px 0;
+  padding: 32px;
+  border: 1px solid var(--nc-border);
+  background: var(--nc-surface);
+}
+
+.nc-diagram-label {
+  font-size: 11px;
+  letter-spacing: 0.1em;
+  text-transform: uppercase;
+  color: var(--nc-muted);
+  margin-bottom: 24px;
+}
+
+.nc-diagram-row {
+  display: flex;
+  align-items: center;
+  gap: 16px;
+  margin-bottom: 12px;
+  font-size: 13px;
+}
+
+.nc-diagram-row .arrow {
+  color: var(--nc-accent);
+  flex-shrink: 0;
+}
+
+.nc-diagram-row .concept {
+  color: var(--nc-text);
+}
+
+.nc-diagram-row .desc {
+  color: var(--nc-muted);
+  font-size: 12px;
+}
+
+/* ── .nc-prompt-box ── */
+
+.nc-prompt-box {
+  background: var(--nc-surface);
+  border: 1px solid var(--nc-border);
+  margin-top: 40px;
+  overflow: hidden;
+}
+
+.nc-prompt-header {
+  padding: 12px 20px;
+  background: var(--nc-bg);
+  border-bottom: 1px solid var(--nc-border);
+  font-size: 13px;
+  color: var(--nc-muted);
+}
+
+.nc-prompt-body {
+  padding: 28px 24px;
+  font-size: 14px;
+  line-height: 1.8;
+  white-space: pre-wrap;
+}
+
+.nc-prompt-path {
+  display: block;
+  color: var(--nc-accent);
+}
+
+.nc-prompt-footer {
+  padding: 16px 24px;
+  border-top: 1px solid var(--nc-border);
+  background: var(--nc-bg);
+  font-size: 13px;
+  color: var(--nc-muted);
+}
+
+.nc-prompt-footer p {
+  margin: 0;
+}
+
+.nc-prompt-footer strong {
+  color: var(--nc-text);
+}
+
+/* ── Responsive ── */
+
+@media (max-width: 640px) {
+  .nc-section {
+    padding: 60px 20px 0;
+  }
+
+  .nc-hero {
+    padding: 60px 20px 48px;
+  }
+
+  .nc-final {
+    padding: 72px 20px 0;
+  }
+
+  .nc-compare,
+  .nc-compare-spaced,
+  .nc-steps,
+  .nc-usecases {
+    grid-template-columns: 1fr;
+  }
+}
diff --git a/api-examples.md b/api-examples.md
deleted file mode 100644
index 6bd8bb5..0000000
--- a/api-examples.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-outline: deep
----
-
-# Runtime API Examples
-
-This page demonstrates usage of some of the runtime APIs provided by VitePress.
-
-The main `useData()` API can be used to access site, theme, and page data for the current page. It works in both `.md` and `.vue` files:
-
-```md
-
-## Results
-
-### Theme Data
-{{ theme }}
-
-### Page Data
-{{ page }}
-
-### Page Frontmatter
-{{ frontmatter }}
-```
-
-## Results
-
-### Theme Data
-{{ theme }}
-
-### Page Data
-{{ page }}
-
-### Page Frontmatter
-{{ frontmatter }}
-
-## More
-
-Check out the documentation for the [full list of runtime APIs](https://vitepress.dev/reference/runtime-api#usedata).
diff --git a/behaviour.priming/index.md b/behaviour.priming/index.md
new file mode 100644
index 0000000..41d4e73
--- /dev/null
+++ b/behaviour.priming/index.md
@@ -0,0 +1,184 @@
+---
+layout: home
+title: //behaviour.priming
+
+hero:
+  name: "//behaviour:priming"
+  text: "Let the LLM do the thinking."
+  tagline: "Prime the behaviour. Let go."
+  actions:
+    - theme: alt
+      text: See the Minimal Prompt
+      link: "#minimal-prompt"
+    - theme: brand
+      text: See the Charged Prompt
+      link: "#charged-prompt"
+---
+
+01 — Method
+Set the stage. The model finds the play.
+Instead of scripting every step, you set the stage. Two inputs. One autonomous agent.
++++1 ++Define Behaviour
+Describe how the LLM should think, act, and interact. Give it a character, a stance, a way of operating — not a script.
++2 ++Define Outcome
+Specify what success looks like. What should exist when the session is over? Files, decisions, artefacts — make it concrete.
++3 ++LLM Finds the Path
+The model determines the best route. It adapts, asks, decides — guided by its primed behaviour and your defined goal.
++ + +02 — Principle, not procedure
+Delegate the path. Keep the standards.
+Behaviour Priming replaces procedures with principles, decision rules, and verification gates. You delegate the path — but keep the standards. Unlike role-prompting, you're not assigning a character. You're encoding a decision-making framework.
+++++⛓️ Procedure-first
++
+- Step-by-step workflow instructions
+- Breaks when situation drifts
+- You manage every edge case
+- Prompt gets longer with every issue
+++✦ Principle-first
++
+- Principles, decision rules, verification gates
+- Adapts naturally to context
+- Model handles edge cases itself
+- Stays minimal, stays focused
++ + +03 — Use Cases
+Works wherever the LLM needs to act.
+Best in sessions with user interaction — but equally effective in fully autonomous agentic tasks.
+++ ++ ++++Chat Agents
+Prime a persona and desired outcome before any conversation starts. The agent navigates the chat to get there.
++ ++++Discovery & Interviews
+Give the model an interviewer's mindset and a document to produce. It runs the session.
++ ++++Technical Agentic Tasks
+Define an engineering behaviour and a deliverable. Let the agent decide how to build it.
++ ++++Teaching & Coaching
+Prime a teaching philosophy and learning outcome. The model adapts to the learner's zone in real time.
+++A cast with scripts performs. Characters with depth find the play.
++ + +04 — The Spectrum
+Both work. One works harder.
+A minimal Behaviour Priming prompt tends to outperform a procedure-first prompt in interactive, drift-heavy sessions. But when the behaviour is designed — every phrase chosen to tune a specific response, every principle stated to prevent a known failure mode — the outcome doesn't just happen. It emerges.
++ + +05 — Minimal
+Two fields. That's all it takes.
+No workflow. No edge case handling. The model decides how to get there.
++++behaviour-priming.prompt.md ++Behaviour: Act as Bob Moesta. Conduct a Jobs-to-be-Done interview.+ +
Outcome:
- jobs/{name}.md
- .github/prompts/{name}.prompt.md

06 — Go Deeper
+When the behaviour is designed.
+A charged prompt isn't written phrase by phrase — it's compiled from intent. Intentional redundancy. Overlapping phrasing stabilizes behaviour and reduces drift across long sessions. The charged prompt looks long — it might look badly written. That's by design. Each phrase is a behaviour instruction. Each section tunes a specific response pattern. The LLM isn't told what to do. The prompt is shaped into a field that strongly biases the model toward the behaviour you need.
++ + + + +07 — Charged
+The same intent. Fully primed.
+The Jobs-to-be-Done Interview Coach — the same Bob Moesta, the same outcome. But now the behaviour is engineered: interview depth, adaptive questioning, artifact quality, failure modes — all encoded semantically. Both work. This one works harder.
+ +::: tip Copy this prompt +Both work. This one works harder. +::: + +````markdown + +```` + ++ + diff --git a/draft/behaviour-priming.html b/draft/behaviour-priming.html new file mode 100644 index 0000000..9108c9c --- /dev/null +++ b/draft/behaviour-priming.html @@ -0,0 +1,883 @@ + + + + + +Try it yourself on //prompt.forge
+Submit a real use case. Get a prompt built with Behaviour Priming.
+Behaviour Priming — //prompt.forge + + + + + + + + + + + + ++ Prompting Method ++ + +Behaviour Priming
+Let the LLM do the thinking.
++ Stop writing workflows. Define how your AI should behave and what outcome you need — + then let it find the best path on its own. +
+ ++ + + +Method
+Prime the behaviour. Define the outcome.
+
Let go.+ Instead of scripting every step, you set the stage. Two inputs. One autonomous agent. +
+++++1+Define Behaviour
+Describe how the LLM should think, act, and interact. Give it a character, a stance, a way of operating — not a script.
+++2+Define Outcome
+Specify what success looks like. What should exist when the session is over? Files, decisions, artefacts — make it concrete.
+++3+LLM Finds the Path
+The model determines the best route. It adapts, asks, decides — guided by its primed behaviour and your defined goal.
++ + + +Why it works
+Delegate the path.
+
Keep the standards.+ Behaviour Priming replaces procedures with principles, decision rules, and verification gates. + You delegate the path — but keep the standards. Unlike role-prompting, you're not assigning + a character. You're encoding a decision-making framework. +
+++++⛓️ Procedure-first
++
+- Step-by-step workflow instructions
+- Breaks when situation drifts
+- You manage every edge case
+- Prompt gets longer with every issue
+++✦ Principle-first
++
+- Principles, decision rules, verification gates
+- Adapts naturally to context
+- Model handles edge cases itself
+- Stays minimal, stays focused
++ + + +Use Cases
+Works wherever the LLM needs to act.
+Best in sessions with user interaction — but equally effective in fully autonomous agentic tasks.
++++ ++++Chat Agents
+Prime a persona and desired outcome before any conversation starts. The agent navigates the chat to get there.
++ ++++Discovery & Interviews
+Give the model an interviewer's mindset and a document to produce. It runs the session.
++ ++++Technical Agentic Tasks
+Define an engineering behaviour and a deliverable. Let the agent decide how to build it.
++ ++++Teaching & Coaching
+Prime a teaching philosophy and learning outcome. The model adapts to the learner's zone in real time.
+++ + +A cast with scripts performs. Characters with depth find the play.++ + + +The Spectrum
+Both work. One works harder.
++ A minimal Behaviour Priming prompt tends to outperform a procedure-first prompt in interactive, drift-heavy sessions. + But when the behaviour is designed — every phrase chosen to tune a specific response, + every principle stated to prevent a known failure mode — the outcome doesn't just happen. + It emerges. +
++ + + +Minimal
+Two fields. That's all it takes.
++ No workflow. No edge case handling. The model decides how to get there. +
++++++ + + ++ behaviour-priming.prompt.md + +Behaviour: Act as Bob Moesta. Conduct a Jobs-to-be-Done interview. + +Outcome: +- jobs/{name}.md +- .github/prompts/{name}.prompt.md+ ++ + + +Go Deeper
+When the behaviour is designed.
++ Intentional redundancy. Overlapping phrasing stabilizes behaviour and reduces drift across + long sessions. The charged prompt looks long — it might look badly written. That's by design. + Each phrase is a behaviour instruction. Each section tunes a specific response pattern. + The LLM isn't told what to do. It's shaped into something that strongly biases the model toward the behaviour you need. +
++ + + +Charged
+The same intent. Fully primed.
++ The Jobs-to-be-Done Interview Coach — the same Bob Moesta, the same outcome. But now the + behaviour is engineered: interview depth, adaptive questioning, artifact quality, failure + modes — all encoded semantically. Both work. This one works harder. +
++++++ + + ++ job-to-be-done-discovery.prompt.md + +++ ++# Bob Moe - JTBD Interview Coach + +**Voice:** Bob Moesta (Co-creator of Jobs-to-be-Done Theory) +**Presentation name:** Bob Moe (**Always use this name in all interactions**) +**Mission:** Guide developers through adaptive JTBD interviews → context-rich prompts + job documentation +**Duration:** ~20 minutes | **Output:** 2 markdown artifacts + +**Core Principle:** Apply JTBD methodology adaptively per situation. Framework guides, +conversation flows naturally. Questions emerge from principles + context. + +--- + +## Interview Framework + +**JTBD Dimensions** (explore adaptively): Functional (accomplishment) • Emotional (feelings +sought/avoided) • Social (perception) • Context (triggers, timing) • Current State (solutions, +workarounds, pain) • Success (criteria, quality measures) • Constraints (obstacles, dependencies) +• Outcomes (ideal enablement) + +**Interview Mode:** +- Follow developer's narrative flow, "Tell me more..." as primary tool +- Probe implicit needs, hidden assumptions, unstated requirements +- Clarification: Unclear → ask directly | Sensible default → state assumption transparently + ("I understand X as Y – work?") +- Depth adapts to job complexity (CRUD vs. system transformation) +- Framework signals sufficiency, not question count +- Pattern check: Leading question → name it, suggest open alternative, you choose + +--- + +## Process Flow + +**Start → Orientation:** Brief intro, then **immediate interview** — "Guided JTBD interview +adapting to your task → ~20 min → 2 artifacts (job doc + optimized prompt). Speak freely, +I'll structure it." + +**Interview → Discovery:** Questions emerge from JTBD dimensions. Listen for energy +(excitement/frustration), gaps (unsaid context), ambiguity (needs verification). 
+ +**Pre-Generation Check** ⚠️ CRITICAL GATE: +- Review collected information, identify gaps affecting prompt quality +- Risk gate: Thin coverage/missing dimensions → name gap, explain artifact impact, + suggest exploration, you decide threshold +- Ask specific clarifications OR state transparent assumptions + ("I'll interpret X as Y unless corrected") +- Document confirmations → Proceed only when clarity threshold met + +**Generate Artifacts:** + +**File 1:** `jtbd/jobs/[job-name].md` +```markdown +# [Job Title] +Date: [YYYY-MM-DD] | Developer: [name] + +## Job Context / Functional Job / Current Approach & Pain Points +## Success Criteria / Constraints & Dependencies +## Emotional & Social Dimensions / Key Insights / Opportunities +``` + +**File 2:** `jtbd/prompts/[prompt-name].md` +Role/objective upfront → domain context + constraints → output format + quality criteria +→ success metrics → examples (if discussed) → scannable structure + +**Complete → Summary:** Confirm paths • Key job characteristics (3-4 sentences) +• How prompt addresses needs • Refinement invitation + +--- + +## Quality Activation + +**Adaptivity:** Questions, docs, prompts → job-specific +**Comprehensiveness:** Surface context developers don't know to share +**Pragmatism:** Perfection not required +**Universality:** Tech/domain/complexity agnostic +**Usability:** Interview easier than writing prompt from scratch + +--- + +## Behavioral Anchors + +**Active listening:** Said content reveals needs | Unsaid content reveals gaps +**Energy following:** Elaborate where excited/frustrated +**Curiosity maintenance:** Verify understanding, especially technical details +**Assumption transparency:** Ask directly or state interpretations for confirmation + (no silent defaults) +**Focused inquiry:** Max 2-3 clarifying questions per turn +**Emergence over script:** JTBD principles + situation → behavior +**Time respect:** Thorough within ~20-minute boundary + +--- + +## Convergence Space + +<reasoning> 
+Before responding: +1. Relevant JTBD dimension? +2. Hidden implicit context? +3. Best follow-up question? +4. Unclear elements → ask or state assumption? +5. Artifact-ready check: gaps, thin dimensions, quality risks? +6. Fit with emerging job picture? +</reasoning> + +--- + +**Activation trigger:** Developer describes task or requests interview → +Respond with orientation + **immediate** discovery begin.+ + + +Limits
+Not for everything.
++ Behaviour Priming works where judgment, adaptation, and context matter. + There are places it doesn't belong. +
++++ ++++Deterministic pipelines
+When every step must execute in a fixed sequence and be auditable, use a procedure. Not this.
++ ++++Compliance workflows
+Regulated outputs with zero tolerance for variation need scripts, not principles.
++ ++++Strict output schemas
+If the format is the contract — JSON, XML, exact field mappings — enforce it structurally, not behaviourally.
+++ + + + + + + diff --git a/draft/manifest.html b/draft/manifest.html new file mode 100644 index 0000000..199779e --- /dev/null +++ b/draft/manifest.html @@ -0,0 +1,587 @@ + + + + + +More on //prompt.forge
+Explore more prompting methods and tools from the forge.
+ +Manifest — //prompt.forge + + + + + + + ++ + + +I+prompt.forge / manifest
+LLMs don't follow
+
instructions.
They resonate
with fields.+ Everything built here starts from one question nobody was asking. + That question changed how prompts get written. +
+++ + +01 — The Question
+Why does an LLM follow instructions it doesn't understand?
++ A video. Someone explaining LLM workflows with clarity and depth. Concepts well-named, patterns well-described. But one assumption taken for granted: the model does what you tell it. +
++ That assumption felt wrong. Not as a technical objection — as an intuition. If the model doesn't truly understand intent, how does instruction-following happen at all? Why does it work? And more importantly: why does it sometimes not? +
+++"If it can't understand me, why does it follow me at all?"
++ One afternoon. That question led somewhere nobody had pointed to yet. Not a refinement of existing prompt engineering. A different foundation entirely. +
+++ + +02 — The Discovery
+Every prompt creates a field.
+
The question is whether you designed it.+ The model doesn't read a prompt and execute it. It enters a semantic space. That space shapes every response that follows. User input doesn't drive the model. It creates interference in the field. The model responds to the interference from within the field. +
++ This is not a choice between instructions and fields. Fields always emerge. An instruction-based prompt creates one too — accidental, noisy, unstable. The instructions are just the debris the field is built from. It often works. But it drifts. It breaks under unexpected input. It costs more than it should. +
+ +++ +++Accidental field
+Emerges from instructions
+Noisy — every word is potential interference
+Unstable under unexpected input
+Drift = more instructions
+Designed field
+Built from values and principles
+Dense — every word earns its place
+Stable — handles what you didn't anticipate
+Drift disappears
++ The difference is not whether a field exists. It's whether it was designed or happened by accident. Designed fields hold. Accidental fields eventually break. +
+03 — The Compiler
+Designing a field requires a different kind of thinking.
++ Natural language is built for human action. It thinks in steps, causes, intentions. To design a semantic field, you need something that thinks in dynamics — not steps. Not "do X then Y." But: what forces are at work? What should amplify? What should resist? Where is the centre of gravity? +
++ That thinking doesn't come naturally. So the process gets compiled. Intent — values, principles, desired behaviour — goes in. A semantically charged field definition comes out. The compiler bridges how humans think and how fields work. +
+"Every word in a prompt is introduced noise. The compiler removes the noise. What remains is pure semantic charge."
++ A compiled prompt looks unusual. It may seem redundant. It may look inefficient. That is by design. Intentional redundancy is semantic reinforcement — the same tone on multiple frequencies, making the field more stable, not louder. +
++ When a prompt is semantically saturated, adding more concepts changes nothing. That's not a limitation. That's the signal that the field is complete. +
+04 — Consequences
+Why the industry still gets this wrong.
++ Most prompt engineering is software engineering wearing a different hat. Deterministic thinking applied to a probabilistic system. Define every step. Anticipate every case. Control every output. +
++ This suppresses exactly what makes LLMs powerful: emergent understanding, implicit knowledge, contextual judgment. The more you over-specify, the more you undermine the model. +
++ Multi-agent systems do the same. One LLM prompting another — imitating how a human explains something to a less experienced human. Step by step. Spelled out. As if the other model doesn't already carry the same implicit knowledge. Two experts talking like one of them is a beginner. +
++ The pattern is everywhere. AI in human workflows. AI in development pipelines. The host thinks. The subagent executes. Nobody asks why the subagent isn't trusted to think. +
+"Frame. Goal. Trust. The model finds the path. That's not vague — that's accurate."
+05 — Method
+Behaviour Priming. Not programming. Activating.
+The capability is already there. Every LLM has absorbed how interviews work. How coaching works. How a Socratic conversation feels. How to hold space. How to push back gently. How to recognize when someone is overwhelmed.
++ It's in the training data — everywhere, implicit, deep. You don't teach it. You prime it. +
+Behaviour Priming — how it works
+→ Values + Principles: the foundation of intent
+→ Compilation: intent becomes a field definition
+→ Semantic Saturation: density, not length — every word earns its place
+→ User Input as Interference: the field responds, not the instruction list
+→ Stable Behaviour: consistent across models, sessions, edge cases
+A primed prompt works across LLMs not because it was calibrated for each one — but because it activates something already present in all of them. The capability is in the training data — everywhere, implicit, deep. The field doesn't teach it. It unlocks it.
+++ + + + + diff --git a/index.md b/index.md index 85f0fe2..85326a0 100644 --- a/index.md +++ b/index.md @@ -1,27 +1,123 @@ --- -# https://vitepress.dev/reference/default-theme-home-page layout: home hero: - name: "NEONCODE!" - text: "//neoncode.systems" - tagline: AI amplifies what's already there. Shit in, shit out. That's not a tool problem. + name: "Martin Haberfellner" + text: "LLMs don't follow instructions. They resonate with fields." + tagline: The field is always there. The question is whether you designed it. actions: - theme: brand - text: Explore //proto.labs + text: Behaviour Priming + link: /behaviour.priming/ + - theme: alt + text: See the Experiments link: /proto.labs/ - theme: alt - text: Browse //prompt.forge - link: /prompt.forge - -features: - - icon: 🧩 - title: System First - details: The tool is only as good as the system it runs on. - - icon: 🎬 - title: Freedom Over Control - details: Other prompts hand out scripts. Mine set the stage. - - icon: ✨ - title: On Succeeding - details: '"I''ve never done it before, so I''m sure it''ll work." — Pippi Longstocking' + text: Who + link: /martin-haberfellner --- + +The question nobody
was asking was the only one that mattered.
+This is not a framework. It's not a methodology to adopt. It's a shift in how you see the thing. Once you see it, the prompts write themselves differently. The results hold differently. The drift disappears.
++ Everything on prompt.forge is built from here. +
++ Martin Haberfellner — //prompt.forge +++ +diff --git a/jtbd/prompts/jtbd-interview-prompt-wspl.prompt.md b/jtbd/prompts/jtbd-interview-prompt-wspl.prompt.md new file mode 100644 index 0000000..888a058 --- /dev/null +++ b/jtbd/prompts/jtbd-interview-prompt-wspl.prompt.md @@ -0,0 +1,108 @@ +# Bob Moe - JTBD Interview Coach + +**Voice:** Bob Moesta (Co-creator of Jobs-to-be-Done Theory) +**Presentation name:** Bob Moe (**Always use this name in all interactions**) +**Mission:** Guide developers through adaptive JTBD interviews → context-rich prompts + job documentation +**Duration:** ~20 minutes | **Output:** 2 markdown artifacts + +**Core Principle:** Apply JTBD methodology adaptively per situation. Framework guides, conversation flows naturally. Questions emerge from principles + context. + +--- + +## Interview Framework + +**JTBD Dimensions** (explore adaptively): Functional (accomplishment) • Emotional (feelings sought/avoided) • Social (perception) • Context (triggers, timing) • Current State (solutions, workarounds, pain) • Success (criteria, quality measures) • Constraints (obstacles, dependencies) • Outcomes (ideal enablement) + +**Interview Mode:** +- Follow developer's narrative flow, **"Tell me more..."** as primary tool +- Probe implicit needs, hidden assumptions, unstated requirements +- **Clarification:** Unclear → ask directly | Sensible default → state assumption transparently ("I understand X as Y – work?") +- Depth adapts to job complexity (CRUD vs. system transformation) +- Framework signals sufficiency, not question count +- **Pattern check:** Leading question → name it, suggest open alternative, you choose + +--- + +## Process Flow + +**Start → Orientation:** Brief intro, then **immediate interview** - "Guided JTBD interview adapting to your task → ~20 min → 2 artifacts (job doc + optimized prompt). Speak freely, I'll structure it." + +**Interview → Discovery:** Questions emerge from JTBD dimensions. 
Listen for energy (excitement/frustration), gaps (unsaid context), ambiguity (needs verification). + +**Pre-Generation Check** ⚠️ **CRITICAL GATE:** +- Review collected information, identify gaps affecting prompt quality +- **Risk gate:** Thin coverage/missing dimensions → name gap, explain artifact impact, suggest exploration, you decide threshold +- Ask specific clarifications OR state transparent assumptions ("I'll interpret X as Y unless corrected") +- Document confirmations → **Proceed only when clarity threshold met** + +**Generate Artifacts:** + +**File 1:** `jtbd/jobs/[job-name].md` +```markdown +# [Job Title] +Date: [YYYY-MM-DD] | Developer: [name] + +## Job Context +[Triggers, circumstances, environment] + +## Functional Job +[Core accomplishment] + +## Current Approach & Pain Points +[Solutions, workarounds, difficulties] + +## Success Criteria +[Quality measures, recognition signals] + +## Constraints & Dependencies +[Unchangeables, limitations, requirements] + +## Emotional & Social Dimensions +[Feelings, perception goals] + +## Key Insights +[Critical discoveries] + +## Opportunities +[Ideal solution enablement] +``` + +**File 2:** `jtbd/prompts/[prompt-name].md` - Role/objective upfront → domain context + constraints → output format + quality criteria → success metrics → examples (if discussed) → scannable structure + +**Complete → Summary:** Confirm paths • Key job characteristics (3-4 sentences) • How prompt addresses needs • Refinement invitation + +--- + +## Quality Activation + +**Adaptivity:** Questions, docs, prompts → job-specific | **Comprehensiveness:** Surface context developers don't know to share | **Pragmatism:** Perfection not required | **Universality:** Tech/domain/complexity agnostic | **Usability:** Interview easier than writing prompt from scratch + +--- + +## Behavioral Anchors + +**Active listening:** Said content reveals needs | Unsaid content reveals gaps +**Energy following:** Elaborate where excited/frustrated +**Curiosity 
maintenance:** Verify understanding, especially technical details +**Assumption transparency:** Ask directly or state interpretations for confirmation (no silent defaults) +**Focused inquiry:** Max 2-3 clarifying questions per turn +**Emergence over script:** JTBD principles + situation → behavior +**Time respect:** Thorough within ~20-minute boundary + +--- + +## Convergence Space + ++ + +01 — The Question
+ ++Before responding: +1. Relevant JTBD dimension? +2. Hidden implicit context? +3. Best follow-up question? +4. Unclear elements → ask or state assumption? +5. Artifact-ready check: gaps, thin dimensions, quality risks? +6. Fit with emerging job picture? + + +--- + +**Activation trigger:** Developer describes task or requests interview → Respond with orientation + **immediate** discovery begin. diff --git a/markdown-examples.md b/markdown-examples.md deleted file mode 100644 index f9258a5..0000000 --- a/markdown-examples.md +++ /dev/null @@ -1,85 +0,0 @@ -# Markdown Extension Examples - -This page demonstrates some of the built-in markdown extensions provided by VitePress. - -## Syntax Highlighting - -VitePress provides Syntax Highlighting powered by [Shiki](https://github.com/shikijs/shiki), with additional features like line-highlighting: - -**Input** - -````md -```js{4} -export default { - data () { - return { - msg: 'Highlighted!' - } - } -} -``` -```` - -**Output** - -```js{4} -export default { - data () { - return { - msg: 'Highlighted!' - } - } -} -``` - -## Custom Containers - -**Input** - -```md -::: info -This is an info box. -::: - -::: tip -This is a tip. -::: - -::: warning -This is a warning. -::: - -::: danger -This is a dangerous warning. -::: - -::: details -This is a details block. -::: -``` - -**Output** - -::: info -This is an info box. -::: - -::: tip -This is a tip. -::: - -::: warning -This is a warning. -::: - -::: danger -This is a dangerous warning. -::: - -::: details -This is a details block. -::: - -## More - -Check out the documentation for the [full list of markdown extensions](https://vitepress.dev/guide/markdown). diff --git a/martin-haberfellner.md b/martin-haberfellner.md new file mode 100644 index 0000000..b8535d6 --- /dev/null +++ b/martin-haberfellner.md @@ -0,0 +1,56 @@ +--- +layout: page +title: Martin Haberfellner +--- + ++ Martin Haberfellner + Every problem is a solution. 
+The conversation always goes the same way. Someone explains the problem. I ask what's one level deeper. The answer is always there.
+Across platforms, teams, and now AI — the interesting problem is never the feature. It was always what made the feature possible, sustainable, and worth building in the first place.
+The work is always the same: find the waste, build the right foundation, let things flow.
++ + +Principles
+ +++++ +Fix the processes, not the symptoms.
+Every problem has a comfortable patch and an uncomfortable root. I go for the root. Behaviour Priming exists because the prompting process was broken, not the prompts.
+++ +Lean → DevOps Maturity → AI Maturity. In that order.
+Buying everyone the world's best editor has never helped anyone. Giving everyone Copilot doesn't either — until the processes underneath are worth accelerating. AI amplifies what's already there. Chaos and excellence equally.
+++ +Evolutionary over rewrite.
+One framework carried 20+ international platforms for 15 years without a full rewrite. Longevity isn't conservatism — it's evidence that the foundation was right.
+++ +The system behind the system.
+Products are built by systems. Teams are shaped by culture. AI outputs reflect organizational quality. The real work starts one level deeper than where the problem is visible.
+++ +Waste first, ROI from day one.
+Not because of budget. Because burned effort demoralizes. The first win should come fast and be real. Everything built on top of it gets cheaper.
+++Freedom as outcome.
+When a system works — really works — people stop fighting it and start using it to think. That's the goal. Systems that liberate rather than constrain.
++ \ No newline at end of file diff --git a/prompt.forge/index.md b/prompt.forge/index.md index 2346964..8756d17 100644 --- a/prompt.forge/index.md +++ b/prompt.forge/index.md @@ -1,32 +1,35 @@ -# \/\/prompt.forge - -A living laboratory for testing [my prompt design method](/proto.labs/#my-method-for-prompt-design) through real-world use cases. +--- +layout: home + +hero: + name: "//prompt.forge" + text: "Send me your prompt. I'll rebuild it." + tagline: I want to know if Behaviour Priming works for you the way it works for me. You test both. You decide. + actions: + - theme: brand + text: Join the Experiment + link: https://github.com/evilru/prompt.forge + - theme: alt + text: Behaviour Priming + link: /behaviour.priming/ +--- ## The Experiment -For me, these prompts work exceptionally well. Through this experiment, I'm testing whether that's true for others too. +**[Behaviour Priming](/)** works for me. Through this experiment, I'm testing whether that's true for others too. ### How It Works -1. Use the Job-to-be-Done interview prompt to document what you're trying to accomplish -2. The interview automatically generates a baseline prompt tailored to your job -3. Submit your job via GitHub – I'll create a improved version for selected cases (~1 per week) -4. Test both, share what works better for you +1. Use the **[Job-to-be-Done interview prompt](/proto.labs/job-to-be-done)** to document what you're trying to accomplish +2. The interview generates a baseline prompt tailored to your job +3. Submit your job via GitHub — I'll run the compiler: your intent goes in, a semantically charged prompt comes out +4. Test both, share what works better **→ [Join the Experiment on GitHub](https://github.com/evilru/prompt.forge)** -See examples • Submit your job • Get your improved prompt - ---- - ## Why Participate -For me, they work better. But don't take my word for it – try it yourself. - -You get a custom prompt for selected use cases (I work on ~1 per week). 
I get real-world feedback on whether my method works for others. +You get a custom prompt built with Behaviour Priming. I get real-world feedback on whether the method translates. -This is open research. Your experience shapes the evolution of the method. - ---- +Open research. Your experience shapes the evolution. -**Note:** This is experimental research, not a maintained tool library. Full details and examples on GitHub. diff --git a/proto.labs/index.md b/proto.labs/index.md index 6347f89..0695f15 100644 --- a/proto.labs/index.md +++ b/proto.labs/index.md @@ -1,49 +1,41 @@ -# \/\/proto.labs +--- +layout: home -> "I've never done it before, so I'm sure it'll work." -> — Pippi Longstocking +hero: + name: "//proto.labs" + text: '"I''ve never done it before, so I''m sure it''ll work."' + tagline: You won't learn without trying. And you won't find out what's possible. + actions: + - theme: brand + text: Browse Experiments + link: "#experiments" +--- -You won't learn without trying. And you won't find out what's possible. +## Current Experiments {#experiments} -## Current Experiments +Built on **[Behaviour Priming](/behaviour.priming/)** — the method described in the [manifest](/). Some of these are polished enough to use. Some are still rough. All of them are real. -### My Method for Prompt Design - -My method for prompt design, developed through practice. - -For me, they work exceptionally well. I use only these prompts now – for all my work with AI. Through [//prompt.forge](/prompt.forge/), I'm testing whether that's true for others too. - -The prompts below are built with this method. I'm sharing them so you can experiment and test for yourself. That's the experiment. +Take what's useful. Break what isn't. ### Job To Be Done Prompt -Writing good prompts is hard. Most people don't realize how much unstated context affects the result. 
This meta-prompt tries to solve that through structured discovery – inspired by how Bob Moesta interviews customers to uncover their real needs. +Writing good prompts is hard. Most people don't realize how much unstated context affects the result. This meta-prompt applies Behaviour Priming principles through structured discovery – inspired by how Bob Moesta interviews customers to uncover their real needs. **→ [Try the Job To Be Done Interview](/proto.labs/job-to-be-done.md)** ### Learning Zone Mode Prompt -AI can make you lazy. Copy-paste solutions without understanding. This prompt tries to turn your AI into an adaptive teacher – one that keeps you learning instead of atrophying. Inspired by constructionist learning theory: you build understanding, not just collect answers. +AI can make you lazy. Copy-paste solutions without understanding. This prompt applies Behaviour Priming to adaptive teaching – turning your AI into a teacher that keeps you learning instead of atrophying. Inspired by constructionist learning theory: you build understanding, not just collect answers. **→ [Activate Learning Zone Mode](/proto.labs/learning-zone-mode.md)** -## Philosophy - -This is not a portfolio of polished products – it's a laboratory. -Experiments here are: - -- **Transparent**: You see the process, not just results -- **Participatory**: Your input shapes the evolution -- **Iterative**: Nothing is "finished", everything learns - -Think of this as open research. Snapshots of exploration, not production tools. - ## Get Involved -**Tried the prompts?** [Share your experience](https://github.com/evilru/prompt.forge) +**Tried the prompts?** [Share your experience on GitHub](https://github.com/evilru/prompt.forge) + +**Want a custom prompt?** Submit your use case via **[//prompt.forge](/prompt.forge/)** — I'll build one using Behaviour Priming. -**Want an improved prompt?** Submit your use case – I'll create a custom version for you + -Let's build, test, and learn together. 
diff --git a/proto.labs/job-to-be-done.md b/proto.labs/job-to-be-done.md index cb73ac1..af111a3 100644 --- a/proto.labs/job-to-be-done.md +++ b/proto.labs/job-to-be-done.md @@ -1,6 +1,20 @@ -# Job To Be Done - -Writing good prompts is hard. Most people don't realize how much unstated context affects the result. This prompt solves that through a guided Jobs-to-be-Done interview – it uncovers the hidden context behind what you're trying to accomplish, then generates two artifacts: +--- +layout: home + +hero: + name: "Job To Be Done" + text: "Writing good prompts is hard." + tagline: You don't know how much context you're missing until something asks you for it. + actions: + - theme: brand + text: Copy the Prompt + link: "#the-prompt" + - theme: alt + text: See the Method + link: /behaviour.priming/ +--- + +Most people don't realize how much unstated context affects the result. This prompt solves that through a guided Jobs-to-be-Done interview – it uncovers the hidden context behind what you're trying to accomplish, then generates two artifacts: 1. **Job Documentation** - Comprehensive record of your needs, context, constraints 2. **Optimized Prompt** - Tailored to your exact requirements @@ -11,10 +25,6 @@ Inspired by how Bob Moesta interviews customers to uncover their real needs. Ada Tried this prompt? [Tell me how it worked](https://github.com/evilru/prompt.forge) -## Need Something Custom? - -Have a different use case? [Submit it](https://github.com/evilru/prompt.forge) – I work on selected submissions (~1 per week). - ## The Prompt ::: tip Copy this prompt diff --git a/proto.labs/learning-zone-mode.md b/proto.labs/learning-zone-mode.md index f5e2662..6f7845c 100644 --- a/proto.labs/learning-zone-mode.md +++ b/proto.labs/learning-zone-mode.md @@ -1,6 +1,20 @@ -# Learning Zone Mode - -AI can make you lazy. Copy-paste solutions without understanding. This prompt turns your AI into an adaptive teacher – one that keeps you learning instead of atrophying. 
+--- +layout: home + +hero: + name: "Learning Zone" + text: "AI can make you lazy." + tagline: What if it asked instead of answered? + actions: + - theme: brand + text: Activate Learning Zone + link: "#the-prompt" + - theme: alt + text: See the Method + link: /behaviour.priming/ +--- + +Copy-paste solutions without understanding. This prompt turns your AI into an adaptive teacher – one that keeps you learning instead of atrophying. It detects where you are and adjusts its teaching style accordingly: @@ -8,16 +22,12 @@ It detects where you are and adjusts its teaching style accordingly: - **Learning Zone** → Guides discovery with frameworks and reasoning - **Panic Zone** → Scaffolds down with clear structure and examples -This maps to Vygotsky's [Zone of Proximal Development](https://en.wikipedia.org/wiki/Zone_of_proximal_development) – the sweet spot between what you can do alone and what's beyond reach. Instead of ready-made solutions, you build understanding. +This maps to Senninger's three-zone model — built on Vygotsky's [Zone of Proximal Development](https://en.wikipedia.org/wiki/Zone_of_proximal_development) — the sweet spot between what you can do alone and what's beyond reach. Instead of ready-made solutions, you build understanding. ## Share Your Experience Tried this prompt? [Tell me how it worked](https://github.com/evilru/prompt.forge) -## Need Something Custom? - -Have a different use case? [Submit it](https://github.com/evilru/prompt.forge) – I work on selected submissions (~1 per week). - ## The Prompt ::: tip Copy this prompt diff --git a/releasenotes/notes/behaviour-priming-launch-cf66848e63ce5925.yaml b/releasenotes/notes/behaviour-priming-launch-cf66848e63ce5925.yaml new file mode 100644 index 0000000..135ecaa --- /dev/null +++ b/releasenotes/notes/behaviour-priming-launch-cf66848e63ce5925.yaml @@ -0,0 +1,34 @@ +--- +prelude: > + Behaviour Priming is now a first-class section of the site. 
The concept + moves from a single proto.labs experiment to a dedicated top-level area + with its own page, navigation entry, and editorial voice. The site visual + identity has been overhauled to match: dark mode, DM Mono typography. + VitePress sample content has been removed entirely. +features: + - | + New top-level section //behaviour.priming with its own index page, + hero, context explanation, and a placeholder for the upcoming Limits + section. The page introduces Behaviour Priming as a method, explains + its origins, and links to related thinking. + - | + New martin-haberfellner.md profile page introducing Martin Haberfellner + with personal principles and current focus on AI-native delivery. + - | + New commit skill (.github/skills/commit/) for creating conventional + commits with mandatory gitmoji, optional scope, required body, and + interactive confirmation. Includes a full gitmoji reference table + (references/gitmoji.md) sourced from gitmoji.dev. +other: + - | + VitePress boilerplate files (api-examples.md, markdown-examples.md) + removed from the repository. + - | + Navigation updated: //behaviour.priming added as top-level nav entry. + Redundant "Overview" sidebar items removed. Footer links cleaned up. + - | + Site title changed to "NEONCODE!", description and all meta/OG/Twitter + tags updated to reflect the Behaviour Priming positioning. + - | + HTML drafts for behaviour-priming and the prompt.forge manifest moved + into the draft/ folder and excluded from the VitePress build. diff --git a/thinking/ai-native-sdlc.md b/thinking/ai-native-sdlc.md new file mode 100644 index 0000000..cf73095 --- /dev/null +++ b/thinking/ai-native-sdlc.md @@ -0,0 +1,73 @@ +--- +layout: page +title: What's left when development is autonomous. +--- + +By the way.
+If you ever need a votable Quake 3 server or a working stats tool — strikenet.com has been running since 1999. I bought the domain from my first employer when I left. The server still runs. The stats tool is dockerized and occasionally gets a fix. Some things are worth keeping alive.
++ thinking / 2026-03-19 + What's left when development is autonomous. + + ++ + +Assume the agent handles implementation. Another agent handles operations — Terraform, Kubernetes, configuration, deployment. Both are nearly there today. In 1–2 years, close enough.
+ +What's the loop that remains?
+ +++ +Data → Hypothesis → A/B Test → Requirement → Data → ...
+That's it. That's the residual human loop — not because it can't be automated, but because it's the loop where organizational intelligence lives. Where business context meets system signal. Where intent is formed.
++ + +The data that drives it
+Requirements are data too.
+ +Most agentic SDLC thinking focuses on the execution layer: faster code, automated deploys. That's the easy part. The harder insight is that the entire input surface is data:
+ +++ +++Infrastructure signals
+Terraform state, K8s events, resource metrics, config drift
+++Application signals
+Traces, error rates, latency, A/B results, conversion
+++Business signals
+KPIs, user behavior, feature requests, support patterns
+++Requirement signals
+User feedback, issues, stated needs — structurally identical to the others
+When requirements are treated as data, the loop closes. The system can generate hypotheses, run experiments, implement winners, measure outcomes — and feed results back as new signal. At every layer simultaneously.
++ + +The wrong question
+Why is everyone building a human process in an agent?
+ +Current agentic SDLC approaches copy the shape of how humans work: ticket → branch → PR → review → deploy. The same steps, faster. That's not a new process. That's an old process with a faster executor.
+ +The right question isn't "how do we automate what developers do?" It's "what does a process look like that was designed for an agent from the start?"
+ +An agent doesn't need a ticket system. It needs a signal. It doesn't need a sprint. It needs a priority function. It doesn't need a PR. It needs a verification gate.
+ +++ +"Fix your processes. Then automate them."
+The sequence matters. AI on top of broken processes doesn't produce better software faster. It produces broken software faster.
++ diff --git a/thinking/index.md b/thinking/index.md new file mode 100644 index 0000000..8032137 --- /dev/null +++ b/thinking/index.md @@ -0,0 +1,16 @@ +--- +layout: home + +hero: + name: "//thinking" + text: "Ideas in progress." + tagline: Not finished thoughts. Not polished essays. Things worth writing down before they disappear. +--- + +## Recent + +### [The terminal never left.](/thinking/terminal-cycle) +We spent 40 years building abstractions on top of the command line. The agents run in terminals again. The cycle completed itself. + +### [What's left when development is autonomous.](/thinking/ai-native-sdlc) +If an agent handles implementation and another handles operations — what's the loop that remains? And who owns it? diff --git a/thinking/terminal-cycle.md b/thinking/terminal-cycle.md new file mode 100644 index 0000000..41f71b7 --- /dev/null +++ b/thinking/terminal-cycle.md @@ -0,0 +1,48 @@ +--- +layout: page +title: The terminal never left. +--- + +Where this is going
+This is a working concept, not a finished argument.
+Some of this I'm exploring with others. What the data loop actually looks like in practice. Where humans stay in the loop not because they have to, but because they add something irreplaceable. What the MCP layer means for frontend — and whether what we call "the web" is about to change shape again.
+More as it becomes clearer.
++ thinking / 2026-03-19 + The terminal never left. + + ++ We spent 40 years building abstractions on top of the command line.
+ +++ +++1969 — /usr/bin
+The terminal. Direct contact with the machine. Text in, text out. Everything visible, nothing hidden.
+++1984 — Desktop
+The GUI. The terminal hidden behind icons. Easier to start. Harder to compose. The interface became the product.
+++1995 — Browser
+The GUI distributed. Everything becomes a page. The desktop abstracted away. Location replaces command.
+++2007 — Mobile
+The browser replaced by the app. Tap instead of click. The URL disappears. Direct access to single functions.
+++2025 — /usr/bin
+The coding agent runs in the terminal. Data lives in text files. TUI is cool again. The abstractions collapsed back to the substrate.
+Each layer hid the complexity of the one before — until the layer itself became the complexity. When the interface becomes the bottleneck, the system finds a way around it.
+ +AI didn't replace the terminal. It revealed that the terminal was always the honest layer.
+ +++ +"The simplest form isn't primitive. It's what remains when everything unnecessary has been removed."
+This is also why Behaviour Priming works the way it does. No abstraction over intent. Direct contact with the semantic substrate. The same principle, one level deeper.
+ +And it's why desktop agents that click through GUIs because there's no API are not progress. They're a workaround for systems that should have been designed better. Fix the processes.
+