
mood-protocol

A visual-to-semantic bridge for AI-assisted design.

Designers think in images. AI agents think in text. mood.md is a protocol that connects the two.

Drop your moodboard images into a folder — screenshots of annotated Figma canvases, colour palettes, typography specimens, spatial references, things you love, things you hate — and generate a structured mood.md that any AI agent can use for design direction.

Why this exists

We can write SKILL.md files to tell agents how to build things. But we have no equivalent for telling them what it should feel like. Design intent is fundamentally visual, and right now that gets lost in translation.

mood.md is the missing counterpart:

File       Purpose
SKILL.md   "Here's how to do things" (procedural)
mood.md    "Here's what it should feel like" (perceptual)

Together, they give an agent both craft knowledge and aesthetic direction.

Quick start (no setup required)

The simplest way to generate a mood.md — no code, no API keys, no terminal.

You'll use two files: prompt.md (the instructions you paste into an AI) and mood.md (the file the AI generates for you to save in your project).

1. Screenshot your moodboard

Export your annotated Figma canvas as a PNG, or screenshot your Pinterest/Miro board. Sticky notes, labels, and spatial groupings are all valuable — the AI reads everything visible.

2. Upload and prompt

Open the AI you already use — claude.ai, gemini.google.com, chatgpt.com — upload your images, then copy the contents of prompt.md and paste them into the chat.

3. Save the output

The AI will return structured design direction. Copy its response into a file called mood.md in your project root.

That's it.


Automated version (for repeat use)

If you want to run this from the command line without copy-pasting, there's a Python script that does it in one command.

Install (pick your model)

# For Claude
python -m pip install -r requirements-claude.txt
export ANTHROPIC_API_KEY=your-key-here

# For Gemini
python -m pip install -r requirements-gemini.txt
export GEMINI_API_KEY=your-key-here

Each backend has its own pinned requirements file, so you only install what you'll actually use. python -m pip is slightly more robust than bare pip on systems where pip resolves to a different Python than python. If you'd rather skip the requirements files, bare pip install anthropic or pip install google-genai also works.
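
This split works because the script only needs to import the SDK you actually selected. A minimal sketch of the idea, assuming lazy per-backend imports (the real generate_mood.py may structure this differently):

def load_backend(model: str):
    # Import only the SDK for the chosen backend, so a single
    # requirements file is enough for a working install.
    if model == "gemini":
        from google import genai  # from requirements-gemini.txt
        return genai.Client()     # reads GEMINI_API_KEY from the environment
    import anthropic              # from requirements-claude.txt
    return anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment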

Create a mood folder and generate

my-project/
  mood/                   ← input: drop your reference images here
    your-images-here.png
    notes.md              ← optional designer annotations
  mood.md                 ← output: generated by the script

Drop your reference images into the mood/ folder, then:

python generate_mood.py --name "Editorial Warmth"

# Or use Gemini instead
python generate_mood.py --model gemini --name "Editorial Warmth"

Anti-references (things you don't want) are signalled by prefixing filenames with not- or anti-:

mood/
  warm-editorial-layout.png
  colour-palette.png
  not-corporate-dashboard.png    ← anti-reference
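
The convention is just a filename-prefix check, so it costs nothing to adopt. A sketch of how a script might sort references from anti-references (the actual implementation in generate_mood.py may differ):

from pathlib import Path

def is_anti_reference(image: Path) -> bool:
    # An image counts as an anti-reference if its name starts with not- or anti-
    return image.name.lower().startswith(("not-", "anti-"))

images = sorted(Path("mood").glob("*.png"))
references = [p for p in images if not is_anti_reference(p)]
anti_references = [p for p in images if is_anti_reference(p)]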

Options

python generate_mood.py                            # scans ./mood/ → ./mood.md (uses Claude)
python generate_mood.py --model gemini             # use Gemini instead
python generate_mood.py --input ./brand-refs       # custom input folder
python generate_mood.py --output ./src/design      # custom output location
python generate_mood.py --name "Dark Mode Variant"  # name your mood
python generate_mood.py --claude-model claude-sonnet-4-5   # pin a specific Claude snapshot
python generate_mood.py --gemini-model gemini-2.5-pro      # pin a specific Gemini model
python generate_mood.py --max-tokens 16384                 # raise the output cap for very dense moods
python generate_mood.py --dry-run                          # preview the prompt without calling the API

The defaults (claude-sonnet-4-20250514, gemini-2.5-flash) are known-good snapshots at time of writing. If a snapshot is retired, pass --claude-model / --gemini-model with any current model ID — no code change needed.

The default --max-tokens is 8192, which comfortably fits a full mood with all optional sections. If you're generating from a very dense moodboard and the output looks truncated, bump it higher.

--dry-run is useful the first time you try the tool: it prints the exact system prompt, user prompt, and image/anti-reference list the script would send, without making an API call or spending any tokens. Run it once to see what the model will see.

What mood.md looks like

A generated mood.md contains structured, actionable design direction:

  • Essence — 2-3 sentences capturing the overall feeling
  • Colour — temperature, palette (with hex values), contrast, saturation
  • Typography — character, weight distribution, suggested pairings
  • Space & Layout — rhythm, grid character, scale relationships
  • Texture & Material — surface quality, material references
  • Emotional Register — how someone should feel
  • Design Principles — extracted from the moodboard
  • Anti-References — what this explicitly is NOT
  • Agent Instructions — actionable paragraph for any AI agent
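
For a sense of shape, a generated file might open like this. The headings follow the list above; every value here is invented for illustration:

# Mood: Editorial Warmth

## Essence
Warm, unhurried, print-inflected. Confident without raising its voice.

## Colour
Temperature: warm. Palette: cream #F6F1E7, ink #1E1B16, rust accent #B4542E.
Low saturation overall; high contrast reserved for text.

## Anti-References
Not a corporate dashboard: no cool greys, no dense data chrome.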

See a full example

A complete worked example lives in examples/folk-maximalism/ — two moodboard screenshots, the designer's notes, and the generated mood.md. Its README shows the exact command that produced the output, so you can regenerate it yourself and compare how different models read the same images.

Using mood.md with agents

Once generated, mood.md works with any AI assistant or coding agent. Place it in your project root alongside your code, and reference it in prompts:

With Claude Code:

Read mood.md and apply its direction to the landing page components.

With Cursor / Copilot / any agent:

Before styling this component, read mood.md for the project's visual direction.
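
Outside chat-style agents, you can also feed the file to a model yourself. A minimal sketch using the Anthropic SDK, where the model ID and prompts are placeholders to adapt:

import pathlib
import anthropic

mood = pathlib.Path("mood.md").read_text()
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    # The mood file becomes standing design direction for every reply
    system=f"Apply this design direction to everything you produce:\n\n{mood}",
    messages=[{"role": "user", "content": "Style the landing page hero section."}],
)
print(response.content[0].text)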

Multiple moods per project:

my-project/
  moods/
    brand/
      mood/             ← input: drop your reference images here
        ...images...
      mood.md           ← output: generated by the script
    dark-mode/
      mood/             ← input
        ...images...
      mood.md           ← output
    onboarding/
      mood/             ← input
        ...images...
      mood.md           ← output

Inside each subfolder the mood/ directory holds the images you feed in, and the sibling mood.md is what the script writes out. Generate any one of them with python generate_mood.py --input moods/brand/mood --output moods/brand --name Brand.
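
If you regenerate several moods regularly, a small loop saves the typing. A sketch assuming the folder names shown above:

import subprocess

for name in ["brand", "dark-mode", "onboarding"]:
    subprocess.run(
        ["python", "generate_mood.py",
         "--input", f"moods/{name}/mood",
         "--output", f"moods/{name}",
         "--name", name],
        check=True,  # stop the loop if any generation fails
    )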

Works with any design tool

This protocol is tool-agnostic. Your moodboard images can come from:

  • Figma — export a frame as PNG
  • Stitch (Google's AI UI design tool) — screenshot or export
  • Miro / FigJam — screenshot your board
  • Pinterest — save images to a folder
  • Are.na — download your channel
  • Your phone — photograph a physical moodboard
  • Anywhere — if it's an image, it works

Works with any AI model

The output is just markdown. Any model that can read text can use it — Claude, Gemini, GPT, Llama, whatever you prefer.

The generator script supports Claude and Gemini out of the box. Adding another vision backend is straightforward — contributions welcome.
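
If you're considering that contribution, a backend needs exactly one capability: take the prompts plus a set of images and return markdown. A hypothetical shape for it (the script's actual internal interface may differ):

from pathlib import Path

class VisionBackend:
    # Hypothetical interface a new backend would implement
    def generate(self, system: str, user: str, images: list[Path]) -> str:
        # Send the prompts and image files to the model; return its markdown
        raise NotImplementedError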

Philosophy

  • Designer-native — formalises what designers already do, no new workflow to learn
  • Model-agnostic — the output works with any AI
  • Tool-agnostic — the input works from any design tool
  • Composable — multiple moods per project, each a different direction
  • Versionable — it's a text file, it lives in git
  • Open — MIT licensed, free to use, adapt, and extend

Contributing

This is an early-stage protocol. Contributions welcome:

  • Alternative vision backends — GPT-4o, local models, other vision-capable APIs
  • Framework integrations — Claude Code slash commands, Cursor rules, etc.
  • Figma plugin — one-click export from a Figma frame
  • Format extensions — motion, sound, interaction mood dimensions
  • Example mood files — share your generated moods

License

MIT


mood-protocol was created by MC Dean as part of ongoing work on Percolates — exploring design, AI, and the craft of making things well.
