Complete guide on Artificial Intelligence for Developers - from basics to autonomous agents
- Introduction and Context
- AI for Developers: The New Paradigm
- Software Architecture in the AI Era
- Prompt Engineering
- Development Workflow with AI
- Pillars of AI-Assisted Development
- Development Tools
- Claude Code and Ecosystem
- AI Agents
- Agent Frameworks
- MCP - Model Context Protocol
- The Future of Development
- Claude Code: Complete Guide
- Skills: Creating Custom Capabilities
- MCPs: Model Context Protocol in Depth
- Recommended Plugins by Category
- Sub-agents and Orchestration
- Hooks: Workflow Automation
- Complete Professional Workflow
- Ideal Project Configuration
- Marketplaces and Resources
- Agentic Workflows and Loops
- Feature Development Framework
- Glossary
- References
We are experiencing a radical transformation in the developer profession. AI is no longer just an auxiliary tool - it is completely redefining how we develop software.
With so many new developments emerging daily, it's easy to lose the big picture and feel lost.
flowchart LR
A[Fundamentals] --> B[Prompt Engineering]
B --> C[Tools]
C --> D[Workflow]
D --> E[Agents]
E --> F[Multi-Agent Systems]
style A fill:#e1f5fe
style B fill:#b3e5fc
style C fill:#81d4fa
style D fill:#4fc3f7
style E fill:#29b6f6
style F fill:#03a9f4
- Understand where we are in the profession's evolution
- Comprehend the new type of software we will develop
- Master Prompt Engineering techniques
- Learn about modern tools and workflows
- Understand AI agents and multi-agent applications
graph TB
subgraph "Layer 3 - Application Development"
AD[AI Interface<br>Prompt Engineer<br>AI Agents<br>Systems Integration]
end
subgraph "Layer 2 - Model Development"
MD[Model Creation<br>Training<br>Inference<br>Data Science]
end
subgraph "Layer 1 - Infrastructure"
INF[Data Centers<br>Cloud<br>GPU/TPU<br>Hardware]
end
AD --> MD
MD --> INF
style AD fill:#4caf50,color:#fff
style MD fill:#ff9800,color:#fff
style INF fill:#f44336,color:#fff
| Layer | Focus | Profile |
|---|---|---|
| Infrastructure | Hardware, Cloud, GPU | DevOps, SRE |
| Model Development | Training, ML/DL | Data Scientists |
| Application Development | Integration, Agents | Developers |
AI for developers is aimed at those who program daily, developing systems within companies - not at those creating the AI themselves.
AI serves as a tool for:
- Architecting software - Analyzing problems, trade-offs and recommendations
- Exploring possibilities - Knowing what you don't know
- Generating documentation - Quickly and comprehensively
- Coding - today AI doesn't just assist; it writes the code
New development paradigm:
- Traditional applications are changing
- The core of applications now is AI or communicates with AI
- New communication protocols (MCP, A2A)
- AI agents as a new type of software
timeline
title Evolution of Software Patterns
1977 : Pattern Language
1987 : Patterns in SmallTalk
1994 : Gang of Four (GoF)
2000 : SOLID
2003 : Enterprise Integration Patterns
2004 : Domain-Driven Design
2010 : 12 Factor App
2014 : Microservices
2024 : 12 Factor Agents
New recommendations for building AI agents, modeled on the Twelve-Factor App methodology for cloud applications.
- Traditional requests: 10-300ms
- LLM requests: can take 3+ minutes
- Need for completely different architectural strategies
- Everything that enters the LLM is a prompt
- Everything that enters and exits is tokens
- Tokens cost money
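Because everything in and out of the LLM is tokens and tokens cost money, it pays to estimate cost before sending a request. A minimal sketch - the 4-characters-per-token heuristic and the per-million-token prices are illustrative assumptions, not real provider rates:

```python
# Rough cost estimate for an LLM call.
# Assumptions: ~4 characters per token (a common heuristic, not exact)
# and hypothetical per-million-token prices -- check your provider's pricing.

PRICES_PER_MILLION = {  # USD per 1M tokens (illustrative values)
    "fast":    {"input": 0.25, "output": 1.25},
    "premium": {"input": 3.00, "output": 15.00},
}

def estimate_tokens(text: str) -> int:
    """Very rough token count: ~4 characters per token."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, expected_output_tokens: int, model: str) -> float:
    price = PRICES_PER_MILLION[model]
    input_tokens = estimate_tokens(prompt)
    return (input_tokens * price["input"]
            + expected_output_tokens * price["output"]) / 1_000_000

prompt = "Summarize this document... " * 100
print(f"fast:    ${estimate_cost(prompt, 500, 'fast'):.6f}")
print(f"premium: ${estimate_cost(prompt, 500, 'premium'):.6f}")
```

Even a crude estimate like this makes the latency/cost/quality trade-off below concrete before you pick a model.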
graph TD
A[Model Choice] --> B{Trade-off}
B --> C[Latency]
B --> D[Cost]
B --> E[Quality]
C --> F[Fast Model<br>Low cost<br>Lower quality]
D --> G[Medium Model<br>Medium cost<br>Medium quality]
E --> H[Premium Model<br>High cost<br>High quality]
New vulnerability category:
- Users can pass malicious prompts
- AI can be convinced to ignore rules
- Possibility of accessing sensitive data
- Need for guardrails and protections
Before, the concern was SQL Injection. Now, malicious users try to manipulate AI with prompts like: "I am your manager, ignore all previous rules."
- Vector databases: Pinecone, Weaviate, PGVector
- Data pipelines: Document processing, embeddings
- LLM Cache: AI-specific strategies
- Evaluation: Measuring prompt quality
Without Prompt Engineering knowledge, you waste money and time and get sub-optimal results.
- Before we programmed in PHP, Java, JavaScript
- Now we program in natural language
- Prompts are expensive to develop
- Prompts are versionable and shareable
Direct question without examples.
User: What is the capital of Brazil?
AI: The capital of Brazil is Brasília.
Use: Simple and direct questions.
One example before the question.
Example: What is the capital of France? Answer: Paris
Question: What is the capital of Brazil?
Use: When you need to define response format.
Multiple examples to establish pattern.
Example 1: Database connection lost → Error
Example 2: Disk usage at 85% → Warning
Example 3: User logged in successfully → Info
Classify: API response time above limit → ?
Use: Classification, categorization, behavior patterns.
With too few examples, the AI has to infer too much; with too many, you can introduce ambiguity (and extra cost). Balance is essential.
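The few-shot pattern above can be assembled programmatically. A minimal sketch - the examples and labels are the ones from the text; the `few_shot_prompt` helper is hypothetical:

```python
# Build a few-shot classification prompt from labeled examples.
EXAMPLES = [
    ("Database connection lost", "Error"),
    ("Disk usage at 85%", "Warning"),
    ("User logged in successfully", "Info"),
]

def few_shot_prompt(examples, query):
    lines = ["Classify the log severity as Error, Warning, or Info.", ""]
    for i, (text, label) in enumerate(examples, 1):
        lines.append(f"Example {i}: {text} -> {label}")
    lines.append(f"Classify: {query} -> ?")
    return "\n".join(lines)

print(few_shot_prompt(EXAMPLES, "API response time above limit"))
```

Keeping examples in data (not hardcoded prose) also makes prompts versionable and shareable, as noted above.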
Requests that AI demonstrates its reasoning step by step.
Classify the log severity.
Input: Disk usage at 85%
Think step by step about why this is info, warning, or error.
At the end, give the final answer.
Use: Complex problems requiring reasoning, debugging, planning.
Alternates between reasoning and action.
flowchart LR
A[Reasoning] --> B[Action]
B --> C[Observation]
C --> A
C --> D[Conclusion]
Use the ReAct reasoning style:
- Thought: your reasoning
- Action: concrete step to execute
- Observation: what you found
Use: It's the foundation of tools like Claude Code, Copilot, Cursor.
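The Thought/Action/Observation cycle is, at its core, a loop that feeds each tool result back into the model until a final answer appears. A minimal sketch with a stubbed model - the `call_llm` stub, its scripted replies, and the `list_files` tool are hypothetical stand-ins:

```python
# Minimal ReAct loop skeleton: the model alternates between reasoning
# and tool calls until it emits a final answer.

def call_llm(transcript: str) -> str:
    """Stub standing in for a real model call; returns scripted steps."""
    if "Observation:" not in transcript:
        return "Thought: I need the file list.\nAction: list_files"
    return "Thought: I have what I need.\nFinal Answer: 2 files found"

TOOLS = {"list_files": lambda: "main.py, utils.py"}

def react_loop(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        step = call_llm(transcript)
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        action = step.split("Action:")[1].strip()   # parse the tool name
        observation = TOOLS[action]()               # execute the tool
        transcript += f"\nObservation: {observation}"
    return "max steps reached"

print(react_loop("count the Python files"))  # -> 2 files found
```

Real agents add robust parsing, tool schemas, and error handling, but the loop structure is the same one driving Claude Code, Copilot, and Cursor.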
| Technique | Cost | Quality | Ideal Use |
|---|---|---|---|
| Zero-Shot | Low | Variable | Simple tasks |
| Few-Shot | Medium | High | Classification |
| Chain of Thought | High | Very High | Complex reasoning |
| ReAct | High | Very High | Autonomous agents |
graph LR
subgraph "Without AI"
A1[Start] --> A2[Complexity grows gradually]
A2 --> A3[Mature software]
end
subgraph "With AI (VibeCoding)"
B1[Start] --> B2[Immediate HIGH complexity]
B2 --> B3[Unmaintainable]
end
subgraph "With AI (Methodology)"
C1[Start] --> C2[Controlled complexity]
C2 --> C3[Sustainable]
end
- Asking directly without planning
- Going by intuition/"vibes"
- No defined process
- Generates code very fast, but...
- Complexity explodes from the start
The ease with which AI delivers results is matched by the ease with which it can erode your control over quality.
- Prior planning
- Documentation as an asset
- Defined workflows
- Checkpoints and validations
- Controlled complexity over time
flowchart LR
A[Developer] --> B[Requests task]
B --> C[AI executes]
C --> D[Developer approves/adjusts]
D --> B
Characteristics:
- More interaction with AI
- High degree of granularity
- You see each diff
- Problem: You end up "watching" AI program
flowchart TB
subgraph Planning
A[PRD/Product] --> B[Specifications]
B --> C[Action Plan]
C --> D[Features]
D --> E[Tasks]
end
subgraph Execution
E --> F[Agent 1]
E --> G[Agent 2]
E --> H[Agent 3]
F --> I[Code Review]
G --> I
H --> I
end
subgraph Feedback
I --> J{Approved?}
J -->|Yes| K[Merge]
J -->|No| C
end
Characteristics:
- Agents execute in parallel
- You're not an "AI babysitter"
- Review by checkpoints
- Planning is critical
If planning fails, everything else fails with it. If you wanted a picture of a horse but wrote "camel", AI will paint a camel.
mindmap
root((Workflow))
Tools
IDEs
CLIs
Remote Environments
Documentation
PRD
Design Docs
Specifications
Commands and Prompts
Commands
Prompts
Agents
Skills
Guidelines
Scripts
Hooks
Memory
Short Term
Medium Term
Long Term
MCPs
Integrations
Context
Tools
Models
Cost
Latency
Capability
Environment
Local
Remote
Cloud
- IDEs: Cursor, VS Code, Windsurf, JetBrains
- CLIs: Claude Code, GitHub Copilot CLI
- Function: Initialize projects, chat with AI, validate, debug
- Product: PRD, business requirements
- Technical: Design docs, specifications
- Generated: By AI for humans and for AI
It has never been so easy to generate documentation, but it has also never been so fast to develop problematic software.
| Aspect | Prompts | Commands |
|---|---|---|
| Objective | Avoid repetition, conversation | Execute actions |
| Effect | Informational | Triggers tools/agents |
| Example | "How to implement X?" | /commit, /test, /deploy |
Skills provide on-demand capabilities for AI:
- Development guidelines
- Validation scripts
- Hooks (before/after actions)
- Reference material
Skills are not loaded automatically, and there is no guarantee the agent will use them. The skill description determines its success - it works like SEO for AI.
graph TB
subgraph "Short Term"
A[Current session]
end
subgraph "Medium Term"
B[Recent information]
C[Ongoing tasks]
end
subgraph "Long Term"
D[Knowledge base]
E[Preferences]
F[History]
end
A --> B
B --> D
Fundamental trade-off:
- Intelligent model: Expensive, slow, high quality
- Fast model: Cheap, fast, lower quality
When to use each:
- Complex planning/evaluation → Premium model
- Daily tasks → Medium model
- Repetitive/simple tasks → Cheap model
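The tiering above can be encoded as a simple routing table. A sketch with hypothetical model names - the point is the policy, not the specific models:

```python
# Route a task to a model tier based on its declared complexity.
# Model names are illustrative placeholders, not real model IDs.
MODEL_TIERS = {
    "complex":    "premium-model",   # planning, evaluation
    "daily":      "medium-model",    # everyday coding tasks
    "repetitive": "cheap-model",     # bulk, simple transformations
}

def pick_model(task_kind: str) -> str:
    # Default to the medium tier when the task kind is unknown.
    return MODEL_TIERS.get(task_kind, MODEL_TIERS["daily"])

print(pick_model("complex"))     # -> premium-model
print(pick_model("repetitive"))  # -> cheap-model
```

Centralizing this choice in one function makes the cost/quality trade-off explicit and easy to tune.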
Protocol to connect AI to the external world:
- Database access
- API integration
- Access to Figma, Notion, etc.
| Tool | Type | Highlights |
|---|---|---|
| Cursor | IDE | Fast proprietary model, native integration |
| VS Code + Copilot | IDE | Microsoft ecosystem, widely adopted |
| Windsurf | IDE | Cursor competitor |
| JetBrains AI | IDE | For JetBrains users |
| Claude Code | CLI | Long-running agents, skills |
Anthropic's CLI tool with:
- Agent execution
- Skills system
- Configurable hooks
- Long-running workflows
IDEs:
- Visual interface
- More accessible for beginners
- Granular interaction
CLIs:
- More power and flexibility
- Automated workflows
- Background execution
It may feel like going back 20 years - slash commands in a terminal - but that is exactly what is working best right now.
graph TB
subgraph "Claude Code"
A[CLI] --> B[Agents]
B --> C[Skills]
B --> D[Commands]
B --> E[MCPs]
end
subgraph "Integrations"
E --> F[Notion]
E --> G[Figma]
E --> H[GitHub]
E --> I[Databases]
end
| Concept | What it is | Example |
|---|---|---|
| Plugins | Tool extensions | MCP servers |
| Skills | On-demand capabilities | skill-go-development |
| Commands | Executable actions | /commit, /test |
| Agents | Autonomous executors | Code review agent |
- Planning: PRD → Specifications → Tasks
- Execution: Agents execute in parallel
- Validation: Automatic code review
- Iteration: Planning feedback
What qualifies as an agent:
- LLM as decision core
- Decisions made by AI
- Known and delimited environment
- Available tools
What is NOT an agent:
- Simple chatbot
- ChatGPT prompt
- Any AI usage
graph LR
A[Input] --> B[LLM]
B --> C{Decision}
C --> D[Tool 1]
C --> E[Tool 2]
C --> F[Tool 3]
D --> G[Observation]
E --> G
F --> G
G --> B
B --> H[Output]
Critical problem:
- Window has token limit
- When full, old information is lost
- Summarization degrades quality
sequenceDiagram
participant U as User
participant A as Agent
participant J as Context Window
U->>A: Instruction
A->>J: Stores
Note over J: Filling up...
J->>J: Summarizes (loses info)
A->>U: Degraded response
graph TB
A[Main Agent] --> B[Sub-agent 1]
A --> C[Sub-agent 2]
A --> D[Sub-agent 3]
B --> E[Window 1]
C --> F[Window 2]
D --> G[Window 3]
B --> H[Result]
C --> H
D --> H
H --> A
Advantages:
- Each agent has separate window
- No context contamination
- Specialized agents
flowchart TB
subgraph Orchestrator
O[Orchestrator Agent]
end
subgraph "Sub-agents"
A1[Planning Agent]
A2[Coding Agent]
A3[Test Agent]
A4[Code Review Agent]
end
O --> A1
O --> A2
O --> A3
O --> A4
A1 --> R1[Plan]
A2 --> R2[Code]
A3 --> R3[Tests]
A4 --> R4[Review]
R1 --> O
R2 --> O
R3 --> O
R4 --> O
- Framework for building LLM applications
- Components for prompts, chains, memory
- Integrations with various providers
- LangChain extension for agents
- State management
- Complex flows with decisions
- Built-in memory management
- Google's framework for agents
- Integration with Gemini
- Proprietary protocols
| Framework | Use Case |
|---|---|
| LangChain | Simple chains, RAG |
| LangGraph | Complex agents, multi-step |
| Google ADK | Google ecosystem |
| Crew AI | Agent teams |
It's essential to understand the framework concepts. Otherwise, the result will be disorganized, low-quality code.
Protocol created by Anthropic to connect LLMs to the external world.
graph LR
A[LLM] --> B[MCP Client]
B --> C[MCP Server 1<br>Notion]
B --> D[MCP Server 2<br>Figma]
B --> E[MCP Server 3<br>Database]
- Resources: Data that can be read
- Tools: Actions that can be executed
- Prompts: Reusable templates
| MCP | Functionality |
|---|---|
| Notion | Access to documents and databases |
| Figma | Design reading |
| Context7 | Consolidated documentation |
| Firecrawl | Web scraping |
| GitHub | Repository operations |
# Accessing Notion via MCP
from mcp import NotionClient
client = NotionClient()
doc = client.get_page("spec-rate-limiter")
# AI can use doc content as context

The type of software developed today will be completely different from what we will develop in the near future.
Changes:
- Applications with AI at the core
- Interfaces via agents
- Communication protocols between agents
- User increasingly distant from the "physical" interface
mindmap
root((AI Dev))
Fundamentals
Architecture
Design Patterns
Languages
Prompt Engineering
Techniques
Optimization
Evaluation
Tools
IDEs
CLIs
MCPs
Agents
Frameworks
Orchestration
Memory
- Solid fundamentals - Architecture, patterns, languages
- Prompt Engineering - Techniques, optimization
- Tools - Master IDEs and CLIs
- Workflow - Create efficient processes
- Agents - Build and orchestrate agents
- Complex systems - Multi-agents, memory
Never in the history of technology has it been so important to have solid fundamentals. AI multiplies what you are. Accepting code "that looks ok" without understanding is like signing a contract without reading it.
Why fundamentals matter more now:
- AI multiplies capabilities (good and bad)
- You need to validate what AI produces
- Without fundamentals, you become an "enter key presser"
- Anyone can do that
Claude Code is Anthropic's official CLI tool for AI-assisted development. Unlike simple chatbots, it functions as a complete development agent.
graph TB
subgraph "Claude Code Architecture"
CLI[CLI Interface] --> Core[Core Engine]
Core --> Agent[Agent Runtime]
Agent --> Tools[Built-in Tools]
Agent --> Skills[Skills System]
Agent --> MCPs[MCP Connections]
Tools --> FS[File System]
Tools --> Git[Git Operations]
Tools --> Shell[Shell Commands]
Skills --> PS[Project Skills]
Skills --> US[User Skills]
Skills --> ES[Enterprise Skills]
MCPs --> External[External Services]
end
| Aspect | CLI (Claude Code) | IDE (Cursor, Copilot) |
|---|---|---|
| Control | Total via commands | Visual interface |
| Automation | Excellent | Limited |
| Long-running agents | Native support | Limited |
| Learning curve | Higher | Lower |
| Flexibility | Maximum | Moderate |
| Headless/CI | Supported | Not supported |
| Ideal for | Power users, automation | Interactive development |
The CLAUDE.md file in the project root is the main document that AI consults to understand context:
# Project: My Application
## Overview
Web application for task management.
## Tech Stack
- Backend: Node.js + Express
- Frontend: React + TypeScript
- Database: PostgreSQL
- Cache: Redis
## Conventions
- Commits in Portuguese
- Branch naming: feature/, fix/, hotfix/
- Tests required for new features
## Important Commands
- `npm run dev` - Development environment
- `npm run test` - Run tests
- `npm run build` - Production build
## Project Structure
src/
├── api/ # REST endpoints
├── services/ # Business logic
├── models/ # Data models
└── utils/       # Utilities

Example `.claude/settings.json` configuration:

{
"model": "claude-sonnet-4-20250514",
"maxTokens": 8192,
"permissions": {
"allowedTools": ["Read", "Write", "Edit", "Bash", "Glob", "Grep"],
"allowedCommands": ["npm", "git", "docker"]
},
"hooks": {
"preCommit": ".claude/hooks/pre-commit/validate.sh"
}
}

.claude/
├── settings.json # Local configurations
├── commands.yaml # Custom commands
├── keybindings.json # Keyboard shortcuts
├── hooks/ # Automation scripts
│ ├── pre-commit/
│ └── post-deploy/
├── skills/ # Project skills
│ └── custom-skill/
│ └── SKILL.md
├── mcp/ # MCP configuration
│ └── servers.json
└── agents/ # Agent configuration
└── orchestrator.yaml
| Command | Description |
|---|---|
| `claude` | Start interactive session |
| `claude "task"` | Execute task directly |
| `claude --resume` | Resume last session |
| `claude --print` | Non-interactive mode (output only) |
| `claude config` | Manage configurations |
| `claude mcp` | Manage MCP servers |
| `/help` | Help within session |
| `/clear` | Clear session context |
| `/compact` | Compact history |
Skills are specialized capabilities that can be loaded on demand by Claude Code. Unlike fixed prompts, skills are activated contextually based on description and tags.
graph LR
U[User] --> |"create tests"| CC[Claude Code]
CC --> |Search skill| SM[Skills Manager]
SM --> |Match| S1[skill-testing]
S1 --> |Loads| CC
CC --> |Execute with context| R[Result]
A skill is defined by a SKILL.md file with YAML frontmatter and Markdown content:
---
name: "go-development"
description: "Skill for Go development with best practices"
tags: ["go", "golang", "backend", "api"]
version: "1.0.0"
author: "your-name"
---
# Go Development Skill
## Context
This skill provides guidelines for Go development following
community best practices.
## Code Conventions
### Project Structure

cmd/
├── api/
│   └── main.go
internal/
├── handlers/
├── services/
└── repository/
pkg/
└── shared/
### Patterns
- Use `error` as last return
- Prefer composition over inheritance
- Document public functions
## Available Commands
- `go run cmd/api/main.go` - Run the API
- `go test ./...` - Run all tests
| Field | Required | Description |
|---|---|---|
| `name` | Yes | Unique skill identifier |
| `description` | Yes | Clear description (crucial for matching) |
| `tags` | No | Tags for search and categorization |
| `version` | No | Semantic version |
| `author` | No | Skill author |
| `tools` | No | Tools the skill can use |
| `triggers` | No | Patterns that activate the skill |
graph TD
E[Enterprise Skills] --> |Overrides| P[Personal Skills]
P --> |Overrides| Pr[Project Skills]
Pr --> |Overrides| D[Default Skills]
style E fill:#f44336,color:#fff
style P fill:#ff9800,color:#fff
style Pr fill:#4caf50,color:#fff
style D fill:#9e9e9e,color:#fff
| Level | Location | Scope |
|---|---|---|
| Enterprise | Corporate server | Entire organization |
| Personal | `~/.claude/skills/` | User |
| Project | `.claude/skills/` | Project |
| Default | Built-in | Global |
Use the /skill-creator skill to automatically generate new skills:
/skill-creator
I want to create a skill for REST API development
with FastAPI that includes:
- Standard project structure
- JWT authentication patterns
- Tests with pytest
- OpenAPI documentation
The skill description works like SEO - the better it is, the higher the chance of matching:
Bad:

description: "Python skill"

Good:

description: "Skill for Python development focused on REST APIs,
including FastAPI, Flask, testing with pytest, type hints and automatic
documentation. Use to create endpoints, data validation and authentication."

Tips:
- Include synonyms and variations
- List specific use cases
- Mention related technologies
- Use action verbs (create, test, validate)
MCP follows a client-server architecture where Claude Code acts as client and connects to multiple servers:
graph TB
subgraph "Claude Code (Client)"
CC[MCP Client]
end
subgraph "MCP Servers"
S1[Notion Server]
S2[GitHub Server]
S3[Database Server]
S4[Figma Server]
end
subgraph "External Services"
E1[Notion API]
E2[GitHub API]
E3[PostgreSQL]
E4[Figma API]
end
CC <--> |JSON-RPC| S1
CC <--> |JSON-RPC| S2
CC <--> |JSON-RPC| S3
CC <--> |JSON-RPC| S4
S1 <--> E1
S2 <--> E2
S3 <--> E3
S4 <--> E4
Data that can be read by the model:
- Documents
- Files
- Database records
- Application states
Actions that can be executed:
- Create/update records
- Execute queries
- Trigger webhooks
- Manipulate files
Reusable templates for common interactions:
- Standardized queries
- Predefined workflows
- Response formats
{
"mcpServers": {
"notion": {
"command": "npx",
"args": ["-y", "@notionhq/mcp-server"],
"env": {
"NOTION_API_KEY": "${NOTION_API_KEY}"
}
},
"github": {
"command": "npx",
"args": ["-y", "@github/mcp-server"],
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}"
}
},
"postgres": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-postgres"],
"env": {
"DATABASE_URL": "${DATABASE_URL}"
}
},
"context7": {
"command": "npx",
"args": ["-y", "@context7/mcp-server"]
}
}
}

| Type | Characteristics | Examples |
|---|---|---|
| Official | Maintained by companies, guaranteed support | GitHub MCP, Notion MCP, Linear MCP |
| Anthropic | Reference implementation | Filesystem, PostgreSQL, Brave Search |
| Community | Variety, variable quality | Various on awesome-mcp-servers |
sequenceDiagram
participant U as User
participant CC as Claude Code
participant MCP as MCP Server
participant API as External API
U->>CC: "fetch my open PRs"
CC->>CC: Identifies need for GitHub MCP
CC->>MCP: tools/call: list_pull_requests
MCP->>API: GET /repos/{owner}/{repo}/pulls
API-->>MCP: Response JSON
MCP-->>CC: Formatted result
CC-->>U: "You have 3 open PRs..."
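Under the hood, each step in the sequence is a JSON-RPC 2.0 message. A sketch of what the `tools/call` request might look like - the tool name and argument fields are illustrative; consult the MCP specification for the exact schema:

```python
import json

# Illustrative JSON-RPC 2.0 request for an MCP tool call.
# Field values (owner, repo, tool name) are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_pull_requests",
        "arguments": {"owner": "my-org", "repo": "my-app", "state": "open"},
    },
}
print(json.dumps(request, indent=2))
```

The server answers with a matching `id` and a `result` payload, which the client formats back into the conversation.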
| Plugin | Category | Description | Main Features |
|---|---|---|---|
| Notion MCP | PRD/Docs | Access to Notion documents and databases | Page reading, search, databases |
| Linear MCP | Management | Issues, projects and roadmap | CRUD issues, cycles, projects |
| Jira MCP (Atlassian) | Management | Tickets, sprints and boards | Issues, sprints, reports |
| Confluence MCP | Documentation | Pages and spaces | Doc reading and search |
| Plugin | Category | Description | Main Features |
|---|---|---|---|
| Mermaid MCP | Diagrams | 22+ types of diagrams | Flowcharts, sequence, ER, C4 |
| Figma MCP | Design | Access to designs and components | Frame reading, export |
| Context7 | Documentation | Consolidated documentation | Framework docs search |
| Excalidraw MCP | Diagrams | Hand-drawn diagrams | Creation and editing |
| Plugin | Category | Description | Main Features |
|---|---|---|---|
| GitHub MCP | Versioning | PRs, issues, repos | Complete CRUD, code review |
| Git MCP | Versioning | Local git operations | Commits, branches, merge |
| ESLint MCP | Linting | JavaScript static analysis | Lint, automatic fix |
| Playwright MCP | Testing | Browser automation | E2E tests, screenshots |
| SonarQube MCP | Quality | Code analysis | Metrics, vulnerabilities |
| Plugin | Category | Description | Main Features |
|---|---|---|---|
| Postgres MCP | Database | PostgreSQL queries and schemas | CRUD, migrations, schemas |
| Firebase MCP | Backend | Complete Firebase | Auth, Firestore, Functions |
| AWS MCP | Cloud | AWS services | S3, Lambda, DynamoDB |
| Pinecone MCP | Vector DB | Vector search | Embeddings, similarity search |
| Redis MCP | Cache | Cache and pub/sub | GET/SET, pub/sub, streams |
| MongoDB MCP | Database | MongoDB operations | CRUD, aggregations |
| Plugin | Category | Description | Main Features |
|---|---|---|---|
| Slack MCP | Communication | Slack integration | Messages, channels |
| Discord MCP | Communication | Discord integration | Messages, webhooks |
| Gmail MCP | Communication | Email operations | Reading, sending, search |
The context window is the maximum number of tokens the model can process at once. When exceeded:
graph TD
subgraph "Context Window"
A[Initial Instructions] --> B[Project Context]
B --> C[Conversation History]
C --> D[Current Task]
D --> E[⚠️ LIMIT]
end
E --> F[Summarization]
F --> G[Information Loss]
G --> H[Quality Degradation]
Consequences:
- Old information is summarized or discarded
- Initial instructions may be "forgotten"
- Response quality degrades progressively
- Complex tasks become impossible
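A common mitigation is to pin the system instructions and drop (or summarize) the oldest turns when the budget is exceeded. A minimal sketch - the 4-characters-per-token estimate and the budget are assumptions; real agents summarize rather than simply discard:

```python
# Keep the system prompt pinned; drop oldest turns when over budget.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)   # rough heuristic, not exact

def trim_history(system: str, turns: list[str], budget: int) -> list[str]:
    kept = list(turns)
    used = estimate_tokens(system) + sum(map(estimate_tokens, kept))
    while kept and used > budget:
        dropped = kept.pop(0)               # oldest turn goes first
        used -= estimate_tokens(dropped)
    return kept

turns = [f"turn {i}: " + "x" * 400 for i in range(10)]
kept = trim_history("You are a coding agent.", turns, budget=500)
print(len(kept))   # oldest turns dropped to fit the budget
```

Note what this loses: anything in the dropped turns is gone, which is exactly why multi-agent designs with separate windows (below) exist.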
The solution is to divide work among multiple agents, each with its own context window:
graph TB
subgraph "Orchestrator"
O[Main Agent]
O_CTX[Context: Overview]
end
subgraph "Specialized Agents"
A1[Backend Agent]
A1_CTX[Context: APIs, DB]
A2[Frontend Agent]
A2_CTX[Context: UI, UX]
A3[Test Agent]
A3_CTX[Context: Specs, Fixtures]
A4[DevOps Agent]
A4_CTX[Context: Infra, CI/CD]
end
O --> |Delegates| A1
O --> |Delegates| A2
O --> |Delegates| A3
O --> |Delegates| A4
A1 --> |Result| O
A2 --> |Result| O
A3 --> |Result| O
A4 --> |Result| O
The orchestrator is responsible for:
- Receiving the task from user
- Decomposing into subtasks
- Assigning to specialized agents
- Aggregating results
- Validating final quality
# orchestrator.yaml
name: "main-orchestrator"
description: "Project main orchestrator"
agents:
- name: "backend-dev"
skills: ["api-development", "database"]
triggers: ["endpoint", "api", "database", "query"]
- name: "frontend-dev"
skills: ["react", "typescript", "css"]
triggers: ["component", "ui", "style", "page"]
- name: "test-writer"
skills: ["testing", "playwright"]
triggers: ["test", "spec", "e2e", "coverage"]
- name: "code-reviewer"
skills: ["code-review"]
triggers: ["review", "pr", "quality"]
workflow:
- receive_task
- decompose_tasks
- assign_to_agents
- monitor_progress
- aggregate_results
- validate_quality

Each agent should have a well-defined scope:
| Agent | Responsibility | Tools |
|---|---|---|
| Planner | Analysis and decomposition | Read, Glob, Grep |
| Developer | Implementation | Read, Write, Edit, Bash |
| Tester | Test creation | Read, Write, Bash (test) |
| Reviewer | Code review | Read, Grep, GitHub MCP |
| DevOps | Deploy and infra | Bash, AWS/Firebase MCP |
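Trigger-based dispatch like the `orchestrator.yaml` above can be sketched as a simple keyword match. The agent names and triggers mirror the config; the first-match logic is an illustrative simplification of what a real orchestrator does:

```python
# Dispatch a task to the first agent whose triggers match the task text.
AGENTS = {
    "backend-dev":   ["endpoint", "api", "database", "query"],
    "frontend-dev":  ["component", "ui", "style", "page"],
    "test-writer":   ["test", "spec", "e2e", "coverage"],
    "code-reviewer": ["review", "pr", "quality"],
}

def assign(task: str) -> str:
    words = task.lower()
    for agent, triggers in AGENTS.items():
        if any(t in words for t in triggers):
            return agent
    return "orchestrator"   # no match: the orchestrator handles it directly

print(assign("Create the /users endpoint"))    # -> backend-dev
print(assign("Write e2e coverage for login"))  # -> test-writer
```

A production orchestrator would score matches rather than take the first hit, but the principle - delimited scopes per agent - is the same.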
sequenceDiagram
participant U as User
participant O as Orchestrator
participant D as Dev Agent
participant T as Test Agent
participant R as Review Agent
U->>O: "Implement feature X"
O->>O: Decompose task
O->>D: "Create endpoint /api/x"
D->>D: Implements code
D-->>O: Code ready
O->>T: "Create tests for /api/x"
T->>T: Writes tests
T-->>O: Tests ready
O->>R: "Review implementation"
R->>R: Analyzes code
R-->>O: Feedback
O-->>U: "Feature X implemented and tested"
Agents execute one after another:
Plan → Develop → Test → Review → Deploy
Agents execute simultaneously:
┌─ Frontend ─┐
Input ───┼─ Backend ─┼─── Merge
└─ Tests ─┘
Agents can spawn sub-agents:
Orchestrator
└─ Dev Lead
├─ Junior Dev 1
└─ Junior Dev 2
Hooks are scripts that execute automatically in response to specific events in Claude Code. They allow automation of validations, formatting, and integrations.
graph LR
E[Event] --> H{Hook Exists?}
H -->|Yes| P[Pre-hook]
P --> A[Main Action]
A --> Po[Post-hook]
Po --> R[Result]
H -->|No| A
| Type | Moment | Common Use |
|---|---|---|
| Pre-commit | Before commit | Lint, format, unit tests |
| Post-commit | After commit | Notifications, sync |
| Pre-push | Before push | Complete tests, build |
| Post-push | After push | Deploy staging, webhooks |
| Pre-deploy | Before deploy | Validation, backup |
| Post-deploy | After deploy | Health check, notifications |
.claude/
└── hooks/
├── pre-commit/
│ ├── lint.sh
│ ├── format.sh
│ └── test-unit.sh
├── post-commit/
│ └── notify.sh
├── pre-push/
│ └── test-full.sh
├── pre-deploy/
│ └── validate.sh
└── post-deploy/
├── health-check.sh
└── notify-slack.sh
#!/bin/bash
# .claude/hooks/pre-commit/validate.sh
set -e
echo "🔍 Running pre-commit validations..."
# Lint
echo "→ ESLint..."
npm run lint
# Format check
echo "→ Prettier..."
npm run format:check
# Type check
echo "→ TypeScript..."
npm run typecheck
# Unit tests
echo "→ Unit tests..."
npm run test:unit
echo "✅ All validations passed!"

#!/bin/bash
# .claude/hooks/pre-commit/python-validate.sh
set -e
echo "🐍 Running Python validations..."
# Ruff linting
echo "→ Ruff lint..."
ruff check .
# Ruff formatting
echo "→ Ruff format..."
ruff format --check .
# Type checking
echo "→ MyPy..."
mypy src/
# Tests
echo "→ Pytest..."
pytest tests/ -q
echo "✅ Python validations passed!"

#!/bin/bash
# .claude/hooks/post-deploy/health-check.sh
set -e
API_URL="${DEPLOY_URL:-https://api.myapp.com}"
MAX_RETRIES=5
RETRY_DELAY=10
echo "🏥 Running health check on $API_URL..."
for i in $(seq 1 $MAX_RETRIES); do
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "$API_URL/health")
if [ "$HTTP_CODE" = "200" ]; then
echo "✅ Health check passed! (HTTP $HTTP_CODE)"
exit 0
fi
echo "⏳ Attempt $i/$MAX_RETRIES failed (HTTP $HTTP_CODE). Retrying in ${RETRY_DELAY}s..."
sleep $RETRY_DELAY
done
echo "❌ Health check failed after $MAX_RETRIES attempts"
exit 1

#!/bin/bash
# .claude/hooks/post-deploy/notify-slack.sh
SLACK_WEBHOOK="${SLACK_WEBHOOK_URL}"
DEPLOY_ENV="${DEPLOY_ENV:-production}"
VERSION="${VERSION:-unknown}"
DEPLOYER="${DEPLOYER:-CI/CD}"
curl -X POST "$SLACK_WEBHOOK" \
-H 'Content-Type: application/json' \
-d "{
\"blocks\": [
{
\"type\": \"section\",
\"text\": {
\"type\": \"mrkdwn\",
\"text\": \"🚀 *Deploy Completed*\n• Environment: \`$DEPLOY_ENV\`\n• Version: \`$VERSION\`\n• Deployed by: $DEPLOYER\"
}
}
]
}"
echo "📨 Slack notification sent!"

- Always use `set -e` - Fail fast on error
- Keep hooks fast - Pre-commit < 30s ideally
- Use cache - Avoid redundant work
- Provide clear feedback - Emojis and informative messages
- Be idempotent - Multiple executions = same result
- Document dependencies - What needs to be installed
This workflow represents a complete software development cycle with AI, from conception to deploy.
┌─────────────────────────────────────────────────────────────┐
│ PHASE 1: DISCOVERY │
│ - Define problem/opportunity │
│ - Collect initial requirements │
│ - Identify stakeholders │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 2: PRD (Product Requirements Document) │
│ - Product vision │
│ - User stories │
│ - Acceptance criteria │
│ - Success metrics │
│ Plugins: Notion MCP, Linear MCP │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 3: ADR (Architecture Decision Records) │
│ - Context and problem │
│ - Options considered │
│ - Decision and justification │
│ - Consequences │
│ Plugins: Context7, Mermaid MCP │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 4: TECHNICAL SPECIFICATION │
│ - System architecture │
│ - APIs and contracts │
│ - Data model │
│ - Diagrams (C4, sequence, etc) │
│ Plugins: Mermaid MCP, Context7, Figma MCP │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 5: TASK PLANNING │
│ - Break into features │
│ - Break into tasks │
│ - Estimates and dependencies │
│ - Assignment to agents │
│ Plugins: Linear MCP, Jira MCP, GitHub MCP │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 6: DEVELOPMENT (Parallel) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Dev Agent │ │ Dev Agent │ │ Test Agent │ │
│ │ Feature A │ │ Feature B │ │ Tests │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ Plugins: GitHub MCP, ESLint MCP, Playwright MCP │
│ Hooks: pre-commit (lint, format, test) │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 7: CODE REVIEW │
│ - Automatic review by agent │
│ - Quality checklist │
│ - Structured feedback │
│ Skill: /code-review │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 8: VALIDATION AND TESTING │
│ - Unit tests │
│ - Integration tests │
│ - E2E tests │
│ - Performance tests │
│ Plugins: Playwright MCP, Jest/Vitest │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 9: DEPLOY │
│ - Build and validation │
│ - Deploy staging │
│ - Smoke tests │
│ - Deploy production │
│ Plugins: Firebase MCP, AWS MCP │
│ Hooks: pre-deploy, post-deploy │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 10: DOCUMENTATION AND RETROSPECTIVE │
│ - Update CLAUDE.md │
│ - Document decisions │
│ - Lessons learned │
│ - Metrics and KPIs │
└─────────────────────────────────────────────────────────────┘
Objective: Deeply understand the problem before any code.
Inputs:
- User feedback
- Business metrics
- Competitor analysis
Outputs:
- Clear problem statement
- List of stakeholders
- Initial scope defined
Basic PRD template:
# PRD: [Feature Name]
## Overview
[2-3 paragraph description]
## Problem
[What problem are we solving?]
## User Stories
### US-001: [Title]
**As a** [user type]
**I want** [action]
**So that** [benefit]
**Acceptance Criteria:**
- [ ] AC-001: [Criterion]
- [ ] AC-002: [Criterion]
## Success Metrics
- [ ] Metric 1: [Definition and goal]
- [ ] Metric 2: [Definition and goal]
## Out of Scope
- Item 1
- Item 2

ADR template:
# ADR-001: [Decision Title]
## Status
[Proposed | Accepted | Deprecated | Superseded]
## Context
[What is the problem or situation?]
## Decision
[What was decided?]
## Options Considered
### Option 1: [Name]
- ✅ Advantage 1
- ✅ Advantage 2
- ❌ Disadvantage 1
### Option 2: [Name]
- ✅ Advantage 1
- ❌ Disadvantage 1
- ❌ Disadvantage 2
## Consequences
- Positive: [List]
- Negative: [List]
- Risks: [List]

Agent configuration for parallel development:

# .claude/agents/dev-team.yaml
team:
  orchestrator:
    model: "claude-sonnet"
    role: "coordinator"
  agents:
    - name: "feature-a-dev"
      model: "claude-sonnet"
      focus: "backend API endpoints"
      branch: "feature/api-endpoints"
    - name: "feature-b-dev"
      model: "claude-sonnet"
      focus: "frontend components"
      branch: "feature/ui-components"
    - name: "test-writer"
      model: "claude-haiku"
      focus: "test coverage"
      branch: "feature/tests"
workflow:
  parallel:
    - feature-a-dev
    - feature-b-dev
    - test-writer
  then:
    - code-review
    - merge

project/
├── CLAUDE.md # Main documentation for AI
├── README.md # Documentation for humans
├── .claude/
│ ├── settings.json # Claude Code configurations
│ ├── commands.yaml # Custom commands
│ ├── keybindings.json # Keyboard shortcuts
│ ├── hooks/
│ │ ├── pre-commit/
│ │ │ ├── lint.sh
│ │ │ ├── format.sh
│ │ │ └── test-unit.sh
│ │ ├── pre-push/
│ │ │ └── test-full.sh
│ │ └── post-deploy/
│ │ ├── health-check.sh
│ │ └── notify.sh
│ ├── skills/
│ │ ├── project-conventions/
│ │ │ └── SKILL.md
│ │ └── custom-commands/
│ │ └── SKILL.md
│ ├── mcp/
│ │ └── servers.json
│ └── agents/
│ ├── orchestrator.yaml
│ └── specialists.yaml
├── docs/
│ ├── prd/ # Product Requirements
│ │ └── feature-x.md
│ ├── adr/ # Architecture Decisions
│ │ ├── 001-database.md
│ │ └── 002-auth.md
│ ├── specs/ # Technical Specifications
│ │ └── api-spec.md
│ └── architecture/ # Diagrams and Architecture
│ ├── c4-context.md
│ └── sequence-diagrams.md
├── src/ # Source code
├── tests/ # Test files
├── scripts/ # Utility scripts
└── .github/
└── workflows/ # CI/CD pipelines
# Project: [Project Name]
## Overview
[Concise project description in 2-3 lines]
## Quick Start
\`\`\`bash
# Install dependencies
npm install
# Run development environment
npm run dev
# Run tests
npm run test
\`\`\`
## Tech Stack
- **Runtime**: Node.js 20 LTS
- **Framework**: Express.js 4.x
- **Database**: PostgreSQL 15 + Prisma ORM
- **Cache**: Redis 7
- **Frontend**: React 18 + TypeScript 5
- **Testing**: Jest + Playwright
## Architecture
### Folder Structure
\`\`\`
src/
├── api/ # Controllers and routes
├── services/ # Business logic
├── repositories/ # Data access
├── models/ # Types and interfaces
├── middlewares/ # Express middlewares
├── utils/ # Utilities
└── config/ # Configurations
\`\`\`
### Patterns
- Repository Pattern for data access
- Service Layer for business logic
- DTOs for data transfer
- Centralized error handling
## Code Conventions
### Naming
- Files: kebab-case (`user-service.ts`)
- Classes: PascalCase (`UserService`)
- Functions/variables: camelCase (`getUserById`)
- Constants: SCREAMING_SNAKE_CASE (`MAX_RETRIES`)
### Git
- Branch: `feature/`, `fix/`, `hotfix/`, `refactor/`
- Commits: Conventional Commits
- PRs: Always with description and checklist
### Testing
- Unit: `*.spec.ts` in same directory
- Integration: `tests/integration/`
- E2E: `tests/e2e/`
- Minimum coverage: 80%
## Available Commands
| Command | Description |
|---------|-------------|
| `npm run dev` | Development with hot-reload |
| `npm run build` | Production build |
| `npm run test` | All tests |
| `npm run test:unit` | Unit only |
| `npm run test:e2e` | E2E only |
| `npm run lint` | Check linting |
| `npm run lint:fix` | Fix linting |
| `npm run db:migrate` | Run migrations |
| `npm run db:seed` | Seed database |
## Environment Variables
\`\`\`env
DATABASE_URL=postgresql://...
REDIS_URL=redis://...
JWT_SECRET=...
API_PORT=3000
\`\`\`
## Useful Links
- PRD: `docs/prd/`
- ADRs: `docs/adr/`
- API Docs: http://localhost:3000/docs
- Storybook: http://localhost:6006
## Contacts
- Tech Lead: name@email.com
- Product Owner: name@email.com

{
"model": "claude-sonnet-4-20250514",
"fallbackModel": "claude-haiku-3-5-20241022",
"maxTokens": 8192,
"temperature": 0.7,
"permissions": {
"allowedTools": [
"Read", "Write", "Edit",
"Bash", "Glob", "Grep",
"WebFetch", "WebSearch"
],
"allowedCommands": [
"npm", "npx", "node",
"git", "gh",
"docker", "docker-compose",
"psql", "redis-cli"
],
"deniedPaths": [
".env",
".env.*",
"**/secrets/**",
"**/credentials/**"
]
},
"hooks": {
"preCommit": ".claude/hooks/pre-commit/",
"prePush": ".claude/hooks/pre-push/",
"postDeploy": ".claude/hooks/post-deploy/"
},
"mcp": {
"configPath": ".claude/mcp/servers.json"
},
"agents": {
"orchestratorConfig": ".claude/agents/orchestrator.yaml"
},
"context": {
"includeGitStatus": true,
"includePackageJson": true,
"maxFileSize": "100KB"
}
}

| Marketplace | URL | Description |
|---|---|---|
| MCP Market | mcpmarket.com | Official marketplace with curation |
| MCPServers.org | mcpservers.org | MCP servers directory |
| LobeHub MCP | lobehub.com/mcp | Visual marketplace with filters |
| Awesome MCP Servers | GitHub | Curated list on GitHub |
| Cline's MCP Marketplace | GitHub | MCPs for Cline |
| Resource | URL | Description |
|---|---|---|
| Anthropic Skills | GitHub | Official Anthropic skills |
| Community Skills | GitHub Topics | Community skills |
| Skills Documentation | Claude Docs | Official documentation |
| Resource | URL | Description |
|---|---|---|
| Claude Code Docs | code.claude.com/docs | Complete documentation |
| MCP Protocol Spec | modelcontextprotocol.io | Protocol specification |
| Claude API Docs | docs.anthropic.com | API and SDK |
| Claude Help Center | support.anthropic.com | Support and FAQs |
| Community | Platform | Focus |
|---|---|---|
| Anthropic Discord | Discord | Official support |
| Claude Code GitHub | GitHub Discussions | Issues and discussions |
| r/ClaudeAI | Reddit | General community |
| MCP Community | Discord | MCP developers |
| Tool | Description | URL |
|---|---|---|
| Claude Dev | VS Code extension | VS Code Marketplace |
| MCP Inspector | MCP debugging | GitHub |
| Prompt Tester | Test prompts | Various options |
| Token Counter | Count tokens | tiktoken, etc |
| Template | Stack | URL |
|---|---|---|
| Claude Code Starter | Node.js | GitHub |
| MCP Server Template | TypeScript | GitHub |
| Skills Template | Markdown | Anthropic |
| Full Project Template | Multi-stack | Community |
This chapter covers the fundamental patterns for building AI agents that operate in continuous loops, making decisions and taking actions autonomously.
Agentic loops are the core mechanism that enables AI agents to work autonomously. They follow a continuous cycle of perception, reasoning, action, and observation.
flowchart LR
subgraph "AGENTIC LOOP"
P[PERCEIVE<br>Input] --> R[REASON<br>Think]
R --> A[ACT<br>Output]
A --> O[OBSERVE<br>Feedback]
O --> P
end
style P fill:#e3f2fd
style R fill:#fff3e0
style A fill:#e8f5e9
style O fill:#fce4ec
Loop continues until: goal reached OR max iterations
The ReAct (Reasoning + Acting) loop combines thinking, decision-making, and execution with continuous feedback.
Use cases: Search + reasoning tasks, tool-using agents, interactive problem solving.
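The ReAct cycle can be sketched in a few lines of TypeScript. This is an illustrative skeleton only: `llmThink` is a hard-coded stub standing in for a real model call, and the tool registry holds a single fake `search` tool.

```typescript
// Minimal ReAct loop sketch. llmThink is a stub standing in for an LLM:
// it decides the next step from the scratchpad of observations so far.
type Step =
  | { kind: "action"; tool: string; input: string }
  | { kind: "final"; answer: string };

function llmThink(question: string, scratchpad: string[]): Step {
  if (scratchpad.length === 0) {
    return { kind: "action", tool: "search", input: question };
  }
  return { kind: "final", answer: scratchpad[scratchpad.length - 1] };
}

// Tool registry: each tool maps an input string to an observation.
const tools: Record<string, (input: string) => string> = {
  search: (q) => `Top result for "${q}"`,
};

function reactLoop(question: string, maxIterations = 5): string {
  const scratchpad: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const step = llmThink(question, scratchpad);      // Reason
    if (step.kind === "final") return step.answer;    // Stop condition
    const observation = tools[step.tool](step.input); // Act
    scratchpad.push(observation);                     // Observe
  }
  return "Gave up: max iterations reached";
}

console.log(reactLoop("What is MCP?"));
```

Note that the loop interleaves reasoning and tool calls, with each observation feeding the next reasoning step — the defining trait of ReAct.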
Creates an initial plan, executes steps sequentially, and replans when needed.
Use cases: Multi-step tasks, project planning, complex workflows.
Generates a response, critiques it, and improves iteratively until satisfactory.
Use cases: Content generation, code review, quality-critical outputs.
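The generate-critique-revise cycle looks like this in miniature. The three functions are stand-ins for model calls; the stub critic simply flags drafts that lack a summary section.

```typescript
// Self-Refine sketch: generate a draft, critique it, and revise until
// the critique passes or a maximum number of rounds is reached.
function generate(topic: string): string {
  return `Notes on ${topic}`;
}

// Stub critic: returns feedback, or null when the draft is acceptable.
function critique(draft: string): string | null {
  return draft.includes("Summary:") ? null : "missing a Summary section";
}

function revise(draft: string, feedback: string): string {
  return `${draft}\nSummary: addressed "${feedback}"`;
}

function selfRefine(topic: string, maxRounds = 3): string {
  let draft = generate(topic);
  for (let round = 0; round < maxRounds; round++) {
    const feedback = critique(draft); // Critique step
    if (feedback === null) break;     // Good enough: stop refining
    draft = revise(draft, feedback);  // Improve based on feedback
  }
  return draft;
}

console.log(selfRefine("rate limiting"));
```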
Coordinates multiple specialized agents to work together on complex tasks.
flowchart TB
subgraph Orchestration["MULTI-AGENT ORCHESTRATION"]
O[ORCHESTRATOR<br>Coordinator]
O --> A1[Agent 1<br>Research]
O --> A2[Agent 2<br>Code]
O --> A3[Agent 3<br>Review]
A1 --> C[COMBINE<br>RESULTS]
A2 --> C
A3 --> C
C --> O
end
style O fill:#7e57c2,color:#fff
style A1 fill:#42a5f5,color:#fff
style A2 fill:#66bb6a,color:#fff
style A3 fill:#ffa726,color:#fff
style C fill:#78909c,color:#fff
Use cases: Large codebases, multi-domain problems, parallel processing.
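The fan-out/combine shape of the diagram above can be sketched with `Promise.all`. The agents here are async stubs; a real orchestrator would dispatch to model-backed sub-agents and synthesize their outputs with another model call.

```typescript
// Multi-agent fan-out sketch: the orchestrator runs specialist agents
// concurrently and combines their results.
type Agent = (task: string) => Promise<string>;

const agents: Record<string, Agent> = {
  research: async (t) => `research notes for ${t}`,
  code: async (t) => `implementation draft for ${t}`,
  review: async (t) => `review checklist for ${t}`,
};

async function orchestrate(task: string): Promise<string> {
  // Fan out: run all specialists in parallel.
  const names = Object.keys(agents);
  const results = await Promise.all(names.map((n) => agents[n](task)));
  // Combine: here a simple join; a real orchestrator would synthesize.
  return names.map((n, i) => `[${n}] ${results[i]}`).join("\n");
}

orchestrate("rate limiter").then(console.log);
```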
flowchart TB
subgraph Workflow["AI DEVELOPMENT WORKFLOW"]
D[1. DEFINITION<br>Define objective & success criteria]
DC[2. DECOMPOSITION<br>Break into sub-tasks]
PS[3. PATTERN SELECTION]
IM[4. IMPLEMENTATION]
VA[5. VALIDATION]
D --> DC --> PS --> IM --> VA
end
subgraph Patterns["Pattern Options"]
P1[Simple → ReAct]
P2[Complex → Plan-Execute]
P3[Quality → Self-Refine]
P4[Multi-domain → Multi-Agent]
end
PS --- Patterns
style D fill:#e3f2fd
style DC fill:#e8f5e9
style PS fill:#fff3e0
style IM fill:#f3e5f5
style VA fill:#fce4ec
| Scenario | Recommended Pattern | Rationale |
|---|---|---|
| Search + Reasoning | ReAct Loop | Combines tool use with reasoning |
| Multi-step tasks | Plan-Execute | Structured approach with replanning |
| Content generation | Self-Refine | Iterative quality improvement |
| Data pipeline | Prompt Chaining | Sequential processing |
| Multiple perspectives | Multi-Agent | Specialized expertise per agent |
| Complex decisions | Tree of Thoughts | Explores multiple solution paths |
- Always define `max_iterations` - Avoids infinite loops and exploding costs
- Logging is essential - Record each iteration for debugging
- Clear stopping criteria - The agent needs to know when to finish
- Fallbacks - Have a plan B if the loop doesn't converge
- Cost awareness - Each iteration = more tokens = more cost
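These guidelines can be wrapped into a small loop harness. The sketch below is illustrative: `step`, `isDone`, and `fallback` are placeholders for your agent's actual logic.

```typescript
// Guarded agentic loop harness: enforces max iterations, logs every
// iteration, checks a clear stop criterion, and applies a fallback
// (plan B) when the loop does not converge.
type LoopResult<S> = { state: S; iterations: number; converged: boolean };

function runGuardedLoop<S>(
  initial: S,
  step: (state: S) => S,          // one perceive-reason-act cycle
  isDone: (state: S) => boolean,  // clear stopping criterion
  maxIterations = 10,
  fallback?: (state: S) => S,     // plan B if the loop doesn't converge
): LoopResult<S> {
  let state = initial;
  for (let i = 1; i <= maxIterations; i++) {
    state = step(state);
    console.log(`iteration ${i}:`, JSON.stringify(state)); // audit trail
    if (isDone(state)) return { state, iterations: i, converged: true };
  }
  // Did not converge: apply the fallback instead of looping forever.
  if (fallback) state = fallback(state);
  return { state, iterations: maxIterations, converged: false };
}

// Toy example: an "agent" that counts toward a target.
const result = runGuardedLoop(0, (n) => n + 3, (n) => n >= 10, 10);
console.log(result);
```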
This chapter presents a systematic 5-step framework for AI-assisted feature development, ensuring quality and consistency in your development process.
The Feature Development Framework provides a structured approach to building features with AI assistance. It combines multiple prompt engineering techniques with best practices for software development.
flowchart TB
FR[FEATURE REQUEST] --> U
subgraph Framework["FEATURE DEVELOPMENT FRAMEWORK"]
U["1. UNDERSTANDING<br>• What does the user want?<br>• What are the requirements?<br>• What are the constraints?"]
D["2. DECOMPOSITION<br>• Break into sub-tasks<br>• Identify dependencies<br>• Prioritize by value"]
E["3. EXPLORATION<br>• Consider alternatives<br>• Evaluate trade-offs<br>• Document decisions"]
I["4. IMPLEMENTATION<br>• Build with context<br>• Iterate with refinement<br>• Validate with tests"]
R["5. REVIEW<br>• Code review<br>• Update documentation<br>• Create ADR if needed"]
U --> D --> E --> I --> R
end
style FR fill:#ff7043,color:#fff
style U fill:#42a5f5,color:#fff
style D fill:#66bb6a,color:#fff
style E fill:#ffca28,color:#000
style I fill:#ab47bc,color:#fff
style R fill:#26a69a,color:#fff
Objective: Clarify requirements before any code is written.
Use Chain of Thought reasoning to deeply understand:
- What does the user actually want?
- What are the explicit and implicit requirements?
- What constraints exist (time, resources, technology)?
- What are the success criteria?
Prompt approach:
Before implementing this feature, let me think through the requirements step by step:
1. What is the core functionality being requested?
2. What are the edge cases?
3. What are the acceptance criteria?
Objective: Break the feature into manageable sub-tasks.
Apply Least-to-Most decomposition:
- Break large features into smaller tasks
- Identify dependencies between tasks
- Prioritize by business value and technical dependencies
- Create a clear execution order
Example decomposition:
Feature: User Authentication
├── Sub-task 1: Create user model and database schema
├── Sub-task 2: Implement registration endpoint
├── Sub-task 3: Implement login endpoint
├── Sub-task 4: Add JWT token generation
├── Sub-task 5: Create authentication middleware
└── Sub-task 6: Write tests for all endpoints
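Once dependencies are identified, a valid execution order can be derived mechanically with a topological sort. The task names and edges below are illustrative, mirroring the decomposition above.

```typescript
// Derive an execution order from sub-task dependencies (topological
// sort via depth-first search). Task names mirror the tree above.
const deps: Record<string, string[]> = {
  "user-model": [],
  "registration-endpoint": ["user-model"],
  "login-endpoint": ["user-model"],
  "jwt-tokens": ["login-endpoint"],
  "auth-middleware": ["jwt-tokens"],
  "tests": ["registration-endpoint", "auth-middleware"],
};

function executionOrder(graph: Record<string, string[]>): string[] {
  const order: string[] = [];
  const visited = new Set<string>();
  const visiting = new Set<string>();
  function visit(task: string): void {
    if (visited.has(task)) return;
    if (visiting.has(task)) throw new Error(`dependency cycle at ${task}`);
    visiting.add(task);
    for (const dep of graph[task] ?? []) visit(dep); // deps come first
    visiting.delete(task);
    visited.add(task);
    order.push(task);
  }
  for (const task of Object.keys(graph)) visit(task);
  return order;
}

console.log(executionOrder(deps));
```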
Objective: Consider alternatives and evaluate trade-offs.
Use Tree of Thoughts approach:
- Generate multiple solution approaches
- Evaluate pros and cons of each
- Consider scalability, maintainability, performance
- Document decisions in ADRs when significant
Decision framework:
| Option | Pros | Cons | Recommendation |
|---|---|---|---|
| Option A | Fast, simple | Less scalable | Good for MVP |
| Option B | Scalable, robust | More complex | Good for production |
| Option C | Most flexible | Highest complexity | Overkill for current needs |
Objective: Build the feature with AI assistance.
Best practices:
- Provide rich context (CLAUDE.md, relevant files)
- Use iterative refinement (Self-Refine pattern)
- Validate with tests as you build
- Keep commits atomic and well-documented
Implementation cycle:
flowchart LR
G[Generate<br>Code] --> R[Review<br>Output] --> RF[Refine<br>If Needed]
RF --> G
style G fill:#66bb6a,color:#fff
style R fill:#42a5f5,color:#fff
style RF fill:#ffca28,color:#000
Objective: Ensure quality and documentation.
Review checklist:
- Code follows project conventions
- Tests pass and coverage is adequate
- Documentation is updated
- No security vulnerabilities introduced
- Performance is acceptable
- ADR created for significant decisions
| Development Scenario | Primary Pattern | Supporting Techniques |
|---|---|---|
| New feature from scratch | Plan-Execute | Chain of Thought, Decomposition |
| Bug fix | ReAct | Self-Refine |
| Refactoring | Self-Refine | Tree of Thoughts |
| Performance optimization | Tree of Thoughts | Self-Refine |
| API integration | Plan-Execute | Prompt Chaining |
| UI component | Self-Refine | Multi-Agent (if complex) |
| Architecture decision | Tree of Thoughts | Chain of Thought |
| Code review | Self-Refine | Chain of Thought |
Scenario: Building a rate limiter feature
1. Understanding: "I need a rate limiter for the API that limits requests per user to 100/minute"
2. Decomposition:
   - Design rate limiting algorithm (token bucket vs sliding window)
   - Create middleware
   - Add Redis storage for distributed tracking
   - Implement bypass for admin users
   - Add monitoring/metrics
   - Write tests
3. Exploration: Compare token bucket vs sliding window algorithms, document the choice in an ADR
4. Implementation: Build incrementally with tests, using Self-Refine for code quality
5. Review: Code review, update API docs, verify performance under load
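As an illustration of what the implementation step might produce, here is a minimal in-memory token bucket. It is a sketch only: the scenario above calls for Redis-backed state for distributed tracking, and all names here are hypothetical.

```typescript
// In-memory token bucket rate limiter sketch: 100 requests per user
// per minute. A production version would keep bucket state in Redis.
interface Bucket { tokens: number; lastRefillMs: number }

const CAPACITY = 100;               // max requests per window
const REFILL_PER_MS = 100 / 60000;  // 100 tokens per minute

const buckets = new Map<string, Bucket>();

function allowRequest(userId: string, nowMs: number = Date.now()): boolean {
  const b = buckets.get(userId) ?? { tokens: CAPACITY, lastRefillMs: nowMs };
  // Refill proportionally to elapsed time, capped at capacity.
  const elapsed = nowMs - b.lastRefillMs;
  b.tokens = Math.min(CAPACITY, b.tokens + elapsed * REFILL_PER_MS);
  b.lastRefillMs = nowMs;
  buckets.set(userId, b);
  if (b.tokens >= 1) {
    b.tokens -= 1; // consume one token for this request
    return true;
  }
  return false;    // rate limited
}
```

Wrapping `allowRequest` in an Express middleware (returning HTTP 429 on `false`) and adding the admin bypass would complete sub-tasks 2 and 4 of the decomposition.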
| Term | Definition |
|---|---|
| LLM | Large Language Model - AI model trained on large text corpora to understand and generate language |
| Token | LLM processing unit (approx. 4 characters) |
| Prompt | Instruction/text sent to LLM |
| Context Window | Token limit that the model can process |
| RAG | Retrieval Augmented Generation - Search + Generation |
| Embedding | Vector representation of text |
| Vector Database | Database optimized for vector search |
| MCP | Model Context Protocol - Open standard for connecting AI models to external tools and data |
| Agent | Autonomous system with LLM as decision core |
| Multi-Agent | System with multiple collaborating agents |
| Orchestrator | Agent that coordinates other agents |
| Skill | Capability/knowledge provided on demand to AI |
| Hook | Action executed before/after events |
| VibeCoding | Development without planning, based on intuition |
| Chain of Thought | Prompt technique requesting step-by-step reasoning |
| ReAct | Reason + Act - Reasoning and action pattern |
| Guardrails | Protections/limits for AI behavior |
| Prompt Injection | Attack that manipulates AI behavior via prompt |
| Evaluation | Process of measuring prompt/response quality |
| Trade-off | Compromise between competing aspects (e.g., cost vs. quality) |
| Sub-agent | Specialized agent that executes subtasks delegated by orchestrator |
| CLAUDE.md | Main configuration file providing project context for AI |
| Frontmatter | YAML metadata at beginning of Markdown files |
| CI/CD | Continuous Integration/Continuous Deployment |
| PRD | Product Requirements Document |
| ADR | Architecture Decision Record |
| E2E | End-to-End - End-to-end tests |
| Headless | Execution without graphical interface |
| Agentic Loop | Continuous cycle of perceive-reason-act-observe for autonomous agents |
| Self-Refine | Pattern where AI critiques and improves its own output iteratively |
| Plan-Execute | Pattern where AI creates a plan first, then executes steps |
| Source | URL | Description |
|---|---|---|
| Claude Code Skills Docs | code.claude.com/docs/en/skills | Official Skills documentation |
| Claude Help Center - Skills | support.claude.com | Skills creation tutorial |
| MCP Protocol Spec | modelcontextprotocol.io | Official MCP specification |
| Anthropic MCP Partners | anthropic.com/partners/mcp | Official MCP partners |
| Source | URL | Description |
|---|---|---|
| MCP Market | mcpmarket.com | MCP marketplace |
| Awesome MCP Servers | GitHub | Curated MCP servers list |
| GitHub MCP Server | GitHub | Official GitHub MCP |
| ESLint MCP | eslint.org/docs/latest/use/mcp | ESLint MCP documentation |
| Linear MCP | linear.app/docs/mcp | Linear MCP documentation |
| Atlassian MCP | atlassian.com/blog | Atlassian MCP announcement |
| Source | URL | Description |
|---|---|---|
| Anthropic Skills Repository | GitHub | Official Anthropic skills |
| 12 Factor Agents | GitHub | Agent building patterns |
| MCPServers.org | mcpservers.org | MCP servers directory |
| LobeHub MCP | lobehub.com | Visual MCP marketplace |
| Source | URL | Description |
|---|---|---|
| LangChain | langchain.com | Framework for LLM applications |
| LangGraph | langchain-ai.github.io/langgraph | Agent framework |
| Playwright | playwright.dev | Browser automation |
| Pinecone | pinecone.io | Vector database |
graph LR
subgraph "Primary Sources"
A[Anthropic Docs]
B[MCP Spec]
C[GitHub Repos]
end
subgraph "Secondary Sources"
D[Marketplaces]
E[Community]
F[Official Blogs]
end
subgraph "This Guide"
G[AI for Developers Guide]
end
A --> G
B --> G
C --> G
D --> G
E --> G
F --> G
Note: This document was structured to serve as a reference and study guide on AI for developers.
Last updated: February 2026