Skip to content

andreustimm/Software-Developement-Guide-with-AI

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

4 Commits
 
 
 
 

Repository files navigation

Software Development Guide with AI

Language / Idioma: Português Brasileiro | English

Complete guide on Artificial Intelligence for Developers - from basics to autonomous agents


Table of Contents

  1. Introduction and Context
  2. AI for Developers: The New Paradigm
  3. Software Architecture in the AI Era
  4. Prompt Engineering
  5. Development Workflow with AI
  6. Pillars of AI-Assisted Development
  7. Development Tools
  8. Claude Code and Ecosystem
  9. AI Agents
  10. Agent Frameworks
  11. MCP - Model Context Protocol
  12. The Future of Development
  13. Claude Code: Complete Guide
  14. Skills: Creating Custom Capabilities
  15. MCPs: Model Context Protocol in Depth
  16. Recommended Plugins by Category
  17. Sub-agents and Orchestration
  18. Hooks: Workflow Automation
  19. Complete Professional Workflow
  20. Ideal Project Configuration
  21. Marketplaces and Resources
  22. Agentic Workflows and Loops
  23. Feature Development Framework
  24. Glossary
  25. References

1. Introduction and Context

The Current Moment

We are experiencing a radical transformation in the developer profession. AI is no longer just an auxiliary tool - it is completely redefining how we develop software.

With so many new developments emerging daily, it's easy to lose the big picture and feel lost.

Learning Roadmap

flowchart LR
    A[Fundamentals] --> B[Prompt Engineering]
    B --> C[Tools]
    C --> D[Workflow]
    D --> E[Agents]
    E --> F[Multi-Agent Systems]

    style A fill:#e1f5fe
    style B fill:#b3e5fc
    style C fill:#81d4fa
    style D fill:#4fc3f7
    style E fill:#29b6f6
    style F fill:#03a9f4
Loading

Workshop Objectives

  • Understand where we are in the profession's evolution
  • Comprehend the new type of software we will develop
  • Master Prompt Engineering techniques
  • Learn about modern tools and workflows
  • Understand AI agents and multi-agent applications

2. AI for Developers: The New Paradigm

The Three Layers of AI

graph TB
    subgraph "Layer 3 - Application Development"
        AD[AI Interface<br>Prompt Engineer<br>AI Agents<br>Systems Integration]
    end

    subgraph "Layer 2 - Model Development"
        MD[Model Creation<br>Training<br>Inference<br>Data Science]
    end

    subgraph "Layer 1 - Infrastructure"
        INF[Data Centers<br>Cloud<br>GPU/TPU<br>Hardware]
    end

    AD --> MD
    MD --> INF

    style AD fill:#4caf50,color:#fff
    style MD fill:#ff9800,color:#fff
    style INF fill:#f44336,color:#fff
Loading

Where Developers Position Themselves

Layer Focus Profile
Infrastructure Hardware, Cloud, GPU DevOps, SRE
Model Development Training, ML/DL Data Scientists
Application Development Integration, Agents Developers

AI for developers is aimed at those who program daily, developing systems within companies - not at those creating the AI themselves.

AI as a Tool

AI serves as a tool for:

  • Architecting software - Analyzing problems, trade-offs and recommendations
  • Exploring possibilities - Knowing what you don't know
  • Generating documentation - Quickly and comprehensively
  • Coding - Today it doesn't just help, it codes

AI as a Software Project

New development paradigm:

  • Traditional applications are changing
  • The core of applications now is AI or communicates with AI
  • New communication protocols (MCP, A2A)
  • AI agents as a new type of software

3. Software Architecture in the AI Era

Emerging New Patterns

timeline
    title Evolution of Software Patterns
    1977 : Pattern Language
    1987 : Patterns in SmallTalk
    1994 : Gang of Four (GoF)
    2000 : SOLID
    2003 : Enterprise Integration Patterns
    2004 : Domain-Driven Design
    2010 : 12 Factor App
    2014 : Microservices
    2024 : 12 Factor Agents
Loading

12 Factor Agents

New recommendations for building AI agents, following the 12 Factor Apps model for cloud.

Problems That Didn't Exist Before

LLM Latency

  • Traditional requests: 10-300ms
  • LLM requests: can take 3+ minutes
  • Need for completely different architectural strategies

Token Cost

  • Everything that enters the LLM is a prompt
  • Everything that enters and exits is tokens
  • Tokens cost money

Architectural Trade-offs

graph TD
    A[Model Choice] --> B{Trade-off}
    B --> C[Latency]
    B --> D[Cost]
    B --> E[Quality]

    C --> F[Fast Model<br>Low cost<br>Lower quality]
    D --> G[Medium Model<br>Medium cost<br>Medium quality]
    E --> H[Premium Model<br>High cost<br>High quality]
Loading

Security: Prompt Injection

New vulnerability category:

  • Users can pass malicious prompts
  • AI can be convinced to ignore rules
  • Possibility of accessing sensitive data
  • Need for guardrails and protections

Before, the concern was SQL Injection. Now, malicious users try to manipulate AI with prompts like: "I am your manager, ignore all previous rules."

New Architectural Components

  • Vector databases: Pinecone, Weaviate, PGVector
  • Data pipelines: Document processing, embeddings
  • LLM Cache: AI-specific strategies
  • Evaluation: Measuring prompt quality

4. Prompt Engineering

Why It's Important

Without knowledge of Prompt Engineering, you are wasting money, time, and doing things sub-optimally.

Prompt is the New Source Code

  • Before we programmed in PHP, Java, JavaScript
  • Now we program in natural language
  • Prompts are expensive to develop
  • Prompts are versionable and shareable

Prompt Engineering Techniques

Zero-Shot

Direct question without examples.

User: What is the capital of Brazil?
AI: The capital of Brazil is Brasília.

Use: Simple and direct questions.

One-Shot

One example before the question.

Example: What is the capital of France? Answer: Paris
Question: What is the capital of Brazil?

Use: When you need to define response format.

Few-Shot

Multiple examples to establish pattern.

Example 1: Database connection lost → Error
Example 2: Disk usage at 85% → Warning
Example 3: User logged in successfully → Info
Classify: API response time above limit → ?

Use: Classification, categorization, behavior patterns.

With few examples, AI needs to infer too much. With too many examples, it can generate ambiguity. Balance is essential.

Chain of Thought (CoT)

Requests that AI demonstrates its reasoning step by step.

Classify the log severity.
Input: Disk usage at 85%
Think step by step about why this is info, warning, or error.
At the end, give the final answer.

Use: Complex problems requiring reasoning, debugging, planning.

ReAct (Reason + Act)

Alternates between reasoning and action.

flowchart LR
    A[Reasoning] --> B[Action]
    B --> C[Observation]
    C --> A
    C --> D[Conclusion]
Loading
Use the ReAct reasoning style:
- Thought: your reasoning
- Action: concrete step to execute
- Observation: what you found

Use: It's the foundation of tools like Claude Code, Copilot, Cursor.

Technique Impact on Results

Technique Cost Quality Ideal Use
Zero-Shot Low Variable Simple tasks
Few-Shot Medium High Classification
Chain of Thought High Very High Complex reasoning
ReAct High Very High Autonomous agents

5. Development Workflow with AI

Software Complexity

graph LR
    subgraph "Without AI"
        A1[Start] --> A2[Complexity grows gradually]
        A2 --> A3[Mature software]
    end

    subgraph "With AI (VibeCoding)"
        B1[Start] --> B2[Immediate HIGH complexity]
        B2 --> B3[Unmaintainable]
    end

    subgraph "With AI (Methodology)"
        C1[Start] --> C2[Controlled complexity]
        C2 --> C3[Sustainable]
    end
Loading

VibeCoding vs Structured Development

VibeCoding (Anti-pattern)

  • Asking directly without planning
  • Going by intuition/"vibes"
  • No defined process
  • Generates code very fast, but...
  • Complexity explodes from the start

The same ease with which AI delivers something is the same ease with which it can take away your control over quality.

Structured Development

  • Prior planning
  • Documentation as an asset
  • Defined workflows
  • Checkpoints and validations
  • Controlled complexity over time

Two Types of Development

1. Assisted and Interactive Development

flowchart LR
    A[Developer] --> B[Requests task]
    B --> C[AI executes]
    C --> D[Developer approves/adjusts]
    D --> B
Loading

Characteristics:

  • More interaction with AI
  • High degree of granularity
  • You see each diff
  • Problem: You end up "watching" AI program

2. Development with Long-Running Agents

flowchart TB
    subgraph Planning
        A[PRD/Product] --> B[Specifications]
        B --> C[Action Plan]
        C --> D[Features]
        D --> E[Tasks]
    end

    subgraph Execution
        E --> F[Agent 1]
        E --> G[Agent 2]
        E --> H[Agent 3]
        F --> I[Code Review]
        G --> I
        H --> I
    end

    subgraph Feedback
        I --> J{Approved?}
        J -->|Yes| K[Merge]
        J -->|No| C
    end
Loading

Characteristics:

  • Agents execute in parallel
  • You're not an "AI babysitter"
  • Review by checkpoints
  • Planning is critical

The Importance of Planning

If planning fails, everything else fails with it. If you wanted a picture of a horse but wrote "camel", AI will paint a camel.


6. Pillars of AI-Assisted Development

mindmap
  root((Workflow))
    Tools
      IDEs
      CLIs
      Remote Environments
    Documentation
      PRD
      Design Docs
      Specifications
    Commands and Prompts
      Commands
      Prompts
      Agents
    Skills
      Guidelines
      Scripts
      Hooks
    Memory
      Short Term
      Medium Term
      Long Term
    MCPs
      Integrations
      Context
      Tools
    Models
      Cost
      Latency
      Capability
    Environment
      Local
      Remote
      Cloud
Loading

Pillar Details

Tools

  • IDEs: Cursor, VS Code, Windsurf, JetBrains
  • CLIs: Claude Code, GitHub Copilot CLI
  • Function: Initialize projects, chat with AI, validate, debug

Documentation and Artifacts

  • Product: PRD, business requirements
  • Technical: Design docs, specifications
  • Generated: By AI for humans and for AI

It has never been so easy to generate documentation, but it has also never been so fast to develop problematic software.

Commands vs Prompts

Aspect Prompts Commands
Objective Avoid repetition, conversation Execute actions
Effect Informational Triggers tools/agents
Example "How to implement X?" /commit, /test, /deploy

Skills

Skills provide on demand capabilities for AI:

  • Development guidelines
  • Validation scripts
  • Hooks (before/after actions)
  • Reference material

Skills are not automatically loaded by the agent and there is no guarantee of use. The skill description defines its success - it works like SEO for AI.

Memory

graph TB
    subgraph "Short Term"
        A[Current session]
    end

    subgraph "Medium Term"
        B[Recent information]
        C[Ongoing tasks]
    end

    subgraph "Long Term"
        D[Knowledge base]
        E[Preferences]
        F[History]
    end

    A --> B
    B --> D
Loading

Models

Fundamental trade-off:

  • Intelligent model: Expensive, slow, high quality
  • Fast model: Cheap, fast, lower quality

When to use each:

  • Complex planning/evaluation → Premium model
  • Daily tasks → Medium model
  • Repetitive/simple tasks → Cheap model

MCPs (Model Context Protocol)

Protocol to connect AI to the external world:

  • Database access
  • API integration
  • Access to Figma, Notion, etc.

7. Development Tools

IDE Comparison

Tool Type Highlights
Cursor IDE Fast proprietary model, native integration
VS Code + Copilot IDE Microsoft ecosystem, widely adopted
Windsurf IDE Cursor competitor
JetBrains AI IDE For JetBrains users
Claude Code CLI Long-running agents, skills

Claude Code Deep Dive

Anthropic's CLI tool with:

  • Agent execution
  • Skills system
  • Configurable hooks
  • Long-running workflows

CLI vs IDE

IDEs:

  • Visual interface
  • More accessible for beginners
  • Granular interaction

CLIs:

  • More power and flexibility
  • Automated workflows
  • Background execution

It seems like going back 20 years, using slash commands, but that's exactly what's working best currently.


8. Claude Code and Ecosystem

Architecture

graph TB
    subgraph "Claude Code"
        A[CLI] --> B[Agents]
        B --> C[Skills]
        B --> D[Commands]
        B --> E[MCPs]
    end

    subgraph "Integrations"
        E --> F[Notion]
        E --> G[Figma]
        E --> H[GitHub]
        E --> I[Databases]
    end
Loading

Difference Between Concepts

Concept What it is Example
Plugins Tool extensions MCP servers
Skills On demand capabilities skill-go-development
Commands Executable actions /commit, /test
Agents Autonomous executors Code review agent

Modern Workflow with Claude Code

  1. Planning: PRD → Specifications → Tasks
  2. Execution: Agents execute in parallel
  3. Validation: Automatic code review
  4. Iteration: Planning feedback

9. AI Agents

What Is (and What Is NOT) an Agent

Is an agent:

  • LLM as decision core
  • Decisions made by AI
  • Known and delimited environment
  • Available tools

Is NOT an agent:

  • Simple chatbot
  • ChatGPT prompt
  • Any AI usage

Essential Characteristics

graph LR
    A[Input] --> B[LLM]
    B --> C{Decision}
    C --> D[Tool 1]
    C --> E[Tool 2]
    C --> F[Tool 3]
    D --> G[Observation]
    E --> G
    F --> G
    G --> B
    B --> H[Output]
Loading

Context Window

Critical problem:

  • Window has token limit
  • When full, old information is lost
  • Summarization degrades quality
sequenceDiagram
    participant U as User
    participant A as Agent
    participant J as Context Window

    U->>A: Instruction
    A->>J: Stores
    Note over J: Filling up...
    J->>J: Summarizes (loses info)
    A->>U: Degraded response
Loading

Solution: Multi-Agents

graph TB
    A[Main Agent] --> B[Sub-agent 1]
    A --> C[Sub-agent 2]
    A --> D[Sub-agent 3]

    B --> E[Window 1]
    C --> F[Window 2]
    D --> G[Window 3]

    B --> H[Result]
    C --> H
    D --> H
    H --> A
Loading

Advantages:

  • Each agent has separate window
  • No context contamination
  • Specialized agents

Multi-Agent Orchestration

flowchart TB
    subgraph Orchestrator
        O[Orchestrator Agent]
    end

    subgraph "Sub-agents"
        A1[Planning Agent]
        A2[Coding Agent]
        A3[Test Agent]
        A4[Code Review Agent]
    end

    O --> A1
    O --> A2
    O --> A3
    O --> A4

    A1 --> R1[Plan]
    A2 --> R2[Code]
    A3 --> R3[Tests]
    A4 --> R4[Review]

    R1 --> O
    R2 --> O
    R3 --> O
    R4 --> O
Loading

10. Agent Frameworks

LangChain

  • Framework for building LLM applications
  • Components for prompts, chains, memory
  • Integrations with various providers

LangGraph

  • LangChain extension for agents
  • State management
  • Complex flows with decisions
  • Built-in memory management

Google ADK (Agent Development Kit)

  • Google's framework for agents
  • Integration with Gemini
  • Proprietary protocols

When to Use Each

Framework Use Case
LangChain Simple chains, RAG
LangGraph Complex agents, multi-step
Google ADK Google ecosystem
Crew AI Agent teams

It's essential to understand the framework concepts. Otherwise, the result will be disorganized, low-quality code.


11. MCP - Model Context Protocol

What It Is

Protocol created by Anthropic to connect LLMs to the external world.

graph LR
    A[LLM] --> B[MCP Client]
    B --> C[MCP Server 1<br>Notion]
    B --> D[MCP Server 2<br>Figma]
    B --> E[MCP Server 3<br>Database]
Loading

Components

  • Resources: Data that can be read
  • Tools: Actions that can be executed
  • Prompts: Reusable templates

Popular MCPs

MCP Functionality
Notion Access to documents and databases
Figma Design reading
Context7 Consolidated documentation
Firecrawl Web scraping
GitHub Repository operations

Usage Example

# Accessing Notion via MCP
from mcp import NotionClient

client = NotionClient()
doc = client.get_page("spec-rate-limiter")
# AI can use doc content as context

12. The Future of Development

The New Type of Software

The type of software developed today will be completely different from what we will develop in the near future.

Changes:

  • Applications with AI at the core
  • Interfaces via agents
  • Communication protocols between agents
  • User increasingly distant from the "physical" interface

Essential Skills

mindmap
  root((AI Dev))
    Fundamentals
      Architecture
      Design Patterns
      Languages
    Prompt Engineering
      Techniques
      Optimization
      Evaluation
    Tools
      IDEs
      CLIs
      MCPs
    Agents
      Frameworks
      Orchestration
      Memory
Loading

Skills Roadmap

  1. Solid fundamentals - Architecture, patterns, languages
  2. Prompt Engineering - Techniques, optimization
  3. Tools - Master IDEs and CLIs
  4. Workflow - Create efficient processes
  5. Agents - Build and orchestrate agents
  6. Complex systems - Multi-agents, memory

The Importance of Fundamentals

Never in the history of technology has it been so important to have solid fundamentals. AI multiplies what you are. Accepting code "that looks ok" without understanding is like signing a contract without reading it.

Why fundamentals matter more now:

  • AI multiplies capabilities (good and bad)
  • You need to validate what AI produces
  • Without fundamentals, you become an "enter key presser"
  • Anyone can do that

13. Claude Code: Complete Guide

Claude Code Architecture

Claude Code is Anthropic's official CLI tool for AI-assisted development. Unlike simple chatbots, it functions as a complete development agent.

graph TB
    subgraph "Claude Code Architecture"
        CLI[CLI Interface] --> Core[Core Engine]
        Core --> Agent[Agent Runtime]
        Agent --> Tools[Built-in Tools]
        Agent --> Skills[Skills System]
        Agent --> MCPs[MCP Connections]

        Tools --> FS[File System]
        Tools --> Git[Git Operations]
        Tools --> Shell[Shell Commands]

        Skills --> PS[Project Skills]
        Skills --> US[User Skills]
        Skills --> ES[Enterprise Skills]

        MCPs --> External[External Services]
    end
Loading

CLI vs IDE: When to Use Each

Aspect CLI (Claude Code) IDE (Cursor, Copilot)
Control Total via commands Visual interface
Automation Excellent Limited
Long-running agents Native support Limited
Learning curve Higher Lower
Flexibility Maximum Moderate
Headless/CI Supported Not supported
Ideal for Power users, automation Interactive development

Initial Configuration

CLAUDE.md - The Heart of the Project

The CLAUDE.md file in the project root is the main document that AI consults to understand context:

# Project: My Application

## Overview
Web application for task management.

## Tech Stack
- Backend: Node.js + Express
- Frontend: React + TypeScript
- Database: PostgreSQL
- Cache: Redis

## Conventions
- Commits in Portuguese
- Branch naming: feature/, fix/, hotfix/
- Tests required for new features

## Important Commands
- `npm run dev` - Development environment
- `npm run test` - Run tests
- `npm run build` - Production build

## Project Structure
src/
├── api/        # REST endpoints
├── services/   # Business logic
├── models/     # Data models
└── utils/      # Utilities

settings.json - Claude Code Configurations

{
  "model": "claude-sonnet-4-20250514",
  "maxTokens": 8192,
  "permissions": {
    "allowedTools": ["Read", "Write", "Edit", "Bash", "Glob", "Grep"],
    "allowedCommands": ["npm", "git", "docker"]
  },
  "hooks": {
    "preCommit": ".claude/hooks/pre-commit/validate.sh"
  }
}

.claude/ Directory Structure

.claude/
├── settings.json        # Local configurations
├── commands.yaml        # Custom commands
├── keybindings.json     # Keyboard shortcuts
├── hooks/               # Automation scripts
│   ├── pre-commit/
│   └── post-deploy/
├── skills/              # Project skills
│   └── custom-skill/
│       └── SKILL.md
├── mcp/                 # MCP configuration
│   └── servers.json
└── agents/              # Agent configuration
    └── orchestrator.yaml

Essential Commands

Command Description
claude Start interactive session
claude "task" Execute task directly
claude --resume Resume last session
claude --print Non-interactive mode (output only)
claude config Manage configurations
claude mcp Manage MCP servers
/help Help within session
/clear Clear session context
/compact Compact history

14. Skills: Creating Custom Capabilities

What Are Skills

Skills are specialized capabilities that can be loaded on demand by Claude Code. Unlike fixed prompts, skills are activated contextually based on description and tags.

graph LR
    U[User] --> |"create tests"| CC[Claude Code]
    CC --> |Search skill| SM[Skills Manager]
    SM --> |Match| S1[skill-testing]
    S1 --> |Loads| CC
    CC --> |Execute with context| R[Result]
Loading

Skill Structure (SKILL.md)

A skill is defined by a SKILL.md file with YAML frontmatter and Markdown content:

---
name: "go-development"
description: "Skill for Go development with best practices"
tags: ["go", "golang", "backend", "api"]
version: "1.0.0"
author: "your-name"
---

# Go Development Skill

## Context
This skill provides guidelines for Go development following
community best practices.

## Code Conventions

### Project Structure

cmd/ ├── api/ │ └── main.go internal/ ├── handlers/ ├── services/ └── repository/ pkg/ └── shared/


### Patterns
- Use `error` as last return
- Prefer composition over inheritance
- Document public functions

## Available Commands
- `go run cmd/api/main.go` - Run the API
- `go test ./...` - Run all tests

YAML Frontmatter

Field Required Description
name Yes Unique skill identifier
description Yes Clear description (crucial for matching)
tags No Tags for search and categorization
version No Semantic version
author No Skill author
tools No Tools the skill can use
triggers No Patterns that activate the skill

Skills Hierarchy (Precedence)

graph TD
    E[Enterprise Skills] --> |Overrides| P[Personal Skills]
    P --> |Overrides| Pr[Project Skills]
    Pr --> |Overrides| D[Default Skills]

    style E fill:#f44336,color:#fff
    style P fill:#ff9800,color:#fff
    style Pr fill:#4caf50,color:#fff
    style D fill:#9e9e9e,color:#fff
Loading
Level Location Scope
Enterprise Corporate server Entire organization
Personal ~/.claude/skills/ User
Project .claude/skills/ Project
Default Built-in Global

Skills-Creator for Automation

Use the /skill-creator skill to automatically generate new skills:

/skill-creator

I want to create a skill for REST API development
with FastAPI that includes:
- Standard project structure
- JWT authentication patterns
- Tests with pytest
- OpenAPI documentation

Best Practices for Descriptions (SEO for AI)

The skill description works like SEO - the better it is, the higher the chance of matching:

Bad:

description: "Python skill"

Good:

description: "Skill for Python development focused on REST APIs,
including FastAPI, Flask, testing with pytest, type hints and automatic
documentation. Use to create endpoints, data validation and authentication."

Tips:

  • Include synonyms and variations
  • List specific use cases
  • Mention related technologies
  • Use action verbs (create, test, validate)

15. MCPs: Model Context Protocol in Depth

Client/Server Architecture

MCP follows a client-server architecture where Claude Code acts as client and connects to multiple servers:

graph TB
    subgraph "Claude Code (Client)"
        CC[MCP Client]
    end

    subgraph "MCP Servers"
        S1[Notion Server]
        S2[GitHub Server]
        S3[Database Server]
        S4[Figma Server]
    end

    subgraph "External Services"
        E1[Notion API]
        E2[GitHub API]
        E3[PostgreSQL]
        E4[Figma API]
    end

    CC <--> |JSON-RPC| S1
    CC <--> |JSON-RPC| S2
    CC <--> |JSON-RPC| S3
    CC <--> |JSON-RPC| S4

    S1 <--> E1
    S2 <--> E2
    S3 <--> E3
    S4 <--> E4
Loading

MCP Components

Resources

Data that can be read by the model:

  • Documents
  • Files
  • Database records
  • Application states

Tools

Actions that can be executed:

  • Create/update records
  • Execute queries
  • Trigger webhooks
  • Manipulate files

Prompts (Templates)

Reusable templates for common interactions:

  • Standardized queries
  • Predefined workflows
  • Response formats

Configuration (servers.json)

{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/mcp-server"],
      "env": {
        "NOTION_API_KEY": "${NOTION_API_KEY}"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@github/mcp-server"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "${DATABASE_URL}"
      }
    },
    "context7": {
      "command": "npx",
      "args": ["-y", "@context7/mcp-server"]
    }
  }
}

Official vs Community MCPs

Type Characteristics Examples
Official Maintained by companies, guaranteed support GitHub MCP, Notion MCP, Linear MCP
Anthropic Reference implementation Filesystem, PostgreSQL, Brave Search
Community Variety, variable quality Various on awesome-mcp-servers

MCP Request Lifecycle

sequenceDiagram
    participant U as User
    participant CC as Claude Code
    participant MCP as MCP Server
    participant API as External API

    U->>CC: "fetch my open PRs"
    CC->>CC: Identifies need for GitHub MCP
    CC->>MCP: tools/call: list_pull_requests
    MCP->>API: GET /repos/{owner}/{repo}/pulls
    API-->>MCP: Response JSON
    MCP-->>CC: Formatted result
    CC-->>U: "You have 3 open PRs..."
Loading

16. Recommended Plugins by Category

Product and Specifications

Plugin Category Description Main Features
Notion MCP PRD/Docs Access to Notion documents and databases Page reading, search, databases
Linear MCP Management Issues, projects and roadmap CRUD issues, cycles, projects
Jira MCP (Atlassian) Management Tickets, sprints and boards Issues, sprints, reports
Confluence MCP Documentation Pages and spaces Doc reading and search

Architecture and Design

Plugin Category Description Main Features
Mermaid MCP Diagrams 22+ types of diagrams Flowcharts, sequence, ER, C4
Figma MCP Design Access to designs and components Frame reading, export
Context7 Documentation Consolidated documentation Framework docs search
Excalidraw MCP Diagrams Hand-drawn diagrams Creation and editing

Code and Quality

Plugin Category Description Main Features
GitHub MCP Versioning PRs, issues, repos Complete CRUD, code review
Git MCP Versioning Local git operations Commits, branches, merge
ESLint MCP Linting JavaScript static analysis Lint, automatic fix
Playwright MCP Testing Browser automation E2E tests, screenshots
SonarQube MCP Quality Code analysis Metrics, vulnerabilities

Data and Infrastructure

Plugin Category Description Main Features
Postgres MCP Database PostgreSQL queries and schemas CRUD, migrations, schemas
Firebase MCP Backend Complete Firebase Auth, Firestore, Functions
AWS MCP Cloud AWS services S3, Lambda, DynamoDB
Pinecone MCP Vector DB Vector search Embeddings, similarity search
Redis MCP Cache Cache and pub/sub GET/SET, pub/sub, streams
MongoDB MCP Database MongoDB operations CRUD, aggregations

Communication and Collaboration

Plugin Category Description Main Features
Slack MCP Communication Slack integration Messages, channels
Discord MCP Communication Discord integration Messages, webhooks
Gmail MCP Email Email operations Reading, sending, search

17. Sub-agents and Orchestration

The Context Window Problem

The context window is the token limit that the model can process simultaneously. When exceeded:

graph TD
    subgraph "Context Window"
        A[Initial Instructions] --> B[Project Context]
        B --> C[Conversation History]
        C --> D[Current Task]
        D --> E[⚠️ LIMIT]
    end

    E --> F[Summarization]
    F --> G[Information Loss]
    G --> H[Quality Degradation]
Loading

Consequences:

  • Old information is summarized or discarded
  • Initial instructions may be "forgotten"
  • Response quality degrades progressively
  • Complex tasks become impossible

Multi-Agent Architecture

The solution is to divide work among multiple agents, each with its own context window:

graph TB
    subgraph "Orchestrator"
        O[Main Agent]
        O_CTX[Context: Overview]
    end

    subgraph "Specialized Agents"
        A1[Backend Agent]
        A1_CTX[Context: APIs, DB]

        A2[Frontend Agent]
        A2_CTX[Context: UI, UX]

        A3[Test Agent]
        A3_CTX[Context: Specs, Fixtures]

        A4[DevOps Agent]
        A4_CTX[Context: Infra, CI/CD]
    end

    O --> |Delegates| A1
    O --> |Delegates| A2
    O --> |Delegates| A3
    O --> |Delegates| A4

    A1 --> |Result| O
    A2 --> |Result| O
    A3 --> |Result| O
    A4 --> |Result| O
Loading

The Main Orchestrator

The orchestrator is responsible for:

  1. Receiving the task from user
  2. Decomposing into subtasks
  3. Assigning to specialized agents
  4. Aggregating results
  5. Validating final quality
# orchestrator.yaml
name: "main-orchestrator"
description: "Project main orchestrator"

agents:
  - name: "backend-dev"
    skills: ["api-development", "database"]
    triggers: ["endpoint", "api", "database", "query"]

  - name: "frontend-dev"
    skills: ["react", "typescript", "css"]
    triggers: ["component", "ui", "style", "page"]

  - name: "test-writer"
    skills: ["testing", "playwright"]
    triggers: ["test", "spec", "e2e", "coverage"]

  - name: "code-reviewer"
    skills: ["code-review"]
    triggers: ["review", "pr", "quality"]

workflow:
  - receive_task
  - decompose_tasks
  - assign_to_agents
  - monitor_progress
  - aggregate_results
  - validate_quality

Agent Specialization

Each agent should have a well-defined scope:

Agent Responsibility Tools
Planner Analysis and decomposition Read, Glob, Grep
Developer Implementation Read, Write, Edit, Bash
Tester Test creation Read, Write, Bash (test)
Reviewer Code review Read, Grep, GitHub MCP
DevOps Deploy and infra Bash, AWS/Firebase MCP

Communication Between Agents

sequenceDiagram
    participant U as User
    participant O as Orchestrator
    participant D as Dev Agent
    participant T as Test Agent
    participant R as Review Agent

    U->>O: "Implement feature X"
    O->>O: Decompose task
    O->>D: "Create endpoint /api/x"
    D->>D: Implements code
    D-->>O: Code ready
    O->>T: "Create tests for /api/x"
    T->>T: Writes tests
    T-->>O: Tests ready
    O->>R: "Review implementation"
    R->>R: Analyzes code
    R-->>O: Feedback
    O-->>U: "Feature X implemented and tested"
Loading

Orchestration Patterns

Sequential

Agents execute one after another:

Plan → Develop → Test → Review → Deploy

Parallel

Agents execute simultaneously:

         ┌─ Frontend ─┐
Input ───┼─ Backend  ─┼─── Merge
         └─ Tests    ─┘

Hierarchical

Agents can spawn sub-agents:

Orchestrator
    └─ Dev Lead
        ├─ Junior Dev 1
        └─ Junior Dev 2

18. Hooks: Workflow Automation

What Are Hooks

Hooks are scripts that execute automatically in response to specific events in Claude Code. They allow automation of validations, formatting, and integrations.

graph LR
    E[Event] --> H{Hook Exists?}
    H -->|Yes| P[Pre-hook]
    P --> A[Main Action]
    A --> Po[Post-hook]
    Po --> R[Result]
    H -->|No| A
Loading

Types of Hooks

Type Moment Common Use
Pre-commit Before commit Lint, format, unit tests
Post-commit After commit Notifications, sync
Pre-push Before push Complete tests, build
Post-push After push Deploy staging, webhooks
Pre-deploy Before deploy Validation, backup
Post-deploy After deploy Health check, notifications

Script Location

.claude/
└── hooks/
    ├── pre-commit/
    │   ├── lint.sh
    │   ├── format.sh
    │   └── test-unit.sh
    ├── post-commit/
    │   └── notify.sh
    ├── pre-push/
    │   └── test-full.sh
    ├── pre-deploy/
    │   └── validate.sh
    └── post-deploy/
        ├── health-check.sh
        └── notify-slack.sh

Practical Examples

Pre-commit: Code Validation

#!/bin/bash
# .claude/hooks/pre-commit/validate.sh

set -e

echo "🔍 Running pre-commit validations..."

# Lint
echo "→ ESLint..."
npm run lint

# Format check
echo "→ Prettier..."
npm run format:check

# Type check
echo "→ TypeScript..."
npm run typecheck

# Unit tests
echo "→ Unit tests..."
npm run test:unit

echo "✅ All validations passed!"

Pre-commit: Python with Ruff

#!/bin/bash
# .claude/hooks/pre-commit/python-validate.sh

set -e

echo "🐍 Running Python validations..."

# Ruff linting
echo "→ Ruff lint..."
ruff check .

# Ruff formatting
echo "→ Ruff format..."
ruff format --check .

# Type checking
echo "→ MyPy..."
mypy src/

# Tests
echo "→ Pytest..."
pytest tests/ -q

echo "✅ Python validations passed!"

Post-deploy: Health Check

#!/bin/bash
# .claude/hooks/post-deploy/health-check.sh

set -e

API_URL="${DEPLOY_URL:-https://api.myapp.com}"
MAX_RETRIES=5
RETRY_DELAY=10

echo "🏥 Running health check on $API_URL..."

for i in $(seq 1 $MAX_RETRIES); do
    HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "$API_URL/health")

    if [ "$HTTP_CODE" = "200" ]; then
        echo "✅ Health check passed! (HTTP $HTTP_CODE)"
        exit 0
    fi

    echo "⏳ Attempt $i/$MAX_RETRIES failed (HTTP $HTTP_CODE). Retrying in ${RETRY_DELAY}s..."
    sleep $RETRY_DELAY
done

echo "❌ Health check failed after $MAX_RETRIES attempts"
exit 1

Post-deploy: Slack Notification

#!/bin/bash
# .claude/hooks/post-deploy/notify-slack.sh

SLACK_WEBHOOK="${SLACK_WEBHOOK_URL}"
DEPLOY_ENV="${DEPLOY_ENV:-production}"
VERSION="${VERSION:-unknown}"
DEPLOYER="${DEPLOYER:-CI/CD}"

curl -X POST "$SLACK_WEBHOOK" \
  -H 'Content-Type: application/json' \
  -d "{
    \"blocks\": [
      {
        \"type\": \"section\",
        \"text\": {
          \"type\": \"mrkdwn\",
          \"text\": \"🚀 *Deploy Completed*\n• Environment: \`$DEPLOY_ENV\`\n• Version: \`$VERSION\`\n• Deployed by: $DEPLOYER\"
        }
      }
    ]
  }"

echo "📨 Slack notification sent!"

Best Practices for Hooks

  1. Always use set -e - Fail fast on error
  2. Keep hooks fast - Pre-commit < 30s ideally
  3. Use cache - Avoid redundant work
  4. Provide clear feedback - Emojis and informative messages
  5. Be idempotent - Multiple executions = same result
  6. Document dependencies - What needs to be installed

19. Complete Professional Workflow

This workflow represents a complete software development cycle with AI, from conception to deploy.

┌─────────────────────────────────────────────────────────────┐
│ PHASE 1: DISCOVERY                                          │
│ - Define problem/opportunity                                │
│ - Collect initial requirements                              │
│ - Identify stakeholders                                     │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 2: PRD (Product Requirements Document)                │
│ - Product vision                                            │
│ - User stories                                              │
│ - Acceptance criteria                                       │
│ - Success metrics                                           │
│ Plugins: Notion MCP, Linear MCP                             │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 3: ADR (Architecture Decision Records)                │
│ - Context and problem                                       │
│ - Options considered                                        │
│ - Decision and justification                                │
│ - Consequences                                              │
│ Plugins: Context7, Mermaid MCP                              │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 4: TECHNICAL SPECIFICATION                            │
│ - System architecture                                       │
│ - APIs and contracts                                        │
│ - Data model                                                │
│ - Diagrams (C4, sequence, etc)                              │
│ Plugins: Mermaid MCP, Context7, Figma MCP                   │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 5: TASK PLANNING                                      │
│ - Break into features                                       │
│ - Break into tasks                                          │
│ - Estimates and dependencies                                │
│ - Assignment to agents                                      │
│ Plugins: Linear MCP, Jira MCP, GitHub MCP                   │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 6: DEVELOPMENT (Parallel)                             │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐             │
│ │ Dev Agent   │ │ Dev Agent   │ │ Test Agent  │             │
│ │ Feature A   │ │ Feature B   │ │ Tests       │             │
│ └─────────────┘ └─────────────┘ └─────────────┘             │
│ Plugins: GitHub MCP, ESLint MCP, Playwright MCP             │
│ Hooks: pre-commit (lint, format, test)                      │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 7: CODE REVIEW                                        │
│ - Automatic review by agent                                 │
│ - Quality checklist                                         │
│ - Structured feedback                                       │
│ Skill: /code-review                                         │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 8: VALIDATION AND TESTING                             │
│ - Unit tests                                                │
│ - Integration tests                                         │
│ - E2E tests                                                 │
│ - Performance tests                                         │
│ Plugins: Playwright MCP, Jest/Vitest                        │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 9: DEPLOY                                             │
│ - Build and validation                                      │
│ - Deploy staging                                            │
│ - Smoke tests                                               │
│ - Deploy production                                         │
│ Plugins: Firebase MCP, AWS MCP                              │
│ Hooks: pre-deploy, post-deploy                              │
└─────────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────────┐
│ PHASE 10: DOCUMENTATION AND RETROSPECTIVE                   │
│ - Update CLAUDE.md                                          │
│ - Document decisions                                        │
│ - Lessons learned                                           │
│ - Metrics and KPIs                                          │
└─────────────────────────────────────────────────────────────┘

Phase Details

Phase 1: Discovery

Objective: Deeply understand the problem before any code.

Inputs:

  • User feedback
  • Business metrics
  • Competitor analysis

Outputs:

  • Clear problem statement
  • List of stakeholders
  • Initial scope defined

Phase 2: PRD

Basic PRD template:

# PRD: [Feature Name]

## Overview
[2-3 paragraph description]

## Problem
[What problem are we solving?]

## User Stories

### US-001: [Title]
**As a** [user type]
**I want** [action]
**So that** [benefit]

**Acceptance Criteria:**
- [ ] AC-001: [Criterion]
- [ ] AC-002: [Criterion]

## Success Metrics
- [ ] Metric 1: [Definition and goal]
- [ ] Metric 2: [Definition and goal]

## Out of Scope
- Item 1
- Item 2

Phase 3: ADR

ADR template:

# ADR-001: [Decision Title]

## Status
[Proposed | Accepted | Deprecated | Superseded]

## Context
[What is the problem or situation?]

## Decision
[What was decided?]

## Options Considered

### Option 1: [Name]
- ✅ Advantage 1
- ✅ Advantage 2
- ❌ Disadvantage 1

### Option 2: [Name]
- ✅ Advantage 1
- ❌ Disadvantage 1
- ❌ Disadvantage 2

## Consequences
- Positive: [List]
- Negative: [List]
- Risks: [List]

Phase 6: Parallel Development

Agent configuration for parallel development:

# .claude/agents/dev-team.yaml
team:
  orchestrator:
    model: "claude-sonnet"
    role: "coordinator"

  agents:
    - name: "feature-a-dev"
      model: "claude-sonnet"
      focus: "backend API endpoints"
      branch: "feature/api-endpoints"

    - name: "feature-b-dev"
      model: "claude-sonnet"
      focus: "frontend components"
      branch: "feature/ui-components"

    - name: "test-writer"
      model: "claude-haiku"
      focus: "test coverage"
      branch: "feature/tests"

workflow:
  parallel:
    - feature-a-dev
    - feature-b-dev
    - test-writer
  then:
    - code-review
    - merge

20. Ideal Project Configuration

Recommended Directory Structure

project/
├── CLAUDE.md                    # Main documentation for AI
├── README.md                    # Documentation for humans
├── .claude/
│   ├── settings.json            # Claude Code configurations
│   ├── commands.yaml            # Custom commands
│   ├── keybindings.json         # Keyboard shortcuts
│   ├── hooks/
│   │   ├── pre-commit/
│   │   │   ├── lint.sh
│   │   │   ├── format.sh
│   │   │   └── test-unit.sh
│   │   ├── pre-push/
│   │   │   └── test-full.sh
│   │   └── post-deploy/
│   │       ├── health-check.sh
│   │       └── notify.sh
│   ├── skills/
│   │   ├── project-conventions/
│   │   │   └── SKILL.md
│   │   └── custom-commands/
│   │       └── SKILL.md
│   ├── mcp/
│   │   └── servers.json
│   └── agents/
│       ├── orchestrator.yaml
│       └── specialists.yaml
├── docs/
│   ├── prd/                     # Product Requirements
│   │   └── feature-x.md
│   ├── adr/                     # Architecture Decisions
│   │   ├── 001-database.md
│   │   └── 002-auth.md
│   ├── specs/                   # Technical Specifications
│   │   └── api-spec.md
│   └── architecture/            # Diagrams and Architecture
│       ├── c4-context.md
│       └── sequence-diagrams.md
├── src/                         # Source code
├── tests/                       # Test files
├── scripts/                     # Utility scripts
└── .github/
    └── workflows/               # CI/CD pipelines

Complete CLAUDE.md

# Project: [Project Name]

## Overview
[Concise project description in 2-3 lines]

## Quick Start
\`\`\`bash
# Install dependencies
npm install

# Run development environment
npm run dev

# Run tests
npm run test
\`\`\`

## Tech Stack
- **Runtime**: Node.js 20 LTS
- **Framework**: Express.js 4.x
- **Database**: PostgreSQL 15 + Prisma ORM
- **Cache**: Redis 7
- **Frontend**: React 18 + TypeScript 5
- **Testing**: Jest + Playwright

## Architecture

### Folder Structure
\`\`\`
src/
├── api/           # Controllers and routes
├── services/      # Business logic
├── repositories/  # Data access
├── models/        # Types and interfaces
├── middlewares/   # Express middlewares
├── utils/         # Utilities
└── config/        # Configurations
\`\`\`

### Patterns
- Repository Pattern for data access
- Service Layer for business logic
- DTOs for data transfer
- Centralized error handling

## Code Conventions

### Naming
- Files: kebab-case (`user-service.ts`)
- Classes: PascalCase (`UserService`)
- Functions/variables: camelCase (`getUserById`)
- Constants: SCREAMING_SNAKE_CASE (`MAX_RETRIES`)

### Git
- Branch: `feature/`, `fix/`, `hotfix/`, `refactor/`
- Commits: Conventional Commits
- PRs: Always with description and checklist

### Testing
- Unit: `*.spec.ts` in same directory
- Integration: `tests/integration/`
- E2E: `tests/e2e/`
- Minimum coverage: 80%

## Available Commands

| Command | Description |
|---------|-------------|
| `npm run dev` | Development with hot-reload |
| `npm run build` | Production build |
| `npm run test` | All tests |
| `npm run test:unit` | Unit only |
| `npm run test:e2e` | E2E only |
| `npm run lint` | Check linting |
| `npm run lint:fix` | Fix linting |
| `npm run db:migrate` | Run migrations |
| `npm run db:seed` | Seed database |

## Environment Variables
\`\`\`env
DATABASE_URL=postgresql://...
REDIS_URL=redis://...
JWT_SECRET=...
API_PORT=3000
\`\`\`

## Useful Links
- PRD: `docs/prd/`
- ADRs: `docs/adr/`
- API Docs: http://localhost:3000/docs
- Storybook: http://localhost:6006

## Contacts
- Tech Lead: name@email.com
- Product Owner: name@email.com

Complete settings.json

{
  "model": "claude-sonnet-4-20250514",
  "fallbackModel": "claude-haiku-3-5-20241022",
  "maxTokens": 8192,
  "temperature": 0.7,

  "permissions": {
    "allowedTools": [
      "Read", "Write", "Edit",
      "Bash", "Glob", "Grep",
      "WebFetch", "WebSearch"
    ],
    "allowedCommands": [
      "npm", "npx", "node",
      "git", "gh",
      "docker", "docker-compose",
      "psql", "redis-cli"
    ],
    "deniedPaths": [
      ".env",
      ".env.*",
      "**/secrets/**",
      "**/credentials/**"
    ]
  },

  "hooks": {
    "preCommit": ".claude/hooks/pre-commit/",
    "prePush": ".claude/hooks/pre-push/",
    "postDeploy": ".claude/hooks/post-deploy/"
  },

  "mcp": {
    "configPath": ".claude/mcp/servers.json"
  },

  "agents": {
    "orchestratorConfig": ".claude/agents/orchestrator.yaml"
  },

  "context": {
    "includeGitStatus": true,
    "includePackageJson": true,
    "maxFileSize": "100KB"
  }
}

21. Marketplaces and Resources

MCP Marketplaces

Marketplace URL Description
MCP Market mcpmarket.com Official marketplace with curation
MCPServers.org mcpservers.org MCP servers directory
LobeHub MCP lobehub.com/mcp Visual marketplace with filters
Awesome MCP Servers GitHub Curated list on GitHub
Cline's MCP Marketplace GitHub MCPs for Cline

Skills Resources

Resource URL Description
Anthropic Skills GitHub Official Anthropic skills
Community Skills GitHub Topics Community skills
Skills Documentation Claude Docs Official documentation

Official Documentation

Resource URL Description
Claude Code Docs code.claude.com/docs Complete documentation
MCP Protocol Spec modelcontextprotocol.io Protocol specification
Claude API Docs docs.anthropic.com API and SDK
Claude Help Center support.anthropic.com Support and FAQs

Communities

Community Platform Focus
Anthropic Discord Discord Official support
Claude Code GitHub GitHub Discussions Issues and discussions
r/ClaudeAI Reddit General community
MCP Community Discord MCP developers

Complementary Tools

Tool Description URL
Claude Dev VS Code extension VS Code Marketplace
MCP Inspector MCP debugging GitHub
Prompt Tester Test prompts Various options
Token Counter Count tokens tiktoken, etc

Templates and Starters

Template Stack URL
Claude Code Starter Node.js GitHub
MCP Server Template TypeScript GitHub
Skills Template Markdown Anthropic
Full Project Template Multi-stack Community

22. Agentic Workflows and Loops

This chapter covers the fundamental patterns for building AI agents that operate in continuous loops, making decisions and taking actions autonomously.

What Are Agentic Loops?

Agentic loops are the core mechanism that enables AI agents to work autonomously. They follow a continuous cycle of perception, reasoning, action, and observation.

flowchart LR
    subgraph "AGENTIC LOOP"
        P[PERCEIVE<br>Input] --> R[REASON<br>Think]
        R --> A[ACT<br>Output]
        A --> O[OBSERVE<br>Feedback]
        O --> P
    end

    style P fill:#e3f2fd
    style R fill:#fff3e0
    style A fill:#e8f5e9
    style O fill:#fce4ec
Loading

Loop continues until: goal reached OR max iterations

Agentic Workflow Patterns

Pattern 1: ReAct Loop

The ReAct (Reasoning + Acting) loop combines thinking, decision-making, and execution with continuous feedback.

Use cases: Search + reasoning tasks, tool-using agents, interactive problem solving.

Pattern 2: Plan-Execute Loop

Creates an initial plan, executes steps sequentially, and replans when needed.

Use cases: Multi-step tasks, project planning, complex workflows.

Pattern 3: Self-Refine Loop

Generates a response, critiques it, and improves iteratively until satisfactory.

Use cases: Content generation, code review, quality-critical outputs.

Pattern 4: Multi-Agent Orchestration

Coordinates multiple specialized agents to work together on complex tasks.

flowchart TB
    subgraph Orchestration["MULTI-AGENT ORCHESTRATION"]
        O[ORCHESTRATOR<br>Coordinator]
        O --> A1[Agent 1<br>Research]
        O --> A2[Agent 2<br>Code]
        O --> A3[Agent 3<br>Review]
        A1 --> C[COMBINE<br>RESULTS]
        A2 --> C
        A3 --> C
        C --> O
    end

    style O fill:#7e57c2,color:#fff
    style A1 fill:#42a5f5,color:#fff
    style A2 fill:#66bb6a,color:#fff
    style A3 fill:#ffa726,color:#fff
    style C fill:#78909c,color:#fff
Loading

Use cases: Large codebases, multi-domain problems, parallel processing.

Recommended Development Workflow

flowchart TB
    subgraph Workflow["AI DEVELOPMENT WORKFLOW"]
        D[1. DEFINITION<br>Define objective & success criteria]
        DC[2. DECOMPOSITION<br>Break into sub-tasks]
        PS[3. PATTERN SELECTION]
        IM[4. IMPLEMENTATION]
        VA[5. VALIDATION]

        D --> DC --> PS --> IM --> VA
    end

    subgraph Patterns["Pattern Options"]
        P1[Simple → ReAct]
        P2[Complex → Plan-Execute]
        P3[Quality → Self-Refine]
        P4[Multi-domain → Multi-Agent]
    end

    PS --- Patterns

    style D fill:#e3f2fd
    style DC fill:#e8f5e9
    style PS fill:#fff3e0
    style IM fill:#f3e5f5
    style VA fill:#fce4ec
Loading

Decision Table: Which Pattern to Use?

Scenario Recommended Pattern Rationale
Search + Reasoning ReAct Loop Combines tool use with reasoning
Multi-step tasks Plan-Execute Structured approach with replanning
Content generation Self-Refine Iterative quality improvement
Data pipeline Prompt Chaining Sequential processing
Multiple perspectives Multi-Agent Specialized expertise per agent
Complex decisions Tree of Thoughts Explores multiple solution paths

Practical Tips for Agentic Loops

  1. Always define max_iterations - Avoids infinite loops and exploding costs
  2. Logging is essential - Record each iteration for debugging
  3. Clear stopping criteria - The agent needs to know when to finish
  4. Fallbacks - Have a plan B if the loop doesn't converge
  5. Cost awareness - Each iteration = more tokens = more cost

23. Feature Development Framework

This chapter presents a systematic 5-step framework for AI-assisted feature development, ensuring quality and consistency in your development process.

Overview

The Feature Development Framework provides a structured approach to building features with AI assistance. It combines multiple prompt engineering techniques with best practices for software development.

flowchart TB
    FR[FEATURE REQUEST] --> U

    subgraph Framework["FEATURE DEVELOPMENT FRAMEWORK"]
        U["1. UNDERSTANDING<br>• What does the user want?<br>• What are the requirements?<br>• What are the constraints?"]
        D["2. DECOMPOSITION<br>• Break into sub-tasks<br>• Identify dependencies<br>• Prioritize by value"]
        E["3. EXPLORATION<br>• Consider alternatives<br>• Evaluate trade-offs<br>• Document decisions"]
        I["4. IMPLEMENTATION<br>• Build with context<br>• Iterate with refinement<br>• Validate with tests"]
        R["5. REVIEW<br>• Code review<br>• Update documentation<br>• Create ADR if needed"]

        U --> D --> E --> I --> R
    end

    style FR fill:#ff7043,color:#fff
    style U fill:#42a5f5,color:#fff
    style D fill:#66bb6a,color:#fff
    style E fill:#ffca28,color:#000
    style I fill:#ab47bc,color:#fff
    style R fill:#26a69a,color:#fff
Loading

Step 1: Understanding

Objective: Clarify requirements before any code is written.

Use Chain of Thought reasoning to deeply understand:

  • What does the user actually want?
  • What are the explicit and implicit requirements?
  • What constraints exist (time, resources, technology)?
  • What are the success criteria?

Prompt approach:

Before implementing this feature, let me think through the requirements step by step:
1. What is the core functionality being requested?
2. What are the edge cases?
3. What are the acceptance criteria?

Step 2: Decomposition

Objective: Break the feature into manageable sub-tasks.

Apply Least-to-Most decomposition:

  • Break large features into smaller tasks
  • Identify dependencies between tasks
  • Prioritize by business value and technical dependencies
  • Create a clear execution order

Example decomposition:

Feature: User Authentication
├── Sub-task 1: Create user model and database schema
├── Sub-task 2: Implement registration endpoint
├── Sub-task 3: Implement login endpoint
├── Sub-task 4: Add JWT token generation
├── Sub-task 5: Create authentication middleware
└── Sub-task 6: Write tests for all endpoints

Step 3: Exploration

Objective: Consider alternatives and evaluate trade-offs.

Use Tree of Thoughts approach:

  • Generate multiple solution approaches
  • Evaluate pros and cons of each
  • Consider scalability, maintainability, performance
  • Document decisions in ADRs when significant

Decision framework:

Option Pros Cons Recommendation
Option A Fast, simple Less scalable Good for MVP
Option B Scalable, robust More complex Good for production
Option C Most flexible Highest complexity Overkill for current needs

Step 4: Implementation

Objective: Build the feature with AI assistance.

Best practices:

  • Provide rich context (CLAUDE.md, relevant files)
  • Use iterative refinement (Self-Refine pattern)
  • Validate with tests as you build
  • Keep commits atomic and well-documented

Implementation cycle:

flowchart LR
    G[Generate<br>Code] --> R[Review<br>Output] --> RF[Refine<br>If Needed]
    RF --> G

    style G fill:#66bb6a,color:#fff
    style R fill:#42a5f5,color:#fff
    style RF fill:#ffca28,color:#000
Loading

Step 5: Review

Objective: Ensure quality and documentation.

Review checklist:

  • Code follows project conventions
  • Tests pass and coverage is adequate
  • Documentation is updated
  • No security vulnerabilities introduced
  • Performance is acceptable
  • ADR created for significant decisions

Decision Table: Pattern Selection by Scenario

Development Scenario Primary Pattern Supporting Techniques
New feature from scratch Plan-Execute Chain of Thought, Decomposition
Bug fix ReAct Self-Refine
Refactoring Self-Refine Tree of Thoughts
Performance optimization Tree of Thoughts Self-Refine
API integration Plan-Execute Prompt Chaining
UI component Self-Refine Multi-Agent (if complex)
Architecture decision Tree of Thoughts Chain of Thought
Code review Self-Refine Chain of Thought

Practical Application

Scenario: Building a rate limiter feature

  1. Understanding: "I need a rate limiter for the API that limits requests per user to 100/minute"

  2. Decomposition:

    • Design rate limiting algorithm (token bucket vs sliding window)
    • Create middleware
    • Add Redis storage for distributed tracking
    • Implement bypass for admin users
    • Add monitoring/metrics
    • Write tests
  3. Exploration: Compare token bucket vs sliding window algorithms, document choice in ADR

  4. Implementation: Build incrementally with tests, using self-refine for code quality

  5. Review: Code review, update API docs, verify performance under load


24. Glossary

Term Definition
LLM Large Language Model - Large scale language model
Token LLM processing unit (approx. 4 characters)
Prompt Instruction/text sent to LLM
Context Window Token limit that the model can process
RAG Retrieval Augmented Generation - Search + Generation
Embedding Vector representation of text
Vector Database Database optimized for vector search
MCP Model Context Protocol - Model context protocol
Agent Autonomous system with LLM as decision core
Multi-Agent System with multiple collaborating agents
Orchestrator Agent that coordinates other agents
Skill Capability/knowledge provided on demand to AI
Hook Action executed before/after events
VibeCoding Development without planning, based on intuition
Chain of Thought Prompt technique requesting step-by-step reasoning
ReAct Reason + Act - Reasoning and action pattern
Guardrails Protections/limits for AI behavior
Prompt Injection Attack that manipulates AI behavior via prompt
Evaluation Process of measuring prompt/response quality
Trade-off Compromise between different aspects (cost x quality)
Sub-agent Specialized agent that executes subtasks delegated by orchestrator
CLAUDE.md Main configuration file providing project context for AI
Frontmatter YAML metadata at beginning of Markdown files
CI/CD Continuous Integration/Continuous Deployment
PRD Product Requirements Document
ADR Architecture Decision Record
E2E End-to-End - End-to-end tests
Headless Execution without graphical interface
Agentic Loop Continuous cycle of perceive-reason-act-observe for autonomous agents
Self-Refine Pattern where AI critiques and improves its own output iteratively
Plan-Execute Pattern where AI creates a plan first, then executes steps

25. References

Official Documentation

Source URL Description
Claude Code Skills Docs code.claude.com/docs/en/skills Official Skills documentation
Claude Help Center - Skills support.claude.com Skills creation tutorial
MCP Protocol Spec modelcontextprotocol.io Official MCP specification
Anthropic MCP Partners anthropic.com/partners/mcp Official MCP partners

MCPs and Plugins

Source URL Description
MCP Market mcpmarket.com MCP marketplace
Awesome MCP Servers GitHub Curated MCP servers list
GitHub MCP Server GitHub Official GitHub MCP
ESLint MCP eslint.org/docs/latest/use/mcp ESLint MCP documentation
Linear MCP linear.app/docs/mcp Linear MCP documentation
Atlassian MCP atlassian.com/blog Atlassian MCP announcement

Community Resources

Source URL Description
Anthropic Skills Repository GitHub Official Anthropic skills
12 Factor Agents GitHub Agent building patterns
MCPServers.org mcpservers.org MCP servers directory
LobeHub MCP lobehub.com Visual MCP marketplace

Frameworks and Tools

Source URL Description
LangChain langchain.com Framework for LLM applications
LangGraph langchain-ai.github.io/langgraph Agent framework
Playwright playwright.dev Browser automation
Pinecone pinecone.io Vector database
graph LR
    subgraph "Primary Sources"
        A[Anthropic Docs]
        B[MCP Spec]
        C[GitHub Repos]
    end

    subgraph "Secondary Sources"
        D[Marketplaces]
        E[Community]
        F[Official Blogs]
    end

    subgraph "This Guide"
        G[AI for Developers Guide]
    end

    A --> G
    B --> G
    C --> G
    D --> G
    E --> G
    F --> G
Loading

References and Resources

Tools Mentioned

Patterns and Documentation


Note: This document was structured to serve as a reference and study guide on AI for developers.

Last updated: February 2026

About

No description, website, or topics provided.

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages