
feat: include makefile skill #144

Open
ssoto wants to merge 6 commits into main from feat/include-common-skills

Conversation


@ssoto ssoto commented Feb 5, 2026

This pull request introduces a new professional "Makefile" skill for the Claude system, providing comprehensive documentation, templates, and scripts to support Makefile creation, analysis, modification, and debugging. The changes include detailed workflow documentation, ready-to-use templates for various Python project types, and a Python analysis script to inspect Makefile structure and dependencies.

Makefile Skill Documentation and Workflow:

  • Adds .claude/skills/makefile/SKILL.md, a thorough guide detailing when and how to use the Makefile skill, including workflows for creating, analyzing, modifying, and debugging Makefiles, as well as best practices, common pitfalls, and output formatting standards.

Professional Makefile Templates:

  • Introduces .claude/skills/makefile/references/python_templates.md, which provides production-ready Makefile templates for Python projects, including basic, advanced (with Docker/CI), Flask/FastAPI web apps, and data science/ML projects. Each template follows best practices and is well-documented for real-world use.

Automation and Analysis Scripts:

  • Adds .claude/skills/makefile/scripts/analyze_makefile.py, a Python script that parses a Makefile to extract variables, targets, dependencies, phony declarations, and recipes, and prints a detailed structural analysis with statistics and error handling.
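The script itself is not reproduced in this PR view. As a rough illustration of the kind of parsing it describes — extracting variables, targets, dependencies, and .PHONY declarations — here is a minimal sketch (function name and structure assumed, not taken from the actual analyze_makefile.py):

```python
import re

def analyze_makefile(text: str) -> dict:
    """Extract variables, targets, and .PHONY declarations from Makefile text.

    Simplified sketch: the real script also reports recipes and statistics.
    """
    variables, targets, phony = {}, {}, []
    for line in text.splitlines():
        # Variable assignment: NAME := value (also =, ?=, +=)
        m = re.match(r"^(\w+)\s*[:?+]?=\s*(.*)$", line)
        if m:
            variables[m.group(1)] = m.group(2).strip()
            continue
        # Target line: name: prerequisites (recipe lines start with a tab,
        # so they never match this pattern)
        m = re.match(r"^([^\t:=][^:=]*):(?!=)\s*(.*)$", line)
        if m:
            name = m.group(1).strip()
            deps = m.group(2).split()
            if name == ".PHONY":
                phony.extend(deps)
            else:
                targets[name] = deps
    return {"variables": variables, "targets": targets, "phony": phony}
```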

Attribution Metadata:

  • Adds .claude/settings.json to record attribution for the skill and note that it was generated with Claude Code.

ssoto added 5 commits February 5, 2026 17:57
…is and validation scripts

- Created Java Makefile templates for basic, Maven-style, Spring Boot, multi-module, and Gradle-based projects.
- Developed Python Makefile templates for basic, advanced (with Docker and CI), Flask/FastAPI web applications, and data science projects.
- Introduced `analyze_makefile.py` to analyze Makefile structure, targets, dependencies, and variables.
- Added `validate_makefile.py` to validate Makefile syntax and detect common issues.
- Updated main Makefile with improved target definitions, installation processes, and help messages.
…ents. This prevents the deleted files from being tracked in future commits, ensuring a cleaner commit history and avoiding confusion about the status of these files.
- Introduced Java Makefile templates for basic, Maven-style, Spring Boot, multi-module, and Gradle-based projects.
- Added Python Makefile templates for basic, advanced (with Docker and CI), Flask/FastAPI web applications, and Data Science/ML projects.
- Implemented scripts for analyzing and validating Makefile syntax, detecting common issues.
- Enhanced the main Makefile with improved structure, color-coded output, and additional targets for building, testing, and cleaning.
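The validation script is likewise not shown in this view. One check such a script would plausibly perform is flagging recipe lines indented with spaces instead of a tab — the cause of make's classic "missing separator" error. A stdlib-only sketch (hypothetical; not the actual validate_makefile.py):

```python
import re

def check_recipe_indentation(text: str) -> list:
    """Return (line_number, line) pairs for recipe lines that start with
    spaces instead of the tab that make requires."""
    problems = []
    in_recipe = False
    for lineno, line in enumerate(text.splitlines(), start=1):
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        if line.startswith("\t"):
            continue  # tab-indented recipe line: correct
        if line.startswith(" "):
            if in_recipe:
                problems.append((lineno, line))  # space-indented recipe: error
            continue
        # Unindented line: a target rule opens a recipe block; a variable
        # assignment or other directive closes it.
        in_recipe = bool(re.match(r"^[^\t:=][^:=]*:(?!=)", line))
    return problems
```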
@ssoto ssoto self-assigned this Feb 5, 2026
@ssoto ssoto requested a review from Copilot February 5, 2026 17:18

sonarqubecloud bot commented Feb 5, 2026


Copilot AI left a comment


Pull request overview

This pull request has a significant discrepancy between its description and actual content. While the PR title and description claim to introduce a "Makefile skill" with documentation, templates, and analysis scripts, the changes actually include:

Changes:

  • Addition of a new src/codeas/core/output_models.py file (not mentioned in PR description) that introduces Pydantic models to replace a type() antipattern in UI components
  • Comprehensive Makefile improvements with better structure, colored output, and additional targets
  • New CLAUDE.md documentation file describing the Codeas project architecture and conventions
  • Makefile skill implementation in .claude/skills/makefile/ including validation/analysis scripts, templates, and best practices documentation
  • Minor additions to .gitignore and new .claude/settings.json attribution file

Reviewed changes

Copilot reviewed 11 out of 12 changed files in this pull request and generated 17 comments.

File-by-file summary:

  • src/codeas/core/output_models.py: Introduces unified Pydantic models (UseCaseOutput, ParsedUseCaseOutput) to replace the dynamic type() antipattern, but the code is not integrated and remains unused
  • Makefile: Significantly refactored with improved structure, colored output, proper phony declarations, and comprehensive targets for the development workflow
  • CLAUDE.md: New comprehensive project documentation for AI assistants detailing architecture, conventions, tech stack, and development patterns
  • .gitignore: Adds the .node/ directory to the ignore list
  • .claude/skills/makefile/scripts/validate_makefile.py: Python script for validating Makefile syntax and detecting common issues like tab/space problems
  • .claude/skills/makefile/scripts/analyze_makefile.py: Python script for analyzing Makefile structure, including targets, dependencies, and variables
  • .claude/skills/makefile/references/python_templates.md: Comprehensive Makefile templates for Python projects (basic, advanced, web apps, ML/data science)
  • .claude/skills/makefile/references/java_templates.md: Professional Makefile templates for Java projects (basic, Maven, Spring Boot, multi-module, Gradle)
  • .claude/skills/makefile/references/best_practices.md: Extensive documentation of Makefile best practices, patterns, common mistakes, and optimization techniques
  • .claude/skills/makefile/SKILL.md: Main skill documentation describing workflows for creating, analyzing, modifying, and debugging Makefiles
  • .claude/settings.json: Attribution metadata indicating generation with Claude Code


Comment on lines +1 to +216
"""
Unified output data models for use case operations.

These models replace the dynamic type() antipattern used throughout
the UI components for creating mock Output objects.
"""

from typing import Any, Dict, List, Optional, Type, TypeVar

from pydantic import BaseModel, Field


class CostInfo(BaseModel):
"""Token cost information."""

input_cost: float = 0.0
output_cost: float = 0.0
total_cost: float = 0.0


class TokenInfo(BaseModel):
"""Token count information."""

input_tokens: int = 0
output_tokens: int = 0
total_tokens: int = 0


class UseCaseOutput(BaseModel):
"""
Unified output model for all use case operations.

This replaces the `type("Output", (), {...})` antipattern used
throughout the UI components.
"""

response: Any = None
cost: CostInfo = Field(default_factory=CostInfo)
tokens: TokenInfo = Field(default_factory=TokenInfo)
messages: List[Dict[str, str]] = Field(default_factory=list)

@classmethod
def from_agent_output(cls, agent_output) -> "UseCaseOutput":
"""
Convert an AgentOutput to UseCaseOutput.

Args:
agent_output: AgentOutput instance from agent.run()

Returns:
UseCaseOutput instance
"""
cost_dict = agent_output.cost
tokens_dict = agent_output.tokens

return cls(
response=agent_output.response,
cost=CostInfo(
input_cost=cost_dict.get("input_cost", 0.0),
output_cost=cost_dict.get("output_cost", 0.0),
total_cost=cost_dict.get("total_cost", 0.0),
),
tokens=TokenInfo(
input_tokens=tokens_dict.get("input_tokens", 0),
output_tokens=tokens_dict.get("output_tokens", 0),
total_tokens=tokens_dict.get("total_tokens", 0),
),
messages=(
agent_output.messages if isinstance(agent_output.messages, list) else []
),
)

@classmethod
def from_cached(cls, cached_data: dict) -> "UseCaseOutput":
"""
Reconstruct from cached JSON data.

Args:
cached_data: Dictionary loaded from JSON cache file

Returns:
UseCaseOutput instance
"""
cost_data = cached_data.get("cost", {})
tokens_data = cached_data.get("tokens", {})

return cls(
response={"content": cached_data.get("content")},
cost=CostInfo(
input_cost=cost_data.get("input_cost", 0.0),
output_cost=cost_data.get("output_cost", 0.0),
total_cost=cost_data.get("total_cost", 0.0),
),
tokens=TokenInfo(
input_tokens=tokens_data.get("input_tokens", 0),
output_tokens=tokens_data.get("output_tokens", 0),
total_tokens=tokens_data.get("total_tokens", 0),
),
messages=cached_data.get("messages", []),
)

def to_cache_dict(self) -> dict:
"""
Convert to dictionary for JSON caching.

Returns:
Dictionary suitable for JSON serialization
"""
content = None
if isinstance(self.response, dict):
content = self.response.get("content", self.response)
else:
content = self.response

return {
"content": content,
"cost": self.cost.model_dump(),
"tokens": self.tokens.model_dump(),
"messages": self.messages,
}


T = TypeVar("T", bound=BaseModel)


class ParsedUseCaseOutput(UseCaseOutput):
"""
Output model for structured response formats.

Used when the agent returns a Pydantic model response.
"""

parsed_content: Optional[Any] = None

@classmethod
def from_agent_output(
cls, agent_output, model_class: Optional[Type[T]] = None
) -> "ParsedUseCaseOutput":
"""
Convert an AgentOutput with structured response to ParsedUseCaseOutput.

Args:
agent_output: AgentOutput instance from agent.run()
model_class: Optional Pydantic model class for parsing

Returns:
ParsedUseCaseOutput instance
"""
base = UseCaseOutput.from_agent_output(agent_output)
instance = cls(
response=base.response,
cost=base.cost,
tokens=base.tokens,
messages=base.messages,
)

# Extract parsed content from structured response
if model_class and hasattr(agent_output.response, "choices"):
try:
instance.parsed_content = agent_output.response.choices[
0
].message.parsed
except (AttributeError, IndexError):
pass

return instance

@classmethod
def from_cached(
cls, cached_data: dict, model_class: Optional[Type[T]] = None
) -> "ParsedUseCaseOutput":
"""
Reconstruct from cached JSON data with optional model validation.

Args:
cached_data: Dictionary loaded from JSON cache file
model_class: Optional Pydantic model class for content validation

Returns:
ParsedUseCaseOutput instance
"""
base = UseCaseOutput.from_cached(cached_data)
instance = cls(
response=base.response,
cost=base.cost,
tokens=base.tokens,
messages=base.messages,
)

if model_class and "content" in cached_data:
try:
instance.parsed_content = model_class.model_validate(
cached_data["content"]
)
except Exception:
pass

return instance

def to_cache_dict(self) -> dict:
"""
Convert to dictionary for JSON caching.

Returns:
Dictionary suitable for JSON serialization
"""
result = super().to_cache_dict()

# If we have parsed content, use that for caching
if self.parsed_content is not None:
if hasattr(self.parsed_content, "model_dump"):
result["content"] = self.parsed_content.model_dump()
else:
result["content"] = self.parsed_content

return result

Copilot AI Feb 5, 2026


The new output_models.py module lacks test coverage. The module introduces critical data transformation logic with methods like from_agent_output(), from_cached(), and to_cache_dict() that handle data conversion between different formats. Given that these conversions involve dictionary access with .get() and optional fields, they should be tested to ensure correct behavior with various input scenarios (missing fields, None values, etc.).

However, the project appears to have no test directory or test files, so this may be consistent with the current project structure. Consider adding tests if this module is used in production.
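To illustrate the kind of test the reviewer is asking for, here is a sketch that pins down the missing-field defaults of a from_cached()-style constructor. It uses a stdlib dataclass stand-in (CachedOutput is hypothetical) that mirrors the defaulting logic visible in the PR diff, since the real UseCaseOutput depends on pydantic:

```python
from dataclasses import dataclass, field

@dataclass
class CachedOutput:
    """Stand-in mirroring the .get()-with-default logic from the PR diff."""
    content: object = None
    total_cost: float = 0.0
    total_tokens: int = 0
    messages: list = field(default_factory=list)

    @classmethod
    def from_cached(cls, cached_data: dict) -> "CachedOutput":
        cost = cached_data.get("cost", {})
        tokens = cached_data.get("tokens", {})
        return cls(
            content=cached_data.get("content"),
            total_cost=cost.get("total_cost", 0.0),
            total_tokens=tokens.get("total_tokens", 0),
            messages=cached_data.get("messages", []),
        )

def test_from_cached_missing_fields():
    # An empty cache entry must fall back to safe defaults, not raise
    out = CachedOutput.from_cached({})
    assert out.content is None
    assert out.total_cost == 0.0
    assert out.total_tokens == 0
    assert out.messages == []

def test_from_cached_partial_cost():
    # Partially populated sub-dicts keep their values; the rest defaults
    out = CachedOutput.from_cached({"cost": {"total_cost": 1.5}})
    assert out.total_cost == 1.5
    assert out.total_tokens == 0
```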

Copilot uses AI. Check for mistakes.
{
  "attribution": {
    "commit": "Co-Authored-By: Claude <noreply@anthropic.com>",
    "pr": "Generated with [Claude Code](https://claude.ai/code)"

Copilot AI Feb 5, 2026


The attribution settings reference "Claude Code" which suggests this was generated using Claude's code generation features. However, if this is intended as a project configuration file, it should be documented in the project README or CLAUDE.md so other contributors understand its purpose and whether it's project-specific or tool-specific configuration.

Suggested change
"pr": "Generated with [Claude Code](https://claude.ai/code)"
"pr": "Generated with assistance from Claude"

@@ -1,45 +1,142 @@
.PHONY: help install install-dev venv clean test lint format style run docs build dist upload

Copilot AI Feb 5, 2026


The .PHONY declaration on line 1 lists docs as a phony target, but there is no docs target defined anywhere in the Makefile. This is misleading and could confuse users who see it in the phony list but can't find or use the target.

Remove docs from the .PHONY declaration or implement the missing target.

Suggested change
.PHONY: help install install-dev venv clean test lint format style run docs build dist upload
.PHONY: help install install-dev venv clean test lint format style run build dist upload

Comment on lines +1 to +216
"""
Unified output data models for use case operations.

These models replace the dynamic type() antipattern used throughout
the UI components for creating mock Output objects.
"""

from typing import Any, Dict, List, Optional, Type, TypeVar

from pydantic import BaseModel, Field


class CostInfo(BaseModel):
"""Token cost information."""

input_cost: float = 0.0
output_cost: float = 0.0
total_cost: float = 0.0


class TokenInfo(BaseModel):
"""Token count information."""

input_tokens: int = 0
output_tokens: int = 0
total_tokens: int = 0


class UseCaseOutput(BaseModel):
"""
Unified output model for all use case operations.

This replaces the `type("Output", (), {...})` antipattern used
throughout the UI components.
"""

response: Any = None
cost: CostInfo = Field(default_factory=CostInfo)
tokens: TokenInfo = Field(default_factory=TokenInfo)
messages: List[Dict[str, str]] = Field(default_factory=list)

@classmethod
def from_agent_output(cls, agent_output) -> "UseCaseOutput":
"""
Convert an AgentOutput to UseCaseOutput.

Args:
agent_output: AgentOutput instance from agent.run()

Returns:
UseCaseOutput instance
"""
cost_dict = agent_output.cost
tokens_dict = agent_output.tokens

return cls(
response=agent_output.response,
cost=CostInfo(
input_cost=cost_dict.get("input_cost", 0.0),
output_cost=cost_dict.get("output_cost", 0.0),
total_cost=cost_dict.get("total_cost", 0.0),
),
tokens=TokenInfo(
input_tokens=tokens_dict.get("input_tokens", 0),
output_tokens=tokens_dict.get("output_tokens", 0),
total_tokens=tokens_dict.get("total_tokens", 0),
),
messages=(
agent_output.messages if isinstance(agent_output.messages, list) else []
),
)

@classmethod
def from_cached(cls, cached_data: dict) -> "UseCaseOutput":
"""
Reconstruct from cached JSON data.

Args:
cached_data: Dictionary loaded from JSON cache file

Returns:
UseCaseOutput instance
"""
cost_data = cached_data.get("cost", {})
tokens_data = cached_data.get("tokens", {})

return cls(
response={"content": cached_data.get("content")},
cost=CostInfo(
input_cost=cost_data.get("input_cost", 0.0),
output_cost=cost_data.get("output_cost", 0.0),
total_cost=cost_data.get("total_cost", 0.0),
),
tokens=TokenInfo(
input_tokens=tokens_data.get("input_tokens", 0),
output_tokens=tokens_data.get("output_tokens", 0),
total_tokens=tokens_data.get("total_tokens", 0),
),
messages=cached_data.get("messages", []),
)

def to_cache_dict(self) -> dict:
"""
Convert to dictionary for JSON caching.

Returns:
Dictionary suitable for JSON serialization
"""
content = None
if isinstance(self.response, dict):
content = self.response.get("content", self.response)
else:
content = self.response

return {
"content": content,
"cost": self.cost.model_dump(),
"tokens": self.tokens.model_dump(),
"messages": self.messages,
}


T = TypeVar("T", bound=BaseModel)


class ParsedUseCaseOutput(UseCaseOutput):
"""
Output model for structured response formats.

Used when the agent returns a Pydantic model response.
"""

parsed_content: Optional[Any] = None

@classmethod
def from_agent_output(
cls, agent_output, model_class: Optional[Type[T]] = None
) -> "ParsedUseCaseOutput":
"""
Convert an AgentOutput with structured response to ParsedUseCaseOutput.

Args:
agent_output: AgentOutput instance from agent.run()
model_class: Optional Pydantic model class for parsing

Returns:
ParsedUseCaseOutput instance
"""
base = UseCaseOutput.from_agent_output(agent_output)
instance = cls(
response=base.response,
cost=base.cost,
tokens=base.tokens,
messages=base.messages,
)

# Extract parsed content from structured response
if model_class and hasattr(agent_output.response, "choices"):
try:
instance.parsed_content = agent_output.response.choices[
0
].message.parsed
except (AttributeError, IndexError):
pass

return instance

@classmethod
def from_cached(
cls, cached_data: dict, model_class: Optional[Type[T]] = None
) -> "ParsedUseCaseOutput":
"""
Reconstruct from cached JSON data with optional model validation.

Args:
cached_data: Dictionary loaded from JSON cache file
model_class: Optional Pydantic model class for content validation

Returns:
ParsedUseCaseOutput instance
"""
base = UseCaseOutput.from_cached(cached_data)
instance = cls(
response=base.response,
cost=base.cost,
tokens=base.tokens,
messages=base.messages,
)

if model_class and "content" in cached_data:
try:
instance.parsed_content = model_class.model_validate(
cached_data["content"]
)
except Exception:
pass

return instance

def to_cache_dict(self) -> dict:
"""
Convert to dictionary for JSON caching.

Returns:
Dictionary suitable for JSON serialization
"""
result = super().to_cache_dict()

# If we have parsed content, use that for caching
if self.parsed_content is not None:
if hasattr(self.parsed_content, "model_dump"):
result["content"] = self.parsed_content.model_dump()
else:
result["content"] = self.parsed_content

return result
Copy link

Copilot AI Feb 5, 2026


The new output_models.py module is not being used anywhere in the codebase. The UI components (deployment_ui.py, documentation_ui.py, refactoring_ui.py, testing_ui.py) are still using the type("Output", (), {...}) antipattern that this module is intended to replace.

To complete this refactoring:

  1. Import UseCaseOutput and ParsedUseCaseOutput in the UI component files
  2. Replace all instances of type("Output", ...) with calls to UseCaseOutput.from_cached()
  3. Update the use case modules to return these proper model instances instead of raw AgentOutput objects where appropriate

Without these changes, this new module adds dead code that doesn't solve the stated problem.
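For contrast, the antipattern in question builds a throwaway class at runtime for every mock output. A stdlib-only before/after sketch (using a dataclass here as a stand-in for the pydantic model the PR introduces):

```python
from dataclasses import dataclass, field

# Before: a dynamic, untyped class created on the fly at each call site
output = type("Output", (), {"response": "hello", "messages": []})()
assert output.response == "hello"  # works, but no validation, typing, or IDE support

# After: one declared model with defaults, shared by all call sites
@dataclass
class Output:
    response: object = None
    messages: list = field(default_factory=list)

typed = Output(response="hello")
```

The declared model gives every call site the same field names and defaults, which is exactly what the review says the unused module fails to deliver until the UI components are migrated to it.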

Comment on lines +50 to +53
@echo "$(BLUE)Creating virtual environment...$(NC)"
$(PYTHON) -m venv $(VENV_DIR)
@echo "$(GREEN)Virtual environment created at $(VENV_DIR)$(NC)"
@echo "Activate it with: source $(VENV_DIR)/bin/activate"

Copilot AI Feb 5, 2026


The venv target always recreates the virtual environment from scratch, even if it already exists. This is inefficient and can be disruptive to development workflows. Each time make install or make run is called, it will recreate the venv.

Consider making this target idempotent by checking if the virtual environment already exists:

venv:
	@if [ ! -d "$(VENV_DIR)" ]; then \
		echo "$(BLUE)Creating virtual environment...$(NC)"; \
		$(PYTHON) -m venv $(VENV_DIR); \
		echo "$(GREEN)Virtual environment created at $(VENV_DIR)$(NC)"; \
	fi

This follows the pattern used in the old Makefile and is a common best practice.

Suggested change
@echo "$(BLUE)Creating virtual environment...$(NC)"
$(PYTHON) -m venv $(VENV_DIR)
@echo "$(GREEN)Virtual environment created at $(VENV_DIR)$(NC)"
@echo "Activate it with: source $(VENV_DIR)/bin/activate"
	@if [ ! -d "$(VENV_DIR)" ]; then \
		echo "$(BLUE)Creating virtual environment...$(NC)"; \
		$(PYTHON) -m venv $(VENV_DIR); \
		echo "$(GREEN)Virtual environment created at $(VENV_DIR)$(NC)"; \
		echo "Activate it with: source $(VENV_DIR)/bin/activate"; \
	fi

PIP := $(PYTHON) -m pip
PROJECT_NAME := codeas
SRC_DIR := src
TEST_DIR := tests

Copilot AI Feb 5, 2026


The Makefile defines TEST_DIR := tests and includes multiple targets that reference this directory (format, lint, test). However, the tests directory does not exist in the project. This will cause issues:

  1. The format target will attempt to format non-existent test files (though it uses || true to continue on error)
  2. The lint target will similarly attempt to lint non-existent tests
  3. The test target will fail when pytest tries to discover tests in a non-existent directory

Either:

  1. Remove $(TEST_DIR) references from targets if no tests exist yet, or
  2. Create an empty tests directory with a placeholder, or
  3. Make the test-related commands conditional on the directory existing

The current implementation may confuse users who run make test expecting it to work.

Suggested change
TEST_DIR := tests
TEST_DIR :=


# Variables
PYTHON := python3
PIP := $(PYTHON) -m pip

Copilot AI Feb 5, 2026


The PIP variable is defined as $(PYTHON) -m pip but is never used in the Makefile. All pip commands use direct pip calls after activating the virtual environment (e.g., . $(VENV_DIR)/bin/activate && pip install ...).

Either:

  1. Remove the unused PIP variable, or
  2. Use it consistently: . $(VENV_DIR)/bin/activate && $(PIP) install ...

Using the variable would make the Makefile more maintainable if pip invocation needs to change.

Comment on lines +40 to +70
    messages: List[Dict[str, str]] = Field(default_factory=list)

    @classmethod
    def from_agent_output(cls, agent_output) -> "UseCaseOutput":
        """
        Convert an AgentOutput to UseCaseOutput.

        Args:
            agent_output: AgentOutput instance from agent.run()

        Returns:
            UseCaseOutput instance
        """
        cost_dict = agent_output.cost
        tokens_dict = agent_output.tokens

        return cls(
            response=agent_output.response,
            cost=CostInfo(
                input_cost=cost_dict.get("input_cost", 0.0),
                output_cost=cost_dict.get("output_cost", 0.0),
                total_cost=cost_dict.get("total_cost", 0.0),
            ),
            tokens=TokenInfo(
                input_tokens=tokens_dict.get("input_tokens", 0),
                output_tokens=tokens_dict.get("output_tokens", 0),
                total_tokens=tokens_dict.get("total_tokens", 0),
            ),
            messages=(
                agent_output.messages if isinstance(agent_output.messages, list) else []
            ),

Copilot AI Feb 5, 2026


The messages field is typed as List[Dict[str, str]] but in the from_agent_output method, it's populated directly from agent_output.messages which has type Union[list, dict] in the AgentOutput class. This type mismatch means:

  1. If agent_output.messages is a dict, the isinstance check passes it through as-is, violating the type annotation
  2. The type annotation suggests each message is a flat string-to-string dict, but actual message structures in LLM APIs often have nested structures

Consider either:

  1. Updating the type annotation to List[Dict[str, Any]] or Union[List[Dict], List] to match actual usage
  2. Adding proper conversion logic to ensure the data matches the type annotation
  3. Validating message structure in the conversion methods
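One way to satisfy the annotation is a small normalization helper (hypothetical; not part of the PR) that coerces whatever agent_output.messages holds into the promised List[Dict[str, str]] shape:

```python
from typing import Any, Dict, List

def normalize_messages(raw: Any) -> List[Dict[str, str]]:
    """Coerce a messages payload (list, dict, or anything else) into the
    List[Dict[str, str]] shape the model annotation promises."""
    if isinstance(raw, dict):
        raw = [raw]  # a single message dict becomes a one-element list
    if not isinstance(raw, list):
        return []  # unknown shapes degrade to an empty history
    normalized = []
    for item in raw:
        if isinstance(item, dict):
            # Stringify values so nested structures still fit Dict[str, str]
            normalized.append({str(k): str(v) for k, v in item.items()})
    return normalized
```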

Comment on lines +190 to +196
if model_class and "content" in cached_data:
try:
instance.parsed_content = model_class.model_validate(
cached_data["content"]
)
except Exception:
pass

Copilot AI Feb 5, 2026


The exception handling in from_cached() catches bare Exception which is overly broad. This will silently suppress all errors during model validation, including programming errors, making debugging difficult. The caught exception should be more specific to validation errors.

Consider catching specific Pydantic validation exceptions:

from pydantic import ValidationError

try:
    instance.parsed_content = model_class.model_validate(cached_data["content"])
except (ValidationError, KeyError, TypeError) as e:
    # Optionally log the error
    pass

This makes the error handling more explicit about what failures are expected and acceptable.

                instance.parsed_content = agent_output.response.choices[
                    0
                ].message.parsed
            except (AttributeError, IndexError):

Copilot AI Feb 5, 2026


'except' clause does nothing but pass and there is no explanatory comment.
