From 5c1fb4ca430199c1c7dd0175ceea3d83c991037c Mon Sep 17 00:00:00 2001 From: Sergio Soto Date: Thu, 5 Feb 2026 17:57:21 +0100 Subject: [PATCH 1/6] Add Makefile templates for Java and Python projects; implement analysis and validation scripts - Created Java Makefile templates for basic, Maven-style, Spring Boot, multi-module, and Gradle-based projects. - Developed Python Makefile templates for basic, advanced (with Docker and CI), Flask/FastAPI web applications, and data science projects. - Introduced `analyze_makefile.py` to analyze Makefile structure, targets, dependencies, and variables. - Added `validate_makefile.py` to validate Makefile syntax and detect common issues. - Updated main Makefile with improved target definitions, installation processes, and help messages. --- .claude/settings.json | 6 + .claude/skills/makefile/new-makefile/SKILL.md | 316 +++++++++++++ .../new-makefile/references/best_practices.md | 431 ++++++++++++++++++ .../new-makefile/references/java_templates.md | 406 +++++++++++++++++ .../references/python_templates.md | 348 ++++++++++++++ .../new-makefile/scripts/analyze_makefile.py | 169 +++++++ .../new-makefile/scripts/validate_makefile.py | 130 ++++++ Makefile | 99 ++-- 8 files changed, 1879 insertions(+), 26 deletions(-) create mode 100644 .claude/settings.json create mode 100644 .claude/skills/makefile/new-makefile/SKILL.md create mode 100644 .claude/skills/makefile/new-makefile/references/best_practices.md create mode 100644 .claude/skills/makefile/new-makefile/references/java_templates.md create mode 100644 .claude/skills/makefile/new-makefile/references/python_templates.md create mode 100755 .claude/skills/makefile/new-makefile/scripts/analyze_makefile.py create mode 100755 .claude/skills/makefile/new-makefile/scripts/validate_makefile.py diff --git a/.claude/settings.json b/.claude/settings.json new file mode 100644 index 0000000..6026592 --- /dev/null +++ b/.claude/settings.json @@ -0,0 +1,6 @@ +{ + "attribution": { + "commit": 
"Co-Authored-By: Claude ", + "pr": "Generated with [Claude Code](https://claude.ai/code)" + } +} diff --git a/.claude/skills/makefile/new-makefile/SKILL.md b/.claude/skills/makefile/new-makefile/SKILL.md new file mode 100644 index 0000000..0244c42 --- /dev/null +++ b/.claude/skills/makefile/new-makefile/SKILL.md @@ -0,0 +1,316 @@ +--- +name: makefile +description: > + Professional Makefile creation, analysis, modification, and debugging for software projects. + Use this skill when the user needs to: (1) Create new Makefiles from scratch for Python, Java, + or multi-language projects, (2) Analyze or explain existing Makefiles (structure, targets, + dependencies, variables), (3) Modify, optimize, or refactor Makefiles to follow best practices, + (4) Debug Makefile issues (syntax errors, dependency problems, execution failures), or (5) Add + new targets, improve build performance, or implement professional patterns. Triggers include + mentions of Makefile, make, build automation, compilation workflows, or requests to improve + project build systems. +--- + +# Makefile Skill + +Professional skill for creating, analyzing, modifying, and debugging Makefiles in software projects. + +## Quick Reference + +**Core workflow:** +1. Understand the task (create/analyze/modify/debug) +2. For creation: Select appropriate template from references +3. For analysis: Use analyze_makefile.py script +4. For validation: Use validate_makefile.py script +5. Apply best practices from references/best_practices.md + +## When to Use This Skill + +This skill should be used whenever working with Makefiles, including: +- Creating new Makefiles for projects +- Analyzing existing Makefile structure +- Debugging Makefile syntax or execution issues +- Refactoring or optimizing Makefiles +- Adding new targets or improving existing ones +- Implementing professional build patterns + +## Core Workflow + +### 1. Creating New Makefiles + +**Process:** +1. 
Identify project type (Python, Java, multi-language) +2. Determine complexity level and required features +3. Select appropriate template from references +4. Customize for specific project needs +5. Validate with scripts/validate_makefile.py + +**For Python projects:** +- Read references/python_templates.md +- Choose template: Basic, Advanced (Docker/CI), Flask/FastAPI, or Data Science/ML +- Customize variables (project name, directories, dependencies) +- Add project-specific targets as needed + +**For Java projects:** +- Read references/java_templates.md +- Choose template: Basic, Maven-style, Spring Boot, Multi-module, or Gradle +- Configure Java version, main class, and dependencies +- Adapt build tool integration (Maven/Gradle) + +**Best practices to apply:** +- Always include .PHONY declarations +- Provide a help target with clear documentation +- Use variables for configuration +- Include clean target +- Support common workflows (build, test, install) + +### 2. Analyzing Existing Makefiles + +**Process:** +1. Run scripts/analyze_makefile.py to get structure overview +2. Examine targets, dependencies, and variables +3. Check for phony declarations +4. Review recipes for each target +5. Identify patterns and potential issues + +**Analysis script usage:** +```bash +python3 scripts/analyze_makefile.py path/to/Makefile +``` + +The script provides: +- Statistics (total targets, variables, phony declarations) +- Default target identification +- Variable definitions +- Target dependencies and recipes +- Structural overview + +**What to look for:** +- Missing .PHONY declarations for non-file targets +- Undefined or unused variables +- Circular dependencies +- Missing dependencies +- Inconsistent naming conventions + +### 3. Modifying and Optimizing Makefiles + +**Common modifications:** + +**Adding new targets:** +```makefile +.PHONY: new-target +new-target: dependencies + @echo "Running new target..." 
+ command1 + command2 +``` + +**Optimizing variable usage:** +```makefile +# Before +build: + python3 -m pytest tests/ + +# After (with variables) +PYTHON := python3 +TEST_DIR := tests + +.PHONY: test +test: + $(PYTHON) -m pytest $(TEST_DIR) +``` + +**Improving dependency management:** +```makefile +# Ensure order and dependencies +build: install + # Build commands + +install: venv + # Install commands + +venv: + # Create virtual environment +``` + +**Consult references/best_practices.md for:** +- Performance optimization patterns +- Conditional compilation +- Multi-platform support +- Parallel build configuration +- Advanced patterns and techniques + +### 4. Debugging Makefiles + +**Validation workflow:** +1. Run scripts/validate_makefile.py to identify syntax issues +2. Check validation output for specific problems +3. Fix issues systematically (tabs, variables, dependencies) +4. Re-validate after fixes + +**Validation script usage:** +```bash +python3 scripts/validate_makefile.py path/to/Makefile +``` + +**Common issues detected:** +- Spaces instead of tabs in recipes (critical error) +- Automatic variables used outside recipes +- Missing .PHONY declarations for common targets +- Line continuation issues +- Syntax errors in target definitions + +**Manual debugging techniques:** + +**Check tab characters:** +```bash +# Verify tabs are present (not spaces) +cat -A Makefile | grep "^I" # Should show ^I for tabs +``` + +**Test individual targets:** +```bash +make -n target-name # Dry run to see what would execute +make target-name # Actually run the target +``` + +**Debug variable expansion:** +```bash +make -p # Print database of all rules and variables +``` + +**Common error patterns:** + +**Error: "missing separator"** +- Cause: Spaces instead of tabs in recipe +- Fix: Replace leading spaces with single tab character + +**Error: "No rule to make target"** +- Cause: Missing dependency or typo in target name +- Fix: Check target names and ensure all dependencies 
exist + +**Error: Command not found** +- Cause: Variable not set or program not in PATH +- Fix: Set variables correctly or use full paths + +### 5. Implementing Best Practices + +Read references/best_practices.md for comprehensive guidance on: + +**Essential practices:** +- Phony target declarations +- Variable usage and assignment +- Help target implementation +- Automatic variables +- Directory creation +- Error handling + +**Common patterns:** +- Dependency management +- Conditional compilation +- Multi-platform support +- Parallel builds +- Testing integration +- Documentation generation + +**Advanced techniques:** +- Function definitions +- Target-specific variables +- Include guards +- Performance optimization + +## Reference Files + +Load these as needed for detailed guidance: + +- **references/python_templates.md** - Complete Python project templates (basic, advanced, web apps, ML/data science) +- **references/java_templates.md** - Complete Java project templates (basic, Maven, Spring Boot, multi-module, Gradle) +- **references/best_practices.md** - Professional patterns, common mistakes, performance tips, advanced techniques + +## Scripts + +Use these tools for analysis and validation: + +- **scripts/analyze_makefile.py** - Analyze structure, targets, dependencies, and variables +- **scripts/validate_makefile.py** - Validate syntax and detect common issues + +## Examples + +### Example 1: Creating a Python Flask App Makefile + +User request: "Create a Makefile for my Flask application" + +**Workflow:** +1. Read references/python_templates.md (Flask/FastAPI section) +2. Customize project variables +3. Create Makefile with targets: install, dev, run, test, lint, format, migrate +4. Validate with scripts/validate_makefile.py + +### Example 2: Debugging Tab Issues + +User request: "My Makefile says 'missing separator' error" + +**Workflow:** +1. Run scripts/validate_makefile.py on the file +2. Identify lines with space-indent instead of tabs +3. 
Replace spaces with tabs in recipe lines +4. Re-validate to confirm fix + +### Example 3: Optimizing Existing Makefile + +User request: "Make my Makefile more professional" + +**Workflow:** +1. Run scripts/analyze_makefile.py to understand structure +2. Read references/best_practices.md +3. Add missing .PHONY declarations +4. Extract hardcoded values to variables +5. Add help target +6. Implement proper error handling +7. Validate final result + +### Example 4: Creating Multi-module Java Project + +User request: "Create a Makefile for my multi-module Java project with common, service-a, service-b modules" + +**Workflow:** +1. Read references/java_templates.md (Multi-module section) +2. Customize module list +3. Set up recursive make or include pattern +4. Define module dependencies +5. Create convenience targets for each module +6. Validate with scripts/validate_makefile.py + +## Tips for Success + +1. **Always validate** - Run validate_makefile.py before delivering +2. **Use templates** - Start from references rather than from scratch +3. **Check tabs** - Recipe lines MUST use tabs, not spaces +4. **Test incrementally** - Test each target as you add it +5. **Document** - Include help target and comments +6. **Be consistent** - Follow naming conventions and patterns +7. **Handle errors** - Add error checking in complex recipes +8. **Think dependencies** - Ensure proper target dependency chains + +## Common Pitfalls + +1. Using spaces instead of tabs in recipes (most common error) +2. Forgetting .PHONY declarations +3. Circular dependencies between targets +4. Hardcoding paths and commands +5. Not handling missing dependencies gracefully +6. Incorrect variable expansion ($ vs $$) +7. Platform-specific assumptions + +## Output Format + +When creating or modifying Makefiles, always: +1. Include clear comments explaining sections +2. Group related targets together +3. Put configuration variables at the top +4. Include a help target +5. 
Follow the template structure from references +6. Ensure proper indentation (tabs for recipes) +7. Add .PHONY declarations where needed + +The final Makefile should be production-ready, well-documented, and follow professional standards. diff --git a/.claude/skills/makefile/new-makefile/references/best_practices.md b/.claude/skills/makefile/new-makefile/references/best_practices.md new file mode 100644 index 0000000..d9a621a --- /dev/null +++ b/.claude/skills/makefile/new-makefile/references/best_practices.md @@ -0,0 +1,431 @@ +# Makefile Best Practices and Patterns + +This file contains professional best practices, common patterns, and guidelines for writing high-quality Makefiles. + +## Essential Best Practices + +### 1. Always Declare Phony Targets + +Targets that don't produce files should be declared as phony to avoid conflicts with files of the same name: + +```makefile +.PHONY: all clean test install build run help + +all: build test + +clean: + rm -rf build/ +``` + +### 2. Use Variables for Configuration + +Define variables at the top for easy configuration: + +```makefile +# Good +CC := gcc +CFLAGS := -Wall -Wextra -O2 +SRC_DIR := src +BUILD_DIR := build + +# Avoid hardcoding +$(BUILD_DIR)/%.o: $(SRC_DIR)/%.c + $(CC) $(CFLAGS) -c $< -o $@ +``` + +### 3. Provide a Help Target + +Always include a help target as the first or default target: + +```makefile +.PHONY: help +help: + @echo "Available targets:" + @echo " make build - Build the project" + @echo " make test - Run tests" + @echo " make clean - Clean build artifacts" +``` + +### 4. Use Automatic Variables + +Leverage Make's automatic variables for cleaner rules: + +- `$@` - Target name +- `$<` - First prerequisite +- `$^` - All prerequisites +- `$*` - Stem of pattern rule +- `$?` - Prerequisites newer than target + +```makefile +# Good +%.o: %.c + $(CC) $(CFLAGS) -c $< -o $@ + +# Avoid +%.o: %.c + $(CC) $(CFLAGS) -c file.c -o file.o +``` + +### 5. 
Suppress Command Echo When Appropriate + +Use `@` to suppress command echo for clean output: + +```makefile +clean: + @echo "Cleaning build artifacts..." + @rm -rf build/ + @echo "✓ Clean complete" +``` + +### 6. Use := for Variable Assignment + +Prefer `:=` (immediate expansion) over `=` (recursive expansion) for better performance: + +```makefile +# Preferred - evaluated once +SOURCES := $(wildcard src/*.c) + +# Avoid - evaluated every time it's used +SOURCES = $(wildcard src/*.c) +``` + +### 7. Create Directories as Needed + +Ensure output directories exist before writing to them: + +```makefile +$(BUILD_DIR): + mkdir -p $(BUILD_DIR) + +$(BUILD_DIR)/%.o: src/%.c | $(BUILD_DIR) + $(CC) $(CFLAGS) -c $< -o $@ +``` + +## Common Patterns + +### Pattern 1: Dependency Management + +```makefile +# Generate dependency files during compilation +DEPFLAGS = -MMD -MP +DEPS := $(OBJS:.o=.d) + +%.o: %.c + $(CC) $(CFLAGS) $(DEPFLAGS) -c $< -o $@ + +-include $(DEPS) +``` + +### Pattern 2: Conditional Compilation + +```makefile +# Debug vs Release builds +DEBUG ?= 0 + +ifeq ($(DEBUG), 1) + CFLAGS += -g -O0 -DDEBUG + BUILD_TYPE := debug +else + CFLAGS += -O2 -DNDEBUG + BUILD_TYPE := release +endif + +build: + @echo "Building $(BUILD_TYPE) version..." 
+``` + +### Pattern 3: Multi-platform Support + +```makefile +# Detect operating system +UNAME_S := $(shell uname -s) + +ifeq ($(UNAME_S),Linux) + PLATFORM := linux + LDFLAGS += -lpthread +endif +ifeq ($(UNAME_S),Darwin) + PLATFORM := macos + LDFLAGS += -framework CoreFoundation +endif +ifeq ($(OS),Windows_NT) + PLATFORM := windows + EXE_EXT := .exe +endif + +TARGET := myapp$(EXE_EXT) +``` + +### Pattern 4: Parallel Build Support + +```makefile +# Enable parallel builds +MAKEFLAGS += -j$(shell nproc) + +# Or allow user to specify +JOBS ?= $(shell nproc) + +build: + $(MAKE) -j$(JOBS) all +``` + +### Pattern 5: Color Output + +```makefile +# Color codes +RED := \033[0;31m +GREEN := \033[0;32m +YELLOW := \033[0;33m +NC := \033[0m # No Color + +success: + @echo "$(GREEN)✓ Build successful$(NC)" + +error: + @echo "$(RED)✗ Build failed$(NC)" +``` + +### Pattern 6: Incremental Builds + +```makefile +# Only rebuild what's necessary +SOURCES := $(wildcard src/*.c) +OBJECTS := $(SOURCES:src/%.c=build/%.o) + +$(TARGET): $(OBJECTS) + $(CC) $(OBJECTS) -o $@ $(LDFLAGS) + +build/%.o: src/%.c + @mkdir -p $(dir $@) + $(CC) $(CFLAGS) -c $< -o $@ +``` + +### Pattern 7: Installation Targets + +```makefile +PREFIX ?= /usr/local +BINDIR := $(PREFIX)/bin +LIBDIR := $(PREFIX)/lib + +.PHONY: install uninstall + +install: $(TARGET) + install -d $(BINDIR) + install -m 755 $(TARGET) $(BINDIR) + @echo "✓ Installed to $(BINDIR)" + +uninstall: + rm -f $(BINDIR)/$(TARGET) + @echo "✓ Uninstalled from $(BINDIR)" +``` + +### Pattern 8: Testing Targets + +```makefile +TEST_DIR := tests +TEST_SOURCES := $(wildcard $(TEST_DIR)/*.c) +TEST_BINS := $(TEST_SOURCES:$(TEST_DIR)/%.c=build/tests/%) + +.PHONY: test test-verbose + +test: $(TEST_BINS) + @for test in $(TEST_BINS); do \ + echo "Running $$test..."; \ + $$test || exit 1; \ + done + @echo "$(GREEN)✓ All tests passed$(NC)" + +test-verbose: $(TEST_BINS) + @for test in $(TEST_BINS); do \ + echo "Running $$test..."; \ + $$test -v || exit 1; \ + done +``` 
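The `|| exit 1` guard in the test loop above matters because make only sees the exit status of the shell loop as a whole, which is the status of its last command. A minimal shell sketch (using `true` and `false` as stand-in test binaries) shows the difference:

```shell
# Without a guard the loop's status is that of its final command,
# so a failure in the middle is silently swallowed:
for t in true false true; do $t; done; echo "without guard: exit=$?"

# With `|| exit 1` (wrapped in a subshell here so this script keeps
# running), the first failure aborts the loop immediately:
(for t in true false true; do $t || exit 1; done); echo "with guard: exit=$?"
```

Run directly, this prints `without guard: exit=0` followed by `with guard: exit=1`, which is why the guarded form is the one that makes a `test` target trustworthy.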
+ +### Pattern 9: Documentation Generation + +```makefile +DOCS_DIR := docs +DOCS_BUILD := $(DOCS_DIR)/_build + +.PHONY: docs docs-clean docs-serve + +docs: + @command -v sphinx-build >/dev/null 2>&1 || \ + { echo "sphinx-build not found. Install sphinx."; exit 1; } + sphinx-build -b html $(DOCS_DIR) $(DOCS_BUILD) + +docs-clean: + rm -rf $(DOCS_BUILD) + +docs-serve: docs + python3 -m http.server --directory $(DOCS_BUILD) 8000 +``` + +### Pattern 10: Version Management + +```makefile +VERSION := $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev") +BUILD_DATE := $(shell date -u +'%Y-%m-%dT%H:%M:%SZ') +GIT_COMMIT := $(shell git rev-parse --short HEAD 2>/dev/null || echo "unknown") + +VERSION_FLAGS := -DVERSION=\"$(VERSION)\" \ + -DBUILD_DATE=\"$(BUILD_DATE)\" \ + -DGIT_COMMIT=\"$(GIT_COMMIT)\" + +build: + $(CC) $(CFLAGS) $(VERSION_FLAGS) -o $(TARGET) $(SOURCES) + +version: + @echo "Version: $(VERSION)" + @echo "Build Date: $(BUILD_DATE)" + @echo "Git Commit: $(GIT_COMMIT)" +``` + +## Common Mistakes to Avoid + +### 1. Using Spaces Instead of Tabs + +```makefile +# WRONG - uses spaces +target: + echo "This will fail" + +# CORRECT - uses tab +target: + echo "This works" +``` + +### 2. Not Declaring Phony Targets + +```makefile +# If a file named "clean" exists, this won't work +clean: + rm -rf build/ + +# CORRECT +.PHONY: clean +clean: + rm -rf build/ +``` + +### 3. Incorrect Variable Expansion + +```makefile +# WRONG - shell expansion happens in Make context +FILES = $(shell ls *.txt) +delete: + rm $(FILES) # Expands immediately + +# CORRECT - shell expansion in recipe +delete: + rm $$(ls *.txt) # Expands at execution time +``` + +### 4. Not Handling Errors + +```makefile +# WRONG - continues on error +test: + test1 + test2 + test3 + +# CORRECT - stops on error (default) or handle explicitly +test: + test1 || exit 1 + test2 || exit 1 + test3 || exit 1 +``` + +### 5. 
Hardcoding Paths + +```makefile +# WRONG +build: + gcc -o myapp /usr/local/include/mylib.h + +# CORRECT +INCLUDE_DIR := /usr/local/include +build: + $(CC) -o myapp -I$(INCLUDE_DIR) +``` + +## Performance Tips + +### 1. Use Pattern Rules + +```makefile +# Efficient - single pattern rule +%.o: %.c + $(CC) $(CFLAGS) -c $< -o $@ + +# Inefficient - individual rules +file1.o: file1.c + $(CC) $(CFLAGS) -c file1.c -o file1.o +file2.o: file2.c + $(CC) $(CFLAGS) -c file2.c -o file2.o +``` + +### 2. Avoid Recursive Make for Simple Projects + +```makefile +# Instead of recursive make +subdirs: + $(MAKE) -C subdir1 + $(MAKE) -C subdir2 + +# Consider including submakefiles +include subdir1/rules.mk +include subdir2/rules.mk +``` + +### 3. Cache Expensive Operations + +```makefile +# Cache shell command results +NPROC := $(shell nproc) + +build: + $(MAKE) -j$(NPROC) all # Uses cached value +``` + +## Advanced Techniques + +### Function Definitions + +```makefile +# Define reusable functions +define compile-source + @echo "Compiling $1..." + $(CC) $(CFLAGS) -c $1 -o $2 +endef + +build/%.o: src/%.c + $(call compile-source,$<,$@) +``` + +### Include Guards + +```makefile +# In common.mk +ifndef COMMON_MK_INCLUDED +COMMON_MK_INCLUDED := 1 + +# Common definitions here +CC := gcc +CFLAGS := -Wall + +endif +``` + +### Target-specific Variables + +```makefile +# Different flags for debug target +debug: CFLAGS += -g -O0 +debug: build + +release: CFLAGS += -O2 -DNDEBUG +release: build +``` diff --git a/.claude/skills/makefile/new-makefile/references/java_templates.md b/.claude/skills/makefile/new-makefile/references/java_templates.md new file mode 100644 index 0000000..59027ea --- /dev/null +++ b/.claude/skills/makefile/new-makefile/references/java_templates.md @@ -0,0 +1,406 @@ +# Java Makefile Templates + +This file contains professional Makefile templates for Java projects. 
+
+## Basic Java Project Template
+
+```makefile
+# Project configuration
+PROJECT_NAME := MyJavaProject
+MAIN_CLASS := com.example.Main
+SRC_DIR := src
+BUILD_DIR := build
+CLASSES_DIR := $(BUILD_DIR)/classes
+JAR_DIR := $(BUILD_DIR)/jar
+LIB_DIR := lib
+
+# Classpath (include all JARs in lib directory); defined before the Java
+# flags below so the := assignments see a non-empty value
+CLASSPATH := $(LIB_DIR)/*
+
+# Java configuration
+JAVAC := javac
+JAVA := java
+JAR := jar
+JAVAC_FLAGS := -d $(CLASSES_DIR) -sourcepath $(SRC_DIR) -cp $(CLASSPATH)
+JAVA_FLAGS := -cp $(CLASSES_DIR):$(CLASSPATH)
+
+# Find all Java source files
+SOURCES := $(shell find $(SRC_DIR) -name "*.java")
+CLASSES := $(SOURCES:$(SRC_DIR)/%.java=$(CLASSES_DIR)/%.class)
+
+.PHONY: all clean compile run jar rebuild help
+
+all: compile
+
+help:
+	@echo "Available targets:"
+	@echo " make compile - Compile Java sources"
+	@echo " make run - Run main class"
+	@echo " make jar - Create JAR file"
+	@echo " make clean - Remove compiled files"
+	@echo " make rebuild - Clean and compile"
+
+# Create directories
+$(CLASSES_DIR):
+	mkdir -p $(CLASSES_DIR)
+
+$(JAR_DIR):
+	mkdir -p $(JAR_DIR)
+
+# Compile Java sources
+compile: $(CLASSES_DIR) $(CLASSES)
+
+$(CLASSES_DIR)/%.class: $(SRC_DIR)/%.java
+	$(JAVAC) $(JAVAC_FLAGS) $<
+
+# Run the application
+run: compile
+	$(JAVA) $(JAVA_FLAGS) $(MAIN_CLASS)
+
+# Create JAR file
+jar: compile $(JAR_DIR)
+	$(JAR) cfm $(JAR_DIR)/$(PROJECT_NAME).jar manifest.txt -C $(CLASSES_DIR) .
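+
+# manifest.txt is assumed to exist next to this Makefile; if the project
+# does not ship one, a minimal manifest (hypothetical content: just the
+# Main-Class entry) can be generated and listed as a prerequisite of jar:
+manifest.txt:
+	@echo "Main-Class: $(MAIN_CLASS)" > manifest.txt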
+ +# Clean build artifacts +clean: + rm -rf $(BUILD_DIR) + +rebuild: clean all +``` + +## Maven-style Java Project Template + +```makefile +# Project configuration +PROJECT_NAME := my-java-app +VERSION := 1.0.0 +MAIN_CLASS := com.example.App + +# Directory structure (Maven-style) +SRC_DIR := src/main/java +TEST_DIR := src/test/java +RESOURCES_DIR := src/main/resources +TEST_RESOURCES_DIR := src/test/resources +TARGET_DIR := target +CLASSES_DIR := $(TARGET_DIR)/classes +TEST_CLASSES_DIR := $(TARGET_DIR)/test-classes +JAR_FILE := $(TARGET_DIR)/$(PROJECT_NAME)-$(VERSION).jar + +# Java configuration +JAVAC := javac +JAVA := java +JAR := jar +JAVAC_FLAGS := -encoding UTF-8 -source 11 -target 11 +JAVA_FLAGS := -Xmx512m + +# Dependencies +LIB_DIR := lib +TEST_LIB_DIR := lib/test +CLASSPATH := $(CLASSES_DIR):$(LIB_DIR)/* +TEST_CLASSPATH := $(TEST_CLASSES_DIR):$(CLASSES_DIR):$(LIB_DIR)/*:$(TEST_LIB_DIR)/* + +# Find source files +SOURCES := $(shell find $(SRC_DIR) -name "*.java" 2>/dev/null) +TEST_SOURCES := $(shell find $(TEST_DIR) -name "*.java" 2>/dev/null) + +.PHONY: all clean compile test package run install help + +all: package + +help: + @echo "Maven-style Java project targets:" + @echo " make compile - Compile main sources" + @echo " make test - Run tests" + @echo " make package - Create JAR package" + @echo " make run - Run application" + @echo " make clean - Clean build artifacts" + @echo " make install - Install to local repository" + +# Create directories +$(CLASSES_DIR) $(TEST_CLASSES_DIR) $(TARGET_DIR): + mkdir -p $@ + +# Compile main sources +compile: $(CLASSES_DIR) + @echo "Compiling main sources..." + @if [ -n "$(SOURCES)" ]; then \ + $(JAVAC) $(JAVAC_FLAGS) -d $(CLASSES_DIR) -cp $(CLASSPATH) $(SOURCES); \ + fi + @if [ -d "$(RESOURCES_DIR)" ]; then \ + cp -r $(RESOURCES_DIR)/* $(CLASSES_DIR)/; \ + fi + @echo "✓ Compilation complete" + +# Compile test sources +compile-tests: compile $(TEST_CLASSES_DIR) + @echo "Compiling test sources..." 
+ @if [ -n "$(TEST_SOURCES)" ]; then \ + $(JAVAC) $(JAVAC_FLAGS) -d $(TEST_CLASSES_DIR) -cp $(TEST_CLASSPATH) $(TEST_SOURCES); \ + fi + @if [ -d "$(TEST_RESOURCES_DIR)" ]; then \ + cp -r $(TEST_RESOURCES_DIR)/* $(TEST_CLASSES_DIR)/; \ + fi + +# Run tests (using JUnit) +test: compile-tests + @echo "Running tests..." + $(JAVA) -cp $(TEST_CLASSPATH) org.junit.runner.JUnitCore $$(find $(TEST_CLASSES_DIR) -name "*Test.class" | sed 's|$(TEST_CLASSES_DIR)/||g' | sed 's|\.class$$||g' | sed 's|/|.|g') + +# Create JAR package +package: compile $(TARGET_DIR) + @echo "Creating JAR package..." + $(JAR) cfe $(JAR_FILE) $(MAIN_CLASS) -C $(CLASSES_DIR) . + @if [ -d "$(LIB_DIR)" ]; then \ + mkdir -p $(TARGET_DIR)/lib; \ + cp $(LIB_DIR)/*.jar $(TARGET_DIR)/lib/; \ + fi + @echo "✓ Package created: $(JAR_FILE)" + +# Run application +run: package + $(JAVA) $(JAVA_FLAGS) -jar $(JAR_FILE) + +# Install to local repository (simplified) +install: package + @echo "Installing to local repository..." + mkdir -p ~/.m2/repository/com/example/$(PROJECT_NAME)/$(VERSION) + cp $(JAR_FILE) ~/.m2/repository/com/example/$(PROJECT_NAME)/$(VERSION)/ + +# Clean build artifacts +clean: + rm -rf $(TARGET_DIR) + +.PHONY: verify +verify: test + @echo "✓ Verification complete" +``` + +## Spring Boot Application Template + +```makefile +# Spring Boot project configuration +PROJECT_NAME := spring-boot-app +VERSION := 1.0.0 +MAIN_CLASS := com.example.Application +JAVA_VERSION := 17 + +# Directories +SRC_DIR := src/main/java +RESOURCES_DIR := src/main/resources +TEST_DIR := src/test/java +TARGET_DIR := target +CLASSES_DIR := $(TARGET_DIR)/classes +JAR_FILE := $(TARGET_DIR)/$(PROJECT_NAME)-$(VERSION).jar + +# Java tools +JAVAC := javac +JAVA := java +MVN := mvn + +# Application settings +SPRING_PROFILE := dev +SERVER_PORT := 8080 + +.PHONY: all build run test clean package deploy help + +all: build + +help: + @echo "Spring Boot application targets:" + @echo " make build - Build the application" + @echo " make run - 
Run the application" + @echo " make dev - Run in development mode" + @echo " make test - Run tests" + @echo " make package - Create executable JAR" + @echo " make clean - Clean build artifacts" + @echo " make docker - Build Docker image" + +# Build with Maven/Gradle wrapper +build: + @if [ -f "mvnw" ]; then \ + ./mvnw clean compile; \ + elif [ -f "gradlew" ]; then \ + ./gradlew build; \ + else \ + echo "No build tool wrapper found"; \ + exit 1; \ + fi + +# Run application +run: build + @if [ -f "mvnw" ]; then \ + ./mvnw spring-boot:run; \ + elif [ -f "gradlew" ]; then \ + ./gradlew bootRun; \ + else \ + $(JAVA) -jar $(JAR_FILE); \ + fi + +# Run in development mode +dev: + @if [ -f "mvnw" ]; then \ + ./mvnw spring-boot:run -Dspring-boot.run.profiles=$(SPRING_PROFILE); \ + else \ + ./gradlew bootRun --args='--spring.profiles.active=$(SPRING_PROFILE)'; \ + fi + +# Run tests +test: + @if [ -f "mvnw" ]; then \ + ./mvnw test; \ + else \ + ./gradlew test; \ + fi + +# Create executable JAR +package: + @if [ -f "mvnw" ]; then \ + ./mvnw clean package -DskipTests; \ + else \ + ./gradlew bootJar; \ + fi + +# Clean +clean: + @if [ -f "mvnw" ]; then \ + ./mvnw clean; \ + else \ + ./gradlew clean; \ + fi + rm -rf $(TARGET_DIR) + +# Docker build +docker: package + docker build -t $(PROJECT_NAME):$(VERSION) . 
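+
+# The docker target above assumes a Dockerfile at the project root; this
+# guard (a sketch) fails early with a clear message when it is missing:
+.PHONY: check-dockerfile
+check-dockerfile:
+	@test -f Dockerfile || { echo "Dockerfile not found"; exit 1; }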
+ +# Database migrations (Flyway/Liquibase) +db-migrate: + @if [ -f "mvnw" ]; then \ + ./mvnw flyway:migrate; \ + fi + +db-clean: + @if [ -f "mvnw" ]; then \ + ./mvnw flyway:clean; \ + fi +``` + +## Multi-module Java Project Template + +```makefile +# Multi-module project configuration +PROJECT_NAME := multi-module-app +VERSION := 1.0.0 + +# Modules +MODULES := common service-a service-b web + +# Build tool +MVN := mvn +GRADLE := gradle + +.PHONY: all build test clean install deploy help $(MODULES) + +all: build + +help: + @echo "Multi-module project targets:" + @echo " make build - Build all modules" + @echo " make test - Test all modules" + @echo " make clean - Clean all modules" + @echo " make install - Install all modules" + @echo " make [module-name] - Build specific module" + @echo "" + @echo "Available modules: $(MODULES)" + +# Build all modules +build: + @echo "Building all modules..." + @for module in $(MODULES); do \ + echo "Building $$module..."; \ + $(MAKE) -C $$module build; \ + done + @echo "✓ All modules built" + +# Test all modules +test: + @echo "Testing all modules..." + @for module in $(MODULES); do \ + echo "Testing $$module..."; \ + $(MAKE) -C $$module test; \ + done + @echo "✓ All tests passed" + +# Clean all modules +clean: + @echo "Cleaning all modules..." + @for module in $(MODULES); do \ + $(MAKE) -C $$module clean; \ + done + rm -rf target + +# Install all modules +install: + @echo "Installing all modules..." 
+ @for module in $(MODULES); do \ + $(MAKE) -C $$module install; \ + done + +# Build specific module +$(MODULES): + @echo "Building module: $@" + $(MAKE) -C $@ build + +# Dependency graph +deps: + @echo "Module dependencies:" + @echo " common (base)" + @echo " service-a -> common" + @echo " service-b -> common" + @echo " web -> service-a, service-b" +``` + +## Gradle-based Project Template + +```makefile +# Gradle project configuration +PROJECT_NAME := gradle-app +GRADLE := ./gradlew + +.PHONY: all build run test clean assemble help + +all: build + +help: + @echo "Gradle project targets:" + @echo " make build - Build project" + @echo " make run - Run application" + @echo " make test - Run tests" + @echo " make clean - Clean build" + @echo " make assemble - Assemble artifacts" + @echo " make check - Run checks" + +build: + $(GRADLE) build + +run: + $(GRADLE) run + +test: + $(GRADLE) test --info + +clean: + $(GRADLE) clean + +assemble: + $(GRADLE) assemble + +check: + $(GRADLE) check + +bootRun: + $(GRADLE) bootRun + +tasks: + $(GRADLE) tasks +``` diff --git a/.claude/skills/makefile/new-makefile/references/python_templates.md b/.claude/skills/makefile/new-makefile/references/python_templates.md new file mode 100644 index 0000000..53e2e4d --- /dev/null +++ b/.claude/skills/makefile/new-makefile/references/python_templates.md @@ -0,0 +1,348 @@ +# Python Makefile Templates + +This file contains professional Makefile templates for Python projects. 
+ +## Basic Python Project Template + +```makefile +# Project configuration +PROJECT_NAME := my_project +PYTHON := python3 +PIP := $(PYTHON) -m pip +VENV := venv +VENV_BIN := $(VENV)/bin +PYTHON_VENV := $(VENV_BIN)/python +PIP_VENV := $(VENV_BIN)/pip + +# Source directories +SRC_DIR := src +TEST_DIR := tests +DOCS_DIR := docs + +# Phony targets +.PHONY: all clean install test lint format help venv + +# Default target +all: install lint test + +# Help target +help: + @echo "Available targets:" + @echo " make install - Install dependencies" + @echo " make test - Run tests" + @echo " make lint - Run linters" + @echo " make format - Format code" + @echo " make clean - Remove generated files" + @echo " make venv - Create virtual environment" + +# Create virtual environment +venv: + $(PYTHON) -m venv $(VENV) + $(PIP_VENV) install --upgrade pip setuptools wheel + +# Install dependencies +install: venv + $(PIP_VENV) install -r requirements.txt + $(PIP_VENV) install -r requirements-dev.txt + +# Run tests +test: + $(PYTHON_VENV) -m pytest $(TEST_DIR) -v --cov=$(SRC_DIR) + +# Run linters +lint: + $(PYTHON_VENV) -m flake8 $(SRC_DIR) $(TEST_DIR) + $(PYTHON_VENV) -m pylint $(SRC_DIR) + $(PYTHON_VENV) -m mypy $(SRC_DIR) + +# Format code +format: + $(PYTHON_VENV) -m black $(SRC_DIR) $(TEST_DIR) + $(PYTHON_VENV) -m isort $(SRC_DIR) $(TEST_DIR) + +# Clean generated files +clean: + rm -rf $(VENV) + rm -rf .pytest_cache + rm -rf .coverage + rm -rf htmlcov + rm -rf .mypy_cache + find . -type d -name "__pycache__" -exec rm -rf {} + + find . -type f -name "*.pyc" -delete + find . 
-type f -name "*.pyo" -delete +``` + +## Advanced Python Project Template (with Docker and CI) + +```makefile +# Project configuration +PROJECT_NAME := my_advanced_project +PYTHON := python3 +VENV := venv +DOCKER_IMAGE := $(PROJECT_NAME):latest + +# Directories +SRC_DIR := src +TEST_DIR := tests +DOCS_DIR := docs +BUILD_DIR := build +DIST_DIR := dist + +# Python executables +VENV_BIN := $(VENV)/bin +PYTHON_VENV := $(VENV_BIN)/python +PIP_VENV := $(VENV_BIN)/pip + +# Test coverage threshold +COVERAGE_THRESHOLD := 80 + +.PHONY: all clean install test lint format docs build docker-build docker-run help + +all: install lint test + +help: + @echo "Development targets:" + @echo " make install - Install dependencies" + @echo " make test - Run tests with coverage" + @echo " make lint - Run all linters" + @echo " make format - Format code (black, isort)" + @echo " make type-check - Run type checker (mypy)" + @echo "" + @echo "Build targets:" + @echo " make build - Build package" + @echo " make docker-build - Build Docker image" + @echo " make docker-run - Run Docker container" + @echo "" + @echo "Documentation:" + @echo " make docs - Generate documentation" + @echo "" + @echo "Cleanup:" + @echo " make clean - Remove generated files" + @echo " make clean-all - Remove all generated files including venv" + +# Virtual environment +$(VENV_BIN)/activate: requirements.txt + $(PYTHON) -m venv $(VENV) + $(PIP_VENV) install --upgrade pip setuptools wheel + $(PIP_VENV) install -r requirements.txt + $(PIP_VENV) install -r requirements-dev.txt + touch $(VENV_BIN)/activate + +venv: $(VENV_BIN)/activate + +# Install dependencies +install: venv + @echo "✓ Dependencies installed" + +# Run tests with coverage +test: venv + $(PYTHON_VENV) -m pytest $(TEST_DIR) \ + -v \ + --cov=$(SRC_DIR) \ + --cov-report=html \ + --cov-report=term \ + --cov-fail-under=$(COVERAGE_THRESHOLD) + +# Quick test (no coverage) +test-quick: venv + $(PYTHON_VENV) -m pytest $(TEST_DIR) -v + +# Run linters +lint: venv + 
@echo "Running flake8..." + $(PYTHON_VENV) -m flake8 $(SRC_DIR) $(TEST_DIR) + @echo "Running pylint..." + $(PYTHON_VENV) -m pylint $(SRC_DIR) + @echo "✓ All linters passed" + +# Type checking +type-check: venv + $(PYTHON_VENV) -m mypy $(SRC_DIR) --strict + +# Format code +format: venv + $(PYTHON_VENV) -m black $(SRC_DIR) $(TEST_DIR) + $(PYTHON_VENV) -m isort $(SRC_DIR) $(TEST_DIR) + @echo "✓ Code formatted" + +# Check formatting without modifying +format-check: venv + $(PYTHON_VENV) -m black --check $(SRC_DIR) $(TEST_DIR) + $(PYTHON_VENV) -m isort --check-only $(SRC_DIR) $(TEST_DIR) + +# Generate documentation (invoke Sphinx as "python -m sphinx": there is no +# "sphinx-build" module, and cd'ing into $(DOCS_DIR) would break the +# relative $(PYTHON_VENV) path) +docs: venv + $(PYTHON_VENV) -m sphinx -b html $(DOCS_DIR) $(DOCS_DIR)/_build/html + +# Build package +build: venv + $(PIP_VENV) install --upgrade build + $(PYTHON_VENV) -m build + +# Docker targets +docker-build: + docker build -t $(DOCKER_IMAGE) . + +docker-run: + docker run -it --rm $(DOCKER_IMAGE) + +# Clean generated files +clean: + rm -rf $(BUILD_DIR) $(DIST_DIR) + rm -rf .pytest_cache .coverage htmlcov + rm -rf .mypy_cache .ruff_cache + find . -type d -name "__pycache__" -exec rm -rf {} + + find . -type f -name "*.pyc" -delete + find . -type f -name "*.pyo" -delete + find . 
-type d -name "*.egg-info" -exec rm -rf {} + + +clean-all: clean + rm -rf $(VENV) + +# CI/CD targets +ci: install lint type-check test + @echo "✓ CI pipeline passed" +``` + +## Flask/FastAPI Web Application Template + +```makefile +# Project configuration +PROJECT_NAME := my_web_app +PYTHON := python3 +VENV := venv +VENV_BIN := $(VENV)/bin +PYTHON_VENV := $(VENV_BIN)/python +PIP_VENV := $(VENV_BIN)/pip + +# Application settings +APP_MODULE := app.main:app +HOST := 0.0.0.0 +PORT := 8000 + +.PHONY: all install dev run test lint format migrate help + +all: install lint test + +help: + @echo "Development:" + @echo " make install - Install dependencies" + @echo " make dev - Run development server" + @echo " make run - Run production server" + @echo " make shell - Open interactive shell" + @echo "" + @echo "Database:" + @echo " make migrate - Run database migrations" + @echo " make db-upgrade - Upgrade database" + @echo " make db-reset - Reset database" + @echo "" + @echo "Testing & Quality:" + @echo " make test - Run tests" + @echo " make lint - Run linters" + @echo " make format - Format code" + +# Virtual environment and dependencies +$(VENV_BIN)/activate: + $(PYTHON) -m venv $(VENV) + $(PIP_VENV) install --upgrade pip + touch $(VENV_BIN)/activate + +install: $(VENV_BIN)/activate + $(PIP_VENV) install -r requirements.txt + $(PIP_VENV) install -r requirements-dev.txt + +# Development server +dev: install + $(PYTHON_VENV) -m uvicorn $(APP_MODULE) --reload --host $(HOST) --port $(PORT) + +# Production server +run: install + $(PYTHON_VENV) -m uvicorn $(APP_MODULE) --host $(HOST) --port $(PORT) + +# Interactive shell +shell: install + $(PYTHON_VENV) -i -c "from app import *" + +# Database migrations +migrate: install + $(PYTHON_VENV) -m alembic revision --autogenerate + +db-upgrade: install + $(PYTHON_VENV) -m alembic upgrade head + +db-reset: install + $(PYTHON_VENV) -m alembic downgrade base + $(PYTHON_VENV) -m alembic upgrade head + +# Testing +test: install + 
$(PYTHON_VENV) -m pytest tests/ -v --cov=app + +# Code quality +lint: install + $(PYTHON_VENV) -m flake8 app tests + $(PYTHON_VENV) -m pylint app + +format: install + $(PYTHON_VENV) -m black app tests + $(PYTHON_VENV) -m isort app tests + +# Clean +clean: + rm -rf $(VENV) .pytest_cache .coverage htmlcov .mypy_cache + find . -type d -name "__pycache__" -exec rm -rf {} + + find . -type f -name "*.pyc" -delete +``` + +## Data Science / ML Project Template + +```makefile +PROJECT_NAME := ml_project +PYTHON := python3 +VENV := venv +VENV_BIN := $(VENV)/bin +PYTHON_VENV := $(VENV_BIN)/python +JUPYTER := $(VENV_BIN)/jupyter + +# Directories +DATA_DIR := data +NOTEBOOKS_DIR := notebooks +MODELS_DIR := models +REPORTS_DIR := reports + +.PHONY: all install notebook train evaluate report clean + +all: install + +help: + @echo "Data Science targets:" + @echo " make notebook - Start Jupyter notebook" + @echo " make train - Train models" + @echo " make evaluate - Evaluate models" + @echo " make report - Generate report" + @echo " make clean-data - Clean data directory" + +install: + $(PYTHON) -m venv $(VENV) + $(VENV_BIN)/pip install --upgrade pip + $(VENV_BIN)/pip install -r requirements.txt + +notebook: install + $(JUPYTER) notebook --notebook-dir=$(NOTEBOOKS_DIR) + +train: install + $(PYTHON_VENV) scripts/train.py --data $(DATA_DIR) --output $(MODELS_DIR) + +evaluate: install + $(PYTHON_VENV) scripts/evaluate.py --models $(MODELS_DIR) --data $(DATA_DIR) + +report: install + $(PYTHON_VENV) scripts/generate_report.py --output $(REPORTS_DIR) + +clean-data: + rm -rf $(DATA_DIR)/processed/* + rm -rf $(DATA_DIR)/interim/* + +clean: clean-data + rm -rf $(VENV) .pytest_cache .ipynb_checkpoints + find . 
-type d -name "__pycache__" -exec rm -rf {} + +``` diff --git a/.claude/skills/makefile/new-makefile/scripts/analyze_makefile.py b/.claude/skills/makefile/new-makefile/scripts/analyze_makefile.py new file mode 100755 index 0000000..aacb7b4 --- /dev/null +++ b/.claude/skills/makefile/new-makefile/scripts/analyze_makefile.py @@ -0,0 +1,169 @@ +#!/usr/bin/env python3 +""" +Analyze Makefile structure, targets, dependencies, and variables. +""" +import re +import sys +from typing import Dict + + +def analyze_makefile(filepath: str) -> Dict: + """ + Analyze a Makefile and return its structure. + + Returns: + Dictionary with targets, variables, phony targets, and dependency graph + """ + try: + with open(filepath, "r") as f: + content = f.read() + lines = content.split("\n") + except FileNotFoundError: + return {"error": f"File not found: {filepath}"} + except Exception as e: + return {"error": f"Error reading file: {e}"} + + result = { + "variables": {}, + "targets": {}, + "phony_targets": set(), + "default_target": None, + "statistics": {}, + } + + current_target = None + in_recipe = False + + for line_num, line in enumerate(lines, 1): + stripped = line.strip() + + # Skip comments and empty lines + if not stripped or stripped.startswith("#"): + continue + + # Recipe lines begin with a TAB; never parse them as variable or + # target definitions, even when they contain '=' or ':' + is_recipe_line = line.startswith("\t") + + # Extract variable definitions (=, :=, ?=, +=) + var_match = re.match(r"^([A-Za-z_][A-Za-z0-9_]*)\s*[:+?]?=\s*(.*)$", stripped) + if var_match and not is_recipe_line: + var_name = var_match.group(1) + var_value = var_match.group(2) + result["variables"][var_name] = var_value.strip() + continue + + # Extract .PHONY declarations + if stripped.startswith(".PHONY:") and not is_recipe_line: + phony_list = stripped.split(":", 1)[1] + result["phony_targets"].update(t.strip() for t in phony_list.split()) + continue + + # Extract target definitions (recipe lines are excluded above, so a + # colon inside a recipe command is not mistaken for a new target) + target_match = None + if not is_recipe_line: + target_match = re.match(r"^([^:]+):\s*(.*)$", line) + if target_match: + target_name = target_match.group(1).strip() + dependencies = target_match.group(2).strip() + 
# Skip special targets + if target_name.startswith(".") and target_name != ".DEFAULT": + continue + + # Set default target (first non-special target) + if result["default_target"] is None and not target_name.startswith("."): + result["default_target"] = target_name + + dep_list = [d.strip() for d in dependencies.split() if d.strip()] + + result["targets"][target_name] = { + "dependencies": dep_list, + "recipe_lines": [], + "line_number": line_num, + } + + current_target = target_name + in_recipe = True + continue + + # Extract recipe lines (must start with tab) + if in_recipe and line.startswith("\t"): + recipe_line = line[1:].rstrip() # Remove leading tab + if current_target and recipe_line: + result["targets"][current_target]["recipe_lines"].append(recipe_line) + else: + in_recipe = False + current_target = None + + # Calculate statistics + result["statistics"] = { + "total_targets": len(result["targets"]), + "phony_targets": len(result["phony_targets"]), + "variables": len(result["variables"]), + "targets_with_recipes": sum( + 1 for t in result["targets"].values() if t["recipe_lines"] + ), + "targets_without_recipes": sum( + 1 for t in result["targets"].values() if not t["recipe_lines"] + ), + } + + # Convert set to list for JSON serialization + result["phony_targets"] = sorted(result["phony_targets"]) + + return result + + +def print_analysis(analysis: Dict): + """Pretty print the analysis results.""" + if "error" in analysis: + print(f"❌ Error: {analysis['error']}") + return + + print("=" * 60) + print("MAKEFILE ANALYSIS") + print("=" * 60) + + # Statistics + stats = analysis["statistics"] + print("\n📊 Statistics:") + print(f" Total targets: {stats['total_targets']}") + print(f" Phony targets: {stats['phony_targets']}") + print(f" Variables: {stats['variables']}") + print(f" Targets with recipes: {stats['targets_with_recipes']}") + print(f" Targets without recipes: {stats['targets_without_recipes']}") + + # Default target + if analysis["default_target"]: + 
print(f"\n🎯 Default target: {analysis['default_target']}") + + # Variables + if analysis["variables"]: + print(f"\n📝 Variables ({len(analysis['variables'])}):") + for var, value in sorted(analysis["variables"].items()): + print(f" {var} = {value[:50]}{'...' if len(value) > 50 else ''}") + + # Phony targets + if analysis["phony_targets"]: + print(f"\n🏷️ Phony targets: {', '.join(analysis['phony_targets'])}") + + # Targets + print(f"\n🎯 Targets ({len(analysis['targets'])}):") + for target, info in sorted(analysis["targets"].items()): + deps = ", ".join(info["dependencies"]) if info["dependencies"] else "none" + recipe_count = len(info["recipe_lines"]) + print(f"\n {target}:") + print(f" Dependencies: {deps}") + print(f" Recipe lines: {recipe_count}") + if info["recipe_lines"]: + for recipe_line in info["recipe_lines"][:3]: # Show first 3 lines + print(f" {recipe_line}") + if recipe_count > 3: + print(f" ... ({recipe_count - 3} more lines)") + + +if __name__ == "__main__": + if len(sys.argv) != 2: + print("Usage: python3 analyze_makefile.py ") + sys.exit(1) + + filepath = sys.argv[1] + analysis = analyze_makefile(filepath) + print_analysis(analysis) diff --git a/.claude/skills/makefile/new-makefile/scripts/validate_makefile.py b/.claude/skills/makefile/new-makefile/scripts/validate_makefile.py new file mode 100755 index 0000000..4f4dc1d --- /dev/null +++ b/.claude/skills/makefile/new-makefile/scripts/validate_makefile.py @@ -0,0 +1,130 @@ +#!/usr/bin/env python3 +""" +Validate Makefile syntax and detect common issues. +""" +import re +import sys +from typing import List, Tuple + + +def validate_makefile(filepath: str) -> Tuple[bool, List[str]]: + """ + Validate a Makefile and return issues found. 
+ + Returns: + (is_valid, list_of_issues) + """ + issues = [] + + try: + with open(filepath, "r") as f: + lines = f.readlines() + except FileNotFoundError: + return False, [f"File not found: {filepath}"] + except Exception as e: + return False, [f"Error reading file: {e}"] + + # Track state + in_recipe = False + + for line_num, line in enumerate(lines, 1): + # Check for spaces instead of tabs in recipes + if ( + in_recipe + and line.strip() + and not line.startswith("\t") + and not line.startswith("#") + ): + if line.startswith(" "): # Common mistake: spaces instead of tab + issues.append( + f"Line {line_num}: Recipe commands must start with a TAB, not spaces" + ) + in_recipe = False + + # Detect target definitions + if ":" in line and not line.strip().startswith("#"): + # Basic target pattern + if re.match(r"^[^:]+:", line): + in_recipe = True + + # Check for common syntax errors + if "=" in line and not line.strip().startswith("#"): + # Check for spaces around := or = + if re.search(r"\w\s*:=\s*\w", line) or re.search(r"\w\s*=\s*\w", line): + # This is actually fine, just documenting + pass + + # Check for undefined variable references (basic check) + var_refs = re.findall(r"\$\(([^)]+)\)", line) + for var in var_refs: + # Check for common undefined automatic variables used incorrectly + if var in ["<", "@", "^", "?", "*", "+"] and not in_recipe: + issues.append( + f"Line {line_num}: Automatic variable '$({var})' used outside recipe" + ) + + # Check for line continuations + if line.rstrip().endswith("\\"): + next_line_idx = line_num + if next_line_idx < len(lines): + next_line = lines[next_line_idx] + if next_line.strip() and next_line.startswith("\t") and not in_recipe: + issues.append( + f"Line {line_num}: Line continuation followed by TAB (should be space)" + ) + + # Check for .PHONY declarations for non-file targets + phony_targets = set() + defined_targets = set() + + for line in lines: + if line.strip().startswith(".PHONY:"): + phony_list = line.split(":", 
1)[1] + phony_targets.update(t.strip() for t in phony_list.split()) + elif ":" in line and not line.strip().startswith("#"): + target = line.split(":")[0].strip() + if target and not target.startswith("."): + defined_targets.add(target) + + # Common targets that should be .PHONY + common_phony = { + "all", + "clean", + "test", + "install", + "build", + "run", + "help", + "lint", + "format", + } + missing_phony = (defined_targets & common_phony) - phony_targets + + if missing_phony: + issues.append( + f"Suggestion: Consider adding .PHONY declaration for: {', '.join(sorted(missing_phony))}" + ) + + is_valid = len([i for i in issues if not i.startswith("Suggestion")]) == 0 + return is_valid, issues + + +if __name__ == "__main__": + if len(sys.argv) != 2: + print("Usage: python3 validate_makefile.py ") + sys.exit(1) + + filepath = sys.argv[1] + is_valid, issues = validate_makefile(filepath) + + if is_valid: + print(f"✅ {filepath} is valid!") + else: + print(f"❌ {filepath} has issues:") + + if issues: + print("\nIssues found:") + for issue in issues: + print(f" • {issue}") + + sys.exit(0 if is_valid else 1) diff --git a/Makefile b/Makefile index 876d22b..29ab492 100644 --- a/Makefile +++ b/Makefile @@ -7,39 +7,86 @@ PIP := $(VENV)/bin/pip .PHONY: venv venv: ## Create project virtualenv @if [ ! -d "$(VENV)" ]; then \ - echo "Creating virtualenv..." && \ - python3 -m venv $(VENV) && \ + echo "Creating virtualenv..."; \ + python3 -m venv $(VENV); \ $(PIP) install --upgrade pip; \ + else \ + echo "Virtual environment already exists at $(VENV)"; \ fi +.PHONY: install install: venv ## Install project dependencies - @echo "Installing dependencies..." && \ + @echo "Installing dependencies..." $(PIP) install -e . -install-dev: venv ## Installs development tools - @echo "Installing development tools..." && \ - $(PIP) install pre-commit black isort ruff +.PHONY: install-dev +install-dev: install ## Install development dependencies + @echo "Installing development dependencies..." 
+ $(PIP) install black isort ruff pytest pytest-cov -run: install ## Start Codeas application (installs dependencies first) - @echo "Starting Codeas..." && \ +.PHONY: run +run: install ## Start Codeas application + @echo "Starting Codeas..." $(PYTHON) -m streamlit run src/codeas/ui/🏠_Home.py -pre-commit: install-dev ## Installs and configures pre-commit hooks - @echo "Installing pre-commit..." && \ - $(PIP) install pre-commit && \ - $(VENV)/bin/pre-commit install - -style: venv ## Formats code with black, isort, and ruff - @echo "Installing style tools..." && \ - $(PIP) install black isort ruff && \ - @echo "Run black" && \ - $(VENV)/bin/black . && \ - @echo "Run isort" && \ - $(VENV)/bin/isort . && \ - @echo "Run ruff" && \ - $(VENV)/bin/ruff check . --fix - -help: ## Show this help - @echo "Usage: make [target]\n" +.PHONY: style +style: venv ## Format code with black, isort, and ruff + @echo "Running code formatting..." + @$(PIP) install black isort ruff > /dev/null 2>&1 + @echo "Running black..." + @$(PYTHON) -m black . + @echo "Running isort..." + @$(PYTHON) -m isort . + @echo "Running ruff..." + @$(PYTHON) -m ruff check . --fix + @echo "Code formatting complete!" + +.PHONY: lint +lint: venv ## Run linting checks (black, isort, ruff) + @echo "Running code quality checks..." + @$(PYTHON) -m black --check . + @$(PYTHON) -m isort --check-only . + @$(PYTHON) -m ruff check . + @echo "Linting complete!" + +.PHONY: test +test: install-dev ## Run tests with pytest + @echo "Running tests..." + @$(PYTHON) -m pytest -v --cov=src/codeas + +.PHONY: clean +clean: ## Clean up temporary files and caches + @echo "Cleaning up..." + @find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true + @find . -type f -name "*.pyc" -delete + @find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true + @rm -rf build/ dist/ .pytest_cache/ .coverage htmlcov/ + @echo "Cleanup complete!" 
+ +.PHONY: clean-venv +clean-venv: ## Remove virtual environment + @echo "Removing virtual environment..." + @rm -rf $(VENV) + @echo "Virtual environment removed!" + +.PHONY: reset +reset: clean clean-venv ## Reset project (clean + remove venv) + @echo "Project reset complete!" + +.PHONY: help +help: ## Show this help message + @echo "Codeas - CODEbase ASsistant" + @echo "" + @echo "Usage: make [target]" + @echo "" @echo "Available targets:" - @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf " %-15s %s\n", $$1, $$2}' \ No newline at end of file + @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf " %-20s %s\n", $$1, $$2}' + @echo "" + @echo "Examples:" + @echo " make install Install dependencies and set up development environment" + @echo " make run Start the Codeas application" + @echo " make style Format all code" + @echo " make lint Run code quality checks" + @echo " make test Run test suite" + @echo " make clean Clean up temporary files" + @echo " make reset Full reset (remove venv and clean)" From d989fe149db2bda26b1c8bb9e7074e904fc09dc0 Mon Sep 17 00:00:00 2001 From: Sergio Soto Date: Thu, 5 Feb 2026 17:57:27 +0100 Subject: [PATCH 2/6] Updated .gitignore to exclude the new makefile directory and its contents. This prevents the deleted files from being tracked in future commits, ensuring a cleaner commit history and avoiding confusion about the status of these files. 
--- .gitignore | 1 + 1 file changed, 1 insertion(+) diff --git a/.gitignore b/.gitignore index acddef8..ff2ceff 100644 --- a/.gitignore +++ b/.gitignore @@ -8,3 +8,4 @@ debug/ .ruff_cache dist/ *.DS_Store +.node/ \ No newline at end of file From b7ecbff841a27af851e7c828194a5bf3e54f04fd Mon Sep 17 00:00:00 2001 From: Sergio Soto Date: Thu, 5 Feb 2026 17:57:45 +0100 Subject: [PATCH 3/6] docs: add CLAUDE.md with project instructions and structure --- CLAUDE.md | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 133 insertions(+) create mode 100644 CLAUDE.md diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..a6db4b7 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,133 @@ +# CLAUDE.md - Instructions for Claude + +## Project Description + +**Codeas** (CODEbase ASsistant) is an AI-assisted development tool that uses LLMs to improve development processes through full-context analysis of the codebase. Developed by **Diverger Thinking**. 
+ +- **Version**: 0.4.1 +- **Language**: Python 3.9-3.11 +- **UI Framework**: Streamlit +- **License**: MIT + +## Project Structure + +``` +src/codeas/ +├── main.py # Entry point (starts the Streamlit UI) +├── configs/ # Configuration and prompts +│ ├── agents_configs.py # Agent configurations +│ ├── llm_params.py # LLM model parameters +│ └── prompts.py # Prompt templates +├── core/ # Core business logic +│ ├── agent.py # Agent orchestration +│ ├── clients.py # Multi-model LLM clients +│ ├── llm.py # LLM client wrapper +│ ├── metadata.py # Metadata generation +│ ├── repo.py # Repository indexing +│ ├── retriever.py # Context retrieval +│ ├── state.py # Session state management +│ └── usage_tracker.py # Cost tracking +├── use_cases/ # Use case implementations +│ ├── documentation.py # Documentation generation +│ ├── deployment.py # Deployment planning +│ ├── testing.py # Testing strategies +│ └── refactoring.py # Refactoring recommendations +└── ui/ # Streamlit interface + ├── 🏠_Home.py # Main page + ├── pages/ # Feature pages + └── components/ # Reusable components +``` + +## Useful Commands + +```bash +# Install dependencies +pip install -e . 
+ +# Run the application +codeas + +# Code formatting +make style + +# Run directly with Streamlit +streamlit run src/codeas/ui/🏠_Home.py +``` + +## Tech Stack + +| Component | Technology | +|------------|------------| +| Language | Python 3.9-3.11 | +| UI | Streamlit 1.28+ | +| Validation | Pydantic 2.5+ | +| LLM Providers | OpenAI, Anthropic, Google Gemini | +| Token Counting | tokencost | +| Code Quality | black, isort, ruff | + +## Code Conventions + +- **Formatting**: Use `make style` (black + isort + ruff) +- **Data validation**: Pydantic BaseModel for all data structures +- **Typing**: Type hints are mandatory in public functions +- **Documentation**: Docstrings in Spanish for main functions +- **Imports**: Sorted with isort (black profile) + +## Key Architecture + +### Core Modules + +1. **State** (`core/state.py`): Centralized session state management +2. **Repo** (`core/repo.py`): Repository file indexing and filtering +3. **Metadata** (`core/metadata.py`): File classification and metadata extraction +4. **Retriever** (`core/retriever.py`): Selection of relevant context for the LLM +5. **Agent** (`core/agent.py`): Normalization of LLM interactions + +### Design Patterns + +- **Metadata-Driven**: Pre-computed metadata reduces token costs +- **Supervised Automation**: Preview → Review → Apply flow +- **Multi-Model Support**: Abstraction layer for multiple LLM providers +- **Cost Transparency**: Full token/cost tracking + +### Workflow + +1. User selects a repository on Home +2. Metadata generation (or cache load) +3. Per-page filters are applied +4. Preview phase with cost estimation +5. Generation runs the LLM with the selected context +6. Review and selection of outputs +7. Apply writes artifacts to the filesystem +8. 
Tracking records usage and costs + +## Runtime Data + +All runtime data is stored in `.codeas/`: +- `metadata.json`: Cached metadata +- `filters.json`: Per-page filter patterns +- `outputs/`: Generated artifacts +- `usage.json`: Cost tracking + +## Main Use Cases + +1. **Documentation**: Generates 8 sections of automatic documentation +2. **Deployment**: Infrastructure analysis and Terraform generation +3. **Testing**: Test strategies and test cases +4. **Refactoring**: Improvement identification and diff generation + +## Environment Variables + +```bash +OPENAI_API_KEY=sk-... # OpenAI API key +ANTHROPIC_API_KEY=sk-ant-... # Anthropic API key +GOOGLE_API_KEY=... # Google Gemini API key +``` + +## Development Considerations + +- The project uses emojis in UI file names (e.g. `🏠_Home.py`) +- Prompts are centralized in `configs/prompts.py` (~28KB) +- The main documentation is in Spanish +- State is persisted as JSON files in `.codeas/` +- Do not edit `metadata.json` directly - it is regenerated automatically From 4145d8443c80234c487e59bde850738cdd65035a Mon Sep 17 00:00:00 2001 From: Sergio Soto Date: Thu, 5 Feb 2026 18:15:53 +0100 Subject: [PATCH 4/6] Add professional Makefile templates for Java and Python projects - Introduced Java Makefile templates for basic, Maven-style, Spring Boot, multi-module, and Gradle-based projects. - Added Python Makefile templates for basic, advanced (with Docker and CI), Flask/FastAPI web applications, and Data Science/ML projects. - Implemented scripts for analyzing and validating Makefile syntax, detecting common issues. - Enhanced the main Makefile with improved structure, color-coded output, and additional targets for building, testing, and cleaning. 
--- .../makefile/{new-makefile => }/SKILL.md | 0 .../references/best_practices.md | 0 .../references/java_templates.md | 0 .../references/python_templates.md | 0 .../scripts/analyze_makefile.py | 0 .../scripts/validate_makefile.py | 0 Makefile | 222 +++++++++++------- 7 files changed, 136 insertions(+), 86 deletions(-) rename .claude/skills/makefile/{new-makefile => }/SKILL.md (100%) rename .claude/skills/makefile/{new-makefile => }/references/best_practices.md (100%) rename .claude/skills/makefile/{new-makefile => }/references/java_templates.md (100%) rename .claude/skills/makefile/{new-makefile => }/references/python_templates.md (100%) rename .claude/skills/makefile/{new-makefile => }/scripts/analyze_makefile.py (100%) rename .claude/skills/makefile/{new-makefile => }/scripts/validate_makefile.py (100%) diff --git a/.claude/skills/makefile/new-makefile/SKILL.md b/.claude/skills/makefile/SKILL.md similarity index 100% rename from .claude/skills/makefile/new-makefile/SKILL.md rename to .claude/skills/makefile/SKILL.md diff --git a/.claude/skills/makefile/new-makefile/references/best_practices.md b/.claude/skills/makefile/references/best_practices.md similarity index 100% rename from .claude/skills/makefile/new-makefile/references/best_practices.md rename to .claude/skills/makefile/references/best_practices.md diff --git a/.claude/skills/makefile/new-makefile/references/java_templates.md b/.claude/skills/makefile/references/java_templates.md similarity index 100% rename from .claude/skills/makefile/new-makefile/references/java_templates.md rename to .claude/skills/makefile/references/java_templates.md diff --git a/.claude/skills/makefile/new-makefile/references/python_templates.md b/.claude/skills/makefile/references/python_templates.md similarity index 100% rename from .claude/skills/makefile/new-makefile/references/python_templates.md rename to .claude/skills/makefile/references/python_templates.md diff --git 
a/.claude/skills/makefile/new-makefile/scripts/analyze_makefile.py b/.claude/skills/makefile/scripts/analyze_makefile.py similarity index 100% rename from .claude/skills/makefile/new-makefile/scripts/analyze_makefile.py rename to .claude/skills/makefile/scripts/analyze_makefile.py diff --git a/.claude/skills/makefile/new-makefile/scripts/validate_makefile.py b/.claude/skills/makefile/scripts/validate_makefile.py similarity index 100% rename from .claude/skills/makefile/new-makefile/scripts/validate_makefile.py rename to .claude/skills/makefile/scripts/validate_makefile.py diff --git a/Makefile b/Makefile index 29ab492..b42dbff 100644 --- a/Makefile +++ b/Makefile @@ -1,92 +1,142 @@ -.DEFAULT_GOAL := help +.PHONY: help install install-dev venv clean test lint format style run docs build dist upload -VENV := .venv -PYTHON := $(VENV)/bin/python -PIP := $(VENV)/bin/pip - -.PHONY: venv -venv: ## Create project virtualenv - @if [ ! -d "$(VENV)" ]; then \ - echo "Creating virtualenv..."; \ - python3 -m venv $(VENV); \ - $(PIP) install --upgrade pip; \ - else \ - echo "Virtual environment already exists at $(VENV)"; \ - fi - -.PHONY: install -install: venv ## Install project dependencies - @echo "Installing dependencies..." - $(PIP) install -e . +# Variables +PYTHON := python3 +PIP := $(PYTHON) -m pip +PROJECT_NAME := codeas +SRC_DIR := src +TEST_DIR := tests +VENV_DIR := .venv + +# Colors for output +BLUE := \033[0;34m +GREEN := \033[0;32m +YELLOW := \033[0;33m +NC := \033[0m # No Color + +# Default target +.DEFAULT_GOAL := help -.PHONY: install-dev -install-dev: install ## Install development dependencies - @echo "Installing development dependencies..." - $(PIP) install black isort ruff pytest pytest-cov - -.PHONY: run -run: install ## Start Codeas application - @echo "Starting Codeas..." - $(PYTHON) -m streamlit run src/codeas/ui/🏠_Home.py - -.PHONY: style -style: venv ## Format code with black, isort, and ruff - @echo "Running code formatting..." 
- @$(PIP) install black isort ruff > /dev/null 2>&1 - @echo "Running black..." - @$(PYTHON) -m black . - @echo "Running isort..." - @$(PYTHON) -m isort . - @echo "Running ruff..." - @$(PYTHON) -m ruff check . --fix - @echo "Code formatting complete!" - -.PHONY: lint -lint: venv ## Run linting checks (black, isort, ruff) - @echo "Running code quality checks..." - @$(PYTHON) -m black --check . - @$(PYTHON) -m isort --check-only . - @$(PYTHON) -m ruff check . - @echo "Linting complete!" - -.PHONY: test -test: install-dev ## Run tests with pytest - @echo "Running tests..." - @$(PYTHON) -m pytest -v --cov=src/codeas - -.PHONY: clean -clean: ## Clean up temporary files and caches - @echo "Cleaning up..." - @find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true - @find . -type f -name "*.pyc" -delete - @find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true - @rm -rf build/ dist/ .pytest_cache/ .coverage htmlcov/ - @echo "Cleanup complete!" - -.PHONY: clean-venv -clean-venv: ## Remove virtual environment - @echo "Removing virtual environment..." - @rm -rf $(VENV) - @echo "Virtual environment removed!" - -.PHONY: reset -reset: clean clean-venv ## Reset project (clean + remove venv) - @echo "Project reset complete!" 
-
-.PHONY: help
 help: ## Show this help message
-	@echo "Codeas - CODEbase ASsistant"
+	@echo "$(BLUE)$(PROJECT_NAME) - Makefile Targets$(NC)"
+	@echo ""
+	@echo "$(YELLOW)Setup & Installation:$(NC)"
+	@echo "  make install        Install project dependencies"
+	@echo "  make install-dev    Install dependencies including dev tools"
+	@echo "  make venv           Create Python virtual environment"
+	@echo ""
+	@echo "$(YELLOW)Development:$(NC)"
+	@echo "  make run            Run the Streamlit application"
+	@echo "  make style          Format code (black, isort, ruff)"
+	@echo "  make format         Format code with black and isort"
+	@echo "  make lint           Run linting checks with ruff"
+	@echo "  make test           Run tests with pytest"
 	@echo ""
-	@echo "Usage: make [target]"
+	@echo "$(YELLOW)Building & Distribution:$(NC)"
+	@echo "  make build          Build distribution packages"
+	@echo "  make dist           Create wheel and sdist distributions"
 	@echo ""
-	@echo "Available targets:"
-	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "  %-20s %s\n", $$1, $$2}'
+	@echo "$(YELLOW)Maintenance:$(NC)"
+	@echo "  make clean          Remove generated artifacts and cache"
+	@echo "  make clean-build    Remove build artifacts"
+	@echo "  make clean-cache    Remove Python cache files"
 	@echo ""
-	@echo "Examples:"
-	@echo "  make install    Install dependencies and set up development environment"
-	@echo "  make run        Start the Codeas application"
-	@echo "  make style      Format all code"
-	@echo "  make lint       Run code quality checks"
-	@echo "  make test       Run test suite"
-	@echo "  make clean      Clean up temporary files"
-	@echo "  make reset      Full reset (remove venv and clean)"
+
+# ============================================================================
+# SETUP & INSTALLATION TARGETS
+# ============================================================================
+
+venv: ## Create Python virtual environment
+	@echo "$(BLUE)Creating virtual environment...$(NC)"
+	$(PYTHON) -m venv $(VENV_DIR)
+	@echo "$(GREEN)Virtual environment created at $(VENV_DIR)$(NC)"
+	@echo "Activate it with: source $(VENV_DIR)/bin/activate"
+
+install: venv ## Install project dependencies
+	@echo "$(BLUE)Installing dependencies...$(NC)"
+	. $(VENV_DIR)/bin/activate && pip install --upgrade pip setuptools wheel
+	. $(VENV_DIR)/bin/activate && pip install -e .
+	@echo "$(GREEN)Dependencies installed successfully$(NC)"
+
+install-dev: install ## Install development dependencies
+	@echo "$(BLUE)Installing development dependencies...$(NC)"
+	. $(VENV_DIR)/bin/activate && pip install black isort ruff pytest pytest-cov pre-commit
+	@echo "$(GREEN)Development dependencies installed$(NC)"
+
+# ============================================================================
+# DEVELOPMENT TARGETS
+# ============================================================================
+
+run: install ## Run the Streamlit application
+	@echo "$(BLUE)Starting $(PROJECT_NAME) application...$(NC)"
+	. $(VENV_DIR)/bin/activate && streamlit run $(SRC_DIR)/$(PROJECT_NAME)/ui/🏠_Home.py
+
+format: ## Format code with black and isort
+	@echo "$(BLUE)Formatting code with black and isort...$(NC)"
+	. $(VENV_DIR)/bin/activate && python -m black $(SRC_DIR) $(TEST_DIR) --quiet || true
+	. $(VENV_DIR)/bin/activate && python -m isort $(SRC_DIR) $(TEST_DIR) --quiet || true
+	@echo "$(GREEN)Code formatted successfully$(NC)"
+
+lint: ## Run linting checks with ruff
+	@echo "$(BLUE)Running linting checks...$(NC)"
+	. $(VENV_DIR)/bin/activate && python -m ruff check $(SRC_DIR) $(TEST_DIR) --show-source || true
+	@echo "$(GREEN)Linting complete$(NC)"
+
+style: format lint ## Run all code style checks and formatting (black, isort, ruff)
+	@echo "$(GREEN)Code style checks completed$(NC)"
+
+test: install ## Run tests with pytest
+	@echo "$(BLUE)Running tests...$(NC)"
+	. $(VENV_DIR)/bin/activate && python -m pytest $(TEST_DIR) -v --cov=$(SRC_DIR)/$(PROJECT_NAME) --cov-report=term-missing || true
+	@echo "$(GREEN)Tests completed$(NC)"
+
+# ============================================================================
+# BUILD & DISTRIBUTION TARGETS
+# ============================================================================
+
+build: clean ## Build distribution packages
+	@echo "$(BLUE)Building distribution packages...$(NC)"
+	. $(VENV_DIR)/bin/activate && python -m build
+	@echo "$(GREEN)Build completed successfully$(NC)"
+
+dist: clean ## Create wheel and sdist distributions
+	@echo "$(BLUE)Creating distribution packages...$(NC)"
+	. $(VENV_DIR)/bin/activate && pip install --upgrade build twine
+	. $(VENV_DIR)/bin/activate && python -m build
+	@echo "$(GREEN)Distribution packages created in dist/$(NC)"
+
+upload: dist ## Upload distribution packages to PyPI (requires credentials)
+	@echo "$(YELLOW)⚠️ Uploading to PyPI...$(NC)"
+	. $(VENV_DIR)/bin/activate && python -m twine upload dist/* --verbose
+	@echo "$(GREEN)Upload completed$(NC)"
+
+# ============================================================================
+# CLEANUP TARGETS
+# ============================================================================
+
+clean: clean-build clean-cache ## Remove all generated artifacts and cache
+	@echo "$(GREEN)Project cleaned$(NC)"
+
+clean-build: ## Remove build artifacts
+	@echo "$(BLUE)Removing build artifacts...$(NC)"
+	rm -rf build/ dist/ *.egg-info .eggs/ .pytest_cache/ .coverage htmlcov/
+	find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true
+	@echo "$(GREEN)Build artifacts removed$(NC)"
+
+clean-cache: ## Remove Python cache files
+	@echo "$(BLUE)Removing Python cache...$(NC)"
+	find . -type f -name "*.pyc" -delete
+	find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
+	find . -type d -name ".ruff_cache" -exec rm -rf {} + 2>/dev/null || true
+	@echo "$(GREEN)Cache cleaned$(NC)"
+
+# ============================================================================
+# UTILITY TARGETS
+# ============================================================================
+
+.PHONY: check-tools
+check-tools: ## Check if required tools are installed
+	@echo "$(BLUE)Checking required tools...$(NC)"
+	@command -v python3 >/dev/null 2>&1 && echo "$(GREEN)✓ Python 3$(NC)" || echo "$(YELLOW)✗ Python 3 not found$(NC)"
+	@[ -f $(VENV_DIR)/bin/python ] && echo "$(GREEN)✓ Virtual Environment$(NC)" || echo "$(YELLOW)✗ Virtual Environment not found$(NC)"
+	@[ -f $(VENV_DIR)/bin/streamlit ] && echo "$(GREEN)✓ Streamlit$(NC)" || echo "$(YELLOW)✗ Streamlit not found$(NC)"

From 0d6f0b20bda974cb4fa159bc65f28c0f502a9d1a Mon Sep 17 00:00:00 2001
From: Sergio Soto
Date: Thu, 5 Feb 2026 18:16:50 +0100
Subject: [PATCH 5/6] pre-commit passed

---
 src/codeas/core/output_models.py         | 216 +++++++++++++++++++++++
 "src/codeas/ui/\360\237\217\240_Home.py" |   4 +
 2 files changed, 220 insertions(+)
 create mode 100644 src/codeas/core/output_models.py

diff --git a/src/codeas/core/output_models.py b/src/codeas/core/output_models.py
new file mode 100644
index 0000000..358e39a
--- /dev/null
+++ b/src/codeas/core/output_models.py
@@ -0,0 +1,216 @@
+"""
+Unified output data models for use case operations.
+
+These models replace the dynamic type() antipattern used throughout
+the UI components for creating mock Output objects.
+"""
+
+from typing import Any, Dict, List, Optional, Type, TypeVar
+
+from pydantic import BaseModel, Field
+
+
+class CostInfo(BaseModel):
+    """Token cost information."""
+
+    input_cost: float = 0.0
+    output_cost: float = 0.0
+    total_cost: float = 0.0
+
+
+class TokenInfo(BaseModel):
+    """Token count information."""
+
+    input_tokens: int = 0
+    output_tokens: int = 0
+    total_tokens: int = 0
+
+
+class UseCaseOutput(BaseModel):
+    """
+    Unified output model for all use case operations.
+
+    This replaces the `type("Output", (), {...})` antipattern used
+    throughout the UI components.
+    """
+
+    response: Any = None
+    cost: CostInfo = Field(default_factory=CostInfo)
+    tokens: TokenInfo = Field(default_factory=TokenInfo)
+    messages: List[Dict[str, str]] = Field(default_factory=list)
+
+    @classmethod
+    def from_agent_output(cls, agent_output) -> "UseCaseOutput":
+        """
+        Convert an AgentOutput to UseCaseOutput.
+
+        Args:
+            agent_output: AgentOutput instance from agent.run()
+
+        Returns:
+            UseCaseOutput instance
+        """
+        cost_dict = agent_output.cost
+        tokens_dict = agent_output.tokens
+
+        return cls(
+            response=agent_output.response,
+            cost=CostInfo(
+                input_cost=cost_dict.get("input_cost", 0.0),
+                output_cost=cost_dict.get("output_cost", 0.0),
+                total_cost=cost_dict.get("total_cost", 0.0),
+            ),
+            tokens=TokenInfo(
+                input_tokens=tokens_dict.get("input_tokens", 0),
+                output_tokens=tokens_dict.get("output_tokens", 0),
+                total_tokens=tokens_dict.get("total_tokens", 0),
+            ),
+            messages=(
+                agent_output.messages if isinstance(agent_output.messages, list) else []
+            ),
+        )
+
+    @classmethod
+    def from_cached(cls, cached_data: dict) -> "UseCaseOutput":
+        """
+        Reconstruct from cached JSON data.
+
+        Args:
+            cached_data: Dictionary loaded from JSON cache file
+
+        Returns:
+            UseCaseOutput instance
+        """
+        cost_data = cached_data.get("cost", {})
+        tokens_data = cached_data.get("tokens", {})
+
+        return cls(
+            response={"content": cached_data.get("content")},
+            cost=CostInfo(
+                input_cost=cost_data.get("input_cost", 0.0),
+                output_cost=cost_data.get("output_cost", 0.0),
+                total_cost=cost_data.get("total_cost", 0.0),
+            ),
+            tokens=TokenInfo(
+                input_tokens=tokens_data.get("input_tokens", 0),
+                output_tokens=tokens_data.get("output_tokens", 0),
+                total_tokens=tokens_data.get("total_tokens", 0),
+            ),
+            messages=cached_data.get("messages", []),
+        )
+
+    def to_cache_dict(self) -> dict:
+        """
+        Convert to dictionary for JSON caching.
+
+        Returns:
+            Dictionary suitable for JSON serialization
+        """
+        content = None
+        if isinstance(self.response, dict):
+            content = self.response.get("content", self.response)
+        else:
+            content = self.response
+
+        return {
+            "content": content,
+            "cost": self.cost.model_dump(),
+            "tokens": self.tokens.model_dump(),
+            "messages": self.messages,
+        }
+
+
+T = TypeVar("T", bound=BaseModel)
+
+
+class ParsedUseCaseOutput(UseCaseOutput):
+    """
+    Output model for structured response formats.
+
+    Used when the agent returns a Pydantic model response.
+    """
+
+    parsed_content: Optional[Any] = None
+
+    @classmethod
+    def from_agent_output(
+        cls, agent_output, model_class: Optional[Type[T]] = None
+    ) -> "ParsedUseCaseOutput":
+        """
+        Convert an AgentOutput with structured response to ParsedUseCaseOutput.
+
+        Args:
+            agent_output: AgentOutput instance from agent.run()
+            model_class: Optional Pydantic model class for parsing
+
+        Returns:
+            ParsedUseCaseOutput instance
+        """
+        base = UseCaseOutput.from_agent_output(agent_output)
+        instance = cls(
+            response=base.response,
+            cost=base.cost,
+            tokens=base.tokens,
+            messages=base.messages,
+        )
+
+        # Extract parsed content from structured response
+        if model_class and hasattr(agent_output.response, "choices"):
+            try:
+                instance.parsed_content = agent_output.response.choices[
+                    0
+                ].message.parsed
+            except (AttributeError, IndexError):
+                pass
+
+        return instance
+
+    @classmethod
+    def from_cached(
+        cls, cached_data: dict, model_class: Optional[Type[T]] = None
+    ) -> "ParsedUseCaseOutput":
+        """
+        Reconstruct from cached JSON data with optional model validation.
+
+        Args:
+            cached_data: Dictionary loaded from JSON cache file
+            model_class: Optional Pydantic model class for content validation
+
+        Returns:
+            ParsedUseCaseOutput instance
+        """
+        base = UseCaseOutput.from_cached(cached_data)
+        instance = cls(
+            response=base.response,
+            cost=base.cost,
+            tokens=base.tokens,
+            messages=base.messages,
+        )
+
+        if model_class and "content" in cached_data:
+            try:
+                instance.parsed_content = model_class.model_validate(
+                    cached_data["content"]
+                )
+            except Exception:
+                pass
+
+        return instance
+
+    def to_cache_dict(self) -> dict:
+        """
+        Convert to dictionary for JSON caching.
+
+        Returns:
+            Dictionary suitable for JSON serialization
+        """
+        result = super().to_cache_dict()
+
+        # If we have parsed content, use that for caching
+        if self.parsed_content is not None:
+            if hasattr(self.parsed_content, "model_dump"):
+                result["content"] = self.parsed_content.model_dump()
+            else:
+                result["content"] = self.parsed_content
+
+        return result

diff --git "a/src/codeas/ui/\360\237\217\240_Home.py" "b/src/codeas/ui/\360\237\217\240_Home.py"
index 98f2edf..cb6e3a2 100644
--- "a/src/codeas/ui/\360\237\217\240_Home.py"
+++ "b/src/codeas/ui/\360\237\217\240_Home.py"
@@ -1,6 +1,10 @@
 import os
 
 import streamlit as st
+from dotenv import load_dotenv
+
+# Load environment variables from .env if it exists
+load_dotenv()
 
 
 def home_page():

From 4d25fd6d6c8336ad09b1ad90618b295dfabb37b6 Mon Sep 17 00:00:00 2001
From: Sergio Soto
Date: Thu, 5 Feb 2026 18:21:15 +0100
Subject: [PATCH 6/6] docs: translate CLAUDE.md to English and update project
 details

---
 CLAUDE.md | 176 +++++++++++++++++++++++---------------------------
 1 file changed, 88 insertions(+), 88 deletions(-)

diff --git a/CLAUDE.md b/CLAUDE.md
index a6db4b7..72dde1d 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -1,133 +1,133 @@
-# CLAUDE.md - Instrucciones para Claude
+# CLAUDE.md - Instructions for Claude
 
-## Descripción del Proyecto
+## Project Description
 
-**Codeas** (CODEbase ASsistant) es una herramienta de desarrollo asistida por IA que utiliza LLMs para mejorar procesos de desarrollo mediante análisis de contexto completo del código. Desarrollado por **Diverger Thinking**.
+**Codeas** (CODEbase ASsistant) is an AI-assisted development tool that uses LLMs to improve development processes through full code context analysis. Developed by **Diverger Thinking**.
 
-- **Versión**: 0.4.1
-- **Lenguaje**: Python 3.9-3.11
-- **Framework UI**: Streamlit
-- **Licencia**: MIT
+- **Version**: 0.4.1
+- **Language**: Python 3.9-3.11
+- **UI Framework**: Streamlit
+- **License**: MIT
 
-## Estructura del Proyecto
+## Project Structure
 
 ```
 src/codeas/
-├── main.py                # Punto de entrada (inicia Streamlit UI)
-├── configs/               # Configuración y prompts
-│   ├── agents_configs.py  # Configuraciones de agentes
-│   ├── llm_params.py      # Parámetros de modelos LLM
-│   └── prompts.py         # Templates de prompts
-├── core/                  # Lógica de negocio principal
-│   ├── agent.py           # Orquestación de agentes
-│   ├── clients.py         # Clientes multi-modelo LLM
-│   ├── llm.py             # Wrapper de cliente LLM
-│   ├── metadata.py        # Generación de metadatos
-│   ├── repo.py            # Indexación de repositorios
-│   ├── retriever.py       # Recuperación de contexto
-│   ├── state.py           # Gestión de estado de sesión
-│   └── usage_tracker.py   # Tracking de costos
-├── use_cases/             # Implementaciones de casos de uso
-│   ├── documentation.py   # Generación de documentación
-│   ├── deployment.py      # Planificación de despliegue
-│   ├── testing.py         # Estrategias de testing
-│   └── refactoring.py     # Recomendaciones de refactoring
-└── ui/                    # Interfaz Streamlit
-    ├── 🏠_Home.py         # Página principal
-    ├── pages/             # Páginas de funcionalidades
-    └── components/        # Componentes reutilizables
+├── main.py                # Entry point (starts Streamlit UI)
+├── configs/               # Configuration and prompts
+│   ├── agents_configs.py  # Agent configurations
+│   ├── llm_params.py      # LLM model parameters
+│   └── prompts.py         # Prompt templates
+├── core/                  # Main business logic
+│   ├── agent.py           # Agent orchestration
+│   ├── clients.py         # Multi-model LLM clients
+│   ├── llm.py             # LLM client wrapper
+│   ├── metadata.py        # Metadata generation
+│   ├── repo.py            # Repository indexing
+│   ├── retriever.py       # Context retrieval
+│   ├── state.py           # Session state management
+│   └── usage_tracker.py   # Cost tracking
+├── use_cases/             # Use case implementations
+│   ├── documentation.py   # Documentation generation
+│   ├── deployment.py      # Deployment planning
+│   ├── testing.py         # Testing strategies
+│   └── refactoring.py     # Refactoring recommendations
+└── ui/                    # Streamlit interface
    ├── 🏠_Home.py         # Main page
+    ├── pages/             # Feature pages
+    └── components/        # Reusable components
 ```
 
-## Comandos Útiles
+## Useful Commands
 
 ```bash
-# Instalar dependencias
+# Install dependencies
 pip install -e .
 
-# Ejecutar la aplicación
+# Run the application
 codeas
 
-# Formateo de código
+# Code formatting
 make style
 
-# Ejecutar con Streamlit directamente
+# Run with Streamlit directly
 streamlit run src/codeas/ui/🏠_Home.py
 ```
 
-## Stack Tecnológico
+## Tech Stack
 
-| Componente | Tecnología |
+| Component | Technology |
 |------------|------------|
-| Lenguaje | Python 3.9-3.11 |
+| Language | Python 3.9-3.11 |
 | UI | Streamlit 1.28+ |
-| Validación | Pydantic 2.5+ |
+| Validation | Pydantic 2.5+ |
 | LLM Providers | OpenAI, Anthropic, Google Gemini |
 | Token Counting | tokencost |
 | Code Quality | black, isort, ruff |
 
-## Convenciones de Código
+## Code Conventions
 
-- **Formateo**: Usar `make style` (black + isort + ruff)
-- **Validación de datos**: Pydantic BaseModel para todas las estructuras de datos
-- **Tipado**: Type hints obligatorios en funciones públicas
-- **Documentación**: Docstrings en español para funciones principales
-- **Imports**: Ordenados con isort (perfil black)
+- **Formatting**: Use `make style` (black + isort + ruff)
+- **Data validation**: Pydantic BaseModel for all data structures
+- **Typing**: Type hints required for public functions
+- **Documentation**: Docstrings in Spanish for main functions
+- **Imports**: Ordered with isort (black profile)
 
-## Arquitectura Clave
+## Key Architecture
 
-### Módulos Core
+### Core Modules
 
-1. **State** (`core/state.py`): Gestión centralizada del estado de sesión
-2. **Repo** (`core/repo.py`): Indexación y filtrado de archivos del repositorio
-3. **Metadata** (`core/metadata.py`): Clasificación y extracción de metadatos de archivos
-4. **Retriever** (`core/retriever.py`): Selección de contexto relevante para LLM
-5. **Agent** (`core/agent.py`): Normalización de interacciones con LLM
+1. **State** (`core/state.py`): Centralized session state management
+2. **Repo** (`core/repo.py`): Repository file indexing and filtering
+3. **Metadata** (`core/metadata.py`): File classification and metadata extraction
+4. **Retriever** (`core/retriever.py`): Relevant context selection for LLM
+5. **Agent** (`core/agent.py`): LLM interaction normalization
 
-### Patrones de Diseño
+### Design Patterns
 
-- **Metadata-Driven**: Metadatos pre-computados reducen costos de tokens
-- **Supervised Automation**: Flujo Preview → Review → Apply
-- **Multi-Model Support**: Capa de abstracción para múltiples proveedores LLM
-- **Cost Transparency**: Tracking completo de tokens/costos
+- **Metadata-Driven**: Pre-computed metadata reduces token costs
+- **Supervised Automation**: Preview → Review → Apply flow
+- **Multi-Model Support**: Abstraction layer for multiple LLM providers
+- **Cost Transparency**: Complete token/cost tracking
 
-### Flujo de Trabajo
+### Workflow
 
-1. Usuario selecciona repositorio en Home
-2. Generación de metadatos (o carga de caché)
-3. Aplicación de filtros por página
-4. Fase de preview con estimación de costos
-5. Generación ejecuta LLM con contexto seleccionado
-6. Review y selección de outputs
-7. Aplicación escribe artefactos al filesystem
-8. Tracking registra uso y costos
+1. User selects repository on Home
+2. Metadata generation (or cache loading)
+3. Apply filters per page
+4. Preview phase with cost estimation
+5. Generation executes LLM with selected context
+6. Review and output selection
+7. Application writes artifacts to filesystem
+8. Tracking records usage and costs
 
-## Datos de Runtime
+## Runtime Data
 
-Todos los datos de ejecución se almacenan en `.codeas/`:
-- `metadata.json`: Metadatos cacheados
-- `filters.json`: Patrones de filtrado por página
-- `outputs/`: Artefactos generados
-- `usage.json`: Tracking de costos
+All execution data is stored in `.codeas/`:
+- `metadata.json`: Cached metadata
+- `filters.json`: Filtering patterns per page
+- `outputs/`: Generated artifacts
+- `usage.json`: Cost tracking
 
-## Casos de Uso Principales
+## Main Use Cases
 
-1. **Documentation**: Genera 8 secciones de documentación automática
-2. **Deployment**: Análisis de infraestructura y generación de Terraform
-3. **Testing**: Estrategias de test y casos de prueba
-4. **Refactoring**: Identificación de mejoras y generación de diffs
+1. **Documentation**: Generates 8 automatic documentation sections
+2. **Deployment**: Infrastructure analysis and Terraform generation
+3. **Testing**: Test strategies and test cases
+4. **Refactoring**: Improvement identification and diff generation
 
-## Variables de Entorno
+## Environment Variables
 
 ```bash
-OPENAI_API_KEY=sk-...        # API key de OpenAI
-ANTHROPIC_API_KEY=sk-ant-... # API key de Anthropic
-GOOGLE_API_KEY=...           # API key de Google Gemini
+OPENAI_API_KEY=sk-...        # OpenAI API key
+ANTHROPIC_API_KEY=sk-ant-... # Anthropic API key
+GOOGLE_API_KEY=...           # Google Gemini API key
 ```
 
-## Consideraciones para Desarrollo
+## Development Considerations
 
-- El proyecto usa emojis en nombres de archivos UI (ej: `🏠_Home.py`)
-- Los prompts están centralizados en `configs/prompts.py` (~28KB)
-- La documentación principal está en español
-- Persistencia de estado mediante archivos JSON en `.codeas/`
-- No modificar directamente `metadata.json` - se regenera automáticamente
+- The project uses emojis in UI file names (e.g.: `🏠_Home.py`)
+- Prompts are centralized in `configs/prompts.py` (~28KB)
+- Main documentation is in Spanish
+- State persistence through JSON files in `.codeas/`
+- Do not modify `metadata.json` directly - it regenerates automatically