
Conversation

@edmundmiller commented Sep 25, 2025

Summary

This PR enhances the Claude Code development hooks with advanced JSON reporting capabilities, transforming them from simple pass/fail indicators into intelligent development assistants.

Enhanced Features

🔧 EditorConfig Hook (format-editorconfig.py)

  • File Change Detection: SHA256 hash comparison to identify specific formatting changes (see the sketch after this list)
  • Performance Metrics: Execution timing and eclint version reporting
  • Auto-Installation Progress: Real-time feedback during eclint installation
  • Structured Error Handling: JSON errors with actionable suggestions
  • Rich Success Messages: Detailed reporting of actual changes applied
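
The change detection boils down to hashing each touched file before and after eclint runs. A minimal sketch of that idea follows; the eclint invocation and message wording are assumptions for illustration, not the literal code in format-editorconfig.py:

```python
import hashlib
import subprocess
import sys
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA256 hex digest of a file's current contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

target = Path(sys.argv[1])
before = file_digest(target)

# Let eclint rewrite the file in place; the exact subcommand/flag depends on the eclint build in use.
subprocess.run(["eclint", "fix", str(target)], check=False)

after = file_digest(target)
if before != after:
    print(f"{target}: formatting applied ({before[:8]} -> {after[:8]})")
else:
    print(f"{target}: already conforms to .editorconfig")
```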

🧪 Test Runner Hook (run-tests.py)

  • Gradle Output Parsing: Structured analysis of build results and timing
  • XML Test Result Parsing: Detailed test statistics with method names and execution times (see the parsing sketch after this list)
  • Intelligent Failure Analysis: Pattern matching for NPEs, assertions, timeouts
  • Context-Aware Suggestions: File-specific recommendations based on failure patterns
  • Comprehensive Success Reporting: Test method details and build metrics
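
The XML step reads Gradle's standard JUnit reports. A sketch of that parsing, assuming the default build/test-results layout and illustrative dictionary keys:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def parse_test_results(results_dir: Path) -> dict:
    """Summarize Gradle's JUnit XML reports (TEST-*.xml) into one dictionary."""
    summary = {"tests": 0, "failures": 0, "failed": []}
    for report in results_dir.glob("TEST-*.xml"):
        suite = ET.parse(report).getroot()
        summary["tests"] += int(suite.get("tests", 0))
        summary["failures"] += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
        for case in suite.iter("testcase"):
            # a failed test case carries either a <failure> or an <error> child
            failure = case.find("failure")
            if failure is None:
                failure = case.find("error")
            if failure is not None:
                summary["failed"].append({
                    "name": f"{case.get('classname')}.{case.get('name')}",
                    "type": (failure.get("type") or "").lower(),
                    "message": (failure.get("message") or "").lower(),
                    "time": case.get("time"),
                })
    return summary
```

A summary like this is what feeds the failure pattern matching (NPEs, assertions, timeouts) that produces the context-aware suggestions.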

📊 Advanced JSON Output

  • Strategic Control Fields: Proper use of decision: "block", suppressOutput: true, systemMessage (illustrated after this list)
  • Structured Metadata: Rich data for Claude's intelligent decision-making
  • Performance Insights: Timing metrics and progress indication
  • Enhanced Error Context: Detailed diagnostics with actionable fix suggestions
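
Concretely, each hook prints a single JSON object to stdout so Claude Code can react to it. A sketch of the blocking path; the helper name and the metadata field are illustrative rather than part of the documented hook schema:

```python
import json
import sys

def report_failure(reason: str, details: dict) -> None:
    """Emit a structured hook result asking Claude to act on the failure."""
    print(json.dumps({
        "decision": "block",        # tell Claude the operation needs follow-up
        "reason": reason,           # actionable explanation Claude can work from
        "suppressOutput": True,     # keep raw hook stdout out of the transcript
        "systemMessage": "Tests failed - see the structured details.",
        "metadata": details,        # illustrative: extra structured context
    }))
    sys.exit(0)

# e.g. report_failure("2 of 14 tests failed in TaskProcessorTest",
#                     {"failed": ["TaskProcessorTest.testSubmit"], "duration_s": 41.2})
```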

Benefits

  1. Better Claude Decision-Making: Structured data enables intelligent responses to test failures
  2. Actionable Feedback: Specific suggestions help Claude fix issues automatically
  3. Rich Context: Detailed results provide better understanding of changes and impacts
  4. Performance Insights: Timing and metrics help optimize development workflow
  5. User Experience: Clean, informative messages with progress indication

@edmundmiller requested review from a team as code owners on September 25, 2025 10:26

netlify bot commented Sep 25, 2025

Deploy Preview for nextflow-docs-staging ready!

| Name | Link |
| --- | --- |
| 🔨 Latest commit | f856d9f |
| 🔍 Latest deploy log | https://app.netlify.com/projects/nextflow-docs-staging/deploys/68d53702a583c000084912da |
| 😎 Deploy Preview | https://deploy-preview-6429--nextflow-docs-staging.netlify.app |

@edmundmiller marked this pull request as draft on September 25, 2025 10:28
@edmundmiller self-assigned this on Sep 25, 2025
@edmundmiller force-pushed the claude-hooks-enhanced branch 2 times, most recently from 500782e to 41d9944 on September 25, 2025 10:35
edmundmiller and others added 2 commits September 25, 2025 14:34
- Add EditorConfig enforcement hook using eclint
- Add smart test runner that maps source files to test classes  
- Support all modules (nextflow, nf-commons, nf-lang, etc.) and plugins
- Configure hooks to run on Edit/Write/MultiEdit operations
- Add comprehensive documentation in .claude/README.md
- Hooks provide immediate feedback on code formatting and test results

Signed-off-by: Edmund Miller <edmund.miller@seqera.io>
- Add file change detection and performance metrics to format-editorconfig.py
- Add detailed test result parsing and intelligent error analysis to run-tests.py
- Implement structured JSON output with rich metadata for Claude decision-making
- Add timing metrics, progress indication, and actionable error suggestions
- Transform hooks from simple pass/fail into intelligent development assistants

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Edmund Miller <edmund.miller@seqera.io>
@edmundmiller marked this pull request as ready for review on September 25, 2025 12:40
```python
# Excerpt from run-tests.py, generate_failure_suggestions() (as submitted)
elif 'assertionerror' in failure_type or 'assertion' in failure_message:
    suggestions.append(f"Review test assertions in {failed_test['name']}")
elif 'comparionfailure' in failure_type:
    suggestions.append(f"Check expected vs actual values in {failed_test['name']}")
```

Bug: Typo in Function Name Causes JUnit Exception Handling Failure

The generate_failure_suggestions function contains a typo, comparionfailure, which should be comparisonfailure. This prevents the hook from correctly identifying JUnit ComparisonFailure exceptions, so specific suggestions for these test failures aren't generated.
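
For reference, the corrected branch would read:

```python
elif 'comparisonfailure' in failure_type:
    suggestions.append(f"Check expected vs actual values in {failed_test['name']}")
```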

