From 6d37f8241f4a339ddac0561be1f9d09814931379 Mon Sep 17 00:00:00 2001
From: nidangavali
Date: Sat, 3 Jan 2026 07:43:45 +0530
Subject: [PATCH] CLID-515: claude: add slash command for testcase generation
 for PR

---
 .claude/commands/README.md   | 297 +++++++++++++++++++++++++++++++++++
 .claude/commands/pr-tests.md | 222 ++++++++++++++++++++++++++
 2 files changed, 519 insertions(+)
 create mode 100644 .claude/commands/pr-tests.md

diff --git a/.claude/commands/README.md b/.claude/commands/README.md
index 7604d6899..2e14c8a83 100644
--- a/.claude/commands/README.md
+++ b/.claude/commands/README.md
@@ -136,6 +136,303 @@ The command will:
 - For operators, you can specify entire catalogs (`full: true`) or individual packages
 - Use digests instead of tags for additional images when you need reproducible mirrors
 
+### `/pr-tests`
+
+Intelligent testing recommendations for your Pull Requests, powered by Claude Code.
+
+A Claude Code skill that analyzes a GitHub Pull Request and generates a comprehensive, context-aware testing strategy tailored to the code changes.
+
+#### Why Use This?
+
+- **QE-Focused Analysis** - Tailored for Quality Engineering: integration, E2E, regression, and manual testing rather than unit-test authoring
+- **No Code Output** - Shows only line numbers and high-level descriptions, no code snippets
+- **Comprehensive Test Strategy** - Integration, E2E, regression, manual, and performance testing recommendations
+- **Risk Assessment** - Identifies high-risk areas requiring critical testing
+- **Impact Analysis** - Understands scope, dependencies, and user workflows affected
+- **Multi-Language** - Supports Go, Python, JavaScript/TypeScript, Java, Ruby, Rust, and more
+
+#### Key Features
+
+| Feature | Description |
+|---------|-------------|
+| **Zero Configuration** | Works out of the box as an instruction-based skill |
+| **QE Test Strategy** | Focuses on integration, E2E, regression, and manual testing |
+| **No Code Output** | Test approach descriptions only, no test implementations |
+| **Comprehensive Analysis** | PR overview, change impact, risk assessment, and scope detection |
+| **Smart Prioritization** | Identifies P0/P1/P2 test scenarios based on risk |
+| **Framework Detection** | Automatically identifies the language and testing frameworks |
+| **Multi-Language Support** | Handles Go, Python, JS/TS, Java, Ruby, Rust, and more |
+| **Edge Case Discovery** | Suggests edge cases such as concurrent access, permission issues, and error conditions |
+| **Test Environment Guidance** | Setup requirements, test data needs, and configuration |
+
+#### Installation
+
+##### Prerequisites
+
+1. Install the GitHub CLI (`gh`) and make sure it is on your `PATH`.
+2. Authenticate the GitHub CLI:
+
+```bash
+gh auth login
+```
+
+#### Usage
+
+##### Basic Usage
+
+1. Navigate to your project directory:
+
+```bash
+cd /path/to/your/project
+```
+
+2. Start Claude Code:
+
+```bash
+claude
+```
+
+3. Run the analysis:
+
+```bash
+/pr-tests https://github.com/owner/repo/pull/123
+```
+
+##### Command Format
+
+```
+/pr-tests <pr-url>
+```
+
+Where `<pr-url>` is the full GitHub pull request URL.
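For reference, the `owner/repo/pull/number` shape can be pulled apart with plain shell string handling. This is an illustrative sketch only - the `pr_url` value is a stand-in, and the skill itself simply forwards the URL to `gh` without parsing it:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical example URL; any https://github.com/OWNER/REPO/pull/NUMBER works.
pr_url="https://github.com/owner/repo/pull/123"

# Drop the scheme and host, then split the remaining path segments.
path="${pr_url#https://github.com/}"   # -> owner/repo/pull/123
owner="${path%%/*}"                    # first segment
rest="${path#*/}"                      # -> repo/pull/123
repo="${rest%%/*}"                     # second segment
number="${path##*/}"                   # last segment

echo "owner=${owner} repo=${repo} pr=${number}"
# prints: owner=owner repo=repo pr=123
```

Pure parameter expansion keeps the check dependency-free; anything that does not match the expected shape simply produces nonsense segments, which is a cheap way to spot a malformed URL before handing it to the command.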
+
+#### Example Output
+
+##### Input
+
+```bash
+/pr-tests https://github.com/openshift/oc-mirror/pull/1289
+```
+
+##### Output
+
+```markdown
+● 🧪 QE PR Testing Strategy Analysis
+
+  PR Overview
+
+  - PR Number: #1289
+  - Title: OCPBUGS-56398: Removes the clean up of logs directory
+  - Author: Alex Guidi (@aguidirh)
+  - Branch: ocpbugs-56398
+  - Change Type: Bug fix
+  - Language: Go
+  - File Classification: ✅ Source file (needs testing)
+
+  Changed Lines Summary
+
+  v2/internal/pkg/cli/executor.go:
+  - v2/internal/pkg/cli/executor.go:1063-1064 (removed)
+
+  The change removes 2 lines that performed automatic cleanup of the logs directory.
+
+  Change Analysis
+
+  v2/internal/pkg/cli/executor.go
+
+  What Changed:
+  - Removed automatic deletion of the logs directory during initialization
+  - The setupLogsLevelAndDir() function no longer calls os.RemoveAll(o.LogsDir)
+  - The logs directory is now preserved between executions
+
+  Impact and Scope:
+  - CRITICAL behavioral change: log persistence across multiple oc-mirror operations
+  - Affects all oc-mirror commands: mirror, delete, list, and any future commands
+  - Changes system behavior from "clean slate each run" to "accumulate logs over time"
+  - Impacts troubleshooting workflows (positive - historical logs retained)
+  - Impacts disk space management (logs accumulate until manually cleaned)
+  - May affect automated workflows expecting a fresh logs directory
+
+  Dependencies and Integrations Affected:
+  - Log aggregation or monitoring systems expecting specific log patterns
+  - Automation scripts that parse logs or expect clean log directories
+  - Disk space management and cleanup procedures
+  - Backup/restore operations involving the working directory
+  - Any tooling that relies on log file naming conventions
+
+  QE Testing Recommendations
+
+  Test Scenarios by Changed Files
+
+  Component: Log Management in oc-mirror Executor
+
+  Suggested Test Scenarios (QE Perspective):
+
+  1. Verify logs persistence across multiple mirror operations
+     - Run multiple mirror operations sequentially
+     - Verify logs from each operation are preserved and accessible
+     - Confirm logs don't overwrite or conflict with each other
+  2. Test logs accumulation over extended usage
+     - Perform 10+ consecutive oc-mirror operations (mirror, delete, list)
+     - Verify all log files are retained
+     - Check log directory structure and organization
+  3. Verify log directory creation on first run
+     - Execute oc-mirror on a fresh working directory
+     - Confirm the logs directory is created with correct permissions
+     - Verify logs are written successfully
+  4. Test behavior with a pre-existing logs directory
+     - Execute oc-mirror with an existing logs directory containing previous logs
+     - Verify existing logs are NOT deleted
+     - Verify new logs are added alongside old logs
+  5. Verify log file naming and uniqueness
+     - Run multiple operations in quick succession
+     - Confirm each operation generates unique log files
+     - Verify no log file overwrites or conflicts occur
+  6. Test disk space impact from log accumulation
+     - Run oc-mirror operations over an extended period
+     - Monitor disk space usage in the logs directory
+     - Verify system behavior when the disk approaches capacity
+  7. Verify backward compatibility with existing workflows
+     - Test integration with existing automation/CI pipelines
+     - Verify log parsing tools still function correctly
+     - Confirm monitoring/alerting systems handle accumulated logs
+  8. Test log retention in error scenarios
+     - Trigger failures during oc-mirror operations
+     - Verify error logs are preserved for debugging
+     - Confirm logs from failed operations remain accessible
+  9. Verify multi-user/concurrent execution handling
+     - Run multiple oc-mirror instances with the same working directory
+     - Verify log isolation or proper handling of concurrent writes
+     - Check for race conditions or corruption
+  10. Test log directory permissions and access
+      - Verify the logs directory is created with correct permissions (0755)
+      - Test read access to accumulated logs
+      - Verify behavior with restricted permissions
+
+  Edge Cases to Cover:
+  - Logs directory exists as a file instead of a directory
+  - Logs directory is a symbolic link
+  - Working directory doesn't exist or is inaccessible
+  - Insufficient disk space for log accumulation
+  - Logs directory with thousands of existing files
+  - Concurrent writes to the logs directory from multiple instances
+  - Logs directory with restricted permissions (read-only, no write)
+  - Very long running operations generating large log files
+  - System restart or crash during a logging operation
+
+  Error Conditions to Handle:
+  - Permission denied when accessing the logs directory
+  - Disk full during a log write operation
+  - Invalid working directory path
+  - Corrupted existing logs directory
+  - Network filesystem latency/failures (if the working dir is on network storage)
+
+  Integration Points to Verify:
+  - Integration with the oc-mirror mirror command
+  - Integration with the oc-mirror delete command
+  - Integration with the oc-mirror list command
+  - Integration with log level configuration (--log-level flag)
+  - Integration with working directory configuration
+  - Integration with CI/CD pipelines
+  - Integration with log monitoring tools
+  - Integration with backup/restore procedures
+
+  User Workflows Affected:
+  - Debugging failures by reviewing historical logs
+  - Monitoring oc-mirror operations over time
+  - Troubleshooting recurring issues using accumulated logs
+  - Disk space management and cleanup procedures
+  - Automated log collection for support cases
+  - Log rotation and archival processes
+
+  Test Strategy
+
+  Test Levels:
+  - Regression Testing (CRITICAL): Verify the bug fix - logs are NOT deleted
+  - Integration Testing (HIGH): Test with all oc-mirror commands (mirror, delete, list)
+  - E2E Testing (HIGH): Full user workflows from installation through multiple operations
+  - Manual Testing (MEDIUM): Exploratory testing of edge cases and error scenarios
+  - Performance Testing (MEDIUM): Log accumulation impact over extended usage
+
+  Coverage Goals:
+  - 100% coverage of all oc-mirror commands with the changed behavior
+  - Verify all supported platforms (Linux, macOS, Windows if applicable)
+  - Test both fresh installations and upgrades from previous versions
+  - Cover all error scenarios related to log directory operations
+
+  Priority Areas:
+  1. P0 - CRITICAL: Regression test - verify logs are preserved, not deleted
+  2. P0 - CRITICAL: Verify backward compatibility with existing workflows
+  3. P1 - HIGH: Multi-operation scenarios (3+ consecutive runs)
+  4. P1 - HIGH: Integration with all oc-mirror commands
+  5. P2 - MEDIUM: Disk space management and large-scale accumulation
+  6. P2 - MEDIUM: Concurrent execution scenarios
+  7. P3 - LOW: Edge cases with permissions and filesystem issues
+
+  Risk Assessment:
+  - HIGH RISK: Breaking existing automation expecting clean logs
+  - HIGH RISK: Disk space exhaustion from uncontrolled log growth
+  - MEDIUM RISK: Log file conflicts or overwrites in concurrent scenarios
+  - MEDIUM RISK: Performance degradation with thousands of log files
+  - LOW RISK: Backward compatibility issues with older oc-mirror versions
+
+  Test Scenario Descriptions
+
+  Additional scenarios to consider:
+
+  - "Verify end-to-end mirror workflow preserves logs from previous runs"
+  - "Test system behavior when the logs directory reaches gigabytes of data"
+  - "Verify error messages guide users to retained logs for troubleshooting"
+  - "Test upgrade scenario from a version that deleted logs to the new version"
+  - "Verify log retention across system restarts and crashes"
+  - "Test integration with log rotation tools and practices"
+  - "Verify documentation and user guidance are updated for the new behavior"
+
+  QE Testing Approach Guidance
+
+  Test Environment Considerations
+
+  Required Test Environment Setup:
+  - Multiple test environments: fresh install, upgrade from a previous version
+  - Different operating systems: RHEL, Fedora, Ubuntu, macOS
+  - Various storage configurations: local disk, NFS, cloud storage
+  - Different filesystem types: ext4, xfs, NTFS
+  - Sufficient disk space for log accumulation testing
+
+  Test Data Requirements:
+  - Sample image sets for mirror operations
+  - Pre-existing logs directories with various file counts (0, 10, 100, 1000 files)
+  - Test working directories with different permission configurations
+  - Large and small mirror operations to generate varying log sizes
+
+  External Dependencies and Integrations:
+  - GitHub CLI (gh) for PR analysis
+  - Container registries for mirror operations
+  - File system monitoring tools for disk space tracking
+  - Log parsing/analysis tools used in production
+
+  Configuration Needed for Testing:
+  - Multiple working directory configurations
+  - Different log level settings (debug, info, warning, error)
+  - Various oc-mirror command configurations
+  - CI/CD pipeline integration configurations
+
+  Files Not Requiring Tests
+
+  N/A - Only one source file was modified, and it requires comprehensive QE testing due to the behavioral change in log management.
+
+  ---
+  Summary
+
+  This is a bug fix with CRITICAL impact on operational behavior. The removal of automatic log cleanup at v2/internal/pkg/cli/executor.go:1063-1064 fundamentally changes how oc-mirror manages logs - from ephemeral (deleted each run) to persistent (accumulated over time).
+
+  Key QE Focus Areas:
+  1. Regression testing to confirm the fix works as intended
+  2. Backward compatibility with existing automation and workflows
+  3. Disk space management implications
+  4. Multi-operation scenarios to verify log accumulation works correctly
+  5. Integration testing across all oc-mirror commands
+
+  This change requires thorough QE validation before release due to its impact on debugging workflows, disk space usage, and existing automation that may depend on the previous behavior.
+```
+
 ## Adding New Commands
 
 To add a new slash command:
diff --git a/.claude/commands/pr-tests.md b/.claude/commands/pr-tests.md
new file mode 100644
index 000000000..50a5aa8fb
--- /dev/null
+++ b/.claude/commands/pr-tests.md
@@ -0,0 +1,222 @@
+---
+argument-hint: <pr-url>
+description: Analyze a GitHub Pull Request and provide a QE testing approach and recommendations (line numbers only, no code)
+model: sonnet
+---
+
+# 🧪 AI PR Testing Strategy Analyzer
+
+**Allowed Tools**: `Bash(gh:*)`, `Bash(git:*)`
+**Target PR**: `$ARGUMENTS`
+
+---
+
+## 📋 Step 1: Parse PR Information
+
+Extract the PR details from the provided GitHub PR URL:
+
+- **Expected format**: `https://github.com/owner/repo/pull/number`
+- **Example**: `https://github.com/openshift/oc-mirror/pull/1284`
+
+The `gh` CLI accepts full PR URLs directly, so no manual parsing is required:
+
+```bash
+!gh pr view $ARGUMENTS --json number,title,author,headRefName,files
+```
+
+---
+
+## 🔍 Step 2: Fetch PR Diff
+
+Get the detailed diff for the PR to understand what changed:
+
+```bash
+!gh pr diff $ARGUMENTS
+```
+
+**IMPORTANT**: Extract and display ONLY the line numbers that changed; DO NOT show the actual code content. Parse the diff to identify:
+
+- Files changed
+- Line numbers added (marked with +)
+- Line numbers removed (marked with -)
+- Line ranges modified
+
+Present this as `filename:line_number` or `filename:start_line-end_line`.
+
+---
+
+## 🔬 Step 3: Analyze Changes
+
+Based on the PR diff line numbers and metadata, analyze the following:
+
+### 3.1 Language/Framework Detection
+
+Identify the primary programming language and testing framework used in the project.
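A rough sense of how such detection can work from file paths alone: the sketch below is illustrative, not the skill's actual implementation - `detect_lang` is a made-up helper, and in practice the path list would come from `gh pr view <pr-url> --json files`. It simply tallies changed-file extensions and reports the most common language:

```shell
#!/usr/bin/env bash
set -euo pipefail

# detect_lang: reads one file path per line on stdin and prints the
# most frequent language, or "unknown" if no extension is recognized.
detect_lang() {
  awk -F. '
    BEGIN {
      map["go"] = "Go"; map["py"] = "Python"; map["js"] = "JavaScript"
      map["ts"] = "TypeScript"; map["java"] = "Java"
      map["rb"] = "Ruby"; map["rs"] = "Rust"
    }
    NF > 1 && ($NF in map) { count[map[$NF]]++ }   # tally by extension
    END {
      best = ""
      for (l in count) if (best == "" || count[l] > count[best]) best = l
      print (best == "" ? "unknown" : best)
    }
  '
}

# Demo with file paths resembling the example PR above.
printf '%s\n' \
  "v2/internal/pkg/cli/executor.go" \
  "v2/internal/pkg/cli/executor_test.go" \
  "docs/usage.md" | detect_lang
# prints: Go
```

Extension counting is only a heuristic; the skill can additionally look at project markers such as `go.mod` or `package.json` to confirm the framework.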
+
+### 3.2 Change Type Analysis
+
+Determine the nature of the change:
+
+| Change Type | QE Testing Requirement |
+|-------------------|------------------------------------------------------------------|
+| **New feature** | Integration, E2E, and manual testing with happy path + edge cases |
+| **Bug fix** | Regression testing targeting the specific bug and related workflows |
+| **Refactor** | Regression and integration testing ensuring behavior is unchanged |
+| **Performance** | Performance, load, and benchmark testing |
+| **Documentation** | Minimal/no testing needed |
+| **Test-only** | No additional QE tests needed |
+
+### 3.3 File Classification
+
+For each file in the PR, categorize:
+
+- ✅ **Source files** (need testing)
+- 🧪 **Test files** (already tests)
+- ⚙️ **Config/documentation files** (skip)
+- 📦 **Generated/binary files** (skip)
+
+---
+
+## 🎯 Step 4: Identify Testing Needs
+
+For each modified source file, identify:
+
+### 4.1 Code Change Analysis
+
+Understand from the diff:
+
+- What functions/methods were added or modified
+- What the expected behavior is
+- What edge cases exist
+- What errors could occur
+- What dependencies are involved
+
+### 4.2 QE Test Coverage Recommendations
+
+Recommend what should be tested from a QE perspective:
+
+#### For Bug Fixes
+
+- Regression test scenarios that reproduce the original bug
+- End-to-end verification that the fix works in a production-like environment
+- Related edge cases and user workflows
+- Impact on existing functionality
+
+#### For New Features
+
+- Integration testing with existing features
+- End-to-end user workflows with valid inputs
+- Edge cases (empty, null, boundary values)
+- Error conditions (invalid inputs, system failures)
+- Performance and load testing if applicable
+
+#### For Refactors
+
+- Functional testing ensuring behavior hasn't changed
+- Integration testing with dependent components
+- Performance and regression testing
+
+### 4.3 Suggested Test Scenarios
+
+List the specific test scenarios that should be covered, including:
+
+- Test scenario descriptions (what to verify)
+- Edge cases to consider
+- Mock/stub requirements
+
+**DO NOT provide code examples or test function implementations.**
+
+---
+
+## 📊 Step 5: Present Testing Strategy
+
+Display a comprehensive testing approach summary with the following sections:
+
+### PR Overview
+
+- PR number, title, and author
+- Change type (feature, bug fix, refactor, etc.)
+- Files modified and their purpose
+- Language and testing framework detected
+
+### Changed Lines Summary
+
+**CRITICAL**: Display ONLY the line numbers that were modified, NOT the code content.
+
+For each file, show:
+
+- File path
+- Line numbers added: `filename:line_number` format
+- Line numbers removed: `filename:line_number` format
+- Line ranges modified: `filename:start_line-end_line` format
+
+Example format:
+
+```
+src/handler.go:45-67 (modified)
+src/handler.go:102 (added)
+src/handler.go:89-91 (removed)
+```
+
+### Change Analysis
+
+For each modified source file:
+
+- High-level description of what changed (NO code snippets)
+- Impact and scope of changes
+- Dependencies and integrations affected
+- Reference to line numbers from the Changed Lines Summary above
+
+### QE Testing Recommendations
+
+#### Test Scenarios by Changed Files
+
+For each file that needs testing, provide:
+
+- Suggested test scenarios (5-10 specific descriptions of what to test from a QE perspective)
+- Edge cases to cover
+- Error conditions to handle
+- Integration points to verify
+- User workflows affected
+
+**DO NOT include any code examples or test implementations.**
+
+#### Test Strategy
+
+- Test levels (integration, E2E, manual, regression, performance)
+- Coverage goals
+- Priority areas (critical paths first)
+- Risk assessment
+
+#### Test Scenario Descriptions
+
+Provide descriptive test scenarios such as:
+
+- "Verify end-to-end workflow with valid user input"
+- "Test system behavior with null/empty input"
+- "Test system behavior with boundary values"
+- "Verify error handling when external dependency fails"
+- "Test integration with dependent services/components"
+- "Verify backward compatibility with existing features"
+
+**DO NOT write actual test code or function names.**
+
+### QE Testing Approach Guidance
+
+#### Test Environment Considerations
+
+- Required test environment setup
+- Test data requirements
+- External dependencies and integrations
+- Configuration needed for testing
+
+### Files Not Requiring Tests
+
+List files that don't need testing and why:
+
+- Test files (already tests)
+- Configuration files
+- Documentation
+- Minor changes covered by existing tests
+
+---
+
+**Note**: Present ONLY the QE testing approach and strategy from a quality engineering perspective. Focus on integration, E2E, regression, and manual testing scenarios. DO NOT include any code examples, snippets, test implementations, or unit test recommendations. Reference changes by line numbers only.
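One rough way to honor the "line numbers only" rule above is to report new-file line ranges from the unified-diff hunk headers, never the code itself. The sketch below is illustrative, not the command's actual implementation - `lines_changed` is a made-up helper, and the ranges are approximate because a hunk's `+start,count` also spans unchanged context lines:

```shell
#!/usr/bin/env bash
set -euo pipefail

# lines_changed: reads a unified diff (e.g. the output of `gh pr diff`) on
# stdin and emits one "file:start-end" reference per hunk, with no code.
lines_changed() {
  awk '
    /^\+\+\+ b\// { file = substr($2, 3); next }   # current file from "+++ b/path"
    /^@@/ {
      # Hunk header: @@ -old_start,old_count +new_start,new_count @@
      split($3, plus, ",")                         # $3 is "+new_start,new_count"
      start = substr(plus[1], 2)                   # strip the leading "+"
      count = (plus[2] == "" ? 1 : plus[2])        # a bare "+N" means one line
      if (count > 1)
        printf "%s:%d-%d\n", file, start, start + count - 1
      else
        printf "%s:%d\n", file, start
    }
  '
}

# Demo on a hypothetical two-hunk diff.
lines_changed <<'EOF'
--- a/src/handler.go
+++ b/src/handler.go
@@ -45,3 +45,4 @@ func Handle() {
@@ -102,0 +103,1 @@ func helper() {
EOF
# prints:
# src/handler.go:45-48
# src/handler.go:103
```

A precise variant would walk the hunk bodies and track `+`/`-` lines individually, but hunk headers alone already give file-scoped ranges in the `filename:start_line-end_line` shape Step 5 asks for.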