# Phase 4.0: Meta Implementation Plan for SolarWindPy Plotting Refactor
## Executive Summary
This document captures the complete meta implementation plan for executing the SolarWindPy plotting module refactoring documented in Phases 4.1-4.6. It serves as the execution strategy and tracking framework to ensure systematic implementation of all 15 architectural decisions, 50 code transformations, and 30 metrics.
**Key Outcomes:**
- 20.6% LOC reduction (10,583 → 8,400 lines)
- 62.5% complexity reduction (8/10 → 3/10)
- 100% backward compatibility maintained
- 4.1x ROI within first year
## Navigation Using Phase 4.1 as Master Reference
### How Phase 4.1 Drives Complete Implementation
Phase 4.1 (Issue #366) serves as the master control document with three key dashboards:
1. **Architectural Decisions Dashboard (AD-001 to AD-015)** (see the sketch after this list)
   - Each decision maps to specific code transformations
   - Risk assessments guide implementation order
   - Impact metrics define success criteria
2. **Code Transformations Dashboard (CT-001 to CT-050)**
   - Detailed LOC impact for each transformation
   - Risk levels determine staging
   - Module assignments for parallel work
3. **Metrics Dashboard (M-001 to M-030)**
   - Quantifiable targets for validation
   - Performance benchmarks
   - Quality gates
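To make the AD→CT→M cross-references queryable in code, a lightweight mapping can sit alongside the dashboards. The structure below is a minimal sketch, not the actual Phase 4.1 data: the IDs shown are examples drawn from the dashboards, and `get_dependencies` is a hypothetical helper.

```python
# cross_reference.py -- minimal sketch of a dashboard cross-reference
# (hypothetical structure; the authoritative mapping lives in Phase 4.1 / Issue #366)

# Each architectural decision (AD) lists the code transformations (CT)
# that implement it and the metrics (M) that validate it.
CROSS_REFERENCE = {
    "AD-001": {"transforms": ["CT-001", "CT-002", "CT-003"], "metrics": ["M-002", "M-007"]},
    "AD-006": {"transforms": ["CT-016", "CT-017", "CT-018"], "metrics": ["M-012"]},
}


def get_dependencies(ct_id: str) -> list:
    """Return the decisions a transformation depends on (sketch)."""
    return [ad for ad, refs in CROSS_REFERENCE.items() if ct_id in refs["transforms"]]


if __name__ == "__main__":
    print(get_dependencies("CT-016"))  # -> ['AD-006']
```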
### Daily Implementation Workflow
```python
def daily_implementation_workflow():
    """How to use Phase 4.1 every day.

    The helpers referenced here (get_current_stage, phase_4_1.*, etc.)
    are illustrative placeholders, not an implemented API.
    """
    # 1. Morning: check current stage in the Implementation Sequence
    current_stage = get_current_stage()  # e.g., "Stage 2.1"

    # 2. Identify today's CTs from the dashboard
    todays_tasks = phase_4_1.get_transformations(current_stage)
    # e.g., ["CT-016", "CT-017", "CT-018"]

    # 3. For each CT, check dependencies via the Cross-Reference
    for ct in todays_tasks:
        dependencies = phase_4_1.get_dependencies(ct)
        verify_dependencies_complete(dependencies)

    # 4. Implement using details from Issues #367-#370
    for ct in todays_tasks:
        implementation = get_implementation_details(ct)
        implement_transformation(implementation)

    # 5. Run validation from the Implementation Commands
    run_stage_tests(current_stage)

    # 6. Check metrics achieved against the Metrics Dashboard
    metrics = phase_4_1.get_stage_metrics(current_stage)
    verify_metrics_achieved(metrics)

    # 7. Update QA Checklist items
    phase_4_1.mark_complete(todays_tasks)

    # 8. Report progress on GitHub
    update_issue_progress("#366", todays_tasks)
```

## Documentation Strategy (Hybrid Approach)
### Structure Overview
```
Phase 4.0 Issue (Dashboard) - GitHub Issue #XXX
├── Links to repo files
├── Current status
└── Progress tracking

Repository Files (Details) - tmp/phase4/execution/
├── META_PLAN.md (this file)
├── implementation_tracker.py
├── progress_tracker.csv
└── visual_regression.py

Phase 4.1 Enhancement - Issue #366
└── Navigation Guide with search instructions
```
### Value Analysis
| Criterion | Issue Only | Repo Only | Hybrid (Selected) |
|---|---|---|---|
| Discoverability | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐⭐ |
| Maintainability | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Token Efficiency | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Automation | ⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Team Coordination | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Progress Visibility | ⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐⭐⭐ |
## Implementation Tracker Code
### Python Implementation Tracker
```python
# implementation_tracker.py
"""
SolarWindPy Plotting Module Refactoring Tracker
Focus: Code quality and visual consistency
"""
import subprocess
from pathlib import Path
from typing import Dict


class PlottingRefactorTracker:
    """Track plotting module refactoring progress."""

    def __init__(self):
        self.base_path = Path("solarwindpy/plotting")
        self.progress_file = Path("tmp/phase4/execution/progress_tracker.csv")
        # Module LOC targets (pure refactoring, no physics changes)
        self.modules = {
            "base": {"current": 1007, "target": 600},
            "hist_plot": {"current": 893, "target": 700},
            "scatter": {"current": 667, "target": 500},
            "spiral": {"current": 558, "target": 450},
            "contour": {"current": 455, "target": 350},
            "vector": {"current": 412, "target": 320},
        }

    def measure_code_metrics(self) -> Dict[str, Dict]:
        """Measure code quality metrics per module."""
        metrics = {}
        for module in self.modules:
            module_path = self.base_path / f"{module}.py"
            if module_path.exists():
                # Line count
                lines = len(module_path.read_text().splitlines())
                # Cyclomatic complexity via radon (-a prints the average)
                complexity_cmd = ["radon", "cc", str(module_path), "-s", "-a"]
                result = subprocess.run(complexity_cmd, capture_output=True, text=True)
                metrics[module] = {
                    "lines": lines,
                    "target": self.modules[module]["target"],
                    "reduction": f"{(1 - lines / self.modules[module]['current']) * 100:.1f}%",
                    "complexity": self._parse_complexity(result.stdout),
                }
        return metrics

    def _parse_complexity(self, output: str) -> str:
        """Parse the average complexity score from radon output."""
        for line in output.split("\n"):
            if "Average complexity" in line:
                return line.split(":")[-1].strip()
        return "Unknown"

    def check_api_compatibility(self) -> bool:
        """Ensure the public API hasn't changed by running existing tests."""
        test_cmd = ["pytest", "tests/test_plotting/", "-v"]
        result = subprocess.run(test_cmd, capture_output=True)
        return result.returncode == 0

    def verify_visual_output(self) -> Dict[str, bool]:
        """Compare plot outputs before/after refactoring."""
        results = {}
        test_modules = ["hist_plot", "scatter", "spiral", "contour"]
        for module in test_modules:
            test_file = f"tests/test_plotting/test_{module}.py"
            if Path(test_file).exists():
                # Run visual regression tests only
                cmd = ["pytest", test_file, "-k", "visual"]
                result = subprocess.run(cmd, capture_output=True)
                results[module] = result.returncode == 0
        return results

    def generate_progress_report(self) -> str:
        """Generate a simple progress report in Markdown."""
        report = ["# Plotting Refactor Progress\n"]

        # Code metrics (totals computed over measured modules only,
        # so missing files don't skew the reduction percentage)
        report.append("## Code Metrics")
        metrics = self.measure_code_metrics()
        total_current = sum(self.modules[m]["current"] for m in metrics)
        total_actual = sum(data["lines"] for data in metrics.values())
        total_target = sum(self.modules[m]["target"] for m in metrics)
        if total_current:
            report.append(f"Overall: {total_actual}/{total_target} LOC ")
            report.append(f"({(1 - total_actual / total_current) * 100:.1f}% reduction)\n")
        for module, data in metrics.items():
            status = "✅" if data["lines"] <= data["target"] else "🔄"
            report.append(
                f"- {module}: {data['lines']}/{data['target']} "
                f"({data['reduction']}) {status}"
            )

        # API compatibility
        report.append("\n## Compatibility")
        api_ok = self.check_api_compatibility()
        report.append(f"- API Tests: {'✅ PASS' if api_ok else '❌ FAIL'}")

        # Visual regression
        report.append("\n## Visual Regression")
        visual_results = self.verify_visual_output()
        for module, passed in visual_results.items():
            report.append(f"- {module}: {'✅ PASS' if passed else '❌ FAIL'}")

        # Test coverage
        report.append("\n## Test Coverage")
        coverage_cmd = ["pytest", "--cov=solarwindpy.plotting", "--cov-report=term"]
        result = subprocess.run(coverage_cmd, capture_output=True, text=True)
        for line in result.stdout.splitlines():
            if "TOTAL" in line:
                report.append(f"- Overall: {line.split()[-1]}")
                break

        return "\n".join(report)


# Usage
if __name__ == "__main__":
    tracker = PlottingRefactorTracker()
    print(tracker.generate_progress_report())
```

### Visual Regression Testing
```python
# visual_regression.py
"""Visual regression testing for the plotting refactor."""
import matplotlib.pyplot as plt
import numpy as np
from pathlib import Path
from PIL import Image
import imagehash


class VisualRegressionTester:
    """Compare plot outputs before and after refactoring."""

    def __init__(self):
        self.baseline_dir = Path("tests/baseline_plots")
        self.output_dir = Path("tests/output_plots")
        self.output_dir.mkdir(parents=True, exist_ok=True)
        self.tolerance = 5  # Perceptual hash difference tolerance (0 = identical)

    def generate_test_plot(self, plot_type, data, filename):
        """Generate a test plot and save it.

        Plain matplotlib calls stand in here; the real tests would
        instantiate the refactored solarwindpy plot classes instead.
        """
        fig, ax = plt.subplots()
        if plot_type == "scatter":
            ax.scatter(data["x"], data["y"])
        elif plot_type == "hist":
            ax.hist(data["data"], bins=50)
        elif plot_type == "spiral":
            ax.plot(data["r"] * np.cos(data["theta"]), data["r"] * np.sin(data["theta"]))
        elif plot_type == "contour":
            ax.contour(data["X"], data["Y"], data["Z"])
        output_path = self.output_dir / filename
        fig.savefig(output_path, dpi=100)
        plt.close(fig)
        return output_path

    def compare_plots(self, baseline_path, test_path):
        """Compare two plot images using perceptual hashing."""
        baseline_img = Image.open(baseline_path)
        test_img = Image.open(test_path)
        # Average hashes are robust to minor, invisible rendering differences
        baseline_hash = imagehash.average_hash(baseline_img)
        test_hash = imagehash.average_hash(test_img)
        # Hash distance: lower is more similar
        difference = baseline_hash - test_hash
        return difference <= self.tolerance

    def run_regression_tests(self):
        """Run all visual regression tests."""
        results = {}
        test_cases = [
            ("scatter", "scatter_test.png"),
            ("hist", "hist_test.png"),
            ("spiral", "spiral_test.png"),
            ("contour", "contour_test.png"),
        ]
        for plot_type, filename in test_cases:
            baseline = self.baseline_dir / filename
            if baseline.exists():
                # Generate a new plot with the refactored code
                test_data = self.get_test_data(plot_type)
                test_path = self.generate_test_plot(plot_type, test_data, filename)
                results[plot_type] = self.compare_plots(baseline, test_path)
            else:
                results[plot_type] = None  # No baseline available
        return results

    def get_test_data(self, plot_type):
        """Generate reproducible test data for each plot type."""
        np.random.seed(42)
        if plot_type == "scatter":
            return {"x": np.random.randn(100), "y": np.random.randn(100)}
        elif plot_type == "hist":
            return {"data": np.random.randn(1000)}
        elif plot_type == "spiral":
            theta = np.linspace(0, 4 * np.pi, 100)
            return {"r": theta, "theta": theta}
        elif plot_type == "contour":
            x = np.linspace(-3, 3, 50)
            y = np.linspace(-3, 3, 50)
            X, Y = np.meshgrid(x, y)
            Z = np.sin(X) * np.cos(Y)
            return {"X": X, "Y": Y, "Z": Z}
        return {}


# Usage
if __name__ == "__main__":
    tester = VisualRegressionTester()
    results = tester.run_regression_tests()
    for plot_type, passed in results.items():
        status = "✅" if passed else ("⚠️" if passed is None else "❌")
        print(f"{plot_type}: {status}")
```

## Progress Tracking Infrastructure
### CSV Progress Tracker
```csv
ID,Type,Module,Description,Status,LOC_Impact,Breaking,Notes
AD-001,Decision,all,Template Method Pattern,pending,,NO,Main refactoring approach
AD-002,Decision,all,Composition Over Inheritance,pending,,NO,Reduce complexity
AD-003,Decision,all,Mixin Reduction,pending,,NO,Eliminate mixins
AD-004,Decision,all,Abstract Method Consolidation,pending,,NO,Reduce to 3-4
AD-005,Decision,all,Strategy Pattern Usage,pending,,NO,For variations
AD-006,Decision,all,Service Class Extraction,pending,,NO,Common operations
AD-007,Decision,all,Label System Centralization,pending,,NO,Unified handling
CT-001,Transform,base,Base.set_path() template,pending,150→30,NO,80% reduction
CT-002,Transform,base,Base.set_labels() consolidation,pending,125→45,NO,64% reduction
CT-003,Transform,base,Base._format_axis() standardization,pending,80→30,NO,62% reduction
CT-004,Transform,base,PlotWithZdata.make_plot() extraction,pending,200→80,NO,60% reduction
CT-016,Transform,services,PlottingService class,pending,+150,NO,New service
CT-017,Transform,services,LabelService class,pending,+200,NO,Label factory
CT-018,Transform,services,ColorbarService class,pending,+100,NO,Colorbar mgmt
CT-031,Transform,base,DataLimFormatter integration,pending,-43,NO,Remove mixin
CT-032,Transform,base,CbarMaker service conversion,pending,-129,NO,To service
CT-041,Transform,all,Duplicate __str__() removal,pending,-196,NO,89% reduction
CT-045,Transform,utils,PathBuilder extraction,pending,180→60,NO,67% reduction
CT-046,Transform,utils,DataClipper strategy,pending,75→25,NO,67% reduction
M-001,Metric,all,Total LOC reduction,pending,20.6%,NO,10583→8400
M-002,Metric,base,Base class LOC,pending,40%,NO,1007→600
M-005,Metric,labels,Label system LOC,pending,35%,NO,2011→1300
M-007,Metric,base,Abstract methods,pending,50%,NO,6→3-4
M-011,Metric,all,Inheritance depth,pending,50%,NO,4→2 levels
M-012,Metric,all,Multiple inheritance,pending,100%,NO,4→0
M-017,Metric,all,MRO complexity,pending,62.5%,NO,8/10→3/10
M-025,Metric,all,Code duplication,pending,70%,NO,15%→5%
L-001,Component,labels,Registration system,pending,+50,NO,Runtime extensibility
L-003,Component,labels,MCS validation,pending,+43,NO,Helpful errors
L-017,Component,labels,Enhanced with_units,pending,+12,NO,Template method
```
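The tracker CSV can also be queried directly. As a minimal pandas sketch (assuming the file layout above), pulling the pending transformations for daily planning:

```python
import pandas as pd

# Load the tracker (path as defined in this meta plan)
df = pd.read_csv("tmp/phase4/execution/progress_tracker.csv")

# Pending code transformations, i.e., the candidate daily tasks
pending_cts = df[(df["Type"] == "Transform") & (df["Status"] == "pending")]
print(pending_cts[["ID", "Module", "Description", "LOC_Impact"]].head())

# Quick completion summary by item type
print(df.groupby(["Type", "Status"]).size())
```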
### Implementation Commands Script
```bash
#!/bin/bash
# implementation_commands.sh

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Paths
PROGRESS_FILE="tmp/phase4/execution/progress_tracker.csv"
PHASE4_DIR="tmp/phase4"

case "$1" in
    --today)
        if [ -z "$2" ]; then
            echo "Usage: $0 --today <stage_number>"
            exit 1
        fi
        echo -e "${GREEN}Today's Tasks (Stage $2):${NC}"
        # NOTE: stage filtering is simplified; shows the first five pending CTs
        grep "pending" "$PROGRESS_FILE" | grep "CT-" | head -5
        ;;
    --validate)
        stage=$2
        echo -e "${YELLOW}Validating Stage $stage...${NC}"
        # Run stage-specific tests
        case "$stage" in
            1)
                pytest tests/test_plotting/test_base.py -v
                ;;
            2)
                pytest tests/test_plotting/test_services.py -v
                ;;
            3)
                pytest tests/test_plotting/test_labels.py -v
                ;;
            4)
                pytest tests/test_plotting/ --cov=solarwindpy.plotting
                ;;
            *)
                echo -e "${RED}Unknown stage: $stage${NC}"
                exit 1
                ;;
        esac
        ;;
    --report)
        echo -e "${GREEN}Progress Report:${NC}"
        total=$(grep -c "^CT-\|^AD-\|^M-\|^L-" "$PROGRESS_FILE")
        completed=$(grep -c "completed" "$PROGRESS_FILE")
        pending=$(grep -c "pending" "$PROGRESS_FILE")
        blocked=$(grep -c "blocked" "$PROGRESS_FILE")
        echo "Total items: $total"
        echo -e "${GREEN}Completed: $completed${NC}"
        echo -e "${YELLOW}Pending: $pending${NC}"
        echo -e "${RED}Blocked: $blocked${NC}"
        # Calculate percentage
        if [ "$total" -gt 0 ]; then
            percent=$((completed * 100 / total))
            echo "Progress: ${percent}%"
        fi
        ;;
    --update)
        # Update the status of a specific item
        if [ -z "$2" ] || [ -z "$3" ]; then
            echo "Usage: $0 --update <ID> <status>"
            echo "Status: pending|in_progress|completed|blocked"
            exit 1
        fi
        # Replace the 5th (Status) field of the matching row
        # (simplified -- a real implementation would be more robust)
        sed -i "s/^\($2,[^,]*,[^,]*,[^,]*,\)[^,]*/\1$3/" "$PROGRESS_FILE"
        echo "Updated $2 to $3"
        ;;
    --visual)
        echo -e "${YELLOW}Running visual regression tests...${NC}"
        python tmp/phase4/execution/visual_regression.py
        ;;
    --tracker)
        echo -e "${YELLOW}Generating tracker report...${NC}"
        python tmp/phase4/execution/implementation_tracker.py
        ;;
    *)
        echo "SolarWindPy Plotting Refactor Implementation Commands"
        echo ""
        echo "Usage: $0 [command] [options]"
        echo ""
        echo "Commands:"
        echo "  --today <stage>         Show today's tasks for given stage"
        echo "  --validate <stage>      Run validation tests for stage"
        echo "  --report                Generate progress report"
        echo "  --update <ID> <status>  Update item status"
        echo "  --visual                Run visual regression tests"
        echo "  --tracker               Generate full tracker report"
        echo ""
        echo "Examples:"
        echo "  $0 --today 1"
        echo "  $0 --validate 2"
        echo "  $0 --update CT-001 completed"
        ;;
esac
```

## Phase 4.0 GitHub Issue Template
# Phase 4.0: SolarWindPy Plotting Refactor Execution Control Center
## 🎯 Purpose
This issue serves as the execution dashboard for implementing the plotting module refactoring documented in Phases 4.1-4.6 (#366-#371).
## 📊 Current Status
### Week 1 Progress
```mermaid
pie title Implementation Progress
    "Completed" : 0
    "In Progress" : 0
    "Pending" : 50
    "Blocked" : 0
```

### Today's Focus (Stage 1.1)
- CT-001: Base.set_path() template method
- CT-002: Base.set_labels() consolidation
- CT-003: Base._format_axis() standardization
- CT-004: PlotWithZdata.make_plot() extraction
## 📋 Key Resources
- 📖 Master Implementation Plan - Phase 4.1 with all references
- 📊 Metrics & Analysis - Target metrics and calculations
- 🏗️ Architectural Decisions - AD-001 to AD-015
- 🔧 Code Transformations - CT-001 to CT-050
- 🏷️ Label System - L-001 to L-020
- ✅ Closeout Plan - Final validation
## 🔧 Quick Commands
```bash
# Check today's tasks
./tmp/phase4/execution/implementation_commands.sh --today 1

# Validate current stage
./tmp/phase4/execution/implementation_commands.sh --validate 1

# Generate progress report
./tmp/phase4/execution/implementation_commands.sh --report

# Run visual regression
./tmp/phase4/execution/implementation_commands.sh --visual
```
## 📈 Metrics Dashboard
| Metric | Current | Target | Status |
|---|---|---|---|
| Total LOC | 10,583 | 8,400 | 🔄 |
| Duplication | 15% | 5% | 🔄 |
| Inheritance Depth | 4 | 2 | 🔄 |
| Abstract Methods | 6 | 3-4 | 🔄 |
| Test Coverage | 92% | ≥95% | 🔄 |
## 🚦 Stage Gates
### ✅ Stage 1: Foundation (Week 1)
- Template methods implemented
- Utilities extracted
- DataLimFormatter integrated
- All tests passing

### ⏳ Stage 2: Services (Week 2)
- PlottingService created
- LabelService created
- ColorbarService created
- CbarMaker converted

### ⏳ Stage 3: Consolidation (Week 3)
- Method duplications removed
- Label system enhanced
- Duplication < 5%

### ⏳ Stage 4: Architecture (Week 4)
- Inheritance simplified
- Performance optimized
- All metrics achieved
## 🔗 Links
This issue will be updated daily with progress. Last update: [DATE]
---
## Validation Gates
```yaml
# validation_gates.yaml
gates:
  stage_1:
    name: "Foundation (Low Risk)"
    duration: "7 hours"
    checks:
      - description: "All template methods implemented"
        test: "pytest tests/test_plotting/test_base.py -v"
        must_pass: true
      - description: "Utilities operational"
        test: "pytest tests/test_plotting/test_utils.py -v"
        must_pass: true
      - description: "No breaking changes"
        test: "pytest tests/test_plotting/ -v"
        must_pass: true
      - description: "Abstract methods reduced"
        metric: "M-007"
        current: 6
        target: 4
  stage_2:
    name: "Services (Low-Medium Risk)"
    duration: "9 hours"
    checks:
      - description: "All services operational"
        test: "pytest tests/test_plotting/test_services.py -v"
        must_pass: true
      - description: "CbarMaker converted"
        test: "grep -rc CbarMaker solarwindpy/plotting/"
        expected: 0
      - description: "Multiple inheritance removed"
        metric: "M-012"
        current: 4
        target: 0
  stage_3:
    name: "Consolidation (Medium Risk)"
    duration: "7 hours"
    checks:
      - description: "Duplication reduced"
        metric: "M-025"
        current: "15%"
        target: "5%"
      - description: "Label system working"
        test: "pytest tests/test_plotting/test_labels.py -v"
        must_pass: true
      - description: "Visual regression pass"
        test: "./implementation_commands.sh --visual"
        must_pass: true
  stage_4:
    name: "Architecture (High Risk)"
    duration: "9 hours"
    checks:
      - description: "Inheritance depth reduced"
        metric: "M-011"
        current: 4
        target: 2
      - description: "Performance improved"
        metric: "M-026"
        target: "-20%"
      - description: "All tests passing"
        test: "pytest tests/test_plotting/ --cov=solarwindpy.plotting"
        coverage_target: 95
```
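A small runner can turn these gates into an executable checklist. The sketch below is an assumption about how the YAML would be consumed: it requires PyYAML and only exercises the command-based `test`/`must_pass` checks; metric checks would need the tracker above.

```python
# run_gates.py -- minimal sketch of a gate runner (assumes PyYAML is installed)
import subprocess

import yaml


def run_stage_gate(stage: str, gates_file: str = "validation_gates.yaml") -> bool:
    """Run the command-based checks for one stage; metric checks are skipped."""
    with open(gates_file) as fh:
        gates = yaml.safe_load(fh)["gates"]
    stage_ok = True
    for check in gates[stage]["checks"]:
        if "test" not in check:
            continue  # metric checks need the tracker, not a shell command
        result = subprocess.run(check["test"], shell=True, capture_output=True)
        passed = result.returncode == 0
        print(f"{'PASS' if passed else 'FAIL'}: {check['description']}")
        if check.get("must_pass") and not passed:
            stage_ok = False
    return stage_ok


if __name__ == "__main__":
    print("Stage 1 gate:", "OPEN" if run_stage_gate("stage_1") else "BLOCKED")
```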
## Success Metrics
```python
success_metrics = {
    "code_quality": {
        "loc_reduction": 20.6,   # percent, minimum
        "duplication": 5,        # percent, maximum
        "complexity": 3,         # maximum score
        "inheritance_depth": 2,  # maximum levels
    },
    "compatibility": {
        "api_changes": 0,     # none allowed
        "test_failures": 0,   # none allowed
        "visual_changes": 0,  # none allowed (within tolerance)
    },
    "performance": {
        "import_time": -20,    # percent improvement
        "memory_usage": -30,   # percent improvement
        "plot_generation": 0,  # no degradation allowed
    },
    "quality": {
        "test_coverage": 95,  # percent, minimum
        "documentation": "complete",
        "type_hints": "added where missing",
    },
}
```
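Continuing from the dict above, a sketch of how these thresholds could be enforced automatically (the comparison directions are assumptions inferred from the comments):

```python
def check_code_quality(measured: dict) -> dict:
    """Compare measured values against the code_quality thresholds (sketch).

    `measured` uses the same keys, e.g. {"loc_reduction": 21.0, "duplication": 4, ...}.
    """
    targets = success_metrics["code_quality"]
    return {
        # loc_reduction is a minimum; the rest are maxima
        "loc_reduction": measured["loc_reduction"] >= targets["loc_reduction"],
        "duplication": measured["duplication"] <= targets["duplication"],
        "complexity": measured["complexity"] <= targets["complexity"],
        "inheritance_depth": measured["inheritance_depth"] <= targets["inheritance_depth"],
    }


# Example: all four thresholds met
print(check_code_quality(
    {"loc_reduction": 20.6, "duplication": 5, "complexity": 3, "inheritance_depth": 2}
))
```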
## Risk Mitigation
### Primary Risks and Mitigations
1. **Risk: Breaking API compatibility**
   - Mitigation: Run the full test suite after each change
   - Mitigation: Keep old method signatures with deprecation warnings (see the sketch after this list)
2. **Risk: Visual output changes**
   - Mitigation: Visual regression tests with tolerance
   - Mitigation: Baseline plots saved before refactoring
3. **Risk: Performance degradation**
   - Mitigation: Benchmark before/after each stage
   - Mitigation: Profile hot paths
4. **Risk: Incomplete implementation**
   - Mitigation: CSV tracking with daily updates
   - Mitigation: Stage gates prevent progression without completion
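One minimal pattern for the deprecation mitigation, using a hypothetical old/new method pair (not actual SolarWindPy names):

```python
import warnings


class PlotBase:
    """Hypothetical example of keeping an old signature alive during refactoring."""

    def set_plot_labels(self, x=None, y=None, z=None):
        """New consolidated label setter (illustrative)."""
        ...

    def set_labels(self, **kwargs):
        """Old entry point, preserved as a thin deprecated wrapper."""
        warnings.warn(
            "set_labels() is deprecated; use set_plot_labels()",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.set_plot_labels(**kwargs)
```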
## Value Propositions
### Why This Meta Plan Matters
- Execution Certainty: Transforms 2,500 lines of documentation into systematic execution
- Progress Visibility: Real-time tracking via GitHub and CSV
- Quality Assurance: Gates ensure each stage meets criteria
- Risk Management: Staged approach with clear rollback points
- Team Coordination: Clear task assignment and dependencies
- Automation: Scripts reduce manual tracking overhead
### ROI Analysis
- Investment: ~100 minutes to set up meta plan
- Saves: 20-40 hours of confusion and rework
- Result: 4.1x ROI on refactoring effort within first year
## Conclusion
This meta plan ensures the SolarWindPy plotting module refactoring proceeds systematically from Phase 4 documentation to working code. By combining GitHub issue tracking, repository-based tools, and automation scripts, we create a robust execution framework that maximizes the probability of successful implementation while maintaining visibility and quality throughout the process.
**Next Steps:**
1. Create the Phase 4.0 GitHub issue using the template above
2. Set up the execution directory with tracker files
3. Generate baseline plots for regression testing (a sketch follows below)
4. Begin Stage 1 implementation with CT-001
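A minimal way to capture the baselines with the tester defined earlier, run against the pre-refactor code (the directory layout matches `VisualRegressionTester` above):

```python
# generate_baselines.py -- run on the pre-refactor code to freeze reference images
from visual_regression import VisualRegressionTester

tester = VisualRegressionTester()
tester.baseline_dir.mkdir(parents=True, exist_ok=True)

# Render each plot type into the baseline directory instead of the output directory
tester.output_dir = tester.baseline_dir
for plot_type, filename in [
    ("scatter", "scatter_test.png"),
    ("hist", "hist_test.png"),
    ("spiral", "spiral_test.png"),
    ("contour", "contour_test.png"),
]:
    tester.generate_test_plot(plot_type, tester.get_test_data(plot_type), filename)
    print(f"baseline written: {tester.baseline_dir / filename}")
```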
Document created: 2024-12-08
Last updated: 2024-12-08