diff --git a/JIRA_INTEGRATION.md b/JIRA_INTEGRATION.md new file mode 100644 index 0000000..9d97b71 --- /dev/null +++ b/JIRA_INTEGRATION.md @@ -0,0 +1,223 @@ +# Jira Cloud Integration + +This document describes how to set up and use the Jira Cloud integration with Wellcode CLI as an alternative to Linear for issue tracking metrics. + +## Overview + +The Jira Cloud integration provides comprehensive issue tracking analytics including: + +- **Issue Flow Metrics**: Creation, completion, and in-progress tracking +- **Issue Type Analysis**: Bugs, Stories, Tasks, and Epics breakdown +- **Cycle Time Metrics**: Time from creation to resolution +- **Estimation Accuracy**: Story points vs actual time analysis +- **Project Performance**: Per-project metrics and health indicators +- **Assignee Performance**: Individual contributor metrics +- **Priority Distribution**: Issue priority analysis +- **Component & Version Tracking**: Component and fix version metrics + +## Prerequisites + +1. **Jira Cloud Instance**: You need access to a Jira Cloud instance (*.atlassian.net) +2. **API Token**: Generate an API token from your Atlassian account +3. **Permissions**: Read access to projects and issues you want to analyze + +## Setup Instructions + +### 1. Generate Jira API Token + +1. Go to [Atlassian Account Security](https://id.atlassian.com/manage-profile/security/api-tokens) +2. Click "Create API token" +3. Give it a descriptive name (e.g., "Wellcode CLI") +4. Copy the generated token (you won't be able to see it again) + +### 2. Configure Wellcode CLI + +Run the configuration command: + +```bash +wellcode-cli config +``` + +When prompted for Jira configuration, provide: + +- **Domain**: Your Jira domain (e.g., `mycompany` for `mycompany.atlassian.net`) +- **Email**: Your Atlassian account email address +- **API Token**: The token you generated in step 1 + +The CLI will test the connection and save your configuration if successful. + +### 3. Verify Setup + +Test your configuration by running: + +```bash +wellcode-cli review +``` + +You should see Jira metrics alongside your GitHub metrics. 
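+
+If you want to sanity-check credentials without running a full review, the integration's own connection helper can also be called directly from Python. This is a minimal sketch that assumes the package is installed and importable; the domain, email, and token values below are placeholders, not real credentials:
+
+```python
+from wellcode_cli.jira.jira_metrics import test_jira_connection
+
+# Prints a success or failure message and returns True/False
+ok = test_jira_connection(
+    "mycompany",         # the part before .atlassian.net
+    "user@company.com",  # your Atlassian account email
+    "your-api-token",    # the token generated in step 1
+)
+```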
+
+## Usage
+
+### Basic Usage
+
+```bash
+# Review last 7 days (default)
+wellcode-cli review
+
+# Review specific date range
+wellcode-cli review --start-date 2024-01-01 --end-date 2024-01-31
+
+# Review specific user's issues
+wellcode-cli review --user "john.doe@company.com"
+```
+
+### Filtering Options
+
+- `--user`: Filter by assignee (use email address or Jira username)
+- `--start-date`: Start date for analysis (YYYY-MM-DD format)
+- `--end-date`: End date for analysis (YYYY-MM-DD format)
+
+## Metrics Explained
+
+### Issue Flow Metrics
+
+- **Issues Created**: Total issues created in the time period
+- **Issues Completed**: Issues moved to "Done" status
+- **Completion Rate**: Percentage of created issues that were completed
+- **Issue Types**: Breakdown by Bugs, Stories, Tasks, and Epics
+
+### Cycle Time Metrics
+
+- **Average Cycle Time**: Mean time from creation to resolution
+- **Median Cycle Time**: 50th percentile cycle time
+- **95th Percentile**: 95th percentile cycle time (helps identify outliers)
+- **Resolution Time**: Time to close/resolve issues
+
+### Estimation Accuracy
+
+- **Accuracy Rate**: Percentage of estimates within 25% of actual time
+- **Underestimates**: Issues that took longer than estimated
+- **Overestimates**: Issues that took less time than estimated
+- **Variance**: Average percentage difference between estimate and actual
+
+### Project Performance
+
+- **Completion Rate**: Per-project completion percentage
+- **Issue Distribution**: Breakdown by issue types per project
+- **Assignee Involvement**: Number of people working on each project
+- **Project Lead**: Project lead information
+- **Project Type**: Software, Business, etc.
+
+## Customization
+
+### Story Points Field
+
+The integration looks for story points in the `customfield_10016` field by default. If your Jira instance uses a different field for story points, you can modify this in the code:
+
+```python
+# In src/wellcode_cli/jira/models/metrics.py
+story_points = fields.get("customfield_XXXXX")  # Replace XXXXX with your field ID
+```
+
+To find your story points field ID:
+1. Go to Jira Settings → Issues → Custom Fields
+2. Find your Story Points field
+3. Note the field ID (usually in the format `customfield_XXXXX`)
+
+### Time Estimation
+
+The integration supports both:
+- **Story Points**: Converted to hours (1 point = 4 hours by default)
+- **Time Estimates**: Original time estimates in Jira
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Authentication Failed**
+   - Verify your email address is correct
+   - Ensure your API token is valid and not expired
+   - Check that your domain is correct (without .atlassian.net)
+
+2. **No Issues Found**
+   - Verify the date range includes issues
+   - Check that you have read permissions for the projects
+   - Ensure issues exist in the specified time period
+
+3. **Missing Metrics**
+   - Some metrics require specific Jira configurations (story points, time tracking)
+   - Ensure your Jira instance has the required fields enabled
+
+### Debug Mode
+
+Enable debug logging to troubleshoot issues:
+
+```bash
+export WELLCODE_DEBUG=1
+wellcode-cli review
+```
+
+### API Rate Limits
+
+Jira Cloud has API rate limits:
+- 300 requests per minute for most endpoints
+- The integration uses pagination to handle large datasets efficiently
+
+## Security
+
+- API tokens are stored locally in `~/.wellcode/config.json`
+- Tokens are transmitted over HTTPS only
+- No data is sent to external services except Jira Cloud
+
+## Comparison with Linear
+
+| Feature | Jira Cloud | Linear |
+|---------|------------|--------|
+| Issue Types | Bugs, Stories, Tasks, Epics | Issues with Labels |
+| Projects | Native project support | Team-based organization |
+| Time Tracking | Built-in time tracking | Estimation-based |
+| Custom Fields | Extensive customization | Limited custom fields |
+| Workflow | Configurable workflows | Fixed workflow states |
+| API Rate Limits | 300/minute | 1000/hour |
+
+## Advanced Configuration
+
+### Environment Variables
+
+You can also configure Jira using environment variables:
+
+```bash
+export JIRA_DOMAIN="mycompany"
+export JIRA_EMAIL="user@company.com"
+export JIRA_API_KEY="your-api-token"
+```
+
+### JQL Customization
+
+The integration uses JQL (Jira Query Language) to fetch issues. The default query is:
+
+```jql
+created >= 'YYYY-MM-DD' AND created <= 'YYYY-MM-DD'
+```
+
+For advanced users, you can modify the JQL in `src/wellcode_cli/jira/jira_metrics.py`.
+
+## Support
+
+For issues with the Jira integration:
+
+1. Check the troubleshooting section above
+2. Enable debug mode for detailed logs
+3. Verify your Jira permissions and configuration
+4. Create an issue in the Wellcode CLI repository with debug logs
+
+## Contributing
+
+To contribute to the Jira integration:
+
+1. Fork the repository
+2. Create a feature branch
+3. Add tests for new functionality
+4. Submit a pull request
+
+The Jira integration follows the same patterns as other integrations in the codebase for consistency and maintainability.
\ No newline at end of file
diff --git a/README.md b/README.md
index d031f1a..d78aa15 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ Engineering Metrics Powered by AI

- Free, open-source CLI tool that integrates with GitHub, Linear, and Split.io to gather and analyze engineering team metrics. + Free, open-source CLI tool that integrates with GitHub, Linear, Jira Cloud, and Split.io to gather and analyze engineering team metrics.

 ## 🚀 Installation
@@ -27,6 +27,7 @@ wellcode-cli config
 
 This will guide you through:
 - GitHub App installation for your organization
 - Optional Linear integration
+- Optional Jira Cloud integration
 - Optional Split.io integration
 - Optional Anthropic integration (for AI-powered insights)
@@ -88,6 +89,7 @@ wellcode-cli
 
 ### Optional Integrations
 - **Linear**: Issue tracking metrics
+- **Jira Cloud**: Issue tracking metrics (alternative to Linear)
 - **Split.io**: Feature flag analytics
 - **Anthropic**: AI-powered insights
diff --git a/requirements.txt b/requirements.txt
index 11a6eec..124a7c6 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10,6 +10,7 @@ rich>=13.3.5
 plotly
 markdown
 cryptography>=43.0.1
+requests
 types-requests
 types-python-dateutil
 pandas-stubs
diff --git a/src/wellcode_cli/commands/config.py b/src/wellcode_cli/commands/config.py
index 009c2a5..730503d 100644
--- a/src/wellcode_cli/commands/config.py
+++ b/src/wellcode_cli/commands/config.py
@@ -9,6 +9,7 @@
 from ..github.app_config import WELLCODE_APP
 from ..github.auth import clear_user_token, get_user_token
 from ..github.client import GithubClient
+from ..jira.jira_metrics import test_jira_connection
 
 console = Console()
 CONFIG_FILE = Path.home() / ".wellcode" / "config.json"
@@ -111,13 +112,19 @@ def config():
     # Optional integrations with secret masking
     optional_configs = {
         "Linear": ("LINEAR_API_KEY", "Enter your Linear API key"),
+        "Jira": ("JIRA_API_KEY", "Enter your Jira API key"),
         "Split.io": ("SPLIT_API_KEY", "Enter your Split.io API key"),
         "Anthropic": ("ANTHROPIC_API_KEY", "Enter your Anthropic API key"),
     }
 
     for name, (key, prompt) in optional_configs.items():
         console.print(f"\n[bold cyan]{name} Configuration[/]")
-        handle_sensitive_config(config_data, name, key, prompt)
+
+        # Special handling for Jira to get additional required fields
+        if name == "Jira":
+            handle_jira_config(config_data)
+        else:
+            handle_sensitive_config(config_data, name, key, prompt)
 
     # Save configuration
     CONFIG_FILE.parent.mkdir(parents=True, exist_ok=True)
@@ -134,8 +141,14 @@ def config():
         console.print("[green]✓ GitHub App installed and configured[/]")
 
     for name, (key, _) in optional_configs.items():
-        status = "✓" if key in config_data else "✗"
-        color = "green" if key in config_data else "red"
+        if name == "Jira":
+            # Special check for Jira which requires multiple fields
+            has_jira = all(k in config_data for k in ["JIRA_DOMAIN", "JIRA_EMAIL", "JIRA_API_KEY"])
+            status = "✓" if has_jira else "✗"
+            color = "green" if has_jira else "red"
+        else:
+            status = "✓" if key in config_data else "✗"
+            color = "green" if key in config_data else "red"
         console.print(f"[{color}]{status} {name}[/]")
 
     console.print("\n✅ [green]Configuration saved successfully![/]")
@@ -184,3 +197,67 @@ def handle_sensitive_config(config_data, name, key, prompt_text):
     value = Prompt.ask(prompt_text)
     if value:
         config_data[key] = value
+
+
+def handle_jira_config(config_data):
+    """Handle Jira configuration with domain, email, and API key"""
+    has_jira_config = all(key in config_data for key in ["JIRA_DOMAIN", "JIRA_EMAIL", "JIRA_API_KEY"])
+
+    if has_jira_config:
+        console.print("[yellow]Jira integration is already configured[/]")
+        choice = Prompt.ask(
+            "Would you like to reconfigure Jira?",
+            choices=["y", "n", "clear"],
+            default="n",
+        )
+
+        if choice == "y":
+            configure_jira_details(config_data)
+        elif choice == "clear":
+            for key in ["JIRA_DOMAIN", "JIRA_EMAIL", "JIRA_API_KEY"]:
+                if key in config_data:
+                    del config_data[key]
+            console.print("[yellow]Jira configuration cleared[/]")
+    else:
+        if Confirm.ask("Would you like to configure Jira integration?", default=False):
+            configure_jira_details(config_data)
+
+
+def configure_jira_details(config_data):
+    """Configure Jira domain, email, and API key"""
+    console.print("\n[bold]Jira Cloud Configuration[/]")
+    console.print("You'll need:")
+    console.print("1. Your Jira domain (e.g., 'mycompany' for mycompany.atlassian.net)")
+    console.print("2. Your email address")
+    console.print("3. An API token from https://id.atlassian.com/manage-profile/security/api-tokens")
+
+    # Get domain
+    current_domain = config_data.get("JIRA_DOMAIN", "")
+    domain = Prompt.ask("Enter your Jira domain", default=current_domain)
+    if not domain:
+        console.print("[red]Domain is required for Jira integration[/]")
+        return
+
+    # Get email
+    current_email = config_data.get("JIRA_EMAIL", "")
+    email = Prompt.ask("Enter your email address", default=current_email)
+    if not email:
+        console.print("[red]Email is required for Jira integration[/]")
+        return
+
+    # Get API key
+    api_key = Prompt.ask("Enter your Jira API token")
+    if not api_key:
+        console.print("[red]API token is required for Jira integration[/]")
+        return
+
+    # Test the connection
+    console.print("\n[yellow]Testing Jira connection...[/]")
+    if test_jira_connection(domain, email, api_key):
+        config_data["JIRA_DOMAIN"] = domain
+        config_data["JIRA_EMAIL"] = email
+        config_data["JIRA_API_KEY"] = api_key
+        console.print("[green]✓ Jira configuration saved successfully![/]")
+    else:
+        console.print("[red]✗ Jira connection failed. Configuration not saved.[/]")
+        console.print("Please check your domain, email, and API token.")
diff --git a/src/wellcode_cli/commands/review.py b/src/wellcode_cli/commands/review.py
index fdbf5ce..b70ddc1 100644
--- a/src/wellcode_cli/commands/review.py
+++ b/src/wellcode_cli/commands/review.py
@@ -10,6 +10,7 @@
 from ..config import (
     get_anthropic_api_key,
     get_github_org,
+    get_jira_api_key,
     get_linear_api_key,
     get_split_api_key,
 )
@@ -18,6 +19,8 @@
 from ..github.github_display import display_github_metrics
 from ..github.github_format_ai import format_ai_response, get_ai_analysis
 from ..github.github_metrics import get_github_metrics
+from ..jira.jira_display import display_jira_metrics
+from ..jira.jira_metrics import get_jira_metrics
 from ..linear.linear_display import display_linear_metrics
 from ..linear.linear_metrics import get_linear_metrics
 from ..split_metrics import display_split_metrics, get_split_metrics
@@ -121,6 +124,18 @@ def review(start_date, end_date, user, team):
     else:
         console.print("[yellow]⚠️ Linear integration not configured[/]")
 
+    # Jira metrics
+    if get_jira_api_key():
+        status.update("Fetching Jira metrics...")
+        jira_metrics = get_jira_metrics(start_date, end_date, user)
+        if jira_metrics:
+            all_metrics["jira"] = jira_metrics
+            display_jira_metrics(jira_metrics)
+        else:
+            console.print("[red]Error: Failed to fetch Jira metrics[/]")
+    else:
+        console.print("[yellow]⚠️ Jira integration not configured[/]")
+
     # Split metrics
     if get_split_api_key():
         status.update("Fetching Split metrics...")
diff --git a/src/wellcode_cli/config.py b/src/wellcode_cli/config.py
index 239fc82..072495e 100644
--- a/src/wellcode_cli/config.py
+++ b/src/wellcode_cli/config.py
@@ -46,3 +46,15 @@ def get_anthropic_api_key() -> Optional[str]:
 
 def get_split_api_key() -> Optional[str]:
     return get_config_value("SPLIT_API_KEY")
+
+
+def get_jira_api_key() -> Optional[str]:
+    return get_config_value("JIRA_API_KEY")
+
+
+def get_jira_domain() -> Optional[str]:
+    return get_config_value("JIRA_DOMAIN")
+
+
+def get_jira_email() -> Optional[str]:
+    return get_config_value("JIRA_EMAIL")
diff --git a/src/wellcode_cli/github/github_format_ai.py b/src/wellcode_cli/github/github_format_ai.py
index 877c373..21f4ab7 100644
--- a/src/wellcode_cli/github/github_format_ai.py
+++ b/src/wellcode_cli/github/github_format_ai.py
@@ -101,6 +101,10 @@ def get_ai_analysis(all_metrics):
     if "linear" in all_metrics:
         metrics_summary["linear"] = all_metrics["linear"]
 
+    # Jira metrics
+    if "jira" in all_metrics:
+        metrics_summary["jira"] = all_metrics["jira"]
+
     # Split metrics
     if "split" in all_metrics:
         metrics_summary["split"] = all_metrics["split"]
diff --git a/src/wellcode_cli/jira/__init__.py b/src/wellcode_cli/jira/__init__.py
new file mode 100644
index 0000000..1d8b0c4
--- /dev/null
+++ b/src/wellcode_cli/jira/__init__.py
@@ -0,0 +1 @@
+# Jira Cloud integration package
\ No newline at end of file
diff --git a/src/wellcode_cli/jira/jira_display.py b/src/wellcode_cli/jira/jira_display.py
new file mode 100644
index 0000000..5d2de22
--- /dev/null
+++ b/src/wellcode_cli/jira/jira_display.py
@@ -0,0 +1,255 @@
+import statistics
+from datetime import datetime, timezone
+
+from rich.box import ROUNDED
+from rich.console import Console
+from rich.panel import Panel
+
+console = Console()
+
+
+def format_time(hours: float) -> str:
+    """Format time in hours to a human-readable string"""
+    if hours < 1:
+        return f"{hours * 60:.0f}m"
+    elif hours < 24:
+        return f"{hours:.1f}h"
+    else:
+        days = hours / 24
+        return f"{days:.1f}d"
+
+
+def display_jira_metrics(org_metrics):
+    """Display Jira metrics with a modern UI using Rich components."""
+    # Header with organization info and time range
+    now = datetime.now(timezone.utc)
+    console.print(
+        Panel(
+            "[bold cyan]Jira Engineering Analytics[/]\n"
+            + f"[dim]Organization: {org_metrics.name}[/]\n"
+            + f"[dim]Report Generated: {now.strftime('%Y-%m-%d %H:%M')} UTC[/]",
+            box=ROUNDED,
+            style="cyan",
+        )
+    )
+
+    # 1. Core Issue Metrics with health indicators
+    total_issues = org_metrics.issues.total_created
+    completed_issues = org_metrics.issues.total_completed
+    completion_rate = (completed_issues / total_issues * 100) if total_issues > 0 else 0
+
+    health_indicator = (
+        "🟢" if completion_rate > 80 else "🟡" if completion_rate > 60 else "🔴"
+    )
+
+    console.print(
+        Panel(
+            f"{health_indicator} [bold green]Issues Created:[/] {total_issues}\n"
+            + f"[bold yellow]Issues Completed:[/] {completed_issues} ({completion_rate:.1f}% completion rate)\n"
+            + f"[bold red]Bugs Created:[/] {org_metrics.issues.bugs_created}\n"
+            + f"[bold blue]Stories Created:[/] {org_metrics.issues.stories_created}\n"
+            + f"[bold magenta]Tasks Created:[/] {org_metrics.issues.tasks_created}\n"
+            + f"[bold cyan]Epics Created:[/] {org_metrics.issues.epics_created}",
+            title="[bold]Issue Flow",
+            box=ROUNDED,
+        )
+    )
+
+    # 2. Time Metrics with visual indicators
+    cycle = org_metrics.cycle_time
+    avg_cycle_time = statistics.mean(cycle.cycle_times) if cycle.cycle_times else 0
+    cycle_health = (
+        "🟢" if avg_cycle_time < 24 else "🟡" if avg_cycle_time < 72 else "🔴"
+    )
+
+    console.print(
+        Panel(
+            f"{cycle_health} [bold]Average Cycle Time:[/] {format_time(avg_cycle_time)}\n"
+            + f"[bold]Median Cycle Time:[/] {format_time(statistics.median(cycle.cycle_times) if cycle.cycle_times else 0)}\n"
+            + f"[bold]95th Percentile:[/] {format_time(cycle.get_stats()['p95_cycle_time'])}\n"
+            + f"[bold]Average Resolution Time:[/] {format_time(statistics.mean(cycle.resolution_times) if cycle.resolution_times else 0)}",
+            title="[bold blue]Time Metrics",
+            box=ROUNDED,
+        )
+    )
+
+    # 3. Estimation Accuracy
+    est = org_metrics.estimation
+    if est.total_estimated > 0:
+        accuracy_rate = est.accurate_estimates / est.total_estimated * 100
+        accuracy_health = (
+            "🟢" if accuracy_rate > 80 else "🟡" if accuracy_rate > 60 else "🔴"
+        )
+
+        console.print(
+            Panel(
+                f"{accuracy_health} [bold]Estimation Accuracy:[/] {accuracy_rate:.1f}%\n"
+                + f"[bold green]Accurate Estimates:[/] {est.accurate_estimates}\n"
+                + f"[bold red]Underestimates:[/] {est.underestimates}\n"
+                + f"[bold yellow]Overestimates:[/] {est.overestimates}\n"
+                + f"[bold]Average Variance:[/] {statistics.mean(est.estimation_variance) if est.estimation_variance else 0:.1f}%",
+                title="[bold yellow]Estimation Health",
+                box=ROUNDED,
+            )
+        )
+
+    # 4. Project Performance
+    if org_metrics.projects:
+        project_panels = []
+        for project_key, project in org_metrics.projects.items():
+            completion_rate = (
+                (project.completed_issues / project.total_issues * 100)
+                if project.total_issues > 0
+                else 0
+            )
+            project_health = (
+                "🟢" if completion_rate > 80 else "🟡" if completion_rate > 60 else "🔴"
+            )
+
+            project_panels.append(
+                f"{project_health} [bold cyan]{project.name} ({project_key})[/]\n"
+                + f"Issues: {project.total_issues} total, {project.completed_issues} completed ({completion_rate:.1f}%)\n"
+                + f"Bugs: {project.bugs_count} | Stories: {project.stories_count} | Tasks: {project.tasks_count} | Epics: {project.epics_count}\n"
+                + f"Assignees: {len(project.assignees_involved)}\n"
+                + f"Lead: {project.lead or 'Not set'} | Type: {project.project_type or 'Unknown'}"
+            )
+
+        console.print(
+            Panel(
+                "\n\n".join(project_panels),
+                title="[bold magenta]Project Health",
+                box=ROUNDED,
+            )
+        )
+
+    # 5. Priority Distribution
+    if org_metrics.issues.by_priority:
+        display_priority_distribution(org_metrics.issues.by_priority)
+
+    # 6. Assignee Performance
+    if org_metrics.issues.by_assignee:
+        display_assignee_performance(org_metrics.issues.by_assignee, org_metrics.cycle_time.by_assignee)
+
+    # 7. Component and Version Distribution
+    if org_metrics.component_counts or org_metrics.version_counts:
+        display_component_version_summary(org_metrics.component_counts, org_metrics.version_counts)
+
+
+def display_priority_distribution(priority_counts):
+    """Display a visual summary of issue priorities."""
+    if not priority_counts:
+        return
+
+    # Sort priorities by count in descending order
+    sorted_priorities = sorted(priority_counts.items(), key=lambda x: x[1], reverse=True)
+
+    # Calculate the maximum count for scaling
+    max_count = max(count for _, count in sorted_priorities)
+    max_bar_length = 30  # Maximum length of the bar in characters
+
+    # Create the priority summary
+    priority_lines = []
+    for priority, count in sorted_priorities:
+        # Calculate bar length proportional to count
+        bar_length = int((count / max_count) * max_bar_length)
+        bar = "█" * bar_length
+
+        # Choose color based on priority name
+        color = (
+            "red"
+            if "highest" in priority.lower() or "critical" in priority.lower()
+            else (
+                "yellow"
+                if "high" in priority.lower()
+                else "blue" if "medium" in priority.lower() else "green"
+            )
+        )
+
+        priority_lines.append(f"[{color}]{priority:<15}[/] {bar} ({count})")
+
+    console.print(
+        Panel(
+            "\n".join(priority_lines), title="[bold cyan]Priority Distribution", box=ROUNDED
+        )
+    )
+
+
+def display_assignee_performance(assignee_counts, assignee_cycle_times):
+    """Display assignee performance metrics."""
+    if not assignee_counts:
+        return
+
+    # Sort assignees by issue count in descending order
+    sorted_assignees = sorted(assignee_counts.items(), key=lambda x: x[1], reverse=True)
+
+    # Take top 10 assignees
+    top_assignees = sorted_assignees[:10]
+
+    assignee_lines = []
+    for assignee, count in top_assignees:
+        avg_cycle_time = 0
+        if assignee in assignee_cycle_times and assignee_cycle_times[assignee]:
+            avg_cycle_time = statistics.mean(assignee_cycle_times[assignee])
+
+        # Performance indicator based on cycle time
+        performance_indicator = (
+            "🟢" if avg_cycle_time < 24 else "🟡" if avg_cycle_time < 72 else "🔴"
+        )
+
+        assignee_lines.append(
+            f"{performance_indicator} [bold]{assignee:<20}[/] Issues: {count:>3} | Avg Cycle: {format_time(avg_cycle_time)}"
+        )
+
+    console.print(
+        Panel(
+            "\n".join(assignee_lines),
+            title="[bold green]Top Assignee Performance",
+            box=ROUNDED,
+        )
+    )
+
+
+def display_component_version_summary(component_counts, version_counts):
+    """Display a summary of components and versions."""
+    panels = []
+
+    if component_counts:
+        # Sort components by count in descending order
+        sorted_components = sorted(component_counts.items(), key=lambda x: x[1], reverse=True)
+        top_components = sorted_components[:5]  # Top 5 components
+
+        component_lines = []
+        for component, count in top_components:
+            component_lines.append(f"[cyan]{component:<25}[/] ({count})")
+
+        panels.append(
+            Panel(
+                "\n".join(component_lines),
+                title="[bold cyan]Top Components",
+                box=ROUNDED,
+            )
+        )
+
+    if version_counts:
+        # Sort versions by count in descending order
+        sorted_versions = sorted(version_counts.items(), key=lambda x: x[1], reverse=True)
+        top_versions = sorted_versions[:5]  # Top 5 versions
+
+        version_lines = []
+        for version, count in top_versions:
+            version_lines.append(f"[magenta]{version:<25}[/] ({count})")
+
+        panels.append(
+            Panel(
+                "\n".join(version_lines),
+                title="[bold magenta]Top Fix Versions",
+                box=ROUNDED,
+            )
+        )
+
+    # Display panels side by side if both exist
+    if len(panels) == 2:
+        from rich.columns import Columns
+
console.print(Columns(panels)) + elif panels: + console.print(panels[0]) \ No newline at end of file diff --git a/src/wellcode_cli/jira/jira_metrics.py b/src/wellcode_cli/jira/jira_metrics.py new file mode 100644 index 0000000..3f19771 --- /dev/null +++ b/src/wellcode_cli/jira/jira_metrics.py @@ -0,0 +1,303 @@ +import logging +import base64 +from datetime import datetime, timedelta +from typing import Optional + +import requests +from rich.console import Console + +from ..config import get_jira_api_key, get_jira_domain, get_jira_email +from .models.metrics import JiraOrgMetrics, ProjectMetrics + +console = Console() + +logger = logging.getLogger(__name__) + + +def get_jira_metrics(start_date, end_date, user_filter=None) -> Optional[JiraOrgMetrics]: + """Get Jira metrics for the specified date range""" + + # Get configuration + api_key = get_jira_api_key() + domain = get_jira_domain() + email = get_jira_email() + + if not all([api_key, domain, email]): + logger.error("Jira configuration incomplete. Missing API key, domain, or email.") + return None + + # Create authentication header + auth_string = f"{email}:{api_key}" + auth_bytes = auth_string.encode('ascii') + auth_b64 = base64.b64encode(auth_bytes).decode('ascii') + + headers = { + "Authorization": f"Basic {auth_b64}", + "Accept": "application/json", + "Content-Type": "application/json" + } + + base_url = f"https://{domain}.atlassian.net/rest/api/3" + + org_metrics = JiraOrgMetrics(name=domain) + + try: + # Build JQL query for date range + start_date_str = start_date.strftime("%Y-%m-%d") + end_date_str = end_date.strftime("%Y-%m-%d") + + jql_query = f"created >= '{start_date_str}' AND created <= '{end_date_str}'" + + # Add user filter if specified + if user_filter: + jql_query += f" AND assignee = '{user_filter}'" + + # Get all issues with pagination + all_issues = [] + start_at = 0 + max_results = 100 + total_issues = None + + while total_issues is None or start_at < total_issues: + search_url = f"{base_url}/search" + params = { + "jql": jql_query, + "startAt": start_at, + "maxResults": max_results, + "fields": [ + "summary", + "status", + "issuetype", + "priority", + "assignee", + "project", + "created", + "resolutiondate", + "components", + "fixVersions", + "customfield_10016", # Story Points (common field ID) + "timeoriginalestimate", + "timespent", + "worklog" + ] + } + + response = requests.get(search_url, headers=headers, params=params, timeout=30) + + if response.status_code != 200: + logger.error(f"Jira API error: {response.status_code} - {response.text}") + return None + + data = response.json() + + if total_issues is None: + total_issues = data.get("total", 0) + console.print(f"Found {total_issues} issues to process...") + + issues = data.get("issues", []) + all_issues.extend(issues) + + start_at += max_results + + if len(issues) < max_results: + break + + console.print(f"Processing {len(all_issues)} issues...") + + # Process all issues + for issue in all_issues: + # Update issue metrics + org_metrics.issues.update_from_issue(issue) + + # Update cycle time metrics + org_metrics.cycle_time.update_from_issue(issue) + + # Calculate actual time for estimation metrics + actual_time = calculate_actual_time(issue) + if actual_time > 0: + org_metrics.estimation.update_from_issue(issue, actual_time) + + # Update project metrics + project_data = issue.get("fields", {}).get("project", {}) + if project_data: + project_key = project_data.get("key") + project_name = project_data.get("name", "") + + if project_key not in 
org_metrics.projects: + # Get additional project details + project_details = get_project_details(base_url, headers, project_key) + org_metrics.projects[project_key] = ProjectMetrics( + key=project_key, + name=project_name, + lead=project_details.get("lead"), + project_type=project_details.get("projectTypeKey") + ) + + org_metrics.projects[project_key].update_from_issue(issue) + + # Update component metrics + components = issue.get("fields", {}).get("components", []) + for component in components: + component_name = component.get("name", "") + if component_name: + if component_name not in org_metrics.component_counts: + org_metrics.component_counts[component_name] = 0 + org_metrics.component_counts[component_name] += 1 + + # Update version metrics + fix_versions = issue.get("fields", {}).get("fixVersions", []) + for version in fix_versions: + version_name = version.get("name", "") + if version_name: + if version_name not in org_metrics.version_counts: + org_metrics.version_counts[version_name] = 0 + org_metrics.version_counts[version_name] += 1 + + # Aggregate metrics after processing all issues + org_metrics.aggregate_metrics() + + return org_metrics + + except requests.exceptions.RequestException as e: + logger.error(f"Network error while fetching Jira metrics: {str(e)}") + return None + except Exception as e: + logger.error(f"Unexpected error while fetching Jira metrics: {str(e)}") + return None + + +def get_project_details(base_url: str, headers: dict, project_key: str) -> dict: + """Get additional project details from Jira API""" + try: + project_url = f"{base_url}/project/{project_key}" + response = requests.get(project_url, headers=headers, timeout=30) + + if response.status_code == 200: + project_data = response.json() + return { + "lead": project_data.get("lead", {}).get("displayName"), + "projectTypeKey": project_data.get("projectTypeKey"), + "description": project_data.get("description", ""), + } + except Exception as e: + logger.warning(f"Could not fetch project details for {project_key}: {str(e)}") + + return {} + + +def calculate_actual_time(issue: dict) -> float: + """Calculate actual time spent on an issue in hours""" + fields = issue.get("fields", {}) + + # Try to get time spent from the issue + time_spent = fields.get("timespent") # Time in seconds + if time_spent: + return time_spent / 3600 # Convert to hours + + # If no time spent recorded, try to estimate from worklogs + try: + # Note: This would require additional API call to get worklogs + # For now, we'll use a simple estimation based on resolution time + created = fields.get("created") + resolved = fields.get("resolutiondate") + + if created and resolved: + created_dt = datetime.fromisoformat(created.replace("Z", "+00:00")) + resolved_dt = datetime.fromisoformat(resolved.replace("Z", "+00:00")) + + # Calculate business hours between dates (rough estimation) + total_hours = (resolved_dt - created_dt).total_seconds() / 3600 + + # Estimate actual work time as 25% of total time (accounting for weekends, etc.) 
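+            # NOTE: the 0.25 factor below is a rough heuristic, not measured effort;
+            # the clamp that follows keeps obviously unreasonable values out of the
+            # estimation metrics when no timespent is recorded.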
+            estimated_work_hours = total_hours * 0.25
+
+            return max(0.5, min(estimated_work_hours, 40))  # Cap between 0.5 and 40 hours
+
+    except (ValueError, TypeError):
+        pass
+
+    return 0
+
+
+def calculate_work_hours(start_date: datetime, end_date: datetime) -> float:
+    """Calculate work hours between two dates, excluding weekends"""
+    if not start_date or not end_date:
+        return 0
+
+    total_hours = 0
+    current_date = start_date
+
+    while current_date < end_date:
+        if current_date.weekday() < 5:  # Monday to Friday
+            day_end = min(
+                current_date.replace(hour=17, minute=0, second=0, microsecond=0),
+                end_date,
+            )
+            day_start = max(
+                current_date.replace(hour=9, minute=0, second=0, microsecond=0),
+                start_date,
+            )
+
+            if day_end > day_start:
+                work_hours = (day_end - day_start).total_seconds() / 3600
+                total_hours += min(8, work_hours)  # Cap at 8 hours per day
+
+        current_date = current_date.replace(
+            hour=9, minute=0, second=0, microsecond=0
+        ) + timedelta(days=1)
+
+    return total_hours
+
+
+def get_jira_projects(domain: str, email: str, api_key: str) -> list:
+    """Get list of accessible Jira projects"""
+    auth_string = f"{email}:{api_key}"
+    auth_bytes = auth_string.encode('ascii')
+    auth_b64 = base64.b64encode(auth_bytes).decode('ascii')
+
+    headers = {
+        "Authorization": f"Basic {auth_b64}",
+        "Accept": "application/json"
+    }
+
+    try:
+        url = f"https://{domain}.atlassian.net/rest/api/3/project"
+        response = requests.get(url, headers=headers, timeout=30)
+
+        if response.status_code == 200:
+            return response.json()
+        else:
+            logger.error(f"Failed to fetch projects: {response.status_code}")
+            return []
+
+    except Exception as e:
+        logger.error(f"Error fetching Jira projects: {str(e)}")
+        return []
+
+
+def test_jira_connection(domain: str, email: str, api_key: str) -> bool:
+    """Test Jira connection with provided credentials"""
+    auth_string = f"{email}:{api_key}"
+    auth_bytes = auth_string.encode('ascii')
+    auth_b64 = base64.b64encode(auth_bytes).decode('ascii')
+
+    headers = {
+        "Authorization": f"Basic {auth_b64}",
+        "Accept": "application/json"
+    }
+
+    try:
+        url = f"https://{domain}.atlassian.net/rest/api/3/myself"
+        response = requests.get(url, headers=headers, timeout=10)
+
+        if response.status_code == 200:
+            user_data = response.json()
+            console.print(f"[green]✓ Connected to Jira as {user_data.get('displayName', email)}[/]")
+            return True
+        else:
+            console.print(f"[red]✗ Jira connection failed: {response.status_code}[/]")
+            return False
+
+    except Exception as e:
+        console.print(f"[red]✗ Jira connection error: {str(e)}[/]")
+        return False
\ No newline at end of file
diff --git a/src/wellcode_cli/jira/models/__init__.py b/src/wellcode_cli/jira/models/__init__.py
new file mode 100644
index 0000000..37d72b6
--- /dev/null
+++ b/src/wellcode_cli/jira/models/__init__.py
@@ -0,0 +1 @@
+# Jira models package
\ No newline at end of file
diff --git a/src/wellcode_cli/jira/models/metrics.py b/src/wellcode_cli/jira/models/metrics.py
new file mode 100644
index 0000000..034b6dd
--- /dev/null
+++ b/src/wellcode_cli/jira/models/metrics.py
@@ -0,0 +1,519 @@
+import json
+import statistics
+from collections import defaultdict
+from dataclasses import dataclass, field
+from datetime import datetime
+from typing import Dict, List, Set, Optional
+
+
+class MetricsJSONEncoder(json.JSONEncoder):
+    def default(self, obj):
+        if isinstance(obj, datetime):
+            return obj.isoformat()
+        if isinstance(obj, set):
+            return list(obj)
+        if isinstance(obj, defaultdict):
+            return dict(obj)
+        if callable(obj):
+
return None + if hasattr(obj, "__dict__"): + return { + k: v + for k, v in obj.__dict__.items() + if not k.startswith("_") and not callable(v) + } + try: + return super().default(obj) + except Exception: + return str(obj) + + +@dataclass +class BaseMetrics: + def to_dict(self): + def convert(obj): + if isinstance(obj, datetime): + return obj.isoformat() + if isinstance(obj, set): + return list(obj) + if isinstance(obj, defaultdict): + return dict(obj) + if callable(obj): + return None + if hasattr(obj, "to_dict"): + return obj.to_dict() + if hasattr(obj, "__dict__"): + return { + k: convert(v) + for k, v in obj.__dict__.items() + if not k.startswith("_") and not callable(v) + } + return obj + + return { + k: convert(v) + for k, v in self.__dict__.items() + if not k.startswith("_") and not callable(v) + } + + +@dataclass +class IssueMetrics(BaseMetrics): + total_created: int = 0 + total_completed: int = 0 + total_in_progress: int = 0 + bugs_created: int = 0 + bugs_completed: int = 0 + stories_created: int = 0 + stories_completed: int = 0 + tasks_created: int = 0 + tasks_completed: int = 0 + epics_created: int = 0 + epics_completed: int = 0 + by_priority: Dict[str, int] = field(default_factory=lambda: defaultdict(int)) + by_status: Dict[str, int] = field(default_factory=lambda: defaultdict(int)) + by_assignee: Dict[str, int] = field(default_factory=lambda: defaultdict(int)) + by_project: Dict[str, Dict] = field( + default_factory=lambda: defaultdict( + lambda: { + "total": 0, + "bugs": 0, + "stories": 0, + "tasks": 0, + "epics": 0, + "completed": 0, + "in_progress": 0, + } + ) + ) + + def get_stats(self) -> Dict: + completion_rate = ( + (self.total_completed / self.total_created * 100) + if self.total_created > 0 + else 0 + ) + bug_rate = ( + (self.bugs_created / self.total_created * 100) + if self.total_created > 0 + else 0 + ) + + return { + "total_issues": self.total_created, + "completion_rate": completion_rate, + "bug_rate": bug_rate, + "stories_to_bugs_ratio": ( + self.stories_created / self.bugs_created + if self.bugs_created > 0 + else 0 + ), + "in_progress_rate": ( + (self.total_in_progress / self.total_created * 100) + if self.total_created > 0 + else 0 + ), + "priority_distribution": dict(self.by_priority), + "status_distribution": dict(self.by_status), + "assignee_distribution": dict(self.by_assignee), + "project_metrics": dict(self.by_project), + } + + def update_from_issue(self, issue: dict): + self.total_created += 1 + + # Get issue type and status + issue_type = issue.get("fields", {}).get("issuetype", {}).get("name", "").lower() + status_name = issue.get("fields", {}).get("status", {}).get("name", "Unknown") + status_category = issue.get("fields", {}).get("status", {}).get("statusCategory", {}).get("key", "") + + # Update status metrics + self.by_status[status_name] += 1 + + # Update completion status based on status category + if status_category == "done": + self.total_completed += 1 + elif status_category == "indeterminate": + self.total_in_progress += 1 + + # Update issue type metrics + if "bug" in issue_type: + self.bugs_created += 1 + if status_category == "done": + self.bugs_completed += 1 + elif "story" in issue_type: + self.stories_created += 1 + if status_category == "done": + self.stories_completed += 1 + elif "task" in issue_type: + self.tasks_created += 1 + if status_category == "done": + self.tasks_completed += 1 + elif "epic" in issue_type: + self.epics_created += 1 + if status_category == "done": + self.epics_completed += 1 + + # Update priority metrics + 
priority = issue.get("fields", {}).get("priority", {}) + if priority: + priority_name = priority.get("name", "Unknown") + self.by_priority[priority_name] += 1 + + # Update assignee metrics + assignee = issue.get("fields", {}).get("assignee", {}) + if assignee: + assignee_name = assignee.get("displayName", "Unassigned") + self.by_assignee[assignee_name] += 1 + else: + self.by_assignee["Unassigned"] += 1 + + # Update project metrics + project = issue.get("fields", {}).get("project", {}) + if project: + project_key = project.get("key") + if project_key: + self.by_project[project_key]["total"] += 1 + if "bug" in issue_type: + self.by_project[project_key]["bugs"] += 1 + elif "story" in issue_type: + self.by_project[project_key]["stories"] += 1 + elif "task" in issue_type: + self.by_project[project_key]["tasks"] += 1 + elif "epic" in issue_type: + self.by_project[project_key]["epics"] += 1 + + if status_category == "done": + self.by_project[project_key]["completed"] += 1 + elif status_category == "indeterminate": + self.by_project[project_key]["in_progress"] += 1 + + +@dataclass +class CycleTimeMetrics(BaseMetrics): + cycle_times: List[float] = field(default_factory=list) + time_to_start: List[float] = field(default_factory=list) + time_in_progress: List[float] = field(default_factory=list) + time_in_review: List[float] = field(default_factory=list) + resolution_times: List[float] = field(default_factory=list) + by_assignee: Dict[str, List[float]] = field(default_factory=lambda: defaultdict(list)) + by_priority: Dict[str, List[float]] = field( + default_factory=lambda: defaultdict(list) + ) + by_issue_type: Dict[str, List[float]] = field( + default_factory=lambda: defaultdict(list) + ) + + def get_stats(self) -> Dict: + def safe_mean(lst: List[float]) -> float: + return statistics.mean(lst) if lst else 0 + + def safe_median(lst: List[float]) -> float: + return statistics.median(lst) if lst else 0 + + def safe_p95(lst: List[float]) -> float: + if not lst: + return 0 + sorted_list = sorted(lst) + index = int(0.95 * len(sorted_list)) + return sorted_list[min(index, len(sorted_list) - 1)] + + return { + "avg_cycle_time": safe_mean(self.cycle_times), + "median_cycle_time": safe_median(self.cycle_times), + "p95_cycle_time": safe_p95(self.cycle_times), + "avg_time_to_start": safe_mean(self.time_to_start), + "avg_time_in_progress": safe_mean(self.time_in_progress), + "avg_time_in_review": safe_mean(self.time_in_review), + "avg_resolution_time": safe_mean(self.resolution_times), + "assignee_cycle_times": { + assignee: safe_mean(times) for assignee, times in self.by_assignee.items() + }, + "priority_cycle_times": { + priority: safe_mean(times) + for priority, times in self.by_priority.items() + }, + "issue_type_cycle_times": { + issue_type: safe_mean(times) + for issue_type, times in self.by_issue_type.items() + }, + } + + def update_from_issue(self, issue: dict): + fields = issue.get("fields", {}) + created = fields.get("created") + resolved = fields.get("resolutiondate") + + if not created: + return + + try: + created_dt = datetime.fromisoformat(created.replace("Z", "+00:00")) + + if resolved: + resolved_dt = datetime.fromisoformat(resolved.replace("Z", "+00:00")) + cycle_time = (resolved_dt - created_dt).total_seconds() / 3600 # hours + self.cycle_times.append(cycle_time) + self.resolution_times.append(cycle_time) + + # Track by assignee + assignee = fields.get("assignee", {}) + if assignee: + assignee_name = assignee.get("displayName", "Unassigned") + self.by_assignee[assignee_name].append(cycle_time) 
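+                # NOTE: cycle_times and resolution_times currently record the same
+                # creation-to-resolution span; populating time_to_start/time_in_progress
+                # would require the issue changelog, which this code does not fetch.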
+ + # Track by priority + priority = fields.get("priority", {}) + if priority: + priority_name = priority.get("name", "Unknown") + self.by_priority[priority_name].append(cycle_time) + + # Track by issue type + issue_type = fields.get("issuetype", {}) + if issue_type: + type_name = issue_type.get("name", "Unknown") + self.by_issue_type[type_name].append(cycle_time) + + except (ValueError, TypeError) as e: + # Skip issues with invalid date formats + pass + + +@dataclass +class EstimationMetrics(BaseMetrics): + total_estimated: int = 0 + accurate_estimates: int = 0 + underestimates: int = 0 + overestimates: int = 0 + estimation_variance: List[float] = field(default_factory=list) + by_assignee: Dict[str, Dict] = field( + default_factory=lambda: defaultdict( + lambda: {"total": 0, "accurate": 0, "under": 0, "over": 0, "variance": []} + ) + ) + by_issue_type: Dict[str, Dict] = field( + default_factory=lambda: defaultdict( + lambda: {"total": 0, "accurate": 0, "under": 0, "over": 0, "variance": []} + ) + ) + + def get_stats(self) -> Dict: + def safe_mean(lst: List[float]) -> float: + return statistics.mean(lst) if lst else 0 + + accuracy_rate = ( + (self.accurate_estimates / self.total_estimated * 100) + if self.total_estimated > 0 + else 0 + ) + + return { + "total_estimated": self.total_estimated, + "accuracy_rate": accuracy_rate, + "underestimate_rate": ( + (self.underestimates / self.total_estimated * 100) + if self.total_estimated > 0 + else 0 + ), + "overestimate_rate": ( + (self.overestimates / self.total_estimated * 100) + if self.total_estimated > 0 + else 0 + ), + "avg_variance": safe_mean(self.estimation_variance), + "assignee_accuracy": { + assignee: { + "accuracy_rate": ( + (stats["accurate"] / stats["total"] * 100) + if stats["total"] > 0 + else 0 + ), + "avg_variance": safe_mean(stats["variance"]), + } + for assignee, stats in self.by_assignee.items() + }, + "issue_type_accuracy": { + issue_type: { + "accuracy_rate": ( + (stats["accurate"] / stats["total"] * 100) + if stats["total"] > 0 + else 0 + ), + "avg_variance": safe_mean(stats["variance"]), + } + for issue_type, stats in self.by_issue_type.items() + }, + } + + def update_from_issue(self, issue: dict, actual_time: float): + fields = issue.get("fields", {}) + + # Try to get story points or time estimate + story_points = fields.get("customfield_10016") # Common story points field + original_estimate = fields.get("timeoriginalestimate") # Time estimate in seconds + + estimate_hours = None + if story_points: + # Convert story points to hours (assuming 1 point = 4 hours) + estimate_hours = story_points * 4 + elif original_estimate: + # Convert seconds to hours + estimate_hours = original_estimate / 3600 + + if not estimate_hours or actual_time <= 0: + return + + variance_percent = ((actual_time - estimate_hours) / estimate_hours) * 100 + + self.total_estimated += 1 + self.estimation_variance.append(variance_percent) + + # Categorize accuracy (within 25% is considered accurate) + if abs(variance_percent) <= 25: + self.accurate_estimates += 1 + elif variance_percent > 25: + self.underestimates += 1 + else: + self.overestimates += 1 + + # Track by assignee + assignee = fields.get("assignee", {}) + if assignee: + assignee_name = assignee.get("displayName", "Unassigned") + assignee_stats = self.by_assignee[assignee_name] + assignee_stats["total"] += 1 + assignee_stats["variance"].append(variance_percent) + if abs(variance_percent) <= 25: + assignee_stats["accurate"] += 1 + elif variance_percent > 25: + assignee_stats["under"] += 1 + 
else: + assignee_stats["over"] += 1 + + # Track by issue type + issue_type = fields.get("issuetype", {}) + if issue_type: + type_name = issue_type.get("name", "Unknown") + type_stats = self.by_issue_type[type_name] + type_stats["total"] += 1 + type_stats["variance"].append(variance_percent) + if abs(variance_percent) <= 25: + type_stats["accurate"] += 1 + elif variance_percent > 25: + type_stats["under"] += 1 + else: + type_stats["over"] += 1 + + +@dataclass +class ProjectMetrics(BaseMetrics): + key: str + name: str + total_issues: int = 0 + completed_issues: int = 0 + bugs_count: int = 0 + stories_count: int = 0 + tasks_count: int = 0 + epics_count: int = 0 + avg_cycle_time: float = 0 + assignees_involved: Set[str] = field(default_factory=set) + estimation_accuracy: float = 0 + lead: Optional[str] = None + project_type: Optional[str] = None + + def get_stats(self) -> Dict: + completion_rate = ( + (self.completed_issues / self.total_issues * 100) + if self.total_issues > 0 + else 0 + ) + return { + "key": self.key, + "name": self.name, + "total_issues": self.total_issues, + "completed_issues": self.completed_issues, + "completion_rate": completion_rate, + "bugs_count": self.bugs_count, + "stories_count": self.stories_count, + "tasks_count": self.tasks_count, + "epics_count": self.epics_count, + "avg_cycle_time": self.avg_cycle_time, + "assignees_involved": list(self.assignees_involved), + "estimation_accuracy": self.estimation_accuracy, + "lead": self.lead, + "project_type": self.project_type, + } + + def update_from_issue(self, issue: dict): + self.total_issues += 1 + + fields = issue.get("fields", {}) + status_category = fields.get("status", {}).get("statusCategory", {}).get("key", "") + + if status_category == "done": + self.completed_issues += 1 + + # Update issue type counts + issue_type = fields.get("issuetype", {}).get("name", "").lower() + if "bug" in issue_type: + self.bugs_count += 1 + elif "story" in issue_type: + self.stories_count += 1 + elif "task" in issue_type: + self.tasks_count += 1 + elif "epic" in issue_type: + self.epics_count += 1 + + # Track assignee involvement + assignee = fields.get("assignee", {}) + if assignee: + assignee_name = assignee.get("displayName") + if assignee_name: + self.assignees_involved.add(assignee_name) + + +@dataclass +class JiraOrgMetrics(BaseMetrics): + name: str + issues: IssueMetrics = field(default_factory=IssueMetrics) + projects: Dict[str, ProjectMetrics] = field(default_factory=dict) + cycle_time: CycleTimeMetrics = field(default_factory=CycleTimeMetrics) + estimation: EstimationMetrics = field(default_factory=EstimationMetrics) + component_counts: Dict[str, int] = field(default_factory=dict) + version_counts: Dict[str, int] = field(default_factory=dict) + + def get_stats(self) -> Dict: + return { + "name": self.name, + "projects": { + key: project.get_stats() for key, project in self.projects.items() + }, + "issues": self.issues.get_stats(), + "cycle_time": self.cycle_time.get_stats(), + "estimation": self.estimation.get_stats(), + "component_distribution": self.component_counts, + "version_distribution": self.version_counts, + } + + def aggregate_metrics(self): + """Aggregate metrics across all projects""" + if self.projects: + # Calculate average cycle time across projects + project_cycle_times = [ + p.avg_cycle_time for p in self.projects.values() if p.avg_cycle_time > 0 + ] + if project_cycle_times: + avg_cycle_time = statistics.mean(project_cycle_times) + for project in self.projects.values(): + if project.avg_cycle_time == 0: + 
                    project.avg_cycle_time = avg_cycle_time
+
+        # Calculate estimation accuracy across projects
+        project_accuracies = [
+            p.estimation_accuracy for p in self.projects.values() if p.estimation_accuracy > 0
+        ]
+        if project_accuracies:
+            avg_accuracy = statistics.mean(project_accuracies)
+            for project in self.projects.values():
+                if project.estimation_accuracy == 0:
+                    project.estimation_accuracy = avg_accuracy
\ No newline at end of file
diff --git a/test_jira_integration.py b/test_jira_integration.py
new file mode 100644
index 0000000..f247f48
--- /dev/null
+++ b/test_jira_integration.py
@@ -0,0 +1,126 @@
+#!/usr/bin/env python3
+"""
+Test script for Jira Cloud integration
+"""
+
+import sys
+import os
+from datetime import datetime, timedelta
+
+# Add the src directory to the path
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'src'))
+
+from wellcode_cli.jira.jira_metrics import test_jira_connection, get_jira_metrics
+from wellcode_cli.jira.jira_display import display_jira_metrics
+
+
+def test_jira_integration():
+    """Test the Jira integration with sample data"""
+    print("🧪 Testing Jira Cloud Integration")
+    print("=" * 50)
+
+    # Test connection function
+    print("\n1. Testing connection function...")
+
+    # These would be real credentials in actual use
+    test_domain = "example"
+    test_email = "test@example.com"
+    test_api_key = "fake_api_key"
+
+    print(f"Domain: {test_domain}")
+    print(f"Email: {test_email}")
+    print(f"API Key: {'*' * len(test_api_key)}")
+
+    # This will fail with fake credentials, but tests the function structure
+    try:
+        result = test_jira_connection(test_domain, test_email, test_api_key)
+        print(f"Connection test result: {result}")
+    except Exception as e:
+        print(f"Expected connection failure with fake credentials: {e}")
+
+    print("\n2. Testing metrics collection structure...")
+
+    # Test date range
+    end_date = datetime.now()
+    start_date = end_date - timedelta(days=7)
+
+    print(f"Date range: {start_date.date()} to {end_date.date()}")
+
+    # This will also fail without real credentials, but tests the structure
+    try:
+        metrics = get_jira_metrics(start_date, end_date)
+        if metrics:
+            print("✅ Metrics collection structure is working")
+            display_jira_metrics(metrics)
+        else:
+            print("❌ No metrics returned (expected with fake credentials)")
+    except Exception as e:
+        print(f"Expected metrics failure with fake credentials: {e}")
+
+    print("\n3. Testing data models...")
+
+    # Test the data models with sample data
+    from wellcode_cli.jira.models.metrics import JiraOrgMetrics, IssueMetrics, ProjectMetrics
+
+    # Create sample metrics
+    org_metrics = JiraOrgMetrics(name="Test Organization")
+
+    # Sample issue data (mimicking Jira API response structure)
+    sample_issue = {
+        "key": "TEST-123",
+        "fields": {
+            "summary": "Test issue",
+            "issuetype": {"name": "Story"},
+            "status": {
+                "name": "Done",
+                "statusCategory": {"key": "done"}
+            },
+            "priority": {"name": "High"},
+            "assignee": {"displayName": "John Doe"},
+            "project": {"key": "TEST", "name": "Test Project"},
+            "created": "2024-01-01T10:00:00.000Z",
+            "resolutiondate": "2024-01-02T15:00:00.000Z",
+            "components": [{"name": "Frontend"}],
+            "fixVersions": [{"name": "v1.0.0"}]
+        }
+    }
+
+    # Test updating metrics with sample data
+    org_metrics.issues.update_from_issue(sample_issue)
+    org_metrics.cycle_time.update_from_issue(sample_issue)
+
+    # Add sample project
+    org_metrics.projects["TEST"] = ProjectMetrics(
+        key="TEST",
+        name="Test Project"
+    )
+    org_metrics.projects["TEST"].update_from_issue(sample_issue)
+
+    # Test component and version tracking
+    org_metrics.component_counts["Frontend"] = 1
+    org_metrics.version_counts["v1.0.0"] = 1
+
+    print("✅ Data models are working correctly")
+    print(f"   - Issues created: {org_metrics.issues.total_created}")
+    print(f"   - Issues completed: {org_metrics.issues.total_completed}")
+    print(f"   - Stories created: {org_metrics.issues.stories_created}")
+    print(f"   - Projects tracked: {len(org_metrics.projects)}")
+    print(f"   - Components tracked: {len(org_metrics.component_counts)}")
+
+    print("\n4. Testing display functionality...")
+    try:
+        display_jira_metrics(org_metrics)
+        print("✅ Display functionality is working")
+    except Exception as e:
+        print(f"❌ Display error: {e}")
+
+    print("\n" + "=" * 50)
+    print("🎉 Jira integration test completed!")
+    print("\nTo use with real data:")
+    print("1. Run: wellcode-cli config")
+    print("2. Configure Jira with your domain, email, and API token")
+    print("3. Run: wellcode-cli review")
+
+
+if __name__ == "__main__":
+    test_jira_integration()
\ No newline at end of file