Data to help leaders answer these questions:
- Which devs are outperforming/underperforming expectations?
- Which devs is the team most reliant on? Is the team becoming more/less balanced?
- Which parts of our codebase are costliest to maintain? Which are becoming more costly?
- What areas of code should we focus on to reduce bugs/regressions?
### Which devs are outperforming/underperforming expectations?

With AI writing more code, these metrics focus on collaboration to assess performance.

### Which devs is the team most reliant on? Is the team becoming more/less balanced?

Who might be at silent risk of burnout/attrition?

### Which parts of our codebase are costliest to maintain? Which are becoming more costly?

What share of PRs in each repo are bugfixes?

- The more consistent your PRs are in size/complexity, the more useful this metric is

### What areas of code are most brittle? Where should we focus first to reduce bugs/regressions?

- Likely more useful for LLM-generated code, or old code you/your team are less familiar with
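The bugfix-share metric above can be sketched as a small aggregation. This is an illustrative TypeScript sketch, not the app's actual implementation; the `Pr` shape and field names are assumptions:

```typescript
// Illustrative sketch: compute the share of bugfix PRs per repo.
// The `Pr` shape here is hypothetical, not the dashboard's real type.
interface Pr {
  repo: string;
  isBugfix: boolean;
}

function bugfixShareByRepo(prs: Pr[]): Map<string, number> {
  // Tally bugfix and total PR counts per repo.
  const totals = new Map<string, { bugfix: number; all: number }>();
  for (const pr of prs) {
    const t = totals.get(pr.repo) ?? { bugfix: 0, all: 0 };
    t.all += 1;
    if (pr.isBugfix) t.bugfix += 1;
    totals.set(pr.repo, t);
  }
  // Convert counts into a 0..1 share per repo.
  const shares = new Map<string, number>();
  totals.forEach((t, repo) => shares.set(repo, t.bugfix / t.all));
  return shares;
}
```

A rising share in one repo, with PR size held roughly constant, is the signal that maintenance cost there is growing.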
- Clone the repository and install dependencies:

```bash
npm install
```

- Generate a GitHub Personal Access Token with access to the repos you want to analyze.

- Copy the `.env.example` file and add your GitHub token:

```
GITHUB_TOKEN=your_github_personal_access_token
```

- Run the development server:

```bash
npm run dev
```

On first launch, configure the dashboard for your team:
- **Repositories** - GitHub repositories your team contributes to
- **Authors** - GitHub usernames for your devs (e.g. helpful to filter monorepos with multiple teams)
- **Bugfix Query** - GitHub search query to identify bugfix PRs
  - e.g. replace `fix in:title` with `label:bug` if more accurate for your team
  - Yes, this query will likely be incomplete
- **Total PRs Query** - GitHub search query for all PRs created by your team
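As an illustration, a Total PRs query and a Bugfix query might use standard GitHub search qualifiers like the following (the repo name, username, and date are placeholders):

```
is:pr repo:acme/webapp author:alice created:>2024-01-01
is:pr repo:acme/webapp author:alice created:>2024-01-01 fix in:title
```

Narrowing both queries with the same `repo:`, `author:`, and date qualifiers keeps the bugfix share comparable across repos and time windows.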
Configuration is automatically saved to `config.json` for persistence.
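For illustration only, the persisted file might look something like this; the exact field names are an assumption here, not taken from the app:

```json
{
  "repositories": ["acme/webapp"],
  "authors": ["alice", "bob"],
  "bugfixQuery": "fix in:title",
  "totalPrsQuery": "is:pr"
}
```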
This project is licensed under the MIT License - see the LICENSE file for details.




