prac·tice (noun): a defined and repeatable way of working (like a play in a team's playbook) that is used to align teammates and improve software delivery.
Welcome to Pragmint's Open Practices repository. Here, you will find the library of practices that Pragmint's Co-Dev Coaches draw from when helping client software and data engineering teams level up. These practices aren’t rigid rules or one-size-fits-all solutions; they’re approaches that have proven effective for Pragmint in many situations. Every team and context is different, but understanding the practices that usually work provides a solid starting point. Think of this repo as a playbook: a set of plays you can adapt, remix, or experiment with to fit your own game.
Each of these practices maps to one or more DORA capabilities listed below. We use the DORA research as a backbone because it’s the most widely validated body of evidence connecting engineering practices to measurable delivery performance. It provides a common language for assessing where teams stand today and clarity on which improvements actually move the needle. By linking our practices to DORA's capabilities, we make it easier to see not just what to try, but also why it matters and how it contributes to the bigger picture of high-performing software delivery.
Material in this repository supports Pragmint's cyclical S.T.E.P. framework:
- Survey: Use our open-source assessment to measure your team's maturity against the DORA Capabilities.
- Target: Identify Capabilities where there are significant gaps in adoption, and prioritize improving on those that will deliver the highest impact.
- Experiment: Play around with supported practices to enhance targeted Capabilities. Select one or two high-impact experiments, commit to them, and give the team time to integrate them into their regular workflow.
- Polish or Pitch: Gather feedback and reflect on how experimenting with one or more practices affected the team's or system's performance. Review Metrics & Signals, included in each practice (example), to determine whether an experiment is making a positive impact. Polish and adopt practices that are working or showing promise, pitch those that are not, then take the next S.T.E.P. As you polish successful practices, build in mechanisms to ensure continued adoption, such as CI checks that enforce test coverage thresholds or PR checklists that verify adherence to established patterns.
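An adoption mechanism like the CI coverage check mentioned above can be sketched as a pipeline step. This is a hypothetical GitHub Actions fragment, not a Pragmint-endorsed configuration: the use of coverage.py and the 80% floor are illustrative assumptions you would tune to your own project.

```yaml
# Hypothetical CI step: fail the build when line coverage drops below a floor.
# Assumes a Python project instrumented with coverage.py; the 80% threshold
# is an example value, not a recommended standard.
- name: Enforce test coverage
  run: |
    coverage run -m pytest
    coverage report --fail-under=80
```

Encoding the check in CI (rather than a review checklist) makes the practice self-enforcing: the experiment's signal is visible on every change, not just when someone remembers to look.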
Each practice maps to one or more of the following DORA Capabilities:
- AI-accessible Internal Data
- Clear and Communicated AI Stance
- Code Maintainability
- Continuous Delivery
- Continuous Integration
- Customer Feedback
- Database Change Management
- Deployment Automation
- Documentation Quality
- Empowering Teams To Choose Tools
- Flexible Infrastructure
- Generative Organizational Culture
- Healthy Data Ecosystems
- Job Satisfaction
- Learning Culture
- Loosely Coupled Teams
- Monitoring and Observability
- Monitoring Systems to Inform Business Decisions
- Pervasive Security
- Platform Engineering
- Proactive Failure Notification
- Streamlining Change Approval
- Team Experimentation
- Test Automation
- Test Data Management
- Transformational Leadership
- Trunk-Based Development
- User-Centric Focus
- Version Control
- Visibility of Work in the Value Stream
- Visual Management
- Well-Being
- Work in Process Limits
- Working in Small Batches
Our repository is always evolving. You can add to it by reviewing our contributors guide, then raising an issue or submitting a pull request. Because this repository represents the opinions of Pragmint, our maintainers reserve the right to approve or reject any suggestion. That said, we welcome contributions: they are opportunities to broaden our horizons and engage with the broader community. All contributions to this repository are subject to the Creative Commons License, so that anyone in the community can benefit from the ideas contained within it.