How to Measure Rework on Your Engineering Team (Jira, Linear, GitHub)
Updated 17 April 2026
Every "how to measure rework" article tells you to measure it. None of them show you the actual Jira JQL query, the Linear filter configuration, or the GitHub Actions label script. This page does all three. Copy-paste the recipes below, run them against your last quarter of data, and you have a baseline.
You do not need all four metrics listed here to start. The three-metric starter pack at the bottom of the page is the right entry point for most teams.
The Four Metrics That Matter
Change Failure Rate (CFR)
The percentage of production deployments that cause a service degradation requiring remediation.
CFR = (deployments_causing_incidents / total_deployments) x 100
Target range: DORA Elite: <5% | High: <10% | Medium: <15% | Low: >30%
Source: DORA State of DevOps 2024
Defect Escape Rate (DER)
The percentage of all defects that reach production, bypassing all pre-release detection.
DER = (defects_found_in_production / total_defects_found) x 100
Target range: Elite: <5% | Average: 12-18% | Worst quartile: >35%
Source: Capers Jones, Applied Software Measurement (2008)
Sprint Rework Ratio
The share of sprint story points spent on rework (bug fixes, hotfixes, regressions) rather than new feature work.
SRR = (rework_story_points / total_story_points) x 100
Target range: Elite: <10% | Average: 20-30% | Warning: >40%
Source: Derived from NIST 2002 benchmarks; applied to sprint-level data
Defect Removal Efficiency (DRE)
The percentage of all defects removed before software is released. The inverse of escape rate.
DRE = (defects_found_before_release / total_defects_found) x 100
Target range: Best-in-class: 95%+ | Typical: 85% | Warning: <75%
Source: Capers Jones, Applied Software Measurement (2008)
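For teams that compute these in a script rather than a spreadsheet, the four formulas above reduce to a few lines. This is an illustrative sketch; the input counts come from your own tracker queries:

```python
def change_failure_rate(incident_deploys: int, total_deploys: int) -> float:
    """CFR: percentage of production deployments that caused incidents."""
    return incident_deploys / total_deploys * 100

def defect_escape_rate(prod_defects: int, total_defects: int) -> float:
    """DER: percentage of all found defects that reached production."""
    return prod_defects / total_defects * 100

def sprint_rework_ratio(rework_points: float, total_points: float) -> float:
    """SRR: share of sprint story points spent on rework."""
    return rework_points / total_points * 100

def defect_removal_efficiency(prerelease_defects: int, total_defects: int) -> float:
    """DRE: percentage of defects caught before release (inverse of DER)."""
    return prerelease_defects / total_defects * 100

# Example quarter: 40 deployments (3 caused incidents), 50 defects (6 escaped),
# a sprint with 14 rework points out of 70 completed.
print(round(change_failure_rate(3, 40), 1))        # 7.5  -> DORA "High" tier
print(round(defect_escape_rate(6, 50), 1))         # 12.0 -> average
print(round(sprint_rework_ratio(14, 70), 1))       # 20.0 -> average
print(round(defect_removal_efficiency(44, 50), 1)) # 88.0 -> near typical 85%
```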
Jira Implementation
The following JQL queries assume your team uses consistent labels. If you do not have a labelling convention yet, adopt the set below and apply it starting from your next sprint. Retroactive tagging is possible but tedious.
Recommended label set for rework tracking:
rework, hotfix, regression, bug-fix, tech-debt-fix, rewrite, incident-response, spec-correction

Query 1: Sprint rework ratio (current sprint)
project = "YOUR_PROJECT" AND sprint in openSprints() AND labels in (rework, hotfix, regression, bug-fix, rewrite) AND status != "Cancelled" ORDER BY created DESC
Sum story points from this result. Divide by total sprint story points. Multiply by 100 for your sprint rework ratio.
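The sum-and-divide step can be automated against Jira Cloud's REST search endpoint. A standard-library sketch — the site URL, credentials, and the story-points field ID are placeholders to adapt (customfield_10016 is a common default, not universal):

```python
import json
import urllib.parse
import urllib.request

# Story points live in a custom field whose ID varies per Jira instance;
# customfield_10016 is a common default. Check /rest/api/2/field on your site.
STORY_POINTS_FIELD = "customfield_10016"

def sum_story_points(issues, field=STORY_POINTS_FIELD):
    """Sum the story-points field across issues from /rest/api/2/search."""
    return sum(issue["fields"].get(field) or 0 for issue in issues)

def search_jql(base_url, jql, auth_header):
    """Run a JQL search against Jira Cloud and return the raw issue list."""
    url = base_url + "/rest/api/2/search?" + urllib.parse.urlencode(
        {"jql": jql, "fields": STORY_POINTS_FIELD, "maxResults": 100}
    )
    req = urllib.request.Request(url, headers={"Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["issues"]

# Usage with real credentials (Basic auth with an API token, or OAuth):
# jql = ('project = "YOUR_PROJECT" AND sprint in openSprints() '
#        'AND labels in (rework, hotfix, regression, bug-fix, rewrite)')
# issues = search_jql("https://your-site.atlassian.net", jql, "Basic <token>")
# print(sum_story_points(issues))
```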
Query 2: Quarterly rework trend
project = "YOUR_PROJECT" AND issuetype in (Bug, Task, Story) AND labels in (rework, hotfix, regression, bug-fix, rewrite) AND created >= startOfQuarter(-1) AND created < startOfQuarter() AND status in (Done, Closed) ORDER BY created ASC
Run this four times, stepping both startOfQuarter() offsets back one quarter each pass (-1 and 0 as written, then -2 and -1, and so on), to see your rework trend across the last four quarters. Decreasing rework story points per quarter is a positive signal.
Query 3: Defect escape tracking (production bugs)
project = "YOUR_PROJECT" AND issuetype = Bug AND labels in (production, escaped, incident-response, customer-reported) AND created >= startOfYear() ORDER BY priority DESC, created DESC
Count this result against total bugs found in the same period (including pre-release QA bugs) to calculate your defect escape rate.
Query 4: High-cost rework tickets (story points above threshold)
project = "YOUR_PROJECT" AND labels in (rework, hotfix, regression) AND "Story Points" >= 5 AND created >= startOfQuarter() ORDER BY "Story Points" DESC
Note: "Story Points" is a custom field, not a built-in JQL field; if your instance exposes it under a different name, reference it by its cf[...] ID instead.
High-story-point rework tickets are your most expensive events. Review the top 10 in retrospective. What is the pattern?
Linear Implementation
Linear does not have JQL but does support label filters and custom views. The key setup is creating a dedicated label set and a saved "Rework" view that engineering leads review weekly.
Step 1: Create rework labels
In Linear: Settings > Labels. Create: rework, hotfix, regression, bug-fix, production-incident. Use a burnt-orange colour to make rework tickets visually distinct in the cycle view.
Step 2: Create the Rework saved view
In your team's Issues view: Filter by Labels (select all rework labels), Group by Cycle, Sort by Priority. Save as 'Rework Tracker'. Pin it to the sidebar for quick access.
Step 3: Weekly metrics pull
At end of each sprint/cycle: count rework issue points vs total completed points. Linear's Insights tab shows cycle completion data; filter by your rework labels to extract the subset. Record the ratio in a shared doc for trending.
Step 4: Monthly summary to leadership
Use Linear's export (CSV) filtered to rework labels for the rolling 30 days. Import to a spreadsheet to calculate sprint rework ratio and compare to the DORA benchmarks on this page.
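The weekly and monthly pulls can also be scripted against Linear's GraphQL API. A hedged sketch: the filter shape (labels.some.name.in) and the estimate field follow Linear's public schema as I understand it — verify both in Linear's API explorer — and the API key is a placeholder:

```python
import json
import urllib.request

# Sketch only: verify the filter shape and field names against Linear's
# GraphQL schema for your workspace before relying on this.
QUERY = """
query ReworkIssues($labels: [String!]) {
  issues(filter: { labels: { some: { name: { in: $labels } } } }) {
    nodes { identifier estimate completedAt }
  }
}
"""

def fetch_rework_issues(api_key, labels):
    """POST the query to Linear's GraphQL endpoint and return issue nodes."""
    body = json.dumps({"query": QUERY, "variables": {"labels": labels}}).encode()
    req = urllib.request.Request(
        "https://api.linear.app/graphql",
        data=body,
        headers={"Content-Type": "application/json", "Authorization": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["issues"]["nodes"]

def rework_points(nodes):
    """Sum estimates, counting unestimated issues as zero."""
    return sum(n.get("estimate") or 0 for n in nodes)

# Usage: nodes = fetch_rework_issues("<LINEAR_API_KEY>", ["rework", "hotfix"])
#        print(rework_points(nodes))
```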
GitHub Implementation
GitHub's native issue tracking is weaker than Jira or Linear for rework measurement, but if your team manages all work in GitHub Issues, the following approach works. The key is consistent PR labels applied at merge time.
PR label conventions
Label every merged PR with exactly one of: feature, bug-fix, hotfix, refactor, chore. Enforce this via a required GitHub Actions check.
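One way to implement that check: a small script a required Actions job could run against the pull_request webhook payload. The script and its helper names are illustrative, not a GitHub-provided action:

```python
# Every PR must carry exactly one of these work-type labels.
REQUIRED = {"feature", "bug-fix", "hotfix", "refactor", "chore"}

def labels_from_event(event):
    """Extract label names from a pull_request webhook payload."""
    return [lbl["name"] for lbl in event["pull_request"]["labels"]]

def check_labels(pr_labels):
    """True when the PR carries exactly one work-type label."""
    return len(REQUIRED.intersection(pr_labels)) == 1

# In the Actions job, GITHUB_EVENT_PATH points at the payload JSON:
#   import json, os, sys
#   event = json.load(open(os.environ["GITHUB_EVENT_PATH"]))
#   sys.exit(0 if check_labels(labels_from_event(event)) else 1)
```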
GitHub CLI query for monthly rework PRs
# Count merged rework PRs in the last 30 days.
# Note: gh's --label flag requires ALL listed labels, so filter for
# "any rework label" in jq instead. Raise --limit above the default of 30.
gh pr list \
  --state merged \
  --limit 500 \
  --json number,labels,mergedAt \
  --jq '[.[]
    | select(.mergedAt > (now - 30*24*3600 | todate))
    | select([.labels[].name] | any(. == "bug-fix" or . == "hotfix"))]
    | length'

# Compare to total merged PRs in the same period
gh pr list \
  --state merged \
  --limit 500 \
  --json number,mergedAt \
  --jq '[.[] | select(.mergedAt > (now - 30*24*3600 | todate))] | length'
Anti-Patterns: What Goes Wrong
Goodhart's Law: measuring without thinking
Once rework rate becomes a team metric that management watches, engineers game it. Bug tickets get labelled 'feature enhancement' to avoid the rework label. Sprint velocity excludes rework tickets so the ratio looks better. Measure rework for diagnosis, not for performance review. Teams that tie rework rate to performance reviews will produce measurement fraud, not rework reduction.
Counting only tickets, not time
A 1-point bug ticket and a 13-point architectural rework ticket both count as 'one rework event'. Always measure rework in story points or hours, not ticket count. The ticket count metric systematically undervalues large rework events, which are the most important ones to understand.
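A toy example (made-up ticket data) makes the distortion concrete: one big rework ticket among five looks like a 20% problem by count but a 43% problem by points.

```python
# Five completed tickets; only the 13-point architectural rework is rework
tickets = [
    {"points": 13, "rework": True},
    {"points": 8,  "rework": False},
    {"points": 5,  "rework": False},
    {"points": 3,  "rework": False},
    {"points": 1,  "rework": False},
]

rework = [t for t in tickets if t["rework"]]
by_count = len(rework) / len(tickets) * 100
by_points = sum(t["points"] for t in rework) / sum(t["points"] for t in tickets) * 100

print(f"by ticket count: {by_count:.0f}%")  # 20%
print(f"by story points: {by_points:.0f}%") # 43%
```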
Measuring the label, not the cause
Knowing your sprint rework ratio is 28% is useful. Knowing that 20 of those 28 points came from unclear acceptance criteria on three specific tickets is actionable. Rework metrics need a root cause column. Even a simple taxonomy (requirements defect / test gap / communication failure / tech debt) transforms the metric from a report card to a tool.
Not separating planned from unplanned rework
Tech debt paydown sprints, planned refactors, and post-launch UX improvements are sometimes tagged as rework. They should not be. Planned investment in code quality is not a failure; it is the opposite. Mixing it into rework metrics gives a misleading picture and demoralises the team members doing the right thing.
The 3-Metric Starter Pack
If you are starting from zero, do not try to track all four metrics at once. Begin with these three, in this order:
- Sprint Rework Ratio -- run the Jira Query 1 above at the end of your next three sprints. You now have a baseline rework ratio and a trend direction.
- Change Failure Rate -- pull deployment count from your CI/CD system and incident count from your monitoring tool. Divide. Compare to DORA tiers on the benchmarks page.
- Root Cause Tagging -- for each rework ticket, add one of four cause labels: requirements / testing / tech-debt / communication. After a quarter, the distribution tells you which lever to pull first.
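Once cause labels are in place, the quarterly distribution falls out of a simple point-weighted tally (the ticket data here is made up for illustration):

```python
from collections import Counter

# (cause label, story points) per rework ticket over a quarter -- toy data
rework_tickets = [
    ("requirements", 8), ("requirements", 5), ("testing", 3),
    ("tech-debt", 13), ("requirements", 5), ("communication", 2),
]

by_cause = Counter()
for cause, points in rework_tickets:
    by_cause[cause] += points

for cause, points in by_cause.most_common():
    print(f"{cause}: {points} points")
# requirements: 18 points  <- the lever to pull first
```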
Add DRE (Metric 4) once you have pre-release defect tracking in place. That requires your QA process to log all defects found before release, which many teams do not do consistently.
Frequently Asked Questions
How do you track rework in Jira?
The most reliable Jira approach: apply consistent labels (rework, hotfix, regression, bug-fix) and run the JQL queries above at end of sprint. Sum rework story points, divide by total completed story points, and track the ratio over time. Requires label discipline from the team.
What is change failure rate and how do I calculate it?
Change failure rate (CFR) = (deployments causing incidents / total deployments) x 100. Pull deployment counts from your CI/CD system (GitHub Actions, Jenkins, etc.) and incident counts from your monitoring tool. DORA 2024 elite teams maintain CFR below 5%; low performers exceed 30%.
What is defect removal efficiency (DRE)?
DRE = (defects found before release / total defects) x 100. Measures how effectively your pre-release processes catch bugs. Elite teams achieve 95%+ DRE through unit tests, integration tests, code review, and QA. Average commercial teams are at 85%, meaning 15% of defects escape to production.
Sources
- Google DORA. State of DevOps Report 2024. DORA Research Program, 2024. (Change failure rate definitions and tiers)
- Jones, C. Applied Software Measurement. 3rd ed. McGraw-Hill, 2008. (Defect removal efficiency framework)
- Forsgren, N., Humble, J., Kim, G. Accelerate. IT Revolution, 2018. (Four key DORA metrics explained)
- Atlassian. Jira Advanced Roadmaps and Reporting documentation. 2024.