What Is Rework in Software Engineering? A Clean Definition
Updated 17 April 2026
"Rework is work that has to be redone because the first attempt failed to meet a requirement, specification, or quality standard that was knowable at the time."
Adapted from ISO 9000:2015 and PMI PMBOK Guide, 7th ed.
In manufacturing, rework has a crisp ISO definition: "action on a nonconforming product to make it conform to requirements" (ISO 9000:2015). In software engineering, the concept is equally precise but more frequently misapplied. Teams confuse rework with refactoring, iteration, and learning. Getting the definition right is the prerequisite for measuring it -- and you cannot reduce what you cannot measure.
The critical phrase in the definition is "knowable at the time". This distinguishes rework -- a process failure -- from legitimate iteration -- a response to genuinely new information. A team that revises a feature because a customer need changed is not doing rework in the waste sense. A team that revises a feature because the original spec was ambiguous and nobody clarified it is doing preventable rework.
Rework, Revision, Iteration, Bug Fix, Refactor: A Comparison
Engineering teams use these five terms loosely and often interchangeably. The distinctions matter because they have different causes, different costs, and different remedies.
| Concept | Definition | Trigger | Who Pays | Waste? |
|---|---|---|---|---|
| Rework | Doing work again because the first attempt failed a knowable requirement | Spec failure, defect, missed requirement | Team, often silently | Yes -- preventable |
| Revision | Updating work in response to new information or stakeholder feedback | New business requirement, user research, market change | Product and business | Sometimes -- depends on whether the need was knowable |
| Iteration | Deliberately building in small increments to learn and adapt | Agile process design, uncertainty | Product -- built into the cost model | No -- by design |
| Bug Fix | Correcting a specific defect in working code | Test failure, production error report | Engineering -- time and reputation | Yes, if the defect was preventable |
| Refactor | Improving internal code structure without changing external behaviour | Tech debt paydown, code review, future feature prep | Engineering -- planned investment | No -- investment in maintainability |
The Three-Question Waste Test
Before classifying any additional work as rework for measurement purposes, ask these three questions in sequence. If you answer "yes" to all three, it is rework.
1. Was the requirement knowable at the time?
If the business need changed after the work was committed -- genuine market shift, new regulatory requirement, unexpected competitive move -- the additional work may be revision rather than rework. If the requirement existed and was not captured or communicated, it is rework.
2. Did we commit to meeting it?
Exploratory work (research spikes, prototypes, proof-of-concept code) is not expected to meet production requirements. If the team explicitly scoped the work as exploratory, redoing it as production code is not rework -- it is the planned next step.
3. Did the failure originate from our process?
If the work failed because of a process problem -- ambiguous spec, missed test case, inadequate design review, miscommunication between teams -- it is rework. If it failed because of genuinely unprecedented complexity or an external dependency that changed without notice, classification is more nuanced.
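The three questions can be expressed as a small sequential classifier. This is an illustrative sketch, not a standard tool; the function name and return labels are assumptions made for this example.

```python
def classify_extra_work(knowable: bool, committed: bool, process_failure: bool) -> str:
    """Apply the three-question waste test in sequence.

    Returns "rework" only when all three answers are yes; otherwise
    returns the most plausible alternative classification.
    """
    if not knowable:
        return "revision"            # the need emerged after the work was committed
    if not committed:
        return "planned next step"   # exploratory spike, prototype, proof of concept
    if not process_failure:
        return "ambiguous"           # unprecedented complexity or external change
    return "rework"                  # knowable, committed, and a process failure


# A feature redone because the spec was ambiguous and nobody clarified it:
print(classify_extra_work(knowable=True, committed=True, process_failure=True))   # rework
# A spike rewritten as production code, exactly as planned:
print(classify_extra_work(knowable=True, committed=False, process_failure=True))  # planned next step
```

Note that the order matters: answering "no" to an earlier question makes the later ones moot, which mirrors how the test is meant to be applied in sequence.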
What Rework Is Not
Over-classifying as rework is as problematic as under-classifying. If an engineering team tags every sprint carryover as "rework" in Jira, the metric becomes meaningless and demotivating. These common scenarios are frequently mislabelled as rework but are not:
- Deliberate iteration in research spikes. A spike that produces a working prototype that then gets rewritten as production code is not rework -- the rewrite was always the plan.
- Responding to genuinely new information. A competitor releases an API that changes your integration approach. The additional integration work is revision, not rework.
- Refactoring to enable future features. Restructuring code to make a new feature feasible is an investment, not a failure.
- Performance optimisation after initial delivery. Unless the performance requirement was specified and missed, post-launch optimisation is product improvement, not rework.
- A/B test result-driven changes. Changing a UI based on experiment data is evidence-driven product development, not rework.
Worked Example: A Feature That Fails in Production
Consider a feature that ships, fails in production, gets hotfixed, then gets rewritten. Which of these are rework?
- Sprint 1: Feature built and shipped. Not rework -- first attempt.
- Sprint 2: Production error causes hotfix. The hotfix is rework. The defect was either in spec (knowable) or in implementation (preventable with better testing). The incident response hours and the fix hours both count.
- Sprint 3: Post-mortem reveals the architecture is wrong for the load pattern. The rewrite is rework if the load pattern was specified or estimable. If the traffic genuinely exceeded all reasonable forecasts, classification is ambiguous -- partial rework at most.
- Sprint 4: New UX research changes the feature flow. The UX revision is not rework -- it is product improvement driven by new user data.
In this scenario, sprints 2 and 3 (partially) are rework. Sprint 4 is revision. Most teams would call all four "bug work" in their retrospective and miss the distinction that matters for reduction.
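Tallying the scenario above makes the distinction concrete. All hour figures here are invented for illustration, and the 0.5 weighting on sprint 3 simply encodes the "partial rework at most" judgement.

```python
# Hour figures are hypothetical; classifications follow the worked example.
sprints = [
    {"sprint": 1, "work": "build and ship",              "hours": 80, "rework_fraction": 0.0},
    {"sprint": 2, "work": "hotfix + incident response",  "hours": 24, "rework_fraction": 1.0},
    {"sprint": 3, "work": "architecture rewrite",        "hours": 60, "rework_fraction": 0.5},  # partial
    {"sprint": 4, "work": "UX-driven revision",          "hours": 30, "rework_fraction": 0.0},
]

rework_hours = sum(s["hours"] * s["rework_fraction"] for s in sprints)
total_hours = sum(s["hours"] for s in sprints)
print(f"rework: {rework_hours:.0f}h of {total_hours}h "
      f"({rework_hours / total_hours:.0%})")
# → rework: 54h of 194h (28%)
```

A team that labelled all four sprints "bug work" would report a very different number, which is why the classification step has to precede the measurement step.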
Why the Definition Matters for Measurement
Precise definition enables precise measurement. If your team's rework metric is "bugs closed this sprint", you are measuring the wrong thing and you will optimise toward the wrong target (closing bugs fast rather than preventing them). If your metric is "story points on tickets that passed all acceptance criteria at first review", you are measuring something closer to rework prevention effectiveness.
The DORA metrics (change failure rate, mean time to recovery) are the closest thing software engineering has to a standardised rework measurement framework. Change failure rate -- the percentage of deployments that cause a production failure requiring remediation -- captures the most expensive kind of rework: the kind that reaches customers.
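Change failure rate can be computed directly from deployment records. A minimal sketch follows; the record field names are assumptions for this example, not a DORA-mandated schema.

```python
def change_failure_rate(deployments: list[dict]) -> float:
    """Fraction of deployments that caused a production failure
    requiring remediation (hotfix, rollback, or patch)."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["required_remediation"])
    return failed / len(deployments)


deploys = [
    {"id": "d1", "required_remediation": False},
    {"id": "d2", "required_remediation": True},   # hotfixed after release
    {"id": "d3", "required_remediation": False},
    {"id": "d4", "required_remediation": False},
]
print(f"{change_failure_rate(deploys):.0%}")  # → 25%
```

The definition of "required remediation" is the judgement call that matters: counting only rollbacks understates the rate, while counting every post-release commit overstates it.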
Capers Jones, whose research in Applied Software Measurement (2008) covers data from thousands of software projects, shows that organisations with formal defect-prevention programs have rework rates roughly half those of organisations without them. The first step in every such program is agreeing on a definition of rework that the whole team can apply consistently.
Rework in the Cost of Poor Quality Framework
The Cost of Poor Quality (COPQ) framework, originally developed by Joseph Juran in the 1950s, divides quality costs into four categories: prevention, appraisal, internal failure, and external failure. Rework sits in two of these:
- Internal failure costs include rework caught before release: re-coding, re-testing, debugging. These are expensive but bounded.
- External failure costs include rework triggered by production failures: hotfixes, incident response, customer support escalation, reputation damage. These are often 10-100x the internal cost of the same defect.
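The split can be made concrete with a simple COPQ tally. All cost figures below are hypothetical, chosen only to show the mechanics of summing the four categories and isolating the rework share.

```python
# Illustrative COPQ tally in monthly engineering hours; all figures hypothetical.
copq = {
    "prevention":       {"design reviews": 40, "test automation": 60},
    "appraisal":        {"code review": 30, "QA cycles": 50},
    "internal_failure": {"re-coding": 80, "re-testing": 45, "debugging": 35},
    "external_failure": {"hotfixes": 120, "incident response": 200,
                         "support escalation": 150},
}

totals = {cat: sum(items.values()) for cat, items in copq.items()}
rework_cost = totals["internal_failure"] + totals["external_failure"]
print(totals)
print(f"rework share of COPQ: {rework_cost / sum(totals.values()):.0%}")
# → rework share of COPQ: 78%
```

In this invented dataset the external failure bucket dominates, which is the typical pattern the framework predicts: the same defect costs far more once it escapes to production.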
The NIST 2002 study estimated that inadequate software testing costs the US economy $59.5 billion annually, with the majority of that figure attributable to external failure costs -- rework that escaped to production. Understanding where your rework falls in this framework shapes which interventions have the highest ROI. See the formula page for a worked COPQ calculation.
Frequently Asked Questions
What is the definition of rework in software engineering?
Rework is work that has to be redone because the first attempt failed to meet a requirement, specification, or quality standard that was knowable at the time. The ISO 9000:2015 definition -- 'action on a nonconforming product to make it conform to requirements' -- applies directly to software when a 'product' is understood as a feature, module, or service.
What is the difference between rework and refactoring?
Refactoring improves internal code structure without changing external behaviour -- it is a deliberate investment in maintainability, not a failure correction. Rework corrects work that failed to meet a requirement. Both cost time, but they have different causes: refactoring is planned; rework is typically unplanned. Refactoring prevents future rework; it is not itself rework.
Is all rework waste?
Not strictly. Some additional work is an expected outcome of genuine learning -- research spikes, prototype exploration, A/B experiments. The waste classification applies when the requirement was knowable in advance and the first attempt failed to meet it due to a process failure: unclear spec, inadequate testing, poor communication. The three-question waste test above provides a practical classification framework.
Where does the term rework come from?
The formal definition traces to ISO 9000:2015 and its predecessors, which apply the term to manufacturing non-conformance. The Project Management Institute's PMBOK Guide adopted it for project quality management. In software specifically, Capers Jones and Barry Boehm both used the term in their foundational 1980s-1990s work on software economics, giving it its modern connotation in engineering contexts.
Sources
- ISO 9000:2015. Quality management systems -- Fundamentals and vocabulary. ISO, 2015.
- Project Management Institute. A Guide to the Project Management Body of Knowledge (PMBOK Guide). 7th ed. PMI, 2021.
- Jones, C. Applied Software Measurement. 3rd ed. McGraw-Hill, 2008.
- Juran, J. Juran on Quality by Design. Free Press, 1992. (COPQ framework)
- NIST Planning Report 02-3. The Economic Impacts of Inadequate Infrastructure for Software Testing. RTI International, 2002.
- Google DORA. State of DevOps Report 2024.