—  BEN BLOCK  —



Case Study
 

Quality Measurement: Return Rate

The Problem

While I was CTO at W3, it became more and more difficult to hit our target release dates within our sprint cycles as the team grew. We were running agile, and I believed we were being too optimistic in our velocity assumptions. I kept asking myself: can't we just drop some of the work to hit our targets? That is often the trade-off when guaranteeing a release date. A team can commit to the scope or commit to the date, but it can't promise both. I always preferred the date over the scope.

Upon further analysis it became clear that the problem wasn't getting through the scope of the sprint from a development perspective. The bottleneck was the QA cycle after the features were developed. The quality of the dev output was slipping as the team grew, and the QA cycle was taking longer and longer because QA kept sending issues back to dev with bugs found or requirements missing. I needed a way to measure the quality the dev team was delivering, to ensure the QA cycle didn't exceed its allocated time in each sprint and threaten the overall deploy dates.

The Solution

I had recently hired a new Product Manager who had a novel idea. He suggested we start measuring the return rate in each sprint cycle. We defined return rate as the average number of times a QA analyst returned an issue to a developer with either bugs found or incomplete functionality. This wasn't easy, as it required a fair amount of additional analysis after each sprint, but it gave us a solid statistic for the quality coming out of the dev team in each sprint.
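As a rough illustration of the arithmetic (not our actual tooling; the data layout and field names here are hypothetical), the metric boils down to counting QA-to-dev returns and dividing by the number of issues in the sprint:

    # Hypothetical sketch: computing a sprint's return rate from QA return events.
    # Each entry in qa_returns records one time QA sent that issue back to dev.

    from collections import Counter

    def return_rate(issues, qa_returns):
        """Average number of QA-to-dev returns per issue in a sprint."""
        returns_per_issue = Counter(qa_returns)
        total_returns = sum(returns_per_issue.get(issue, 0) for issue in issues)
        return total_returns / len(issues) if issues else 0.0

    # Example: issue "A" returned twice, "B" once, "C" never -> return rate of 1.0
    print(return_rate(["A", "B", "C"], ["A", "A", "B"]))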

When we first measured return rate we found it to be an astounding 4.3, meaning that, on average, an issue went back and forth between dev and QA more than four times before the QA team marked it clear for deployment. I immediately spoke to the dev team and told them we'd be tracking this statistic each sprint and that I expected it to improve. While I didn't advertise individual return rates, only the collective rate for the entire team, I did generate return rates for each developer and spoke individually with the primary offenders when necessary.

The Result

Within 3 sprint cycles we got the collective return rate under 2, and after 5 cycles we got it under 1. This dramatically increased our ability to hit the sprint deadlines, and scope very rarely had to be sacrificed to do so. Needless to say, this thrilled everyone in the organization who had become frustrated by the frequency of release delays. Simply tracking and publicizing our new quality measurement, return rate, gave the development team insight into their own work and an incentive to produce higher quality output. A more efficient production cycle emerged from tracking this one statistic.

Get in Touch

info@benblock.com
New York, New York
linkedin.com/in/bblock