Software development project teams need to understand whether their work is producing quality output at the right rate to meet the project goals. To provide this picture we need to look at a few simple measures. The initial set of key measures we have chosen is quality confidence, work throughput and release burn-up. You could add many more measures, but we feel these three simple ones provide a core set of performance indicators for the teams.
It’s not enough to know how fast a team is producing product without knowing whether that product is any good. To tie this together we can track Quality Confidence: a simple indicator of how sure we are that what’s being produced is of good, stable quality. Calculated as a simple percentage, it can be shown on a Red, Amber, Green gauge or tracked over time to show a trend in Quality Confidence.
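As a minimal sketch of the gauge idea, the confidence percentage could be mapped onto a Red/Amber/Green status. The threshold values below are illustrative assumptions, not part of the measure itself, and would be tuned per project:

```python
def rag_status(confidence, amber=0.6, green=0.8):
    """Map a quality-confidence fraction (0.0-1.0) to a RAG status.

    The amber and green thresholds are assumed values for illustration.
    """
    if confidence >= green:
        return "Green"
    if confidence >= amber:
        return "Amber"
    return "Red"

print(rag_status(0.72))  # → Amber
```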
I intend to cover a metric that enables the project to derive a level of confidence in the quality of the end product. In choosing this measure I will use requirements as the seed, but instead of simply measuring requirements progress through the project I will focus on test coverage. This way we can build up the relationships between testing, requirements and requirements coverage, so the measure is a true reflection of confidence across the whole value chain.
The key to ensuring the relevance of the quality confidence measure is that each requirement under test has a complete set of acceptance criteria, which in turn are covered by a set of tests. Additionally, we should only count requirements that are being delivered, although this constraint can be relaxed if you need to account for in-sprint performance.
The idea is to measure the performance of the full set of tests for a given requirement at each test run. This not only records the passes and failures of the current run but also builds an audit trail of past test run performance.
Generating the measure takes a few simple arithmetic steps:
- For each requirement, take the pass rate of the tests covering that requirement. So if 6 tests cover a particular requirement, we are interested in the number of passes out of the 6.
- Next, weight each run’s result by its run number as a fraction of the total test runs completed. For example, if this is the third test run, the first, second and third run results are weighted by 1/3, 2/3 and 3/3 respectively. This is necessary to allow older test runs to play a reducing role in subsequent results.
- Finally, take a moving average of the weighted run results calculated above. The number of results in the moving average can be varied to arrive at an appropriate level of smoothing for your project.
We simply repeat the above process for all requirements, then average the results to give the quality confidence figure for the current test run.
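The steps above can be sketched as follows. The data layout (a dict mapping each requirement to its list of per-run pass rates) and the three-run smoothing window are assumptions for the sake of illustration, not part of the original description:

```python
def weighted_run_scores(pass_rates):
    """Weight each run's pass rate by run_number / total_runs,
    so older runs play a reducing role."""
    n = len(pass_rates)
    return [rate * (i + 1) / n for i, rate in enumerate(pass_rates)]

def moving_average(values, window=3):
    """Smooth the weighted run results with a simple moving average
    over the most recent `window` runs."""
    window = min(window, len(values))
    return sum(values[-window:]) / window

def quality_confidence(results, window=3):
    """Average the smoothed per-requirement figures into a single
    quality-confidence value for the current test run."""
    per_req = [
        moving_average(weighted_run_scores(rates), window)
        for rates in results.values()
    ]
    return sum(per_req) / len(per_req)

# Example: two requirements, three test runs each.
# REQ-1: 4/6, 5/6, 6/6 tests passed; REQ-2: 1/2, 2/2, 2/2.
results = {
    "REQ-1": [4/6, 5/6, 6/6],
    "REQ-2": [1/2, 2/2, 2/2],
}
print(round(quality_confidence(results), 2))  # → 0.6
```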
If we repeat this process each time we complete a test run we will be able to plot a trend line showing the changing maturity/stability of the requirements set as the project progresses. This is the trend in quality confidence.
From this start it is an easy step to add visual cues of trend, allowing the project to appreciate how its requirements are aging. As can be seen from the representative charts below, natural aging follows a typical “S” curve. Using this idealised curve as a guide allows the project to evaluate performance and identify potential problems as the actual curve deviates from the idealised one.
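One way to compare an actual quality-confidence trend against an idealised “S” curve is a logistic function. The midpoint and steepness parameters here are illustrative assumptions to be tuned against your project’s history:

```python
import math

def ideal_s_curve(run, total_runs, midpoint=0.5, steepness=10.0):
    """Idealised logistic 'S' curve: expected confidence at a given
    run, as a function of project progress (run / total_runs)."""
    x = run / total_runs
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def deviations(actual, midpoint=0.5, steepness=10.0):
    """Actual minus idealised confidence at each run; a large or
    growing gap flags a possible problem area for investigation."""
    n = len(actual)
    return [
        a - ideal_s_curve(i + 1, n, midpoint, steepness)
        for i, a in enumerate(actual)
    ]

# Illustrative trend of quality confidence over six test runs.
trend = [0.15, 0.30, 0.55, 0.70, 0.85, 0.95]
print([round(d, 2) for d in deviations(trend)])
```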
Using simple, targeted measures such as quality confidence, described in this discussion, will provide appropriate, timely project information. An added bonus is that this approach minimises the data-gathering burden placed on project development staff, because it minimises the amount of unique information needed and reduces duplication of collection effort. In later discussions I will look at burn-up charts and the use of Statistical Process Control charts as mechanisms for examining historical project performance and using it to forecast future behaviour.
Test Run Aging: Aging test run charts showing how actual requirements aging closely follows the “S” curve. Any deviation is easily spotted, alerting you to a possible problem area for investigation.
Overall Quality Confidence: Averaging all the individual requirement confidence figures shows how the requirements set is aging so you can assess when “Done” has been reached.