Hi folks!
I've just taken over leading a team of BAs for a software company and my manager has tasked me with developing some KPIs to monitor quality and productivity.
Productivity isn't too hard; I'm going to sort something out based on whether specs are being delivered in line with the plan, but quality is giving me some headaches. Does anyone else work in an environment where this is done and, if so, what do you measure?
My initial thoughts are to look at the number of bugs raised by the QA teams that can be attributed to the spec being incorrect or inaccurate, as well as introducing four-eyes checks of specs before they go out and recording the number of changes made through that process.
Can anyone suggest anything else that has worked for them?
I'm curious how you plan to measure productivity. There are pitfalls to avoid here. I once worked for a team that would write BRDs and then break those into manageable development units (for the sake of protecting the innocent, I'll call these DUs). Every quarter they would show how many BRDs and DUs had been completed. If the number went up, they declared victory. So over time the DUs naturally became smaller and smaller pieces of work, since more of them looked like higher productivity. Nothing was really improving.
We typically consider something to be of quality when it meets the needs of the client/users. That implies that the requirements were documented well and that it was coded well. So you probably want to focus on User Acceptance Tests to measure quality. If you pass acceptance tests, then the requirement authoring and development must have been completed properly. But not all requirements are the same. Some requirements need a great deal of coding while others are much smaller. So you need to normalize by the number of hours needed to complete the work too.
Consider the following example with user stories (you can adjust for other methods):
Each user story has a series of Acceptance Tests that ensure it's completed properly. Let's say the three stories took 140 dev hours in total, and the Acceptance Tests for User Stories #1 and #3 pass but some of the tests for User Story #2 (40 dev hours of work) fail.
Quality Score = (Total Dev Hours - Failed Dev Hours) / (Total Dev Hours)
Quality Score = (140-40)/(140) = 71.4%
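If it helps to see the arithmetic, here's a minimal sketch in Python. The per-story hour split (60/40/40) is my own assumption, picked only so the totals match the 140 and 40 figures above:

```python
def quality_score(stories):
    """stories: list of (dev_hours, all_acceptance_tests_passed) pairs."""
    total_hours = sum(hours for hours, _ in stories)
    failed_hours = sum(hours for hours, passed in stories if not passed)
    return (total_hours - failed_hours) / total_hours

# Hypothetical dev-hour figures; only the 140 total and the 40 failed
# hours for Story #2 come from the example above.
stories = [
    (60, True),   # User Story #1: all Acceptance Tests pass
    (40, False),  # User Story #2: some Acceptance Tests fail
    (40, True),   # User Story #3: all Acceptance Tests pass
]

print(f"Quality Score = {quality_score(stories):.1%}")  # Quality Score = 71.4%
```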
There are probably other ways that you can measure quality. Having a few different quality scores can help ensure there's no way to "game" the system and that you are accurately measuring quality.
Thanks for that, it's useful stuff.
I like your suggestion for measuring quality. I'll speak with the QA manager and confirm exactly how their process works and whether that's something I can adopt, though I'd have to filter out time where failures were caused by something other than the specs, such as coding errors.
Productivity feels simpler to me, though I can see how the pitfalls you mention occur. My plan is to report the percentage of specs that are on track for delivery, the percentage that have been delivered, and the percentage that aren't on track. That should get around the problem of people breaking work down into multiple smaller specs in order to game the system.
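For what it's worth, a rough sketch of how that report might be tallied, assuming each spec carries one of three status values (the status names and sample data are illustrative only):

```python
from collections import Counter

# Hypothetical current status of each spec in the plan.
specs = ["delivered", "on_track", "on_track", "off_track", "delivered"]

counts = Counter(specs)
total = len(specs)
for status in ("delivered", "on_track", "off_track"):
    print(f"{status}: {counts[status] / total:.0%}")
```

Counting specs rather than their sub-parts is what keeps the "split it into smaller pieces" trick from inflating the numbers.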
Hmmm. I dislike the idea of applying formulae to something as subjective as BRDs. You're the manager; you should be able to judge quality without applying a formula. For instance, I think a good process diagram is one that the business can understand without much explanation. Not sure how you apply a formula to that.
Then, when you look at a BRD, check basic things like: the requirements are atomic, the BRD covers everything it needs to cover, it uses simple language, it accurately reflects what it's trying to model, it's consistent, there are no duplicates, etc. Again, how do you apply a formula to that?
Kimbo