At the recent ITSMF conference in Perth (Lead IT 2011), there were a few presentations on balanced scorecards and reporting. During one of the presentations, a question came from the audience: “All the metrics you are discussing are operational metrics. Have you ever considered metrics across the life cycle?” I thought it was a very valid question. Unfortunately, the speaker could not provide a convincing reply.
Later, I was discussing this with a vendor who specializes in business analytics and reporting solutions. I asked him whether his solution considers life cycle metrics. He said his job is to collect and display data from different sources; it is up to the organization to choose the metrics they want to see. Fair enough.
In this article I propose an approach to designing a Service Management life cycle scorecard.
The objective is to design a performance scorecard across the IT Service Management life cycle phases: Strategy, Design, Transition and Operations.
The Lifecycle scorecard is classified into two broad categories:
- Life Cycle Score Card – Gate Keeper Metrics
- Life Cycle Score Card – Performance Indicators
The gate keeper metrics answer the question: are we performing the process activities in each of the life cycle phases?
The performance metrics answer the question: how well is each of our life cycle phases performing?
Life Cycle Gate Keeper Metrics
These are basic metrics that indicate whether we are performing the key activities required in each of the life cycle phases. A gate keeper activity could be a review, a quality audit, a governance board meeting and so on. The gate keeper metrics monitor that these activities are carried out as planned.
| Gate Keeper Metrics |
| --- |
| Are we performing strategic planning activities? |
| Are we performing service design activities? |
| Are we performing service transition activities? |
| Are we performing operations activities? |
Examples of gate keeper metrics:
- Capital budget planning and review meetings – planned versus actual
- Opex planning and review meetings – planned versus actual
- Strategic project governance board meetings – planned versus actual
- SLA review meetings – planned versus actual
- Pre-release reviews – number of releases versus number of release reviews
- Post-mortem reviews – number of releases versus number of post-mortem reviews
- Training sessions per release
- Major incident reviews – number of major incidents versus number of major incident review meetings
- Problem management meetings – planned versus actual
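As a simple illustration, the planned-versus-actual pattern behind each gate keeper metric can be reduced to a compliance percentage. The sketch below is illustrative only: the activity names and counts are invented, not taken from any real scorecard.

```python
# Minimal sketch of a gate-keeper compliance tally.
# Activity names and planned/actual counts are illustrative only.

def gatekeeper_compliance(planned: int, actual: int) -> float:
    """Percentage of planned gate-keeper activities actually held."""
    if planned == 0:
        return 100.0  # nothing was due, so nothing was missed
    return round(100.0 * actual / planned, 1)

activities = {
    "Capital budget review meetings": (4, 4),
    "SLA review meetings": (12, 10),
    "Post-mortem reviews": (6, 3),
}

for name, (planned, actual) in activities.items():
    print(f"{name}: {gatekeeper_compliance(planned, actual)}% compliant")
```

A real scorecard would pull the planned and actual counts from meeting calendars or audit records rather than a hard-coded dictionary.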
The gate keeper scorecard will not show the performance of each phase. On its own, the gate keeper scorecard does not add much value; it needs to be used in conjunction with the performance indicator scorecard.
If your organization is familiar with ISO audits, monitoring the gate keeper metrics will be one of the responsibilities of the auditor, who checks and “ticks” the compliance of each activity. Unfortunately, an ISO audit can be reduced to “just a tick in the box”, because it does not evaluate the performance of the activities. To avoid this problem, we need a complementary performance indicator scorecard.
Life Cycle Performance Metrics
In the performance indicator view we will present the actual performance of each of the life cycle phases.
| Performance Indicators |
| --- |
| How is our strategic planning performing? |
| How is our design phase performing? |
| How is our transition performing? |
| How is our operation performing? |
Examples of performance metrics:
- CAPEX budget performance – planned, YTD, forecast
- OPEX budget performance – planned, YTD, forecast
- Demand for next year, next 3 years
- Performance of strategic projects
- Portfolio view of services
- Service Catalogue – services offered/delivered
- SLA performance – promised versus delivered
- Customer Satisfaction
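To make the first two indicators concrete, a budget performance row usually compares plan, year-to-date actuals, and the full-year forecast. The sketch below computes a forecast variance percentage; the CAPEX/OPEX figures are invented for illustration.

```python
# Minimal sketch of a budget performance row. All figures are illustrative.

def budget_variance_pct(planned: float, forecast: float) -> float:
    """Forecast overrun (positive) or underrun (negative) as a % of plan."""
    return round(100.0 * (forecast - planned) / planned, 1)

budgets = {
    # item: (planned, YTD actual, full-year forecast)
    "CAPEX": (2_000_000, 1_100_000, 2_150_000),
    "OPEX": (5_000_000, 2_400_000, 4_900_000),
}

for item, (planned, ytd, forecast) in budgets.items():
    print(f"{item}: planned {planned}, YTD {ytd}, "
          f"forecast variance {budget_variance_pct(planned, forecast):+.1f}%")
```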
Transition
This phase deals with handing over a developed system to operations. The key performance indicator is the number of post-release defects. However, conventional wisdom states that there will always be some defects. So how do we know whether the development team has handed over a quality product to operations?
The answer to this question lies in a metric called Defect Containment Effectiveness (DCE). (http://www.isixsigma.com/index.php?option=com_k2&view=item&id=1396:six-sigma-software-metrics-part-1&Itemid=&tmpl=component&print=1)
DCE is an indicator of the effectiveness of an in-phase quality activity at capturing defects before they escape to the next phase. In a Service Management context, we measure whether the testing activity captures bugs before they escape to production.
DCE = Number of errors found in testing / (Number of errors found in testing + Number of post-release defects)
The ideal DCE value is 100%, which indicates zero post-release defects. Let us take two situations and compare their effectiveness.
Situation A: Testing team found 10 errors, and operations reported 3 post release defects
Situation B: Testing team found 10 errors, and operations reported 10 post release defects
In situation A, the DCE value is 10/ (10+3) = 77%
In situation B, the DCE value is: 10/ (10+10) = 50%
It is clear that in Situation B the testing was less effective. While there are no magic numbers, when DCE is tracked over a few months, management will get an indication of the effectiveness of the transition.
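The DCE formula and the two situations above can be sketched in a few lines. This is a minimal illustration of the calculation, not a production metric pipeline; the zero-defect guard is my own assumption for the degenerate case.

```python
# Defect Containment Effectiveness (DCE) for the two situations above.

def dce(test_defects: int, post_release_defects: int) -> float:
    """Share of total defects caught in testing, as a percentage."""
    total = test_defects + post_release_defects
    if total == 0:
        return 100.0  # no defects found anywhere: containment was perfect
    return round(100.0 * test_defects / total, 1)

print("Situation A:", dce(10, 3))   # 10 caught in test, 3 escaped
print("Situation B:", dce(10, 10))  # 10 caught in test, 10 escaped
```

Situation A yields roughly 77% and Situation B 50%, matching the worked figures above.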
Operations
This area is relatively mature in many organizations. This section of the scorecard presents incident volumes, trends, problems, service requests, availability and so on.
I have proposed an approach to creating a scorecard based on the service management life cycle. This scorecard can complement the organisational balanced scorecard. The advantages of this approach are:
- Clear distinction between gate-keeper activities and performance – it is easy to analyse whether we are overdoing or omitting quality reviews. More importantly, by keeping the same presentation view, we can correlate the performance of each phase against the gate keeper activities
- Provides a balanced view of all the service life cycle phases. It gives more clarity than the dual partition of strategy versus operations; the Design and Transition sections provide much-needed insight into these life cycle phases
- Proposes an elegant metric, Defect Containment Effectiveness, to measure transition effectiveness. It will encourage more rigour in testing and provide clarity in post-release defect analysis
- Will be very helpful to organizations that are aligned with, or preparing for, ISO/IEC 20000 certification
Let me know your views. Please feel free to contact me if you want to discuss further.