
Creating scorecards that align to strategies


Reports and scorecards should be linked to overall strategy and goals. Using a Balanced Scorecard approach is one way to manage this alignment.

Figure 4.16 illustrates how the overall goals and objectives can be used to derive the measurements and metrics required to support them. The arrows point both ways because the strategy, goals and objectives drive the identification of required KPIs and measurements, but it is also important to remember that measurements are inputs to KPIs, and the KPIs in turn support the goals in the Balanced Scorecard.

It is important to select the right measures and targets to be able to answer the ultimate question of whether the goals are being achieved and the overall strategy supported.

Figure 4.16 Deriving measurements and metrics from goals and objectives

The Balanced Scorecard is discussed in more detail in Chapter 5. A sample Balanced Scorecard is also provided in Chapter 5.
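To make the two-way linkage in Figure 4.16 concrete, here is a minimal sketch in Python; the goal, KPI and measurement names are illustrative, not taken from the text. It models goals that drive KPIs top-down, while measurements feed those KPIs bottom-up.

```python
# A minimal sketch of the goal -> KPI -> measurement linkage shown in
# Figure 4.16. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Measurement:
    name: str                # e.g. 'outage minutes per service'
    value: float

@dataclass
class KPI:
    name: str                # e.g. '20% decrease in outages'
    measurements: list[Measurement] = field(default_factory=list)

    def describe(self) -> str:
        inputs = ", ".join(m.name for m in self.measurements)
        return f"KPI '{self.name}' is fed by: {inputs}"

@dataclass
class Goal:
    name: str                # e.g. 'improve service availability'
    kpis: list[KPI] = field(default_factory=list)

# Top-down: the goal drives which KPIs and measurements are needed.
availability = Goal("improve service availability")
outage_kpi = KPI("20% decrease in outages")
outage_kpi.measurements.append(Measurement("outage minutes per service", 0.0))
availability.kpis.append(outage_kpi)

# Bottom-up: measurements feed KPIs, which in turn support the goal.
for kpi in availability.kpis:
    print(kpi.describe(), "-> supports goal:", availability.name)
```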

Creating reports

When creating reports it is important to know their purpose and the details that are required. Reports can present information for a single month, or compare the current month with previous months to show a trend over a certain time period. Reports can show whether service levels are being met or breached.

Before starting the design of any report it is also important to understand its audience, its purpose and the preferred format.

One of the first items to consider is the target audience. Most senior managers do not want a 50-page report; they prefer a short summary with access to supporting detail if they are interested. Table 4.8 provides an overview format that will fit the needs of most senior managers. This report should be no longer than two pages, and ideally a single page if that is achievable without sacrificing readability.

Report for the month of:
Monthly overview: This is a summary of the service measurement for the month and discusses any trends over the past few months. This section can also provide input into...
Results: This section outlines the key results for the month.
What led to the results: Are there any issues or activities that contributed to the results for this month?
Actions to take: What action have you taken, or would like to take, to correct any undesirable results? Major deficiencies may require CSI involvement and the creation of a service improvement plan.
Predicting the future: Define what you think the future results will be.

Table 4.8 An example of a summary report format
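One way to produce this one-page format automatically is to render each section from structured input. Below is a minimal Python sketch; the section content is purely illustrative.

```python
# A minimal sketch that renders the summary report format of Table 4.8
# as plain text. Section content is illustrative only.
SECTIONS = [
    ("Monthly overview", "overview"),
    ("Results", "results"),
    ("What led to the results", "causes"),
    ("Actions to take", "actions"),
    ("Predicting the future", "forecast"),
]

def render_summary(month: str, content: dict[str, str]) -> str:
    lines = [f"Report for the month of: {month}", ""]
    for heading, key in SECTIONS:
        lines.append(heading)
        lines.append(f"  {content.get(key, 'n/a')}")
        lines.append("")
    return "\n".join(lines)

print(render_summary("June", {
    "overview": "Outage minutes continued the downward trend of the last quarter.",
    "results": "18% year-to-date reduction against a 20% objective.",
    "causes": "Two failed changes contributed most of this month's outage.",
    "actions": "Raise a service improvement plan for Change Management.",
    "forecast": "Expect the 20% objective to be met next quarter.",
}))
```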

It is also important to know what report format the audience prefers. Some people like text reports, some like charts and graphs with lots of colour, and some like a combination. Be careful about the type of charts and graphs that are used. They must be understandable and not open to different interpretations.

Many reporting tools today produce canned reports, but these may not meet everyone's business requirements for reporting purposes. It is wise to ensure that a selected reporting tool is flexible enough to create different reports, and that each report is linked to or supports the goals and objectives, has a clearly defined purpose and an identified target audience.

Reports can be set up to show many different views of the data; the following examples illustrate some of them.

Figure 4.17 shows the outage minutes for a service. Through analysis of the results, however, a direct relationship was discovered between failed changes and the outage minutes. Seeing this information together convinced the organization that it really needed to improve its Change Management process.

Figure 4.17 Reported outage minutes for a service
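The analysis described above amounts to checking whether two monthly series move together. Here is a minimal sketch, with made-up figures and a hand-rolled Pearson correlation (any statistics library would do equally well):

```python
# A minimal sketch of the analysis behind Figure 4.17: checking whether
# failed changes and outage minutes move together. Data is illustrative.
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

failed_changes = [2, 5, 1, 7, 3, 6]          # per month (illustrative)
outage_minutes = [40, 95, 25, 130, 60, 110]  # per month (illustrative)

r = pearson(failed_changes, outage_minutes)
print(f"Correlation between failed changes and outage minutes: {r:.2f}")
# A value close to 1.0 supports the case for improving Change Management.
```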

Table 4.9 is another example of a service measurement report. The report clearly states an objective and provides a year-to-date status, comparing this year's outage to last year's. It also addresses the actual customer impact. Depending on needs, this report format can be used for many reporting purposes, such as performance, Service Level Agreements, etc.

Actual outage minutes compared to goal
Objective: 20% decrease in outages
Status: 18% decrease year to date

Monthly report                   Month 1   Month 2   Month 3   Month 4   Month 5   Month 6
Previous year's outage minutes
This year's outage minutes
Running year-to-date reduction
Monthly indicator                Positive  Negative  Positive  Positive  Negative  Positive

Reduction in customer impact
Objective: % decrease in number of customers impacted
Status:
Next steps:

Table 4.9 Service report of outage minutes compared to goal
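The "running year-to-date reduction" and monthly indicator rows of Table 4.9 are straightforward to compute from raw outage minutes. The sketch below uses illustrative figures, since the table's data cells are blank.

```python
# A minimal sketch computing the running year-to-date reduction and the
# per-month indicator used in Table 4.9. All figures are illustrative.
previous_year = [120, 100, 110, 130, 90, 105]  # outage minutes per month
this_year     = [ 95, 105,  80, 100, 95,  85]

ytd_prev = ytd_this = 0
for month, (prev, cur) in enumerate(zip(previous_year, this_year), start=1):
    ytd_prev += prev
    ytd_this += cur
    reduction = (ytd_prev - ytd_this) / ytd_prev * 100
    indicator = "Positive" if cur < prev else "Negative"
    print(f"Month {month}: YTD reduction {reduction:.1f}% ({indicator})")

objective = 20.0
print("Objective met" if reduction >= objective else "Objective not yet met")
```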

Table 4.10 shows Incident Management data on the number of incident tickets by priority and the success in meeting the Service Level Agreement targets for service restoration.

                 Target                        Month 1             Month 2
                                               Tickets      %      Tickets      %
All incidents
  Within target                                7,540     97.15     6,339     95.12
  Missed target                                           2.85                4.88
  Grand total                                  7,761               6,664
Priority 1
  Within target  95% within 1 hour                       77.42               77.28
  Missed target                                          22.58               22.72
  Grand total
Priority 2
  Within target  90% within 4 hours                      78.40               92.73
  Missed target                                          21.60                7.27
  Grand total
Priority 3
  Within target  80% within 1 business day     2,532     89.66     2,176     88.92
  Missed target                                          10.34               11.08
  Grand total                                  1,064               1,081
Priority 4
  Within target  70% within 2 business days    4,683     98.09     4,301     98.44
  Missed target                                           1.91                1.56
  Grand total                                  7,761               6,664

Table 4.10 Percentage of incidents meeting target time for service restoration
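The percentages in Table 4.10 follow directly from ticket counts. The sketch below derives them per priority; the counts are illustrative, though a few were chosen so the output reproduces some of the Month 1 percentages shown above.

```python
# A minimal sketch deriving the percentages in Table 4.10 from raw
# ticket counts. Counts and totals are illustrative.
targets = {
    1: "95% within 1 hour",
    2: "90% within 4 hours",
    3: "80% within 1 business day",
    4: "70% within 2 business days",
}

# (priority, tickets within target, total tickets) for one month
counts = [(1, 24, 31), (2, 196, 250), (3, 2_532, 2_824), (4, 4_683, 4_774)]

grand_within = grand_total = 0
for priority, within, total in counts:
    pct = within / total * 100
    print(f"Priority {priority} ({targets[priority]}): "
          f"{pct:.2f}% within target, {100 - pct:.2f}% missed")
    grand_within += within
    grand_total += total

print(f"All incidents: {grand_within / grand_total * 100:.2f}% within target")
```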

Table 4.11 provides some sample KPIs for different processes. This is not an all-inclusive list but simply an example. Each organization will need to define which KPIs to report on.

Process | KPI/Description | Type | Progress indicator
Incident | Tickets resolved within target time | Value | Meets/exceeds target times
Incident | % of incidents closed on first call | Performance | Service Desk only; target is 80%
Incident | Abandon rate | | Service Desk with ACD; goal is 5% or less (after 24 seconds)
Incident | Count of incidents submitted by support group | Compliance | Consistency in number of incidents; investigation is warranted for (1) a rapid increase, which may indicate an infrastructure problem, and (2) a rapid decrease, which may indicate compliance issues
Problem | % of repeated problems over time | Quality | Problems that have been removed from the infrastructure and have re-occurred. Target: less than 1% over a 12-month rolling timeframe
Problem | % root cause with permanent fix | Quality | Calculated from problem ticket start date to permanent fix found. This may not include implementation of the permanent fix. Internal target: 90% of problems within 40 days. External target: 80% of problems within 30 days. (Internal = BMO internal; external = third party/vendor)
Problem | % and number of incidents raised to Problem Management | Compliance | Sorted by infrastructure (internal and external) and development (internal and external)
Change | % of RFCs successfully implemented without back-out or issues | Quality | Grouped by infrastructure/development
Change | % of RFCs that are emergencies | Performance | Sorted by infrastructure or development, and by emergency quick fix (service down) or business requirement
Config | Number of CI additions or updates | Compliance | Configuration item additions or updates broken down by group (CMDB/change modules)
Config | Number of records related to CI | Performance | Number of associations grouped by process
Release | % of releases using exceptions | Value | Exceptions are criteria deemed mandatory; identify by groups
Release | % of releases bypassing process | Compliance | Identify groups bypassing the release process
Capacity | Action required | Value | Number of services that require action vs. total number of systems
Capacity | Capacity-related problems | Quality | Number of problems caused by capacity issues, sorted by group

Table 4.11 Sample key performance indicators
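Keeping KPI definitions like those in Table 4.11 as structured data makes it easier to evaluate progress indicators consistently across processes. The sketch below encodes three rows from the table; the threshold logic and observed values are illustrative, and the abandon-rate type (blank in the table) is assumed here.

```python
# A minimal sketch of a KPI registry based on a few rows of Table 4.11.
# Threshold logic and observed values are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class KPIDefinition:
    process: str
    description: str
    kpi_type: str
    meets_target: Callable[[float], bool]

registry = [
    KPIDefinition("Incident", "% of incidents closed on first call",
                  "Performance", lambda v: v >= 80.0),
    # Type is blank in Table 4.11; 'Performance' is assumed here.
    KPIDefinition("Incident", "Abandon rate (after 24 seconds)",
                  "Performance", lambda v: v <= 5.0),
    KPIDefinition("Problem", "% of repeated problems over time",
                  "Quality", lambda v: v < 1.0),
]

observed = [83.5, 6.2, 0.4]  # illustrative monthly values, same order
for kpi, value in zip(registry, observed):
    status = "on target" if kpi.meets_target(value) else "off target"
    print(f"{kpi.process}: {kpi.description} = {value} ({status})")
```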

