
Interpreting metrics




When beginning to interpret results, it is important to know the data elements that make up the results, the purpose of producing them, and their expected normal ranges.

Simply looking at some results and declaring a trend is dangerous. Figure 4.14 shows a trend of the Service Desk opening fewer incident tickets over the last few months. One could believe that this is because there are fewer incidents, or perhaps it is because customers are not happy with the service being provided, so they go elsewhere for their support needs. Perhaps the organization has implemented a self-help knowledge base and some customers now use this service instead of contacting the Service Desk. Some investigation is required to understand what is driving these metrics.

Figure 4.14 Number of incident tickets opened by Service Desk over time
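A declining trend like the one in Figure 4.14 can be quantified before it is interpreted, for example with a least-squares slope over the monthly counts. The figures below are hypothetical, purely to illustrate the calculation:

```python
# Hypothetical monthly counts of incident tickets opened by the Service Desk.
monthly_tickets = [412, 398, 371, 350, 322, 301]

def trend_slope(values):
    """Least-squares slope: average change per period (tickets/month here)."""
    n = len(values)
    mean_x = (n - 1) / 2                      # mean of the indices 0..n-1
    mean_y = sum(values) / n
    cov = sum((i - mean_x) * (v - mean_y) for i, v in enumerate(values))
    var = sum((i - mean_x) ** 2 for i in range(n))
    return cov / var

print(f"Trend: {trend_slope(monthly_tickets):+.1f} tickets/month")
```

A slope of roughly -23 tickets per month confirms the downward trend, but says nothing about its cause; the investigation described above is still needed.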

One of the keys to proper interpretation is to understand whether there have been any changes to the service or if there were any issues that could have created the current results.

The chart can be interpreted in many ways, so it would not be wise to share it without some discussion of the meaning of the results.

Figure 4.15 is another example of a Service Desk measurement. Alongside the same incident ticket counts, it now also shows the results for first-contact resolution. The figure shows that not only are fewer incident tickets being opened, but the ability to restore service on first contact is also going down. Before jumping to conclusions, some questions need to be asked.

Figure 4.15 Comparison between incident tickets opened and resolved on first contact by the Service Desk

In this particular case, the organization had implemented Problem Management. As the process matured, incident trend analysis allowed Problem Management to identify a couple of recurring incidents that generated a great deal of incident activity each month. Through root cause analysis and a request for change, a permanent fix was implemented, eliminating the recurring incidents. Further analysis found that these few recurring incidents had been resolvable on first contact, so removing them also removed an easy source of first-contact resolutions. During the same period the Service Desk also had some new hires.
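The arithmetic behind this effect is worth making explicit. A sketch with hypothetical numbers (not taken from the figures) shows how removing easily resolved recurring incidents lowers the first-contact resolution rate even though the service has improved:

```python
# Hypothetical monthly figures: of 1,000 tickets, 700 were resolved on first
# contact, and 150 of those were the recurring incidents later eliminated.
total_tickets = 1000
first_contact = 700
recurring = 150  # recurring incidents, all resolved on first contact

fcr_before = first_contact / total_tickets
# After Problem Management eliminates the recurring incidents entirely:
fcr_after = (first_contact - recurring) / (total_tickets - recurring)

print(f"FCR before: {fcr_before:.1%}")  # 70.0%
print(f"FCR after:  {fcr_after:.1%}")   # 64.7%
```

Fewer tickets and a lower first-contact resolution rate can therefore both be signs of success, which is exactly why the raw numbers need interpretation.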

Table 4.7 provides a current-month view and a year-to-date (YTD) view of response times for three services. For each service the table gives a transaction count, the minimum, maximum and average response times in seconds for the month, and the YTD average. To judge whether these numbers are good, it is important to define the target for each service as well as the target for meeting the Service Level Agreement.

Service measurement response time (all times in seconds; SLA goal: 99.5% within target)

Service (target)             Count       Min     Max     Monthly avg   YTD avg   Monthly % in SLA   YTD % in SLA
Service 1 (1.5 seconds)      1,003,919   1.20    66.25   3.43          1.53      99.54%             98.76%
Service 2 (1.25 seconds)     815,339     0.85    21.23   1.03          1.07      98.44%             99.23%
Service 3 (2.5 seconds)      899,400     1.13    40.21   2.12          2.75      96.50%             94.67%

Table 4.7 Response times for three services
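The per-service statistics in Table 4.7 can be derived from raw per-transaction response times. The sketch below uses a small, invented sample with a 1.5-second target:

```python
# Hypothetical sample of response times in seconds for one service.
response_times = [1.20, 1.31, 1.45, 1.52, 1.38, 66.25, 1.41, 1.29, 1.47, 1.33]
target = 1.5  # per-transaction response-time target in seconds

count = len(response_times)
stats = {
    "count": count,
    "min": min(response_times),
    "max": max(response_times),
    "avg": sum(response_times) / count,
    "pct_within_sla": 100 * sum(t <= target for t in response_times) / count,
}
print(stats)
```

Note how the single 66.25-second outlier pulls the average up to 7.86 seconds even though most transactions are within target; reporting min, max, average and SLA attainment together, as Table 4.7 does, exposes such effects.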

When looking at the results for the three services, it may appear that Service 2 is the best, perhaps because it handles fewer transactions per month than the other two services. Concluding that Service 2 is the best from the numbers alone is dangerous. Investigation will show that Service 2 is a global service accessed 24 hours a day, seven days a week, whereas the other two services have peak utilization between 8 a.m. and 7 p.m. Eastern Time. This is not an excuse: the services are missing their targets, so further investigation is needed at the system and component levels to identify what is driving the current response times. It may be that usage has grown beyond what was planned for, and some fine-tuning of components can improve response time.

4.3.10 Using measurement and metrics

Metrics can be used for multiple purposes.

Service measurements and metrics should be used to drive decisions. Depending on what is being measured, the decision could be strategic, tactical or operational. This is certainly the case for CSI: there are many improvement opportunities, but often only a limited budget to address them, so decisions must be made. Which improvement opportunities will support the business strategy and goals, and which ones will support the IT goals and objectives? What are the Return on Investment and Value on Investment opportunities? These two items are discussed in more detail in section 4.4.

Another key use of measurement and metrics is for comparison. Measures by themselves may tell the organization very little unless there is a standard or baseline against which to assess the data. Measuring only one particular characteristic of performance in isolation is meaningless unless it is compared with something else that is relevant. Several comparisons are useful, as described below.

Measures of quality allow trends and the rate of change to be tracked over a period of time. Examples include measuring trends against standards set up either internally or externally, which could include benchmarks, or measuring trends against standards and targets that are still to be established. The latter is often done when first setting up baselines.

A minor or short-term deviation from targets should not necessarily lead to an improvement initiative. It is important to set the criteria for deviations before an improvement programme is initiated.
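Such criteria can be made mechanical, so that a single blip does not trigger an improvement programme. Below is a sketch of one possible rule; the 5% tolerance and three-month window are invented thresholds, not values from the text:

```python
# Flag an improvement opportunity only when results miss the target by more
# than the tolerance for several consecutive periods, not on a single blip.
def needs_improvement(results, target, tolerance=0.05, consecutive=3):
    streak = 0
    for value in results:
        if value < target * (1 - tolerance):
            streak += 1
            if streak >= consecutive:
                return True
        else:
            streak = 0
    return False

monthly_sla_attainment = [0.996, 0.990, 0.941, 0.938, 0.935, 0.991]
print(needs_improvement(monthly_sla_attainment, target=0.995))  # True
```

The single miss in month two is ignored here only if it recovers; three consecutive misses beyond the tolerance raise the flag.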

Comparing and analysing trends against service level targets or an actual Service Level Agreement is important, as it allows early identification of fluctuations in service delivery or quality. This matters not only for internal service providers but also when services have been outsourced. It is important to identify any deviations and discuss them with the external service provider in order to avoid supplier relationship problems. Speed and efficiency of communication when targets are missed is essential to maintaining a strong relationship.

Measurements and metrics can also help identify external factors that exist outside the control of the internal or external service provider. The real world needs to be taken into consideration: external factors could include anything from language barriers to governmental decisions.

Individual metrics and measures by themselves may tell an organization very little from a strategic or tactical point of view. Some types of metrics and measures are more activity-based than volume-based, but are valuable from an operational perspective; examples include measures of individual technology domains such as a server farm, an application or the network.

Each of these measures on its own provides information that is important to IT staff, including the technical managers responsible for Availability and Capacity Management and those responsible for a technology domain such as a server farm, an application or the network. It is the examination and use of all the measurements and metrics together, however, that delivers the real value. Someone should own the responsibility not only for looking at these measurements as a whole, but also for analysing trends and interpreting what the metrics and measures mean.

4.3.11 Creating scorecards and reports

Service measurement information is used for three main purposes: to report on the service to interested parties; to compare against targets; and to identify improvement opportunities. Reports must be appropriate and useful for all those who use them.

There are typically three distinct audiences for reporting purposes.

Many organizations make the mistake of creating and distributing the same report to everyone; a single generic report does not provide value to every audience.



