Many organizations engage third-party suppliers and transform their service delivery models to help achieve their strategic priorities. Outsourcing these functions comes with many benefits, including improved service delivery capabilities, the release of critical resources to focus on high-value activities, reduced overall operational costs, and access to new technologies or processes that deliver enhanced flexibility and scalability.
To realize the full value of these relationships, a well-structured IT outsourcing service level agreement needs to be created with the supplier, one that aligns the interests of both parties with their desired outcomes. Much time and effort is spent defining contract terms that detail the scope of services, the responsibilities of each party, how charges for these services are calculated, and commercial matters like allocation of risk and legal recourse in case of breach of contract.
Another important aspect of these agreements is how the quality of the services is measured—and what the consequences are when the supplier fails to perform as promised. While matters like scope, HR impacts, and the business case are established as part of an up-front sourcing strategy, performance targets are typically left undefined until well into an RFP process. In fact, suppliers are often asked to propose their own performance commitments as part of their bid submission! Managing service quality this way is sub-optimal: either the services will fail to meet the true needs of downstream IT or business functions, or the client will pay more for an over-engineered solution with an unnecessarily high level of performance.
To help enterprises be more proactive and strategic about balancing quality with the cost and scope of services, Wavestone is launching a new article series that explores service levels and key performance indicators (KPIs), with tips and best practices you can use to contract and manage this key aspect of a managed services relationship.
Wavestone is presenting this discussion of service levels in a series of four articles that provide IT and procurement leaders with a comprehensive guide to structuring and executing an effective performance management regime. Those four topics are:
- Ultimate Guide to Service Levels in Outsourcing Agreements: This is the article you are reading now, which gives you an overview of service levels, their purpose and key characteristics, and the differences between critical service levels and KPIs.
- Creating Service Levels that Deliver Business Value: Find out how you can define a service level regime that provides value to business stakeholders and end users, while managing risk and avoiding the erosion of that value over time.
- Key Concepts in Creating a Service Level Methodology: This is a guide to creating service levels that effectively manage supplier performance, enabling you to get the service quality you expect along with the means and incentives to achieve the supplier's compliance.
- Consequences of Failed Service Level Performance: Stuff happens! Here, we describe best practices for what your supplier must do when it fails to deliver the promised level of quality, and how to avoid recurrence of the issue.
Article #1 – The Ultimate Guide to Service Levels in Outsourcing Agreements
The Purpose of an SLA Agreement:
Service levels have historically been used as a mechanism to measure performance, provide the data organizations need to identify existing or potential issues, and enable decisions about any appropriate action to be taken (proactive or corrective). Service levels can be used to measure the performance of internal service delivery groups or third-party suppliers.
It is a common error to view service levels as having the single purpose of measuring and punishing poor performance. You, as a client, need good service—period. Therefore, your SLA agreement must also focus on offering data that supports continuous improvement and serves as a mechanism to demonstrate the commitment made between both parties to share risk.
The Characteristics of an SLA Agreement:
During the development of a sourcing strategy and an RFP to create an IT outsourcing service level agreement for one or more third-party suppliers, organizations need to dedicate enough time to define the right performance expectations across the future state ecosystem of providers (internal and external), as well as the methodology used to collect raw data, generate information, monitor, report, and act.
For the IT outsourcing service level agreement to be useful, it should have the following characteristics:
- The SLA agreement needs to be relevant. The design of an SLA agreement must be associated with the risk and/or business impact caused by the lack of performance and the value delivered. Engaging the service recipient(s) in developing these performance metrics ensures they understand the consequences of a missed service level, thus enabling them to make decisions and react as appropriate.
- The SLA agreement needs to align to the target audience. An SLA agreement needs to consider the audience that will track and make decisions based on the results of the performance reported with accurate and consistent data. A common mistake is to create service level reports that provide only a partial view of the services being delivered to end users, potentially creating confusion and conflicts. Organizations should develop comprehensive performance targets directly impacted by the service provider and other quality metrics impacting the end users’ experience with technology services. This may require two different sets of reports:
- Service level reports that measure the performance of a service provider
- Service level reports that measure the holistic service delivered to end users from an end user perspective
- The SLA agreement needs to effectively balance risk. An IT outsourcing service level agreement should effectively reflect the responsibilities and risks assumed by each party. For example, end-to-end application performance is engineered by the client out of many technology components and services, whereas server and storage availability might be the responsibility of a single service provider. If poorly designed, a service level can be too broad, co-mingling the responsibilities of many parties. In this example, trying to hold a data center services supplier accountable for end-to-end application performance can inadvertently dilute the supplier's risk, allowing it to finger-point, which leaves the client with limited recourse in the case of a major service issue.
- The SLA agreement needs to be objective. Service levels should not be subject to interpretation or be based on subjective perception. They should be measured and reported based on factual data. Organizations need to be clear and agree on the source of the data and the formula used to calculate the metrics. In some situations, service level tracking may involve correlating data from multiple sources and reports.
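To make the objectivity principle concrete, here is a minimal sketch of an availability calculation. The outage records, measurement window, and target are all invented for illustration; the point is that both parties agree in advance on the data source and the formula, so the reported number is factual rather than interpretive.

```python
from datetime import datetime

# Hypothetical outage records exported from a monitoring tool: (start, end) pairs.
outages = [
    (datetime(2024, 3, 4, 2, 0), datetime(2024, 3, 4, 2, 45)),
    (datetime(2024, 3, 18, 14, 10), datetime(2024, 3, 18, 14, 40)),
]

# Agreed measurement window: the full calendar month of March 2024.
window_start = datetime(2024, 3, 1)
window_end = datetime(2024, 4, 1)
total_minutes = (window_end - window_start).total_seconds() / 60

# Downtime is the sum of outage durations within the window.
downtime_minutes = sum((end - start).total_seconds() / 60 for start, end in outages)

# Agreed formula: availability = (total time - downtime) / total time.
availability = 100 * (total_minutes - downtime_minutes) / total_minutes
target = 99.9  # minimum acceptable performance, per the (hypothetical) contract

print(f"Availability: {availability:.3f}% (target {target}%)")
print("Service level met" if availability >= target else "Service level missed")
```

Because the source data (the outage log) and the calculation are fixed up front, a missed month cannot be argued away; disputes, if any, shift to the accuracy of the raw data rather than the meaning of the metric.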
- The SLA agreement needs to be realistic. Service levels must be aligned to the scope of responsibilities of the responsible party and the installed/contracted capabilities available for the delivery of services. Commitments should be made about the minimum acceptable performance, not aspirational targets. IT departments should design service levels to align with end user expectations and publish service commitments (through a service catalogue) to manage those expectations.
- The SLA agreement needs to be cost effective. If tracking a service level does not add value to someone in IT or the business, then it is ineffective and a source of value leakage. For service levels that are deemed worthwhile, the value of the information provided must be higher than the cost to measure, report and track those service levels. The investment required to support near-real-time reporting of complex service levels can be significant, therefore the effort required to track and report such service levels needs to be quantified by the provider so the client can determine whether “the view is worth the climb.” For example, perhaps a monthly report is all that is needed—not a slick-looking, real-time dashboard.
- The SLA agreement needs to trigger action. The information provided by an SLA agreement should trigger an action, otherwise it is useless. Actions taken can be preventive (to correct a negative trend), corrective (to fix a performance issue), or proactive (to define performance thresholds that could potentially increase risk if breached).
Critical Service Level vs. Key Performance Indicators:
Organizations can establish different types of performance metrics, which can be classified into two categories: Critical Service Levels (CSLs) and Key Performance Indicators (KPIs). The main difference is that CSLs typically carry direct consequences, which can include contractual remedies such as service level credits, while KPIs do not. There are several elements organizations should consider when determining the proper classification of a service level:
| | Critical Service Levels | Key Performance Indicators |
| --- | --- | --- |
| Relevance | Direct business impact | Potential business impact (if no action taken) |
| Audience | Client performance management teams; business users | Operational teams; client performance management teams |
| Risk | Effectively balances client/supplier risks; risk is shared between the parties | Limited or no risk shared by the parties |
| Objectivity | Fact based; clear understanding of data sources and calculations | Subjective elements may be included (if so, it cannot be promoted as a service level) |
| Realistic | Aligned with installed capabilities and with service scope | Can be aspirational |
| Cost effective | Cost to monitor, report, and manage should be less than the potential business impact; cost to comply should be less than the cost associated with consequences | Limited or no cost to monitor, report, and manage |
| Trigger actions | Failure to meet a service level target or the development of a negative trend requires a Root Cause Analysis (RCA) and related action plan | Failure to meet a service level target or the development of a negative trend requires an action plan |
| Consequences | Service level credits are applied if the supplier fails to meet the service level target; alternative consequences are triggered when the supplier misses multiple service level targets or has recurrent failures on a single service level | No service level credits or alternative consequences apply |
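As an illustration of the credit mechanics referenced above, the following sketch assumes one common contractual pattern: a percentage of monthly charges is placed "at risk" and allocated across CSLs by weight, and a missed CSL forfeits its share. Every number, percentage, and CSL name here is hypothetical, not a recommendation.

```python
# Hypothetical service level credit calculation. The at-risk pool and per-CSL
# allocation percentages are invented examples of a common contract structure.
monthly_charges = 500_000.00   # supplier's monthly invoice
at_risk_percentage = 0.10      # portion of charges at risk for credits

allocation = {                 # how the at-risk pool is spread across CSLs
    "incident_resolution": 0.40,
    "server_availability": 0.35,
    "backup_success": 0.25,
}

missed_csls = ["server_availability"]  # CSLs missed this month

pool = monthly_charges * at_risk_percentage
credits = {csl: pool * allocation[csl] for csl in missed_csls}

for csl, credit in credits.items():
    print(f"Credit for missed '{csl}': ${credit:,.2f}")
print(f"Total service level credits: ${sum(credits.values()):,.2f}")
```

Capping total credits at the at-risk pool (rather than at full monthly charges) is what makes the risk "shared": the supplier has real financial exposure, but not unlimited exposure for services it only partially controls.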
To support changes in business needs or address any initial service level classification issues, organizations should have contractual language that allows for the reclassification of CSLs and KPIs after the contract is signed. Due to the complexity of defining service levels, reclassified KPIs may need to be restructured to make sure they meet CSL criteria before they can be promoted.
In this article, we have provided a high-level overview of the importance of service levels with third-party providers to measure performance, allocate shared risk, and serve as a continuous improvement tool. We also identified the key characteristics of service levels and provided general guidance on how to differentiate CSLs that have high impact or deliver incremental value to the business from KPIs, which focus more on the delivery of data to support continuous improvement efforts.
In the next article, we will discuss how to build value-driven service levels by establishing realistic service level targets that are aligned with business needs, target audiences, service scope and installed capabilities, leveraging industry market standards. We will also provide information about how to prevent value erosion over time in service level agreements with third-party suppliers.