Contracting for end user experience: 10 issues to consider in service level agreements
Service level agreements or “SLAs” can be the forgotten sibling of ICT procurement. While the primary contract (such as the services agreement) is fussed over by the lawyers, and the requirements document by analysts and procurement, the SLA can at times be an afterthought.
This can be due in part to the technical nature of service levels, and the various metrics used to measure service level performance – for example, availability, capacity, Quality of Service or “QoS”, response time, latency, loss and jitter.
The issue is amplified in cloud services, where customers are often presented with generic, but seemingly impressive, service levels. While businesses and government departments look to the cloud as a means of achieving cost savings or efficiency gains, the ultimate experience of the end users also needs to be front of mind. The service level agreement is one of the primary legal tools available to secure a “good” experience from the end user perspective.
While such generic service levels might appear impressive, will they actually result in happy users? We would argue that they will not. Availability is a small component of user experience – if that is the only service level present in a cloud service agreement, then the service provider is only incentivised to ensure that the application is available, and issues like responsiveness and latency could be contractually ignored.
For instance, your cloud service or application might be technically “available” (in that you can ping its servers), but it might be operating in a very slow or unresponsive manner – in which case, an availability-only service level will provide no legal remedy.
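The gap between “available” and “usable” can be illustrated with a short sketch. The probe data and the 95th-percentile calculation below are purely hypothetical assumptions for illustration, not figures from any real SLA:

```python
# Illustrative sketch: a service can be 100% "available" (every probe
# answered) while still being unusably slow. Sample data is hypothetical.
import math

# (responded, response_time_seconds) for 20 synthetic monitoring probes
probes = [(True, 0.2)] * 10 + [(True, 8.0)] * 10

# Availability: the fraction of probes that received any response at all
availability = sum(1 for ok, _ in probes if ok) / len(probes)

# Nearest-rank 95th percentile of response times for the same probes
times = sorted(t for ok, t in probes if ok)
p95 = times[math.ceil(0.95 * len(times)) - 1]

print(f"availability: {availability:.1%}")   # 100.0%
print(f"p95 response time: {p95:.1f}s")      # "available" but very slow
```

An availability-only service level would report this service as fully compliant, even though half of all requests take eight seconds to complete – which is why responsiveness metrics matter in their own right.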
Ultimately, the service levels need to reflect something that, if met, will result in a positive user experience. End users are the people most impacted by poor system performance – so their requirements should be front of mind when negotiating a service level agreement.
A “good” user experience is subjective, and can depend upon many factors; these factors will differ from organisation to organisation.
The other common practical problem in enforcing service level agreements is blame shifting. This arises because cloud services are ultimately outsourced systems with many interdependencies. Some components of the system might be within the control of the customer, others within the control of the service provider, and others within the control of a third party (such as a network provider).
Because of this, it is imperative that the service level agreement expressly deals with these issues and contractually allocates responsibility between those factors which are:
- within the control of the customer;
- within the control of the service provider; and
- external (i.e. within the control of neither the service provider nor the customer).
It is the external factors which are most critical, as they will be most commonly cited as the cause of a service level breach. The service level agreement should proactively address these issues and allocate responsibility so as to create certainty and minimise blame shifting during the post-negotiation stages of the SLA lifecycle.
10 key points to address in service level agreements
We set out below 10 key factors to address when negotiating service level agreements. We emphasise that negotiation is just the first step of the SLA lifecycle; but if these issues are properly addressed up front, ongoing management should be a simpler process.
1. Identify:
a. those factors within the control of the customer;
b. those factors within the control of the service provider; and
c. the external factors (i.e. those factors which are within the control of neither the service provider nor the customer).
2. Expressly allocate contractual responsibility for those factors in the service level agreement. Pay particular attention to documenting responsibility for the external factors.
3. Align the service levels to the specific needs of the business and the users. Do not simply rely on generic service levels.
4. Understand a baseline of your user expectations for performance. Measure that baseline.
5. Document that baseline into the service level agreement.
6. Ensure that the service levels and the data used to measure service levels are either:
a. inherently objective; or
b. based on agreed measurements which are objective.
(Some core metrics include availability, capacity, QoS, response time, latency, loss, jitter, etc.)
7. Carefully consider who is responsible for capturing service level data, providing reports & detecting non-compliance:
a. service provider (note that the service provider has a vested interest in ensuring that the data and reports show that the service levels have been satisfied);
b. customer (note that the customer has a vested interest in showing that service levels have not been satisfied and, on that basis, should be paid a service credit); or
c. a third party or a software-based monitoring tool?
8. Contractually agree on:
a. how service levels are measured;
b. contracting for a “solution”, not individual moving parts (consider system inter-dependencies);
c. how subjective service levels can be objectively measured; and
d. the tool(s) used to measure and report on service level performance, so as to avoid a “battle of the reports”.
9. Take legal advice to ensure that the service credit regimes are not found to be an unenforceable “penalty”.
10. Ensure an ongoing commitment to SLA lifecycle (negotiate, monitor, detect & enforce). Do not just “negotiate and forget”.
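As a worked illustration of point 6 (objective, agreed measurement), even a headline availability percentage only becomes meaningful once it is translated into the downtime it actually permits. The sketch below is simple arithmetic and assumes, purely for illustration, a 30-day measurement month:

```python
# Illustrative: the monthly downtime each common availability target
# permits. Assumes a 30-day month; the figures are arithmetic only.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

for target in (99.0, 99.9, 99.99):
    allowed = MINUTES_PER_MONTH * (1 - target / 100)
    print(f"{target}% availability allows ~{allowed:.1f} minutes "
          f"of downtime per month")
```

A 99% target, for example, still permits roughly seven hours of outage in a month – a figure that may surprise end users, and one reason the measurement window and calculation method should be expressly agreed in the SLA rather than left to generic provider definitions.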
For more information or discussion, please contact our Intellectual Property and Technology team.