Why Service Level Management

Key to the success of any IT outsourcing engagement is Service Level Management, because all other processes, people, and projects are measured on how well services are performed, how well processes are adhered to and improved, and how they measure up against the contractual agreement.

To ensure that Customer tracks a meaningful set of service level parameters, and that the measured service levels are used for improvement and align with the overall goals of Customer's IT infrastructure environment and business operations, Supplier recommends an industry-standard Service Level Management process that will help streamline process performance.

The following diagram shows the industry-standard Service Level Management process that we propose to adopt.

Service Definition:

During the definition stage we will define and agree on the service levels, their current measurement process, their periodicity, how effectively they align with Customer's IT goals, and what the measurement criteria will be going forward. This will be documented as part of the plan.

Service Execution:

These service levels are mutually well understood, and the mechanisms of collection, their periodicity, and the reporting structure are agreed. These SLAs are measured and reported periodically. The reports are reviewed, and course corrections are applied if deemed necessary and mutually agreed.

Service Management:

The service levels are measured periodically, initially at a higher frequency. A dashboard will be established to ensure that these measurements and metrics are visible on a continuous basis to all relevant project stakeholders.

The performance of the SLA processes is reviewed, and suggestions are taken downstream for consideration and implementation.

Service Control:

In the review meetings, not only are the service levels measured, but also the performance of the entire SLM process, to ensure the overall quality of the engagement.

The SLM process is iterative and can be effectively implemented and managed. It also provides a built-in risk mitigation mechanism, thereby ensuring a high level of quality and value.

Reporting Plan

The table below is indicative of the status reporting that Supplier would follow as part of the support engagement.

Reporting Item

  • Weekly Project Status
  • Risk / Issues Status
  • Change Request Tracking
  • Deliverables Sign-off Report
  • Meeting Minutes/ Action Items
  • Monthly Status Report

Weekly reports showing progress against the baseline project plan will be provided to Customer. Each report will contain a summary of issues encountered and resolved, along with a detailed description of key issues.

These reports will contain, as a minimum:

  • Milestones completed in the current week
  • Milestones due next week
  • Major activities due to start
  • Key issues and the status of actions to resolve them
  • Exceptions to plan, including:
      • Cause of the exception
      • Remedial action proposed

Project progress will be monitored carefully against the baseline, and the causes of any slippage will be analyzed. We will endeavor to resolve issues at the level where they are found; all issues encountered will be logged and formally raised in the monthly review meetings, with responsibilities and timeframes for the various actions recorded.

Indicative SLA and key performance indicators

The following are indicative SLAs.

Severity and Impact Indicators, with Response Times

Severity 1 – Critical (high visibility)

  • Affects critical online business operations
  • Major loss of functionality
  • The problem cannot be bypassed
  • No viable or productive workaround available

Response time: to be resolved within a 2-hour window during the L1 support period

Severity 2 – Significant (moderate visibility)

  • Seriously slow response time
  • A component continues to fail intermittently, down for short periods but repetitively
  • The problem may have a possible bypass, but the bypass must be acceptable to the customer
  • Major access is down but a partial backup exists

Response time: to be resolved within an 8-hour window during the L2/L3 support period

Severity 3 – Minor (low to medium visibility)

  • Single client device affected
  • Minimal loss of functionality
  • The problem may be bypassed or redundancy is in place; the bypass must be acceptable to the customer
  • An automated workaround is in place and known; the workaround must be acceptable to the customer

Response time: to be resolved within 2-3 business days
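As an illustration, the severity-to-window mapping above can be encoded and checked programmatically. The sketch below is a minimal example under stated assumptions: the severity labels and windows come from the table, but the function and field names are hypothetical, and severity 3's "2-3 business days" is approximated as 3 calendar days.

```python
from datetime import timedelta

# Resolution windows per severity, taken from the indicative SLA table.
# Severity 3's "2-3 business days" is approximated as 3 calendar days here;
# a real implementation would use a business-day calendar.
SLA_WINDOWS = {
    1: timedelta(hours=2),   # Critical: 2-hour window (L1 support)
    2: timedelta(hours=8),   # Significant: 8-hour window (L2/L3 support)
    3: timedelta(days=3),    # Minor: 2-3 business days
}

def sla_met(severity, opened_at, resolved_at):
    """Return True if the incident was resolved within its severity's SLA window."""
    return (resolved_at - opened_at) <= SLA_WINDOWS[severity]
```

A check like this would typically run on each closed incident record to feed the adherence figures reported in the review meetings.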

The data will be tracked to measure performance against service level agreements (SLAs) and key performance indicators, for example:


Bug-Fixing / Production Support Key Measures

Measures and their units of measure:

  • Average turnaround time: number of days per work request
  • Adherence to turnaround time: % of work requests delivered within the turnaround time
  • Delivered defects: % of work requests delivered with defects
  • Response time: time taken for problem acknowledgement

SLAs will be mutually agreed with Customer before execution of the engagement.


Performance Metrics

Supplier will provide consistent reporting to align performance measurement with Customer’s business drivers. Through the use of performance management tools and methodologies, we will deliver effective reporting of key performance indicators and promote continuous improvement of the service delivery program.

We propose to use dashboard-based performance monitoring and reporting mechanisms that require minimal manual effort. The dashboards will display performance metrics at the summary and detailed levels, and will include drill downs, traffic lights, chart/graph visualizations, and other data views, where appropriate. This data will be analyzed to trigger, as needed, any corrective and/or preventive actions, which will in turn help improve the overall efficiency of support activities.
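A traffic-light indicator of the kind described above is typically just a threshold rule applied to each metric's attainment against its target. The sketch below is illustrative only; the function name, margin, and thresholds are assumptions, not contractual values.

```python
def traffic_light(actual, target, warn_margin=0.1):
    """Map a metric's attainment against its target to a dashboard colour.

    `actual` and `target` are attainment percentages (higher is better).
    Values within `warn_margin` (10% by default) below target show amber;
    anything further below shows red.
    """
    if actual >= target:
        return "green"
    if actual >= target * (1 - warn_margin):
        return "amber"
    return "red"
```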

The Support Team will deliver consistent, timely, and high-quality reports, and will be fully engaged in managing customer satisfaction and in facilitating and promoting continuous improvement.


These metrics will be generated on dashboards and linked to standard offshore processes for tracking production support metrics, including bug fixing.

The metrics will be calculated differently for different categories of work requests, based on the complexity and/or priority of the request. They include:

  • Productivity (throughput), which describes productivity in terms of the number of work requests delivered per unit of effort
  • Average turnaround time, which is the average time in calendar days expended per work request
  • Adherence to turnaround time, which is the number of times that production fixes were delivered within the customer-specified turnaround time
  • Delivered defects, which is the number of times that a defective fix was delivered.

The metrics will be analyzed each month by the project team to ensure that they are meeting expectations for the engagement. As the project progresses, the performance goals against these metrics will be revised, resulting in continuous performance improvement. Tracking a mix of offshore and on-site efforts and associated performance metrics will also be an integral part of the process to drive improvements to the overall service delivery model.

Customer Satisfaction Management

We strongly believe in the value of customer satisfaction and the significance of the customers’ voice in achieving operational excellence. Our customer satisfaction plan will incorporate the input and involvement of the client team, including communication with Customer to set expectations and quarterly survey reviews.

The survey will be conducted through a structured process of collecting feedback by e-mail from various levels of staff within the customer organization.

The customer feedback will be analyzed and discussed during operational reviews. This setting will enable the customer to provide input on how the support team can improve the service delivery process. Identifying areas of improvement and developing action plans based on the surveys will also strengthen the business relationship between Customer and Supplier.

Value Added Services

Collaborative Governance: Our governance model would be designed to ensure a collaborative, interactive relationship between Customer and our Support team. Through this model, we can jointly evaluate performance to ensure relationship parameters are met (and exceeded) and set strategic directions for the future.

Reusability: In order to reduce cost, we will make the utmost use of reusability in terms of components, test cases, test scripts, automation, etc.

Knowledge Base: Over time we will maintain a knowledge base that will grow considerably from both a technology and a business perspective. This will be utilized as a knowledge portal.

Testing Methodology

The testing process will include appropriate test plans at each required stage in the support process. These test plans will be reviewed and approved by the relevant stakeholders before they are implemented. Metrics will be collected as per the quality process, which includes preparing a Defect Consolidation Log and a Defect Tracking Form. The metrics that will be captured are:

  • Total number of test cases
  • Total number of test cases passed / failed
  • Average time for completion
  • Number of Defects re-opened
  • Average response time
  • Number of defects

Types of Tests addressed

Black Box Testing

Installation Testing: testing the installation, update, and un-installation capabilities of the software product in different hardware/software configurations.

Platform Testing: testing performed to ensure that the application functions successfully in the various operating system / browser combinations.

Security Testing: testing performed at the user level to ensure that only users with the appropriate authority and access permissions are allowed to use the features.

Performance Testing

Performance testing is aimed at analyzing the performance of the hardware/software setup.

Capacity Planning

Capacity planning covers the hardware, software, and network aspects of the application so that there is sufficient capacity to meet the anticipated and unanticipated demands of client load.


Supplier will use a knowledge repository to manage engagement-related project documents.

  • The documents will cover items modified, developed, or implemented during the course of the engagement.
  • A knowledge database of root cause analyses, solutions, and other necessary details will be maintained as part of the documentation for incident and problem resolution, in the form of a KEDB (Known Error Database), FAQs, etc.
  • Test Plans: test plans prepared for support activities will be maintained. Earlier test plans and results that are available will also be collected for the documentation.
  • Maintenance Procedures: the operational procedures for support, problem handling, and typical requests will be consolidated. These plans will be updated through the change management and configuration management procedures.
  • Document inaccuracies arising from error or changes in processes will be corrected by Supplier as part of normal day-to-day operational support.
  • Documentation for systems modified or implemented by Service Provider will be created or updated as part of the project documentation.

Supplier will coordinate to ensure that documentation is provided by third parties where dependencies exist as part of the support, update the documentation accordingly, and escalate to Customer in case of non-adherence.
