In Part 1, I addressed managing and mitigating risks during the supplier selection process, and in Part 2, I addressed the risks associated with contract negotiations. In this Part 3, I will discuss relationship management as a key component of successful outsourcings.

Successful outsourcing requires effective contract management. While a robust outsourcing contract can mitigate many of the risks associated with outsourcing, it is the day-to-day management of the relationship with your supplier that is critical to the overall success of the deal. Problems will always arise in long-term services arrangements: inadequate agreement on requirements or specifications, delays, cost overruns and poor performance are not uncommon. Underlying most of these issues is a single recurrent theme: poor management and communication between the parties. If an effective management framework is established early and applied consistently throughout the deal, the risk of the outsourcing failing is significantly reduced.

To help create a culture of trust and partnership, you should foster key individual relationships with your supplier. Strong relationships will benefit you just as much as your supplier, play a significant role in ensuring the overall success of the deal, and facilitate the development of institutional trust.


In Part 1 of Managing Risks in Outsourcing, I focused on managing and mitigating risks during the supplier selection process. I will now look at the risks associated with contract negotiations.

If poorly planned and executed, the negotiation of an outsourcing contract can be a long, tiring and frustrating affair. Overcrowded meetings that drag on for days with little progress on commercial points, overzealous legal advisers who focus on points with no real impact on the deal fundamentals, costly delays and significant frustration are, sadly, a common experience. Negotiation of an outsourcing contract can also expose a customer to a variety of risks which are all too often overlooked.

Common risks you face during negotiations include an outsourcing contract that does not support your business needs and objectives; costly delays to the timetable and to the realization of benefits from the outsourcing; and long-term damage to the relationship with your supplier resulting from an adversarial approach to negotiations.


Outsourcings can offer organizations significant commercial benefits, but they also present challenges and risks for the outsourcing organization throughout the outsourcing life-cycle: during the supplier selection process, in the course of contract negotiations, during the implementation and day-to-day operation of the outsourced services, and on exit from the outsourcing contract. Here are some practical tips for organizations that propose to outsource on how to manage and mitigate some of these risks. In Part 1, I will focus on supplier selection; in Part 2, the negotiation process; and in Part 3, relationship management.

Supplier selection is an important step in the outsourcing process and can give rise to a wide range of risks. These include: a procurement outcome that does not support your needs and objectives; delays leading to increases in the overall deal costs; discontinuity in the supply of essential goods and services; loss of influence in relationships with your existing essential suppliers; damage to your reputation; exposure of your directors and officers to prosecution and litigation; unauthorized disclosure of your confidential information or of confidential information belonging to a third party; and ‘misrepresentation type’ claims brought by the selected supplier or by unsuccessful bidders arising from incorrect, misleading or deceptive statements or information provided during the selection process.

The success of a selection process in outsourcing deals depends on sufficient due diligence, preparation and planning. You should conduct a baseline review to identify and assess your current services and systems and the current costs of providing them. The quality and depth of this analysis is key: not only will it form the benchmarks for performance and service levels, but it can also be used to confirm that the outsourcing business case is economically viable.


On 7 September 2011, the UK privacy watchdog, the Information Commissioner’s Office (“ICO”), published a comprehensive guide (the “Guide”) to new European laws covering, amongst other things, the measures a public electronic communications provider (“Service Provider”) should take to protect the security of its services, the notification of personal data breaches to the ICO, and the ICO’s new audit powers.

The Guide includes useful commentary on the Privacy and Electronic Communications (EC Directive) Regulations 2003 (SI 2003/2426) and the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011 (SI 2011/1208) (the “2011 Regulations”). The 2011 Regulations, which came into effect on 26 May 2011, made a number of amendments to the earlier regulations and implement in the UK the amended European E-Privacy Directive (2002/58/EC).

The Guide on Security of Services


In Part 1 of this blog post (Time to Mind Your Ps and Qs), we made the case that there is limited additional opportunity in continuing to pound on “P” in the P x Q = Total Price equation, and that to achieve the next breakthrough the supplier community has to address “Q”. In Part 2, we addressed why more virtualization is not the real answer. So where are the next big benefits going to come from, and who is willing to make the paradigm shift?

Continuing our example from Part 2, where our Buyer was looking for $125M in savings over a five-year term: if the virtualization dog won’t hunt (well enough), what dog might? Perhaps x86 hardware consolidation should be addressed in a different way in a sourced environment. What if, instead of using 15,000 virtual images, applications could be stacked, as they are on other platforms such as mainframes? While no application-stacking effort would achieve 100% results, neither would virtualization. For simplicity, in calculating the virtualization numbers we assumed 100% of the images could be virtualized, and we will do so again for the application-stacking alternative. In both cases, what can be achieved in actual implementations will be less.

Let’s assume that each of the 15,000 O/S images runs one application instance. Now take those applications and stack them inside, say, three O/S images on each of 1,000 machines. We will still need the same amount of hardware and the same amount of virtualization software, which will cost $62.3M over the term, but the 15,000 application images now run in the resulting 3,000 O/S images. In that case our service fees would drop from $202.5M to $89.1M (15,000 * $225 for 18 months + 3,000 * $225 for 42 months), a projected savings of $113.4M over the term. That $113.4M is 90% of the buyer’s savings goal of $125M.
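For readers who want to check the arithmetic, here is a minimal sketch in Python. The $225 monthly rate per O/S image, the 60-month term and the 18-month transition are taken from the example above; like the 100% stacking assumption, they are simplifications, not market data.

```python
# Sketch of the application-stacking arithmetic above, using the
# figures quoted in this post; real deals will differ.

RATE_PER_IMAGE = 225        # $ per O/S image per month
TERM_MONTHS = 60            # five-year term
TRANSITION_MONTHS = 18      # months billed at the unstacked image count

baseline_images = 15_000    # one application instance per O/S image
stacked_images = 3_000      # 3 O/S images on each of 1,000 machines

baseline_fees = baseline_images * RATE_PER_IMAGE * TERM_MONTHS
stacked_fees = (baseline_images * RATE_PER_IMAGE * TRANSITION_MONTHS
                + stacked_images * RATE_PER_IMAGE * (TERM_MONTHS - TRANSITION_MONTHS))
savings = baseline_fees - stacked_fees

print(f"Baseline service fees: ${baseline_fees / 1e6:.1f}M")   # $202.5M
print(f"Stacked service fees:  ${stacked_fees / 1e6:.1f}M")    # $89.1M
print(f"Projected savings:     ${savings / 1e6:.1f}M")         # $113.4M
print(f"Share of $125M goal:   {savings / 125e6:.0%}")         # 91%, i.e. roughly 90%
```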


In Part 1 of this blog post (Time To Mind Your Ps and Qs), we made the case that there is limited additional opportunity in continuing to pound on “P” in the P x Q = Total Price equation, and that to achieve the next breakthrough the supplier community has to address “Q”. The current standard answer from suppliers on reducing Q is “virtualization”, but that won’t solve the problem, at least not entirely. Here’s why.

Assume we have a buyer with significant IT Infrastructure labor costs, say $125M per year. The buyer decides to go to market despite having a pretty good idea that its unit costs are roughly at parity with current managed-services market pricing. The buyer’s objectives include, in addition to qualitative and risk-mitigation goals, lopping $20M to $25M p.a. off the labor costs to manage its infrastructure. A five-year, labor-only deal in the $500M TCV range is certainly going to attract plenty of interest in today’s marketplace. The buyer has made a strategic decision not to transfer hardware and software ownership to the supplier so that, if necessary, it can “fire the maid without selling the house.” Furthermore, the buyer has decided to signal to the suppliers that its unit costs are near where it believes the market should be, and that winning this business will probably require a clever solution that addresses the Qs along with the Ps.
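As a rough sanity check on those figures (the $500M TCV in the post is an approximation), here is a minimal sketch:

```python
# Back-of-the-envelope check on the deal size described above.
annual_labor_cost = 125e6            # buyer's current labor cost per year
term_years = 5

for annual_savings in (20e6, 25e6):  # buyer's savings objective, p.a.
    tcv = (annual_labor_cost - annual_savings) * term_years
    print(f"${annual_savings / 1e6:.0f}M p.a. savings -> ${tcv / 1e6:.0f}M TCV")
# $20M p.a. gives $525M; $25M p.a. gives $500M: "the $500M TCV range"
```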

So let’s first look at this from the supplier’s perspective. If you are the clever solution developer at a major supplier, you see a way out of this conundrum: you’ll propose a virtualization initiative for the buyer’s vast portfolio of x86 servers! And since x86 services are typically priced by O/S image, you will still get the same amount of revenue regardless of the degree of virtualization. Whether it is 15,000 images on 15,000 machines or 15,000 images on 1,000 servers, it is all the same to you, right? However, since this is a labor-only deal and you would be reducing the quantity of something that isn’t in your scope, you have to change the way the buyer calculates benefits to include all the ancillary items it won’t buy from you anyway (i.e., floor space, energy, racks and, other than for a couple of suppliers, the machines themselves). Starting right in the executive summary, you will tell the buyer to think strategically, not tactically; that is, to think about TCO, not just about this isolated deal, when calculating benefits. You will still have to employ a lot of “weasel words” to deal with how virtualization will occur (and how fast), but at least there’s a story to tell.
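To see why the supplier is indifferent to consolidation, here is a minimal sketch; the $225 per-image monthly rate matches the figure used elsewhere in this series and is illustrative only:

```python
# Why virtualization alone leaves the supplier's fees unchanged: x86
# services are priced per O/S image, so consolidating hardware does not
# change Q. The $225 rate is illustrative, not market data.

RATE_PER_IMAGE = 225   # $ per O/S image per month (illustrative)
TERM_MONTHS = 60
images = 15_000        # image count is unchanged by consolidation

for machines in (15_000, 1_000):
    fees = images * RATE_PER_IMAGE * TERM_MONTHS
    print(f"{images:,} images on {machines:,} machines -> ${fees / 1e6:.1f}M in fees")
# $202.5M either way; the virtualization benefit lands in hardware,
# floor space and energy, which sit outside a labor-only deal.
```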


Traditionally, the mechanism for creating value in an IT Infrastructure sourcing has been to push down hard, real hard, on price: the “P” lever. The notion is that a sourcing will result in a lower unit cost for the labor needed to manage a device or deliver a service. The objective is to create as big a difference as possible between the buyer’s current unit cost and the supplier’s proposed unit price. The reason for that is obvious: P x Q = Total Price.

To create value for the buyer by reducing the total price, either P or Q has to change. Historically, P is what changes, because the buyer expects to have at least the same, if not a higher, quantity of things (devices, applications, project hours, etc.) as it has today. As it has been for the last two decades, this remains the strategy behind most, if not all, IT Infrastructure managed-services arrangements. Suppliers’ value propositions are predicated on lower unit costs, achieved partly through lower labor costs and partly through improved productivity.
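In miniature, with invented numbers, the two levers look like this:

```python
# The P x Q equation with invented numbers: the same total-price
# benefit can come from the price lever or the quantity lever.

P = 100.0    # unit price, $ per managed device per month
Q = 10_000   # quantity of managed devices

print(f"Today:        ${P * Q:,.0f} per month")
print(f"P cut by 10%: ${0.9 * P * Q:,.0f} per month")  # the traditional lever
print(f"Q cut by 10%: ${P * 0.9 * Q:,.0f} per month")  # the same benefit via quantity
```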

Yet over the last several years this conventional alchemy has become less likely to create the benefit buyers are seeking. We are seeing a number of buyers whose unit costs are at or below the unit prices offered by suppliers. While it is hard to pin down exactly why buyers’ costs have declined, it is pretty clear that investments in technology and productivity, or lower salaries, are not the drivers. Generally, it appears to be the result of the weak economy and constant pressure on IT budgets: IT Infrastructure organizations have been forced to reduce staff and leave open positions unfilled while the quantities of devices, storage and services have stayed the same or increased, reducing the unit cost. Service delivery risk has likely increased, but we have yet to see a buyer quantify or price that added risk.


Given the great interest in “the cloud” from a business perspective, as well as Microsoft’s popularization of the concept with its “To the Cloud!” advertising campaign, it’s no wonder that many game providers are looking to the cloud as the next viable and profitable gaming platform. The cloud movement not only provides economic incentives through various subscription and pay-to-play models, but also helps defeat piracy by locking down game code and other intellectual property from potential thieves.

Cloud game providers have a lot to gain from virtualization, but moving to a cloud-based framework raises potential legal issues that should be considered.

Latency

The first big issue for gaming providers considering a move to the cloud is both a practical one and a legal one: latency. Unlike digital downloads, streaming games require both downstream and upstream communications. Further, gaming often demands instant, real-time action, so any material latency will be noticed, especially in multi-player, FPS-type or other real-time games. Currently, some game providers have tried to satisfy gamers’ demand for real-time, low-latency play by operating in data centers that are physically close to the gamer. From a technical perspective, cloud gaming may present an issue because it could involve moving the game servers much farther away from the gamer, with the potential for increased, even significant, latency. Another technical fix may be to use “tricks” similar to those used in non-cloud gaming to compensate for latency issues.
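To make the distance point concrete, here is a back-of-the-envelope sketch. The propagation speed is the commonly cited ~200,000 km/s for light in optical fiber; the server distances are hypothetical.

```python
# Rough physics floor on round-trip latency over fiber. Light in fiber
# propagates at about 200,000 km/s, roughly 200 km per millisecond;
# real RTTs are higher once routing, queuing and processing are added.

FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (50, 500, 2_000, 5_000):   # hypothetical server distances
    print(f"{km:>5,} km -> at least {min_rtt_ms(km):.1f} ms RTT")
# A data center 50 km away has a ~0.5 ms floor; one 5,000 km away has a
# ~50 ms floor before any server or network overhead, which a fast-paced
# game will notice.
```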


With the same lack of fanfare that accompanied the April 13 release of the Reasonable Security Practices and Procedures and Sensitive Personal Information rules, today the Indian government released a clarification to those rules to address the most serious concerns arising from ambiguities in the original provisions.

As we noted in our previous post on the new rules, Pillsbury does not provide legal advice on Indian law, but we have been in contact with the Indian legal community and service providers with regard to the new rules.

The Press Note provided on the Indian government’s web site states:


Suppliers of IT outsourcing services limit their responsibility for paying damages arising from the loss of customers’ sensitive data (whether or not the supplier lost the data intentionally). Only a few years ago, it was commonplace in an IT outsourcing agreement for a supplier to agree to be responsible for any losses of customer confidential information that it caused. Today, however, due to the widespread increase in data breaches and the potential for large liabilities resulting from such breaches (see the Zurich Insurance fine), suppliers are much less likely to agree to open-ended liability.

IT outsourcing suppliers have taken various approaches to capping their exposure to damages resulting from data breaches, both for amounts owed directly to the supplier’s outsourcing customer and for amounts owed to the customer’s clients.

Some suppliers will accept “enhanced” liability up to an amount larger than the general limitation on damages recoverable for standard breaches of the contract; this enhanced amount is often set aside as a separate pool that cannot be replenished once it is “used up” paying for data losses. Others differentiate their exposure based on whether the data in question was, or should have been, encrypted. Still others vary their exposure based on whether data was merely lost or was actually misappropriated by the supplier.
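As a purely illustrative sketch (not contract language), the non-replenishing pool structure can be pictured like this; the class name and the $10M figure are hypothetical:

```python
# Illustrative model of an "enhanced liability" pool: a one-time fund
# for data-breach damages that sits alongside the general cap and is
# never replenished. The $10M figure is hypothetical.

class EnhancedLiabilityPool:
    """Tracks a non-replenishing pool of supplier liability for data breaches."""

    def __init__(self, pool: float) -> None:
        self.remaining = pool

    def claim(self, damages: float) -> float:
        """Pay a breach claim from what is left; any excess goes unrecovered."""
        paid = min(damages, self.remaining)
        self.remaining -= paid
        return paid

pool = EnhancedLiabilityPool(pool=10_000_000)
print(pool.claim(6_000_000))   # 6000000 paid; $4M remains in the pool
print(pool.claim(6_000_000))   # 4000000 paid; the pool is exhausted, not refilled
```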