
In Part 1 of this blog post (Time To Mind Your Ps and Qs), we made the case that there is limited additional opportunity in continuing to pound on “P” in the P x Q = Total Price equation, and that to achieve the next breakthrough the supplier community has to address “Q”. The current standard answer from suppliers on reducing Q is “virtualization”, but that won’t solve the problem, at least not entirely. Here’s why.

Assume we have a buyer with significant IT Infrastructure labor costs — say $125M per year. The buyer decides to go to market despite having a pretty good idea that its unit costs are roughly at parity with current managed services market pricing. The buyer’s objectives include, in addition to qualitative and risk mitigation goals, lopping $20M to $25M p.a. off the labor costs to manage its infrastructure. A five-year labor-only deal in the $500M TCV range is certainly going to attract plenty of interest in today’s marketplace. The buyer has made a strategic decision not to source hardware and software ownership to the supplier so, if necessary, they can “fire the maid without selling the house.” Furthermore, the buyer has decided to signal to the suppliers that its unit costs are near where it believes the market should be and winning this business is probably going to require a clever solution that addresses the Qs along with the Ps.

So, let’s first look at this from the supplier’s perspective. If you are the clever solution developer at a major supplier, you see a way out of this conundrum. You’ll propose a virtualization initiative for the buyer’s vast portfolio of x86 servers! And, since x86 services are typically priced by O/S images, you will still get the same amount of revenue regardless of the degree of virtualization, 15,000 images on 15,000 machines or 15,000 images on 1,000 servers — all the same to you, right? However, since this is a labor-only deal and you will be reducing the quantities of something that isn’t in your scope, you have to change the way the buyer calculates benefits to include all the ancillary stuff they won’t buy from you anyway (i.e., floor space, energy, racks and, except for a couple of suppliers, the machines themselves). Starting right in the executive summary you will tell the buyer to think strategically, not tactically. That is, think about TCO, not just about this isolated deal, when calculating benefits. You are still going to have to employ a lot of “weasel words” to deal with how virtualization will occur (and how fast) — but at least there’s a story to tell.
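
To make the arithmetic concrete, here is a minimal sketch of the per-image pricing dynamic. The per-image price and per-server carrying costs are hypothetical figures invented for illustration; only the image and server counts come from the scenario above.

```python
# Hypothetical illustration of per-O/S-image pricing under virtualization.
# The dollar figures below are assumptions for the example, not real pricing;
# only the image and server counts come from the scenario described above.

IMAGES = 15_000            # O/S images in scope (unchanged by virtualization)
PRICE_PER_IMAGE = 3_000    # assumed annual managed-service price per image ($)
SERVER_TCO = 4_000         # assumed annual floor space, energy, rack and asset cost per server ($)

def annual_view(servers: int) -> dict:
    """Supplier revenue vs. buyer hardware TCO for a given degree of consolidation."""
    supplier_revenue = IMAGES * PRICE_PER_IMAGE   # billed per image, not per server
    hardware_tco = servers * SERVER_TCO           # sits outside the labor-only deal
    return {"servers": servers,
            "supplier_revenue": supplier_revenue,
            "hardware_tco": hardware_tco}

before = annual_view(servers=15_000)   # one image per physical machine
after = annual_view(servers=1_000)     # 15:1 consolidation via virtualization

# The supplier's revenue is identical in both cases; the entire benefit shows
# up in costs the buyer was never going to purchase from the supplier anyway.
print(before)
print(after)
```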


Traditionally, the mechanism for creating value in an IT Infrastructure sourcing has been to push down hard, real hard, on price — the “P” lever. The notion is that a sourcing will result in a lower unit cost for the labor needed to manage a device or deliver a service. The objective is to create as big a difference as possible between the buyer’s current unit cost and the supplier’s proposed unit price. The reason for that is obvious: P x Q = Total Price.

To create value for the buyer by reducing the total price, either P or Q has to change. Historically, P is what changes, because the buyer expects to have at least the same, if not a higher, quantity of things (devices, applications, project hours, etc.) as they have today. As it has been for the last two decades, this remains the strategy behind most if not all IT Infrastructure managed services arrangements. Suppliers’ value propositions are predicated on lower unit costs, partially achieved through lower labor costs and partially achieved through improved productivity.
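
As a rough illustration of that lever (every figure below is hypothetical, not drawn from any actual deal), here is a sketch of how the equation plays out when only P moves.

```python
# Minimal sketch of the P x Q = Total Price lever; every figure is hypothetical.

devices = 20_000            # Q: quantity of managed devices (assumed)
buyer_unit_cost = 100.0     # buyer's current monthly cost per device ($, assumed)
supplier_unit_price = 85.0  # P: supplier's proposed monthly price per device ($, assumed)

current_total = buyer_unit_cost * devices        # what the buyer spends today
proposed_total = supplier_unit_price * devices   # supplier bid at the same Q

print(f"Monthly savings from the P lever alone: ${current_total - proposed_total:,.0f}")
# Once the buyer's unit cost has drifted down to roughly the supplier's price,
# the only remaining lever is Q: managing fewer devices, images or hours.
```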

Yet, over the last several years the conventional alchemy has become less likely to create the benefit buyers are seeking. We are seeing a number of buyers whose unit costs are at or below the unit prices offered by suppliers. While it is hard to pin down exactly why buyers’ costs have declined, it is pretty clear that investments in technology and productivity, or lower salaries, are not the drivers. Generally, it appears to be the result of the weak economy and the constant pressure on IT budgets. IT Infrastructure organizations have been forced to reduce staff and leave open positions unfilled while the quantities of devices, storage and services have stayed the same or increased — reducing the unit cost. Service delivery risk has likely increased, but we have yet to see a buyer quantify or price the added risk.
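
The arithmetic behind that drift is simple division; the sketch below uses purely illustrative staffing and device figures to show how unit cost falls even when nothing about delivery has improved.

```python
# Illustrative only: flat-to-falling labor spend plus growing volumes
# mechanically lowers a buyer's unit cost without any productivity gain.

labor_spend_before = 125_000_000   # annual IT Infrastructure labor cost ($, assumed)
devices_before = 20_000            # managed devices (assumed)

labor_spend_now = 110_000_000      # staff cuts and unfilled positions (assumed)
devices_now = 24_000               # device and storage growth (assumed)

unit_cost_before = labor_spend_before / devices_before   # $6,250 per device
unit_cost_now = labor_spend_now / devices_now            # ~$4,583 per device

print(f"Unit cost before: ${unit_cost_before:,.0f}; now: ${unit_cost_now:,.0f}")
# A ~27% drop driven by budget pressure, not productivity; the unpriced
# residue is the added service delivery risk noted above.
```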


Given the great interest in “the cloud” from a business perspective, as well as Microsoft’s popularization of the concept with its “To the Cloud!” advertising campaign, it’s no wonder that many game providers are looking to the cloud as the next viable and profitable gaming platform. The cloud movement not only provides economic incentives through various subscription and pay-to-play models, but also helps defeat piracy by keeping game code and other intellectual property locked away from potential thieves.

Cloud game providers have a lot to gain from virtualization, but moving to a cloud-based framework raises potential legal issues that should be considered.

Latency

The first big issue for gaming providers considering a move to the cloud is both a practical one and a legal one – latency. Unlike digital downloads, streaming games require both downstream and upstream communications. Further, gaming often demands instant, real-time action, so any material latency will be noticed, especially in multi-player, FPS-type or other real-time games. Currently, some game providers have tried to satisfy gamers’ demand for real-time, low-latency play by operating in data centers that are physically close to the gamer. From a technical perspective, cloud gaming may undercut that approach: it could involve moving the game servers much farther away from the gamer, with the potential for a significant increase in latency. Another technical fix may be to use “tricks” similar to those used in non-cloud gaming to compensate for latency issues.
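
For a rough sense of the scale involved, here is a back-of-the-envelope sketch. The fiber propagation factor and the distances are assumptions chosen for illustration, not measurements of any real service.

```python
# Back-of-the-envelope propagation delay; distances and the fiber factor are
# assumptions for illustration, not measurements of any real gaming service.

SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 0.67   # light in optical fiber travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time over fiber, ignoring routing, queuing and encoding."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1_000

print(f"Nearby data center (~50 km): ~{round_trip_ms(50):.1f} ms round trip")
print(f"Distant cloud region (~3,000 km): ~{round_trip_ms(3_000):.1f} ms round trip")
# Real-world latency is higher once routing, video encoding and server
# processing are added, which is why distance to the game servers matters
# so much for real-time, multi-player play.
```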


With the same lack of fanfare that accompanied the April 13 release of the Reasonable Security Practices and Procedures and Sensitive Personal Information rules, today the Indian government released a clarification to those rules to address the most serious concerns arising from ambiguities in the original provisions.

As we noted in our previous post on the new rules, Pillsbury does not provide legal advice on Indian law, but we have been in contact with the Indian legal community and service providers with regard to the new rules.

The Press Note provided on the Indian government’s web site states:


Suppliers of IT outsourcing services increasingly limit their responsibility for paying damages arising from the loss of customers’ sensitive data (whether or not the supplier lost it intentionally). Only a few years ago, it was commonplace in an IT outsourcing agreement for a supplier to agree to be responsible for any losses of customer confidential information caused by the supplier. Today, however, due to the widespread increase in data breaches and the higher potential for large liabilities resulting from such breaches (see the Zurich Insurance fine), suppliers are much less likely to agree to open-ended liability.

IT outsourcing suppliers have taken various approaches to capping their exposure to damages resulting from data breaches, both for amounts owed directly to the supplier’s outsourcing customer and for amounts owed to the customer’s clients.

Some suppliers will accept “enhanced” liability in an amount larger than the general limitation on damages recoverable for standard breaches of the contract; this enhanced amount is often set aside as a separate pool that cannot be replenished once it is “used up” paying for data losses. Some differentiate their exposure to these breaches based upon whether the data in question is or should have been encrypted. Still others vary their exposure based upon whether data was merely lost or was actually misappropriated by the supplier.


Once the ink is dry on a signed outsourcing contract, the real work begins for the customer and the service provider. Before the customer can start to realize any savings, efficiencies or service improvements, the parties must first complete the critical task of transitioning day-to-day responsibility for performing in-scope functions from the customer to the service provider. This transition process can take several weeks or even months.

Each party has a strong incentive to complete the transition on time. Ordinarily, the customer wants to start reaping the benefits of the outsourcing as quickly as possible. Likewise, the service provider wants to be in a position to start charging full freight fees for steady-state services as soon as the transition is complete. In addition, appropriately structured contracts often include an additional incentive for timely performance by the service provider: monetary credits to the customer if transition milestones are not completed on time.

Competing with this need for speed (or, at the very least, on-time completion) is the customer’s desire to mitigate the operational risks associated with any transition. Complexities abound, especially if the transition involves multiple service towers and geographies, a transfer of personnel and assets, and a physical change in the location from which services are performed. The stakes are high for both parties. In the worst-case scenario for the customer, a hasty transition can result in an interruption or degradation of a critical business activity.


A recent survey conducted by Duke University’s Fuqua School of Business and the American Marketing Association yielded some interesting findings, including:

  • Social marketing budgets are anticipated to increase significantly over the next few years, possibly reaching 18% of total marketing budgets by 2015; and
  • 72% of companies had outsourced some aspect of their marketing programs, and 41% of companies expected to outsource more in 2011.

In Part 1 of this discussion we described two front-end challenges that, if not properly anticipated and addressed, can (and very often do) derail successful completion of enterprise projects. We’ll now turn to the downstream transactional considerations that can help position a project for success.

The Right Contract Architecture

Customers often grapple with how to develop the appropriate contract for enterprise projects. A statement of work alone is not sufficient to cover all the complexities of the project. Instead, customers should consider entering into a master service agreement (MSA) with the supplier. In addition to establishing a contractual framework (e.g., the form and process for developing statements of work) and the terms and conditions, the MSA should address the governing principles or “rules of engagement” for project delivery. Project delivery – especially in a multi-supplier environment – has its own rules which differ in many ways from those followed in typical managed service arrangements. Examples of rules of engagement might include:


Industry research firm Horses for Sources reported recently that 49% of the companies it surveyed were planning to outsource call center services for the first time, or expand the scope of their existing call center outsourcing, over the next year. With call center outsourcing on the rise, we wanted to share a few of the lessons Pillsbury has learned from negotiating these deals over the past 20+ years.

Baseline Data is Critical to Effective Pricing. Make sure you provide potential suppliers with detailed, accurate historical and projected workload volumes. The data should include:

  • Number of contacts broken down by type (call, email, web chat, fax, white mail)


It is one of those sayings that people just love to recite: “The best contracts are the ones that stay in the drawer.” In ten years of advising customers on their outsourcing agreements, I have heard this phrase uttered in just about every large negotiation that I have done (typically with a knowing nod of the head from others at the table, and sometimes with a disdainful look in my direction). And while it may just be a saying, it is a terribly misguided one; and, even as a guiding principle, it typically will produce the exact opposite result of what it is intended to achieve.

In short, the saying centers on the idea that a healthy long-term working “partnership” – especially one that requires trust, sacrifice and evolution, which most outsourcings do – cannot be managed strictly off the static words on a page, but must instead be managed through a trusting, mutually beneficial relationship. So, if you are taking the contract out of the drawer instead of managing via relationship and trust, either it means that you are being adversarial, which is sure to escalate and lead to a deteriorating relationship, or it is evidence, in and of itself, that you do not have a good relationship. In this way, the contract is seen as a “negative” – some sort of necessary evil on the front end (perhaps to appease Legal and Finance) that somehow can be vanquished once the contract is signed and the real relationship begins.

Earlier in my career, I thought that the danger in this thinking was that it primarily would lead to the customer failing to enforce its negotiated rights – whether due to the outsourcer’s self-interest, the outsourcer’s lack of incentive to do the “right” thing, or just pure lack of knowledge on both parties’ part. And while this may often be the outcome, I have come to realize that the “keep the contract in the drawer” principle is even more dangerous than that, and ultimately will work to the detriment of both parties.