
On 7 September 2011, the UK privacy watchdog, the Information Commissioner’s Office (“ICO”), published a comprehensive guide (the “Guide”) to new European laws relating to, amongst other things, the measures a public electronic communications provider (“Service Provider”) should take to protect the security of its services (including notifying the ICO of a personal data breach) and the ICO’s new audit powers.

The Guide includes useful commentary on the Privacy and Electronic Communications (EC Directive) Regulations 2003 (SI 2003/2426) and the Privacy and Electronic Communications (EC Directive) (Amendment) Regulations 2011 (SI 2011/1208) (the “2011 Regulations”). The 2011 Regulations came into effect on 26 May 2011; they amend the earlier regulations and implement in the UK the amended European E-Privacy Directive (2002/58/EC).

The Guide on Security of Services


In Part 1 of this blog post (Time to Mind Your Ps and Qs), we made the case that there is limited additional opportunity in continuing to pound on “P” in the P x Q = Total Price equation and that, to achieve the next breakthrough, the supplier community has to address “Q”. In Part 2, we addressed why more virtualization is not the real answer. So where are the next big benefits going to come from, and who is willing to make the paradigm shift?

Continuing with our example from Part 2, where our buyer was looking for $125M in savings over a five-year term: if the virtualization dog won’t hunt (well enough), what dog might? Perhaps x86 hardware consolidation should be addressed in a different way in a sourced environment. What if, instead of using 15,000 virtual images, applications could be stacked, as they are on other platforms such as mainframes? While no application-stacking effort would achieve 100% results, neither would virtualization. For simplicity in calculating the virtualization numbers we assumed 100% of the images could be virtualized, and we will do so again for the application-stacking alternative. In both cases, what can be achieved in actual implementations will be less.

Let’s assume that each of the 15,000 O/S images runs one application instance. Then let’s take those applications and stack them inside, say, three O/S images on each of 1,000 machines. We will still need the same amount of hardware and the same amount of virtualization software, which will cost $62.3M over the term, but we can now stack the 15,000 application images in the resulting 3,000 O/S images. In that case our service fees would drop from $202.5M to $89.1M (15,000 images x $225 per month for the first 18 months, plus 3,000 images x $225 per month for the remaining 42 months), a projected savings of $113.4M over the term. That $113.4M is roughly 90% of the buyer’s savings goal of $125M.
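
For readers who want to check the arithmetic, here is a minimal sketch of the fee comparison above. The rate of $225 per image per month, the 60-month term and the 18-month stacking window are implied by the figures in the example; everything else is just P x Q applied over time.

```python
# Illustrative only: re-running the service-fee arithmetic from the example above.
MONTHLY_RATE = 225          # fee per O/S image per month, implied by the figures above
TERM_MONTHS = 60            # five-year term
STACKING_MONTHS = 18        # months before application stacking is complete

images_before = 15_000      # one application instance per O/S image today
images_after = 3_000        # 3 O/S images on each of 1,000 machines after stacking

# Baseline: all 15,000 images billed for the full term.
baseline_fees = images_before * MONTHLY_RATE * TERM_MONTHS

# Stacking case: full image count during the transition, reduced count afterwards.
stacked_fees = (images_before * MONTHLY_RATE * STACKING_MONTHS
                + images_after * MONTHLY_RATE * (TERM_MONTHS - STACKING_MONTHS))

savings = baseline_fees - stacked_fees
print(f"Baseline fees: ${baseline_fees / 1e6:.1f}M")  # $202.5M
print(f"Stacked fees:  ${stacked_fees / 1e6:.1f}M")   # $89.1M
print(f"Savings:       ${savings / 1e6:.1f}M")        # $113.4M
```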


In Part 1 of this blog post (Time To Mind Your Ps and Qs), we made the case that there is limited additional opportunity in continuing to pound on “P” in the P x Q = Total Price equation, and that to achieve the next breakthrough the supplier community has to address “Q”. The current standard answer from suppliers on reducing Q is “virtualization”, but that won’t solve the problem, at least not entirely. Here’s why.

Assume we have a buyer with significant IT Infrastructure labor costs, say $125M per year. The buyer decides to go to market despite having a pretty good idea that its unit costs are roughly at parity with current managed services market pricing. The buyer’s objectives include, in addition to qualitative and risk mitigation goals, lopping $20M to $25M p.a. off the labor costs to manage its infrastructure. A five-year labor-only deal in the $500M TCV range is certainly going to attract plenty of interest in today’s marketplace. The buyer has made a strategic decision not to source hardware and software ownership to the supplier so that, if necessary, it can “fire the maid without selling the house.” Furthermore, the buyer has decided to signal to the suppliers that its unit costs are near where it believes the market should be and that winning this business is probably going to require a clever solution that addresses the Qs along with the Ps.

So, let’s first look at this from the supplier’s perspective. If you are the clever solution developer at a major supplier, you see a way out of this conundrum: you’ll propose a virtualization initiative for the buyer’s vast portfolio of x86 servers! And, since x86 services are typically priced per O/S image, you will still get the same amount of revenue regardless of the degree of virtualization; 15,000 images on 15,000 machines or 15,000 images on 1,000 servers is all the same to you, right? However, since this is a labor-only deal and you will be reducing the quantities of something that isn’t in your scope, you have to change the way the buyer calculates benefits to include all the ancillary items it won’t buy from you anyway (i.e., floor space, energy, racks and, except in the case of a couple of suppliers, the machines themselves). Starting right in the executive summary, you will tell the buyer to think strategically, not tactically; that is, to think about TCO, not just about this isolated deal, when calculating benefits. You are still going to have to employ a lot of “weasel words” to deal with how (and how fast) virtualization will occur, but at least there’s a story to tell.
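
A quick sketch makes the point concrete. The per-image rate below is purely illustrative (no rate is priced at this step in the post); what matters is that the labor fees depend only on the number of O/S images, not on how many physical servers host them.

```python
# Illustrative only: with per-image pricing, virtualization alone does not
# change the supplier's labor fees because the image count (Q) is unchanged.
rate_per_image_month = 225   # hypothetical monthly rate per O/S image
term_months = 60             # five-year term
images = 15_000              # same O/S images before and after virtualization

fees_on_15000_machines = images * rate_per_image_month * term_months
fees_on_1000_servers = images * rate_per_image_month * term_months
assert fees_on_15000_machines == fees_on_1000_servers  # same Q, same fees

# The consolidation benefit lands outside the labor-only scope: fewer machines
# means less floor space, energy and rack capacity, which are the buyer's costs here.
servers_eliminated = 15_000 - 1_000
print(f"Physical servers eliminated: {servers_eliminated:,}")
```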


Traditionally, the mechanism for creating value in an IT Infrastructure sourcing has been to push down hard, real hard, on price: the “P” lever. The notion is that a sourcing will result in a lower unit cost for the labor needed to manage a device or deliver a service. The objective is to create as big a difference as possible between the buyer’s current unit cost and the supplier’s proposed unit price. The reason for that is obvious: P x Q = Total Price.

To create value for the buyer by reducing the total price, either P or Q has to change. Historically, P is what changes, because the buyer expects to have at least the same, if not a higher, quantity of things (devices, applications, project hours, etc.) as it has today. As it has been for the last two decades, this remains the strategy behind most, if not all, IT Infrastructure managed services arrangements. Suppliers’ value propositions are predicated on lower unit costs, achieved partly through lower labor costs and partly through improved productivity.

Yet over the last several years, the conventional alchemy has become less likely to create the benefit buyers are seeking. We are seeing a number of buyers whose unit costs are at or below the unit prices offered by suppliers. While it is hard to pin down exactly why buyers’ costs have declined, it is pretty clear that investments in technology and productivity, or lower salaries, are not the drivers. Generally, it appears to be the result of the weak economy and the constant pressure on IT budgets: IT Infrastructure organizations have been forced to reduce staff and leave open positions unfilled while the quantity of devices, storage and services has stayed the same or increased, which reduces the unit cost. Service delivery risk has likely increased, but we have yet to see a buyer quantify or price that added risk.
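
To make the arithmetic explicit, here is a minimal sketch with hypothetical unit costs. If the buyer’s current unit cost and the supplier’s proposed unit price are roughly at parity, the P lever contributes almost nothing and any material savings have to come from Q.

```python
# Illustrative only: hypothetical unit costs showing the P lever vs the Q lever.
buyer_unit_cost = 100.0      # buyer's current cost per device per month
supplier_unit_price = 98.0   # supplier's proposed price, near parity
quantity = 10_000            # devices under management

current_spend = buyer_unit_cost * quantity              # P x Q today
p_lever_spend = supplier_unit_price * quantity          # lower P, same Q
q_lever_spend = supplier_unit_price * quantity * 0.80   # lower P and 20% fewer units

print(f"Savings from P alone:  {1 - p_lever_spend / current_spend:.0%}")  # ~2%
print(f"Savings from P plus Q: {1 - q_lever_spend / current_spend:.0%}")  # ~22%
```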


Given the great interest in “the cloud” from a business perspective, as well as Microsoft’s popularization of the concept with its “To the Cloud!” advertising campaign, it’s no wonder that many game providers are looking to the cloud as the next viable and profitable gaming platform. The cloud movement not only provides economic incentives through various subscription and pay-to-play models, but also helps defeat piracy by locking down game code and other intellectual property, keeping it out of the hands of potential thieves.

Cloud game providers have a lot to gain from virtualization, but moving to a cloud-based framework raises potential legal issues that should be considered.

Latency

The first big issue for gaming providers considering a move to the cloud is both a practical one and a legal one: latency. Unlike digital downloads, streaming games require both downstream and upstream communications. Further, gaming often demands instant, real-time action, so any material latency will be noticed, especially in multi-player, FPS-type or other real-time games. Currently, some game providers try to satisfy gamers’ demand for real-time, low-latency play by operating in data centers that are physically close to the gamer. From a technical perspective, cloud gaming may present a problem because it could involve moving the game servers much farther away from the gamer, with the potential to significantly increase latency. Another technical fix may be to use “tricks” similar to those used in non-cloud gaming to compensate for latency issues.
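
As a rough illustration of why server distance matters, consider the back-of-envelope sketch below. The constants (signal speed in optical fiber of roughly 200 km per millisecond, a fixed processing overhead, and a 100 ms playability budget) are common rules of thumb, not figures from this post.

```python
# Back-of-envelope estimate of round-trip latency vs. distance to the game server.
# All constants are illustrative rules of thumb, not measurements.
FIBER_KM_PER_MS = 200        # light travels roughly 200 km per millisecond in fiber
FIXED_OVERHEAD_MS = 50       # encode, decode, routing and input processing
PLAYABILITY_BUDGET_MS = 100  # rough threshold for fast-paced, real-time games

def estimated_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay plus a fixed processing overhead."""
    propagation_ms = 2 * distance_km / FIBER_KM_PER_MS
    return propagation_ms + FIXED_OVERHEAD_MS

for distance_km in (100, 1_000, 3_000, 8_000):
    rtt = estimated_rtt_ms(distance_km)
    verdict = "within budget" if rtt <= PLAYABILITY_BUDGET_MS else "likely noticeable"
    print(f"{distance_km:>5} km -> ~{rtt:.0f} ms ({verdict})")
```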



With the same lack of fanfare that accompanied the April 13 release of the Reasonable Security Practices and Procedures and Sensitive Personal Information rules, today the Indian government released a clarification to those rules to address the most serious concerns arising from ambiguities in the original provisions.

As we noted in our previous post on the new rules, Pillsbury does not provide legal advice on Indian law, but we have been in contact with the Indian legal community and service providers with regard to the new rules.

The Press Note provided on the Indian government’s web site states:


Suppliers of IT outsourcing services increasingly limit their responsibility for paying damages arising from the loss of customers’ sensitive data (whether or not the supplier lost the data intentionally). Only a few years ago, it was commonplace in an IT outsourcing agreement for a supplier to agree to be responsible for any losses of customer confidential information caused by the supplier. Today, however, due to the widespread increase in data breaches and the potentially large liability that can result from such breaches (see the Zurich Insurance fine), suppliers are much less likely to agree to open-ended liability.

IT outsourcing suppliers have taken various approaches to capping their exposure to damages resulting from data breaches, both for amounts owed directly to the supplier’s outsourcing customer and for amounts owed to the customer’s clients.

Some suppliers will accept “enhanced” liability up to an amount larger than the general limitation on damages recoverable for standard breaches of the contract; this enhanced amount is often set aside as a separate pool that cannot be replenished once it is “used up” paying for data losses. Some differentiate their exposure to these breaches based upon whether the data in question was, or should have been, encrypted. Still others vary their exposure based upon whether the data was merely lost or was actually misappropriated by the supplier.
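
As a purely hypothetical illustration of the structure described above, the sketch below models an enhanced liability cap as a separate, non-replenishing pool, with exposure reduced when the lost data was encrypted. The figures and the 50% reduction are invented for the example, not terms from any actual agreement.

```python
# Hypothetical illustration: an "enhanced" data-breach liability cap held in a
# separate pool that is never replenished once drawn down. All figures invented.
class EnhancedLiabilityPool:
    def __init__(self, cap: float):
        self.remaining = cap  # separate from the general limitation of liability

    def pay_claim(self, loss: float, data_was_encrypted: bool) -> float:
        # Hypothetical differentiation: only half of the loss is recoverable
        # when the lost data was encrypted (lower exposure for the supplier).
        recoverable = loss * 0.5 if data_was_encrypted else loss
        payable = min(recoverable, self.remaining)
        self.remaining -= payable  # the pool is not replenished
        return payable

pool = EnhancedLiabilityPool(cap=5_000_000)
print(pool.pay_claim(4_000_000, data_was_encrypted=False))  # prints 4000000
print(pool.pay_claim(3_000_000, data_was_encrypted=False))  # prints 1000000 (pool exhausted)
```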


Once the ink is dry on a signed outsourcing contract, the real work begins for the customer and the service provider. Before the customer can start to realize any savings, efficiencies or service improvements, the parties must first complete the critical task of transitioning day-to-day responsibility for performing in-scope functions from the customer to the service provider. This transition process can take several weeks or even months.

Each party has a strong incentive to complete the transition on time. Ordinarily, the customer wants to start reaping the benefits of the outsourcing as quickly as possible. Likewise, the service provider wants to be in a position to start charging full-freight fees for steady-state services as soon as the transition is complete. In addition, appropriately structured contracts often include an additional incentive for timely performance by the service provider: monetary credits to the customer if transition milestones are not completed on time.

Competing with this need for speed (or, at the very least, on-time completion) is the customer’s desire to mitigate the operational risks associated with any transition. Complexities abound, especially if the transition involves multiple service towers and geographies, a transfer of personnel and assets, and a physical change in the location from which services are performed. The stakes are high for both parties. In the worst-case scenario for the customer, a hasty transition can result in an interruption or degradation of a critical business activity.


A recent survey conducted by Duke University’s Fuqua School of Business and the American Marketing Association yielded some interesting findings, including:

  • Social marketing budgets are anticipated to increase significantly over the next few years, possibly reaching 18% of total marketing budgets by 2015; and
  • 72% of companies had outsourced some aspect of their marketing programs, and 41% of companies expected to outsource more in 2011.


In Part 1 of this discussion we described two front-end challenges that, if not properly anticipated and addressed, can (and very often do) derail the successful completion of enterprise projects. We’ll now turn to the downstream transactional considerations that can help position a project for success.

The Right Contract Architecture

Customers often grapple with how to develop the appropriate contract for enterprise projects. A statement of work alone is not sufficient to cover all the complexities of the project. Instead, customers should consider entering into a master service agreement (MSA) with the supplier. In addition to establishing a contractual framework (e.g., the form and process for developing statements of work) and the terms and conditions, the MSA should address the governing principles or “rules of engagement” for project delivery. Project delivery, especially in a multi-supplier environment, has its own rules, which differ in many ways from those followed in typical managed service arrangements. Examples of rules of engagement might include: