Posted

Back in 1999, Kevin Ashton, the British technology pioneer and co-founder of the Auto-ID Center at MIT (creators of the global standard system for radio-frequency identification (RFID)), coined the term “the Internet of Things” to describe “uniquely identifiable objects (things) and their virtual representations in an internet-like structure.” Put simply, the Internet of Things refers to networks of everyday objects, such as phones, cars and household appliances, which are wirelessly connected to the internet through smart chips and can collect and share data.

Now, well over a decade later, the European Commission has issued an online questionnaire which seeks views on the future regulation of the Internet of Things. The Commission sees both opportunity and threat from the exponential growth of interconnected networks, with 50 billion wirelessly connected devices predicted by 2020: “The Internet of Things holds the promise of significant progress in addressing global and societal challenges and to improve daily life. It is also a highly promising economic sector for sustainability, growth, innovation and employment. But it is likely to have a profound impact on society, in areas like privacy, security, ethics, and liability.”

Predicting a future where everyday objects are linked, the Commission has started to gather views on how best to design and shape a regulatory framework which operates in an open manner, enabling a level playing field, whilst ensuring an adequate level of control over the connected devices gathering, processing and storing information. Views on privacy, safety and security, security of infrastructure, ethics, interoperability, governance and standards are sought. Responses to the questionnaire are requested by 12 July 2012. The Commission’s recommendation on the Internet of Things is expected to be published by summer 2013.

Posted

India’s recent demand for European Union designation as a data secure country (see our blog) has brought the issue into the spotlight. Here we take a closer look at those nations which have achieved EU recognition and the benefits of doing so.

Article 25.1 of the Data Protection Directive (enacted in the UK through the eighth principle of the Data Protection Act 1998) prohibits the transfer of personal data to a third country (i.e. a country or territory outside the EEA) unless that third country provides an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data. Several exceptions to this rule are available including, in particular, the use of the approved EC model clauses.

Data transfers to third countries can take place in many circumstances: for example, where an EU-based business relocates functions to subsidiaries outside the EEA, establishes an offshore shared service centre which processes HR or payroll data, transfers data for offshore processing as part of an outsourcing agreement with a third party supplier, or enters into a hosting or cloud computing deal. The onus is on the data controller to ensure that it complies with the eighth data protection principle in relation to any cross-border transfer of personal data.

The European Commission has designated a small number of third countries as providing adequate protection. The European Commission publishes a list of its decisions on the adequacy of protection of personal data in third countries, together with copies of the relevant Commission Decision and the Article 29 Working Party Opinion on which each decision is based.

The European Commission has so far recognised the following countries: Andorra (2010), Argentina (2003), Canada (2002), Faroe Islands (2010), Guernsey (2003), Isle of Man (2004), Israel (2011), Jersey (2008) and Switzerland (2000); the primary data protection laws considered by the Working Party to provide adequate protection being:

  • the Qualified Law on the Protection of Personal Data, 2003 (Andorra);
  • the Argentinean Constitution, Personal Data Protection Act No.25.326 and Regulation approved by Decree No. 1558/2001 (Argentina);
  • the Personal Information Protection and Electronic Documents Act, 2000 (Canada);
  • the Data Protection Act, 2001 (Faroe Islands);
  • the Data Protection (Bailiwick of Guernsey) Law, 2001 (Guernsey);
  • the Data Protection Act, 2002 (Isle of Man);
  • the Privacy Protection Act, 1981 (Israel);
  • the Data Protection (Jersey) Law, 2001 (Jersey); and
  • the Law on Data Protection, 1992, as amended by Swiss Federal Council ruling of 1993 (Switzerland).

The Commission’s webpage listing needs to be reviewed carefully: it appears to list Australia but, although there is an air passenger data transfer agreement between Australia and the EU, no general finding of adequacy applies to Australia. New Zealand, which the Article 29 Working Party found to be adequate in April 2011, is not yet listed, so we assume that the European Commission has not yet made an adequacy finding. The delay may be explained by the fact that, according to the Working Party, some concerns exist in connection with direct marketing and the oversight of data transfers. Israel’s adequacy decision was not without a few bumps along the way either: a formal objection from the Irish government (a political move linked to the alleged use of forged Irish passports by an Israeli intelligence agency in the assassination of a Hamas operative in Dubai in January 2010) required a full debate and vote, rather than the shorter written procedure which allows automatic adoption absent any objections from the member states. One can imagine that territories such as Guernsey, Jersey and the Isle of Man, which adopted data protection laws closely modelled on the UK’s Data Protection Act, enjoyed a rather smoother passage.

The US has not been deemed to provide adequate protection; however, personal data sent under the Safe Harbor scheme signed between the EC and the US government in 2000 is considered to be adequately protected. Not all US companies can qualify for the Safe Harbor programme – for example, companies in financial services, transport and telecommunications are excluded. There are also several international agreements to which the EU is a party which permit and require the transfer of passenger name records of all airline passengers (e.g. Canada in 2005, the US in 2007 and Australia in 2008).

The benefit of recognition is that personal data can flow from the 27 EU countries and three EEA member countries (Norway, Liechtenstein and Iceland) to a recognised third country without any further safeguard being necessary. The recognition process can however give rise to pressure brought to bear on the third country to undertake some remedial action. For example, whilst making a finding of adequate protection in the case of Argentina, the Working Party urged “the Argentinean Authorities to ensure the effective enforcement of the legislation at a provincial level by means of the creation of the necessary independent control authorities.”

What is not entirely clear is whether the Working Party monitors how its recommendations are dealt with, if at all, once a third country has obtained a decision of adequacy, and whether it monitors and reviews amendments to existing data protection regulations and the introduction of new ones, with a view to reaffirming (or otherwise) an adequacy finding.

As data protection and security are increasingly high on the corporate agenda, recognition itself may add a degree of comfort to the enterprise sending data to that third country (recognising, however, that adequacy is not the same as equivalence). As reported by the Economic Times of India, the Indian government believes that “recognition as a data secure country is vital… to ensure meaningful access in cross border supply.” Underlying this seems to be the fear, articulated by Ameet Nivsarkar, vice-president of Nasscom, that “European companies start insisting on a data secure status as a critical factor for giving business.” With the European Commission having recently announced its comprehensive reform of EU data protection rules, perhaps we will see an uptick in third countries looking to achieve an adequacy designation.

Posted

ZDNet blogger Michael Krigsman reported recently that nearly 70% of IT projects fail in some important way: an eye-popping number!

There can be endless debate on the actual failure rate of IT projects – the answer most likely depends on the criteria used to define “failure” – but a couple of points are clear:

  • An unacceptably large percentage of IT projects are not delivered on time or on budget or fail to produce the desired outcomes.
  • This is a chronic problem in the industry that has not materially abated over time despite extensive commentary describing “best practices” for reducing the incidence of failure.

Engaging an external supplier to perform an IT project, whether as part of a larger outsourcing or as a standalone initiative, adds a further layer of complexity. Not only does the customer need to focus on the internal challenges to achieving success (e.g., clear articulation of objectives / requirements, strong executive / project leadership, setting realistic expectations with users), it must integrate a third party – whose interests are not fully aligned with those of the customer – into the project.

Given the high failure rate of IT projects, customers are advised to spend the time required to negotiate contracts that provide appropriate protections against financial and contractual risks. While there are many elements that should be addressed, the following deserve special attention:

Specifications – the more detailed and specific the better. IT projects are typically priced on a fixed fee, capped time and materials or straight time and materials basis. Under a fixed fee or capped time and materials model, the pricing structures are intended to provide the customer greater certainty as to the cost of the project by shifting the risk to the supplier of the cost of performance. However, a fixed fee or cap is only as good as the specifications on which it is based. The same is true for committed completion dates. High level, inaccurate or incomplete specifications usually result in numerous unanticipated pricing increases and schedule adjustments through change orders.
To the extent practicable, customers should invest the time upfront (i.e. prior to committing to a particular supplier) to develop a detailed set of business and functional requirements for the project and include those specifications as part of the Statement of Work (SOW). This will enable the parties to establish realistic expectations concerning the cost, schedule and outcomes for the project at the outset.

There is one notable exception to bear in mind. Some IT projects – particularly those related to large ERP implementations – may not be susceptible to fixed or capped fee arrangements at the very early stages of the project (e.g., for the macro design or blueprinting stages). If sourced at the very early stages of these projects, a straight time and materials model may turn out to be the more commercially feasible approach for the front-end work. After the early phase work is completed, however, the supplier will have been intimately involved in framing the specifications and downstream requirements and, therefore, should be able to commit to fixed or capped fees for many (if not all) of the subsequent project phases (e.g., technical design, coding, implementation). See SourcingSpeak’s earlier post for further discussion of pricing and contract structures for ERP projects.

Change Control – even with detailed specifications, there will inevitably be some changes along the way. Some of these changes will have a meaningful impact on the supplier’s cost of performing the project and ability to meet scheduled completion dates. In these cases, the supplier should be entitled to reasonable adjustments to its fees and the project schedule. However, many changes are relatively minor (e.g., a modification to a screen layout) and, unless they cumulatively become significant, should be treated as “background noise.” Contractually, there should be a materiality standard – either a general principle of materiality or a defined threshold – before adjustments to specifications trigger adjustments to pricing or schedule. Absent a materiality standard, the customer may find itself subject to a seemingly endless series of incremental price increases and schedule delays through the change control process.

Acceptance – sign-offs and acceptances of interim deliverables should be provisional pending acceptance of the final integrated suite of deliverables for the project. Suppliers typically characterize interim deliverables (e.g., the technical design for a systems implementation) as subject to “acceptance” by the customer. That’s fine for purposes of determining whether a particular payment milestone has been met, but should not prejudice the customer’s right to ultimately accept or reject the final project deliverables (e.g., the fully implemented system) or to require the supplier to perform re-work on component parts that were previously “accepted” by the customer but later found deficient (e.g., through systems integration testing).

In most cases, the customer will not have sufficient knowledge or expertise to fully evaluate whether interim deliverables may have deficiencies that will impact the final integrated work product, particularly if the project involves the customization or implementation of a proprietary product of the supplier. (The one notable exception would be the customer’s business and functional requirements where it is generally reasonable for suppliers to expect the customer’s sign-off to be final). Because acceptance is an important demarcation point in determining the customer’s contractual rights and remedies, the contract should provide that the final set of project deliverables – including all previously “accepted” component parts – are subject to final acceptance or rejection by the customer.

Excuses – do not allow a general assumption that the customer will do everything it promises to do, or broad, open-ended statements in the SOW of the customer’s retained responsibilities. SOWs prepared by suppliers typically include overly broad statements of the customer’s responsibilities and a general assumption that the customer will properly perform all of those responsibilities in a timely manner. This assumption will almost always prove to be incorrect in some respect and, as a result, create uncertainty regarding the supplier’s accountability for successful completion of the project.

While suppliers should not be held responsible for unavoidable delays caused by customer performance failures, the circumstances under which the supplier is granted relief need to be carefully circumscribed.

  • Clear & Precise Definition – Care should be taken to describe with as much specificity as possible in the SOW the commitments and resources that are required of the customer to support the supplier’s performance of the project. Statements like: “The customer will provide timely access to all customer resources and subject matter experts required for completion of the Project . . .” should be avoided as it subjects the customer to open-ended obligations, which in turn offers fertile ground for the supplier to claim an excuse from performance (and a basis for an increase in price or change in schedule).
  • Prompt Notice – The supplier should be required to notify the customer promptly and in writing of the customer’s failure and its potential impact on the project. The notice should spell out the specific failure and its impact on the project. Besides being good project management (i.e. addressing issues sooner rather than later), the supplier should not be allowed to wait until the project has fallen off the rails to shift responsibility to the customer based on issues the supplier did not bother to bring to the customer’s attention when they arose.
  • Workarounds – The supplier should be required to use reasonable efforts to work around the customer’s performance failure and remain on schedule. In many circumstances, the supplier can easily work around a customer performance failure. For example, if the customer is a few days late in reviewing a particular deliverable, there is a good chance the supplier can fill those days with other project activities that are not dependent on the customer’s input on the deliverable.

Only if these conditions are satisfied should the supplier be entitled to an adjustment to the project schedule (and potentially project fees). The contract should provide that any adjustment be no more than is reasonably required to account for unavoidable delays caused by the customer’s performance failure.

Remedies – both “Plan A” and “Plan B” remedies should be addressed. When the supplier fails to produce conforming deliverables in a timely manner, the best option (Plan A) is usually to have the supplier correct the problem as quickly as possible at no additional cost. The contract should include mechanisms to ensure that the supplier is motivated to do so. While there are various approaches (e.g., holdbacks, milestone payments), if the failure causes schedule slippage, a common mechanism is to provide for financial credits for delay. The longer the delay, the larger the financial credit.
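To make the “longer the delay, larger the credit” mechanic concrete, here is a minimal TypeScript sketch of an escalating delay-credit schedule. The tier thresholds, weekly rates and cap percentage are entirely hypothetical placeholders; in a real deal these numbers are negotiated and written into the contract.

    // Hypothetical escalating delay-credit schedule (illustrative only).
    // Tiers raise the weekly credit rate as the delay grows; total credits
    // are capped at a percentage of the fees at risk.
    interface CreditTier { fromWeek: number; weeklyRate: number }

    const tiers: CreditTier[] = [
      { fromWeek: 1, weeklyRate: 10_000 }, // weeks 1-4 (example rate)
      { fromWeek: 5, weeklyRate: 25_000 }, // weeks 5 and later (example rate)
    ];

    function delayCredit(weeksLate: number, feesAtRisk: number, capPct = 0.15): number {
      let credit = 0;
      for (let week = 1; week <= weeksLate; week++) {
        // Use the highest tier this week has reached.
        const tier = [...tiers].reverse().find(t => week >= t.fromWeek)!;
        credit += tier.weeklyRate;
      }
      return Math.min(credit, feesAtRisk * capPct); // credits are capped, not open-ended damages
    }

    // Example: six weeks late on a $2m project => 4 x 10,000 + 2 x 25,000 = 90,000
    console.log(delayCredit(6, 2_000_000));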

If the problem is big enough and the delay is long enough, the customer may lose confidence in the supplier and decide to invoke Plan B, which is to pull the plug on the supplier’s continued participation in the project (or cancel the project in its entirety). There are gradations to Plan B remedies ranging from:

  • Engaging a third party to complete the project (where feasible) with the supplier being responsible for incremental costs incurred by the customer.
  • Accepting the deliverables in their deficient state and equitably adjusting the supplier’s charges to reflect their diminished value.
  • Rejecting the deliverables and seeking a refund of amounts paid to the supplier.

It is preferable that the appropriate remedies be spelled out in the contract rather than left to the murky realm of common law remedies.

* * * * *
The contractual protections outlined above are not intended as a substitute for following non-contractual “best practices” for avoiding project failures. When engaging a supplier to perform a project, however, securing appropriate contractual protections is an essential element of risk mitigation.

Posted

The topic of the day appears to be “big data,” meaning the aggregation, mining, and analysis of data. These analytics help companies build customer profiles so that they can tune their offerings and sell more of the right things to the right customers. As recently reported in the New York Times Magazine, Target, through the use of such analytics, was able to determine from a teen’s purchases that she was pregnant before her father knew. This allowed Target to adjust its coupon offers based on its knowledge of the buying practices of mothers-to-be. But at what cost do these analytics come?

Caribou Honig, writing on Forbes.com, makes a case “In Defense of Small Data” that collecting, storing, and processing mounds of data is costly and provides no more–and perhaps less–useful data than analyzing only the limited data set that really matters. In addition, storing this volume of data has its own direct costs.

And this is only half of the story . . . There are also legal costs and risks to big data.

With every item of data collected and retained comes increased data privacy risk. Nearly every state and the District of Columbia has a data breach law that requires companies to take affirmative actions in conjunction with any release of personally identifiable information. The net result is that if information is improperly disclosed, a company can face huge financial and reputational risk. Any time a company collects more data, it increases its risk of disclosure of personally identifiable information. This means that added security is required, additional insurance may be required, and there is still the risk of a disclosure. These problems are compounded in the international space, where different countries have laws that are even more stringent than those in the US about how personally identifiable information can be used – particularly without the consent of the relevant individual.

Even seemingly anonymous data can become personally identifiable. For example, as noted in a study by CIO Magazine, a mobile application that uses anonymous geolocation data can pretty readily identify me simply because most of the time, I am either at my home or at my office–that combination alone is likely sufficient to uniquely identify me.
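A minimal sketch of why that is, under toy assumptions: round each user’s usual night-time (home) and daytime (work) locations to coarse grid cells and count how many records share each cell pair. The dataset, the grid size and the field names below are hypothetical, purely to illustrate the quasi-identifier effect.

    // Illustrative sketch: a (home, work) location pair acts as a quasi-identifier.
    // Coordinates are rounded to roughly 1 km cells; even so, few people share a pair.
    type Point = { lat: number; lon: number };
    type GeoRecord = { id: string; home: Point; work: Point }; // "anonymous" id only

    const cell = (p: Point, precision = 2) =>
      `${p.lat.toFixed(precision)},${p.lon.toFixed(precision)}`; // ~1 km at 2 decimal places

    function anonymitySetSizes(records: GeoRecord[]): Map<string, number> {
      const counts = new Map<string, number>();
      for (const r of records) {
        const key = `${cell(r.home)}|${cell(r.work)}`; // home + work cell pair
        counts.set(key, (counts.get(key) ?? 0) + 1);
      }
      return counts; // a count of 1 means that pair points to exactly one person
    }

    // Hypothetical records: the cell pairs rarely collide across a population.
    const sample: GeoRecord[] = [
      { id: "u1", home: { lat: 40.7420, lon: -73.9890 }, work: { lat: 40.7060, lon: -74.0086 } },
      { id: "u2", home: { lat: 40.7423, lon: -73.9885 }, work: { lat: 40.7590, lon: -73.9845 } },
    ];
    console.log(anonymitySetSizes(sample)); // most keys map to 1 => re-identifiable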

Of course, the proponents of “big data” are also correct. There is a big benefit in mining data to better target a company’s offerings (frankly, I would rather get banner ads that are relevant to me than completely irrelevant ones, and I am more likely to click through them), but when forming a data analytics strategy a company also needs to focus on the cost side of the equation, and should only collect and process data that really provides a commercial benefit. There is no single right answer; this is a commercial balance between business benefit and risk.

As they say, “bigger isn’t always better.” Sometimes, the smallest bowl of porridge is just right.


Posted

According to a report in the Economic Times of India, the Indian government has demanded that the European Union designate her as a data secure country. The request came in the context of the current bilateral free trade agreement negotiations. An Indian government official is reported as saying, “Recognition as a data secure country is vital for India to ensure meaningful access in cross border supply.” The official goes on to state that “we have made adequate changes in our domestic data protection laws to ensure high security of data that flows in.”

Seasoned India-watchers may disagree. Traditionally India has had no dedicated privacy or data protection laws, with various statutory aspects scattered across a number of enactments, such as India’s cyber law, the Information Technology Act 2000. In 2011, India finally enacted the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011 to implement parts of the Information Technology (Amendment) Act 2008. The 2011 Rules cover a subset of personal data (referred to as sensitive personal data, although unhelpfully the meaning of this term differs from that used in the Data Protection Directive) and lay down security practices and procedures that must be followed by organisations dealing with such sensitive personal data.

The 2011 Rules were broad in scope and ambiguously drafted. The impact on the outsourcing sector was unclear and subsequent clarifications had to be rushed through by the Indian government. These clarifications helped somewhat but were still found wanting, with one commentator describing them as “half baked.”

The EU’s Data Protection Directive permits personal data to be transferred to third countries (i.e. countries outside of the EEA) if that country provides an adequate level of protection. The current list covers only a handful of countries including Canada, Switzerland and Jersey, and more recently New Zealand. The US is not deemed adequate but personal data sent under the Safe Harbor scheme is considered to be adequately protected. India is not deemed to offer adequate protection. Accordingly it has become standard practice to use the approved EC model clauses wherever EU-based outsourcing involves data transfer and offshore processing in India. These clauses, which provide an alternative lawful means of data transfer, place strict obligations on both parties to ensure privacy of data and are considered by some to be onerous and to act as a disincentive for business.

Thirty percent of India’s $100-billion IT and business process outsourcing industry comes from customers based in the European market. Industry representatives are keen for India to defend and grow her share of the European outsourcing market, although for the time being it is worth pointing out that none of her main competitors, such as China, the Philippines, Singapore and South Africa, has achieved data secure nation status. As reported in the Economic Times of India, according to Ameet Nivsarkar, vice-president of Nasscom, the trade association which represents the Indian software industry, “if European companies start insisting on a data secure status as a critical factor for giving business, it will become a very important criterion for perception of a country. Nonetheless, most of our companies adhere to very high level of data security.”

India has a strong track record of performing low-end data processing but desires to move up the value chain into more sophisticated outsourced work in sectors such as healthcare, clinical research and engineering design. Achieving data secure nation status will support this; the process, however, is a relatively arduous, and potentially political, one involving:

  • a proposal from the Commission,
  • an opinion of the Article 29 Working Party,
  • an opinion of the Article 31 Management Committee delivered by a qualified majority of Member States,
  • a thirty-day right of scrutiny for the European Parliament, to check if the Commission has used its executing powers correctly, and
  • the adoption of the decision by the College of Commissioners.

It will be interesting to see how the EU reacts to India’s demands, especially given the current proposals to reform EU data protection legislation in order to strengthen individual rights and tackle the challenges of globalisation and new technologies. Uruguay, Australia and Japan are all ahead of India, being at different stages of advancement in the process. One thing seems clear – India will need to ensure her data protection laws and enforcement regime will stand up to EU scrutiny if she is serious about wanting to join the small but growing club of nations with EU data secure status.

Posted

Transition Services Agreements (TSAs) have become common (and more complex) in corporate divestitures, mergers, and spin-offs due to the increasing operational complexity of the environments impacted by these transactions. And if M&A activity increases as expected, despite a slow start in 2012, these agreements will continue to play an important (but often undervalued) role in the success of the transaction (especially after the closing dust settles).

Transition services typically are provided by the seller to the buyer (or by the former parent to the spun-off enterprise) to ensure business continuity and interim operational support for the impacted business during a “transitional” period after closing. Transition services may also be required from the buyer or divested enterprise where, for example, commingled tools, operations, software products, and know-how need to be leveraged by the seller or former parent for some period of time. These “reverse” transition services are often overlooked.

In effect, transition services are a form of outsourcing where the processes that were previously handled internally are performed by the formerly affiliated enterprise during the transition period. Sounds simple, right? Isn’t it just maintaining the status quo for a short time?

Well, most companies have learned (some the hard way) that it’s not always so simple. For example, what happens (as is often the case) where the services are actually provided by a third party service provider (an outsource provider) under a contract with the seller and now need to be extended to the newly unaffiliated enterprise? Does the third party agreement allow for services to be extended to an unaffiliated party? Is consent required? Will it impact the price of these services under the agreement? And what about the underlying factors of production, such as the enterprise licenses needed to provide the services – do those licenses allow for this use? You get the picture.

For these reasons the tendency to put the TSA on the backburner until the deal is nearly closed has proven to be a formidable pitfall in many transactions. If you are the buyer, imagine how sales might be impacted if basic blocking and tackling for the sales support of your new company is not in place right after closing. Suppose, for example, it turns out that a mobile device (an iPad, perhaps) is not connected to the database in which key sales data resides? Or maybe there is access to the database but the server is down and (oops) there is no support or helpdesk to resolve the problem quickly? Scenarios such as these must be carefully considered and addressed in the TSA just as they would be in an outsourcing.

Don’t Wait
Whether you are the buyer or the seller, you should treat the TSA as an integral part of the overall deal negotiation. Negotiating the TSA at the eleventh hour could:

  • limit the leverage of the buyer (for favorable terms and reasonable service pricing),
  • introduce surprises for the seller that could delay closing (e.g., “you never told me you needed 24×7 help desk support for 6 months”), and
  • add financial risk to the deal without a corresponding change in price.

Negotiating the TSA in parallel with the purchase agreement helps avoid these pitfalls, particularly in a competitive bid process.

Know Where You Stand and Plan Ahead
Similar to outsourcing arrangements, it is important to perform proper due diligence to avoid unpleasant surprises. For example, software licenses need to be examined when IT services are involved. If software licenses are part of the assets being sold, some combination of the purchase agreement and the TSA must address the license transfer process, including who pays for the cost of any consents from the vendors, and who bears the financial risk for licenses that can’t be transferred and need to be acquired.

There also is the wrinkle of determining whether software licenses need to be leveraged to allow for delivery of the transition services. For example, many enterprise licenses don’t permit the use of software for “service bureau” purposes where the software is used to provide a service to an unaffiliated third party. For these reasons, early and informed diligence must be integrated with the transition services process to ensure that the intended transition strategy can be executed effectively.

The relationships formed by a TSA generally are out of necessity, not convenience, and usually are focused on the minimum operational needs of the impacted business over a short period of time. However, even in the short-term, needs change and the unexpected can occur. While the service provider might not be expected to accommodate significant changes, the TSA should include some measure of flexibility to address the reasonable needs of the service recipient’s business – perhaps in the form of a protocol for adding a new, short term service or allowing for a modest extension of the transition term with advance notice.

So What’s the Key Takeaway?
Take the TSA seriously and plan for it early and in an informed manner. The issues highlighted above offer only a glimpse into the complexity you should be prepared to consider to ensure that the transition is effective and without surprises for either side. If you are a seller, resist the urge to postpone preparation for transition services until you’ve selected a buyer. If you are a buyer, at a minimum outline the likely scope of what you’ll need for transition operations as part of your diligence process. By approaching the TSA as an abbreviated form of outsourcing, the seller can mitigate the risk of potential decreases in the sale price (and a delay in closing), while the buyer will be in a better position to avoid the risk of service interruptions or, even worse, a dilution of the synergies that inspired the acquisition in the first place.

Posted

Not too long ago a major supplier asked us what we are seeing in the cloud space. We thought the interchange might be of interest to readers of the blog — so here are some selected questions and our responses.

What impact have you seen, or do you expect to see, the Cloud having on the CIO agenda?
We’ve seen:

  • Some amount of talk, but not a great deal of action.
  • Some interest in internal IT deployment, especially for test.
  • Concern over multi-tenancy security, privacy and compliance.
  • Interest in standardization opportunities, but concern about the cost and other obstacles of converting all or segments of the installed base.

Expect but haven’t seen:

  • New apps that take advantage of Cloud fungibility and on-demand capabilities.
  • Movement of apps with high peak-to-average utilization ratios from existing infrastructure to the Cloud (likely because the infrastructure capital cost is sunk); a rough screening sketch follows below.
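
As a rough illustration of the kind of screen we mean, here is a TypeScript sketch that flags applications whose peak-to-average utilization ratio exceeds a threshold as candidates for on-demand capacity. The utilization series, the 3x threshold and the application names are hypothetical stand-ins for whatever monitoring data a client actually has.

    // Illustrative screen: apps with a high peak-to-average utilization ratio tie up
    // the most idle dedicated capacity and benefit most from on-demand provisioning.
    function peakToAverage(utilization: number[]): number {
      const peak = Math.max(...utilization);
      const avg = utilization.reduce((a, b) => a + b, 0) / utilization.length;
      return peak / avg;
    }

    const apps: Record<string, number[]> = {
      // hypothetical hourly CPU utilization samples (%)
      payrollBatch: [5, 5, 5, 90, 95, 5, 5, 5],        // spiky: month-end runs
      intranet:     [30, 35, 40, 38, 32, 30, 28, 33],  // steady
    };

    const THRESHOLD = 3; // made-up cut-off for illustration
    for (const [name, series] of Object.entries(apps)) {
      const ratio = peakToAverage(series);
      console.log(`${name}: peak/avg = ${ratio.toFixed(1)}`,
                  ratio > THRESHOLD ? "=> cloud candidate" : "=> leave in place");
    }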

What are you seeing with client adoption of Cloud solutions (what’s hype and what’s not)?
Plenty of hype from industry participants – not much hype from buyers. The first level of maturity for most buyers appears to be cost reduction and maybe instant provisioning (although outside of test, we haven’t heard anyone say why they need this). Cost reduction is not really dependent on the Cloud (though it is a catalyst); it is driven more by software stack pruning and standardization. Clients are looking for lower unit operating costs for standardized images from services suppliers.

Where and what are some of the pitfalls and shortcomings of Cloud, and how might clients work around or be more aware of them?

  • “Scared Cloudless” – suppliers need vertical domain stories and solutions to begin to overcome this. Overcoming fear would be helped by actual, proven, vertical community Clouds.
  • A lack of understanding as to what it will cost to get the installed base from the As-Is to the To-Be Cloud. Until suppliers can show infrastructure portfolio stratification and outcome rules-of-thumb, buyers won’t seriously listen to suppliers on switching costs or the expected results and benefits.

What do you believe advisors should be doing to aid in understanding of the opportunities to switch from conventional infrastructure to solutions with Cloud underpinnings?
We need a much better understanding of the switching problem. Advisors need to know what it will cost to switch, how long it will take and what work the client will need to do. We need to be able to create models that can reasonably predict results in a top-down fashion in a couple of weeks. Bottom-up, months-long studies are inconsistent with the sourcing process. We also would like to see the supplier community be prepared to step up to proposing the price and outcomes of transformations/transitions from As-Is configurations to the To-Be Cloud.

Posted

Since the start of the 112th Congress, there has been a heightened focus on cybersecurity. Congress has not passed new cybersecurity-related legislation since 2002, when the Federal Information Security Management Act was enacted. In 2011, the Obama Administration announced its cybersecurity proposal, and a number of bills currently active in both the House and Senate focus on different aspects of cybersecurity and the mechanisms to protect private infrastructure and networks against cyber threats. One of the major philosophical differences between the various bills is which government entity should be responsible for cybersecurity – the Department of Homeland Security (DHS) or the National Security Agency (NSA). The Administration’s proposal favors DHS over NSA.

The most widely supported proposal is the bipartisan Cybersecurity Act of 2012 sponsored by Sens. Joe Lieberman (I-Conn) and Susan Collins (R-Maine). The hallmarks of this Bill are the requirement that companies notify DHS of intrusions into their networks and the creation of mandatory compliance with industry-specific cybersecurity standards. Senator John McCain (R-AZ) has a competing bill in the Senate, the Secure IT Act (S.2151), that focuses on self-regulation by the private sector rather than imposing government standards.

In the House, there are three notable active bills: (i) the Secure IT Act (H.R. 4263), (ii) the Promoting and Enhancing Cybersecurity and Information Sharing Effectiveness Act (“PRECISE Act”) (H.R. 3674), and (iii) the Cyber Intelligence Sharing and Protection Act of 2011 (H.R. 3523). The House Secure IT Act was introduced on March 27, 2012, and mirrors Sen. McCain’s version of the bill. The two other bills set cybersecurity standards for critical private networks and focus on information sharing mechanisms between the government (notably the NSA) and internet service providers so that threatening traffic can be blocked before causing harm.

Ultimately, it seems unlikely that any major cybersecurity legislation will pass during this session of Congress given the current election cycle. However, the recent activity on the Hill highlights to private industry areas where significant cyber improvements are warranted. Information sharing between government and the private sector, as well as compliance with certain baseline security standards for privately held infrastructure, are perhaps the most prominent topics. The potential for legislation, on top of existing requirements like the SEC’s cybersecurity disclosure guidance, demonstrates that, for private industry, focusing only on the technical aspects of cybersecurity is likely to be insufficient. Companies will need to understand the evolving legislative and regulatory requirements with which they must comply and build compliance into their operations.

Posted

In 2009, the EU issued Directive 2009/136/EC of the European Parliament. The Directive concerns the ‘regulatory framework for electronic communications networks’ and includes what has come to be known as the “EU Cookie Rule,” although the part concerning the use of cookies is just a small part of the whole Directive. Other articles of the Directive cover accessibility for disabled users, the provision of public telephones, and the universality of affordable internet connections at a reasonable connection speed.

All EU Member States were to have implemented new laws to comply with the Cookie Rule by May 26, 2011, but not all have. In the case of the UK, the Directive was implemented and the government immediately suspended enforcement for 12 months to provide organizations with time to comply. We’re now about 10 weeks from May 26, 2012, when websites selling goods or services to individuals in the UK must comply with the UK implementation of the Cookie Rule or face investigation by the Information Commissioner’s Office with the potential for fines of up to £500,000.

If you operate a website that provides goods or services to residents of the EU, and the UK in particular, you should, before May 26, 2012, download and read the UK ICO’s Guidance on the New Cookies Regulations (the “Cookie Guidance”), which sets out the steps you need to take now to ensure you comply. In particular, you should (if you haven’t already):

  • Inventory all of your organization’s websites that provide goods/services to EU residents; and
  • Audit each of those websites and determine:
      • what kind of cookies (and other similar technologies) are being used and for what purposes;
      • which are 1st party cookies and which are 3rd party cookies;
      • if any are persistent cookies, how long they last; and
      • which of those cookies are “essential to the operation of the service” and/or “explicitly requested” by the data subject.

Once you have done all that, you’ll need to verify that your website’s privacy policy accurately describes your practices with regard to the use of cookies, determine how best to inform consumers about those practices, and determine how best to obtain consent from your users – all before the May 26, 2012 deadline. Unfortunately, what it means to properly provide information and obtain consent in this area can be complicated, so you’ll need to consult legal counsel as well.
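
For teams implementing this on a site, one common pattern is to classify each cookie from the audit as essential or non-essential and only set the non-essential ones once consent has been recorded. The TypeScript sketch below illustrates that pattern; the cookie names, categories and consent mechanism are hypothetical, and the Cookie Guidance, not this sketch, is the authority on what counts as “essential” and what amounts to valid consent.

    // Browser-side sketch: gate non-essential cookies on recorded consent.
    // Cookie names and categories are hypothetical examples from an audit.
    type CookieCategory = "essential" | "analytics" | "advertising";

    const audited: { name: string; category: CookieCategory; set: () => void }[] = [
      { name: "session_id", category: "essential",
        set: () => { document.cookie = "session_id=abc; path=/"; } },
      { name: "analytics_id", category: "analytics",
        set: () => { document.cookie = "analytics_id=xyz; path=/; max-age=31536000"; } },
    ];

    function hasConsent(category: CookieCategory): boolean {
      if (category === "essential") return true; // strictly necessary cookies are exempt
      // Consent previously recorded (e.g. via a banner) in a first-party cookie.
      return document.cookie.includes(`consent_${category}=yes`);
    }

    function applyCookiePolicy(): void {
      for (const c of audited) {
        if (hasConsent(c.category)) c.set();
        // Non-essential cookies without consent are simply never set.
      }
    }

    applyCookiePolicy();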

Posted

After deciding recently that peeking cautiously at quarterly brokerage statements might not be the best investment strategy, I can now say that while I’ve been sleeping at the investing switch for the last couple of years, innovation has been working overtime.

Having scoffed for a while at what “good paying green jobs” might have meant, I found it didn’t take a lot of poking around in the battery, fuel cell, natural gas and chemical industries to paint a more vivid and alluring picture. As an investor waking up from a long hibernation, I only wish this was a party where I had shown up unfashionably early.

Despite most of us having spent the last few years of the economic meltdown hunkered down, reducing our expenses and keeping a low profile, there have been some brave souls that have been hard at work reinventing how the world might work in this century.

Take for instance a company that calls themselves a mobile application studio – Chaotic Moon. It takes guts (and success) for them to promote their services by saying they’re smarter than you, they’re more creative than you and they can make you more money. While I’m not personally interested in brainwave controlled skateboards, I was very interested to read what Wired, PCWorld and GeekWire (hey, an investor has to seek high and low for good opportunities), had to say about them helping Whole Foods develop the shopping cart of the future.


Do I really need a shopping cart that follows me down the aisles? I wouldn’t have thought so, but given the skinny aisles in my local grocery store and the added benefit of not having to worry about circumnavigating stock boys, small children and those whose only job appears to be getting in everyone else’s way, maybe my visceral reaction was all wrong. Add in the bonus of not having to correct for wheels that inevitably pull to the side or having to apologize for smacking my cart directly into someone while I’m busy scanning the shelves for some product that I’m too embarrassed to ask where it’s located, and grocery shopping begins to look more like toy shopping from a kid’s perspective.

Think about it. Self navigating, self powered, shopping carts that know where to find the items on your list, know what you’ve added, tell you if you’ve added the wrong item (say, frozen rather than fresh broccoli – without having to wait for your spouse to tell you after you’ve gotten home) and can perform the checkout without having to take everything out of the cart. That smells like the future is about to arrive.

So what’s involved? I guess we’ll all have to wait until MJ from iFixit gracefully performs the official teardown, but in the meantime, it appears the Smarter Cart consists of a basic shopping cart that’s been modded to include a Windows 8 tablet, a Kinect sensor, a barcode scanner, a battery, a motor, a whole lot of software (locally and in a cloud or two) and a speech-based interface like Apple’s Siri that hopefully won’t make too many smart remarks about my food choices or mock me with synthesized tsk-tsk noises.

Will it work? Maybe. Others have tried aspects of this before. Like IBM’s Shopping Buddy.

What I like about it as a consumer is the chance to improve the grocery shopping experience. What I like about it as an investor is the knowledge that innovation is still alive and kicking all over, not just at 1 Infinite Loop. What I like about it as an outsourcing advisor is that it seems like things are about to get really interesting.
