

In the wake of some extreme weather during 2011 (earthquakes, tsunamis, tornadoes, hurricanes, and mudslides), what better time to review your disaster recovery and business continuity (DR/BC) solution and planning processes?

In some cases, DR/BC planning is a legal or regulatory requirement, but even where it is not, common sense argues for a sound DR/BC plan for any business. Why?

  • For most businesses, the dependency on computer systems, applications, databases, networks and electronic delivery systems increases daily – to the point where the efficiency and productivity of the business would drop precipitously if these tools were not available.
  • As more and more of a business’s computer systems become interconnected, the possibility that a failure in one system will ripple across and adversely impact other systems increases.
  • With the advent of cloud and distributed computing technologies, information systems frequently rely on computing infrastructure that is geographically dispersed, requiring a detailed analysis of the risks associated with service interruption in multiple locations.

What are the key activities and decision points for establishing a sound disaster recovery and business continuity plan? DR/BC planning can be boiled down to three key steps.

  • Develop a DR/BC plan based on business requirements. A DR/BC plan is a detailed description of actions to be taken in the event of a disaster to support the continuity of your business and operations. It addresses the recovery of all resources needed to run your business, including people, systems and data. To inform the development of a DR/BC plan, start with a business impact analysis of each discrete business function: how does this function support the delivery of your business to customers, what other business functions does it depend on, what other business functions depend on it, and what resources are required to ensure that the function can operate? Each business function can then be ranked in order of importance, and the appropriate recovery processes and timeframes can be established. Security measures should also be addressed to ensure the safeguarding of the company’s systems and sensitive data even in the event of a disaster.
  • Test the DR/BC plan on a regular basis. Periodic, comprehensive testing of the DR/BC plan is critical to its success. Even very thorough business impact analyses often miss critical interdependencies or make assumptions that prove false in real life. Testing your plan – whether through tabletop exercises that “talk” through the steps of activating and executing the plan or full-fledged drills that include activating DR facilities and systems and reconstituting operations – often uncovers unexpected circumstances that were not accounted for in the plan, or procedures that were not adequate to meet the business needs. Document what worked well and what did not via a structured post-exercise evaluation and debriefing, and use the lessons learned to enhance the plan going forward. Regular testing also provides a means of training (and refreshing) the impacted personnel on the DR/BC plan.
  • Update the DR/BC plan as necessary to account for changes in your business or technologies. In order to be effective, the DR/BC plan cannot be developed and then put on a shelf. Given the pace of technology and business process change in today’s competitive marketplace, a good DR/BC plan will remain sound and relevant only if it is viewed as a “living document” – continuously updated to reflect how your business operates today and will need to operate the day after the disaster happens.

How does outsourcing impact DR/BC planning? Outsourcing does not mean that a company needs to accept lower standards for disaster recovery and business continuity. In fact, outsourcing can offer the opportunity to improve a company’s DR/BC solution by using the vendor’s more established DR/BC services, and because the outsourcing process itself may shed light on current weaknesses. There are, however, certain complicating factors to consider when defining DR/BC services to be provided by an outsourcing relationship.

  • Will the vendor’s standard DR/BC solution meet your requirements? The question isn’t whether an outsourcing vendor offers a DR/BC solution, but whether its standard solution meets your company’s requirements. You should engage the vendor in a frank and open dialog about what your requirements are (yes, you will need to be able to articulate them!) and what their solution provides, in order to identify and resolve any gaps (for example, recovery time requirements). One key factor to consider: the more customized the solution, the more likely there will be additional costs associated with implementation (including the purchase of additional infrastructure, if needed, which you may decide to purchase yourself or have the vendor provide) and ongoing maintenance and operation. Also, interestingly, we find that vendors are sometimes hesitant to share details of their solution. This is a red flag – if they can’t or won’t clearly describe their solution and commit it to writing, you may need to consider alternatives for your DR/BC solution.
  • Will you use a third party DR/BC provider for any part of the solution? There are many third parties that specifically offer disaster recovery services, ranging from full service solutions to simply providing reserved facility space that a company can use in the event of a disaster. If you choose to use a third party disaster recovery vendor in addition to an outsourcer for your regular production systems, then consider (and align with both vendors on) where the responsibility hand-offs and demarcations are between the outsourcing vendor and the DR/BC vendor.
  • How will the vendor’s DR/BC Plan integrate with your overall DR/BC Plan? In today’s world of narrowly-scoped outsourcing arrangements and multi-vendor environments, it is often necessary for a company to maintain an overall DR/BC plan to ensure that all aspects of its operations can be recovered in the event of a disaster. Similar to when integrating with a third party DR/BC provider, when integrating the outsourced vendor’s DR/BC solution into your overall DR/BC plan it is absolutely crucial to understand the responsibility hand-offs and demarcations.

Regardless of what DR/BC solution you end up with as part of the outsourcing, make sure it is well described and documented in the contract. Key topics to address, in addition to a general solution description of where and how the services will be recovered, include:

  • Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for each of the services. These measure, respectively, how quickly the vendor will have the service back up and running and, when data is involved, how current that data will be when recovered. You may have a single RTO and RPO expectation for all of the outsourced services, or they may range based on the criticality of the service. Regardless – get them documented (and consider whether you want service level commitments tied to them).
  • Timeframe for implementing a custom DR/BC Plan. If you do require a customized DR/BC solution, make sure that there is alignment on how quickly the vendor will implement the solution. Clearly this should be prior to the completion of the transition of the services to the vendor, but it is important to consider where in the transition timeline this should take place. For example, some companies may prefer to have the backbone of the solution in place prior to the start of the general transition of the services, so that it can be tested in conjunction with the overall transition.
  • Testing frequency and your ability to participate. As previously mentioned, periodic testing of the DR/BC solution is crucial both to confirming that the solution continues to meet the DR/BC requirements and to providing valuable practice so that vendor personnel know what to do if a disaster actually occurs. In fact, it is such an important activity that you, as the customer, may want to actually participate in the testing so you can verify for yourself that the solution works.
  • Vendor’s participation in your company’s overall DR/BC testing. If you do have an overall DR/BC plan that requires integration with the outsourced vendor’s solution, then you also might require the vendor to participate in your testing of your own DR/BC plan. This might be separate from the vendor’s testing of their plan – or you might require that they align their tests to occur along with your own tests so that the full, end-to-end recovery process can be reviewed.
  • Integration with Force Majeure provisions. Most contracts include standard force majeure provisions – if a “force majeure event” occurs, there is some level of excused performance for the party affected by the event and (often) termination rights for the other party if the event affects performance for a prolonged period of time. You should be careful, though, that the force majeure provisions do not excuse the vendor from performing the agreed DR/BC services. If this carve-out is not clear, then you run the risk that the force majeure provision may in fact nullify any obligation the vendor might have to actually perform the DR/BC services in the event of a disaster that constitutes a force majeure event (I can’t think of any circumstance when this would ever be the intent!).
  • Additional fees, if any, for the DR/BC services. As mentioned above, DR/BC services may incur additional fees – most commonly when the vendor is providing a custom solution. Make sure that these are known and included as part of your business case analysis, and then documented appropriately as part of the charges under the contract.
  • Parameters governing the declaration of a disaster (and proper escalation notifications and communications). Who is responsible for declaring a disaster? What is the communication plan in the event of a disaster? Who (you or the vendor) is responsible for making each of the communications?
  • Service Level or other performance expectations during a disaster. Many vendors will want performance relief in the event of a disaster. If the agreed DR/BC solution does not permit compliance with the steady state service levels, then consider negotiating separate service levels that do reflect the expected level of service during a disaster.
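
To make the RTO/RPO distinction concrete, here is a minimal sketch of how an actual recovery event could be checked against those two targets. All of the timestamps and thresholds below are hypothetical illustrations, not values from any real contract:

```python
from datetime import datetime, timedelta

def check_recovery(outage_start, service_restored, last_good_backup,
                   rto: timedelta, rpo: timedelta) -> dict:
    """Compare an actual recovery against contractual RTO/RPO targets.

    RTO: how quickly the service must be back up after the outage begins.
    RPO: how current the recovered data must be (maximum tolerable data loss).
    """
    downtime = service_restored - outage_start   # time taken to restore service
    data_loss = outage_start - last_good_backup  # age of the recovered data
    return {
        "rto_met": downtime <= rto,
        "rpo_met": data_loss <= rpo,
        "downtime": downtime,
        "data_loss": data_loss,
    }

# Hypothetical disaster: outage at 02:00, service restored at 05:30,
# last replicated data point at 01:45.
result = check_recovery(
    outage_start=datetime(2011, 9, 1, 2, 0),
    service_restored=datetime(2011, 9, 1, 5, 30),
    last_good_backup=datetime(2011, 9, 1, 1, 45),
    rto=timedelta(hours=4),      # contract: restore within 4 hours
    rpo=timedelta(minutes=30),   # contract: lose at most 30 minutes of data
)
print(result["rto_met"], result["rpo_met"])  # prints: True True
```

Note that a vendor can meet its RTO while badly missing its RPO (or vice versa), which is why both figures belong in the contract.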

Remember, the ultimate goal of proper disaster recovery and business continuity planning is to permit the recovery of your business by minimizing the loss and downtime sustained by your company’s operations in the event of a disaster. Having a plan that is sound, up-to-date, and ready to be implemented will allow you to start off the new year on the right foot!


Because evaluating a service provider’s security posture is more challenging in the cloud, in Part Three of this article we looked at ways to evaluate a cloud service provider’s security prior to signing the contract and some of the issues between customers and suppliers created by the SEC Guidance. In Part Four we’ll look at ways to monitor the provider’s security during the term of the agreement.

Auditing Security

For years, customers of outsourced IT services have asked providers for a copy of their SAS 70 Type 2 audit report as a means of evaluating a supplier’s security. The SAS 70 was never designed to be a security audit and isn’t well suited for this, but in the absence of a more security-specific standard it served as a suitable proxy.
Recognizing the need for a more security-specific audit, in mid-2011 the American Institute of Certified Public Accountants (“AICPA”) established a Service Organization Controls (“SOC”) reporting framework in the hope of providing the public and CPAs with a clearer understanding of the reporting options for service organizations.

Additionally, the AICPA sought to reduce the risk of misuse of SSAE 16, which recently superseded SAS 70, as a mechanism for reporting on security, compliance, and operational controls.

To achieve these goals, the AICPA released the following reporting framework:

  • SOC 1: Reporting on Controls at a Service Organization Relevant to User Entities’ Internal Control Over Financial Reporting (also known as the “SSAE 16”)
  • SOC 2: Reporting on Controls at a Service Organization Relevant to Security, Availability, Processing Integrity, Confidentiality, or Privacy
  • SOC 3: SysTrust for Service Organizations

Of the three, SOC 2 is the only new type of examination. Since the SSAE 16 has replaced the SAS 70, service providers are already offering to share their SSAE 16 audit report with customers. However, between the two reports, the SOC 2 is really what customers need to see to evaluate a cloud provider’s security. Unlike SOC 1 (SSAE 16), the focus of the SOC 2 is on controls related to security, compliance, and operations, rather than controls relevant to financial reporting. SOC 3 reports review a service provider’s controls related to security, availability, processing integrity, confidentiality, or privacy but do not provide the same level of detail as provided in a SOC 2 report. If data or processes that could create a material risk to the customer will be going into the cloud, then customers should expect to see both a service provider’s SOC 1 and SOC 2 reports. If the data going to the cloud is not sensitive and/or the processes going to the cloud are not important to a company’s operations (i.e., they don’t create a risk to the company), then it may be acceptable for a cloud provider to provide a SOC 3 instead of a SOC 2.
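
The report-selection guidance above can be boiled down to a short decision sketch. This encoding is our illustration of the rule of thumb, not AICPA guidance, and its inputs (whether the data is sensitive, whether the process is material to operations) remain judgment calls for the customer:

```python
def soc_reports_to_request(sensitive_data: bool, material_process: bool) -> list:
    """Which SOC reports a cloud customer should ask to see.

    Rule of thumb: if the data or processes going to the cloud could
    create a material risk to the customer, expect both the SOC 1
    (SSAE 16) and the more detailed SOC 2; otherwise the summary-level
    SOC 3 may be an acceptable substitute for the SOC 2.
    """
    if sensitive_data or material_process:
        return ["SOC 1", "SOC 2"]
    return ["SOC 3"]

print(soc_reports_to_request(True, False))   # prints: ['SOC 1', 'SOC 2']
print(soc_reports_to_request(False, False))  # prints: ['SOC 3']
```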

The AICPA provides a downloadable comparison of the three SOC reports in MS Word format on its website.

SOC 2 examinations report on controls that mitigate the risks of achieving the following “Trust Services” principles:

  • Security – The system is protected against unauthorized physical and logical access.
  • Availability – The system is available for operation and use as committed or agreed.
  • Processing Integrity – System processing is complete, accurate, timely, and authorized.
  • Confidentiality – Information designated as confidential is protected as committed or agreed.
  • Privacy – Personal information is collected, used, retained, disclosed, and destroyed in conformity with the commitments in the entity’s privacy notice and with criteria set forth in generally accepted privacy principles (GAPP) issued by the AICPA and Canadian Institute of Chartered Accountants (CICA).

Companies are not required to be assessed on all five Trust Services principles. Cloud providers are permitted to select the Trust Services principle(s) that best meet their reporting objectives. While service providers are expected to base the selection of principles on the relevance of each principle to their services, as well as the interests of their customers, customers need to recognize that the service provider’s opinion on which Trust Services principles best meet the service provider’s reporting objectives may not be the same as the ones the customer wants to have evaluated.

When selecting a Trust Services principle, the cloud provider asserts its compliance with the principle and the underlying “criteria.” There are no “bright line” rules defining the specific controls that must be implemented to meet the criteria of selected principles. For example, item 3.4 of the Security Principle states: “Procedures exist to protect against unauthorized access to system resources.” The criteria provide illustrative controls that could be used to meet the requirement, including the use of VPNs, firewalls and intrusion detection systems, but none of the illustrative controls are specifically required. Cloud providers also can present their own internal controls as long as those controls meet the criteria for the selected Trust Services principle(s). This means that, just like the ISO 27001 certification and the Statement of Applicability, customers must dig deeper to understand how the supplier’s controls satisfy the relevant criteria.

The scope of a SOC 2 examination can be expanded or contracted based on the specific services being provided. Specific criteria can be omitted if they are not applicable to the service being reviewed, or the scope of a SOC 2 examination can be expanded to report on topics not specifically covered by the SOC 2 guidance. This gives cloud providers the ability to request that the auditor also report on compliance with other frameworks, such as the security requirements of HIPAA or the CSA CCM within a single SOC 2 report.

Unlike ISO 27001 and SOC 3, SOC 2 is not a certification. However, service providers that successfully complete a SOC 2 examination are entitled to display the AICPA’s service organization logo on their promotional material and website for 12 months following the date of the report.

As part of your contract with a cloud-based service provider, you should require the supplier to have a SOC 2 report prepared on a regular basis and to share that report with you, along with a plan to address any issues identified by the SOC 2 audit.

Conclusion – So What?

As we discussed in Parts One and Two, the SEC’s new Guidance requires that companies not only disclose material cybersecurity events when they occur, but also disclose material risks that could occur. For those companies that outsource functions that have material risks, the Guidance also requires a description of those functions and how companies address those risks. In Parts Three and Four we looked at how you can evaluate and monitor the security posture of your cloud service providers.

The SEC has sent a message in no uncertain terms that it expects public companies to provide timely, accurate and complete-but-not-overly-disclosing information about cyber incidents and risks.

While, from the SEC’s perspective, this new Guidance merely clarifies the existing requirement that public companies disclose “material” information to investors, these new guidelines impose significant obligations that such companies would almost certainly consider new.

The impact of these new requirements is magnified when combined with the whistleblower provisions in the Dodd-Frank Wall Street Reform and Consumer Protection Act. The Dodd-Frank Act offers a reward of 10-30% of any recovery over $1 million to informants who provide certain types of information leading to successful securities actions — notably including failure-to-disclose actions.

Companies now face the unenviable task of deciding what aspects of cyber incidents or risks are “material” and disclosing them, with the knowledge that the sophisticated and determined nature of today’s cyber-attackers makes predicting the nature of an attack and its consequences incredibly difficult. The cyber threat is constantly adapting and evolving. For example, should RSA have anticipated that an attacker would target information about its tokens, and disclosed the risk that if someone somehow compromised the algorithms embedded in the tokens, RSA might have to spend $52 million replacing all of them? Almost by definition, once such an event happens it could be considered a “risk” that should have been disclosed. And if a company does not disclose an event, its IT staff could collect a $100,000 – $300,000 bounty (or more) for information leading to a successful failure-to-disclose action.

It’s considered axiomatic in the security community that it’s not a question of whether a company will have a cyber incident, but, rather, when it will happen. Once a company has outsourced a function to a cloud provider (or any provider, for that matter) and that provider suffers a cyber incident that creates a material issue that must be disclosed, the company will be forced to defend itself against a claim that it should have disclosed the (now apparent) material risk associated with that outsourcing. The best defense in such circumstances will be the fact that the company did as much due diligence as possible, including selecting a supplier that was certified as ISO 27001 compliant with an acceptable SoA, and that was subject to ongoing SOC 2 audits.

Faced with these new disclosure obligations, companies should examine their own cybersecurity processes and procedures as well as those of their suppliers, look at their incident response plans, and examine their cyberinsurance coverage.


In Parts One and Two of this article we discussed the new Guidance issued by the Securities and Exchange Commission (SEC) Division of Corporation Finance that provides guidance to companies with regard to whether and how a company should disclose the impact of the risk and cost of cybersecurity incidents (both malicious and accidental) on a company.

In particular, the Guidance suggests that companies need to evaluate cyber-related risks including:

  • prior cyber incidents and the severity and frequency of those incidents;
  • the probability of cyber incidents occurring;
  • the quantitative and qualitative magnitude of those risks, including potential costs and other consequences resulting from misappropriation of assets or sensitive information, corruption of data or operational disruption; and
  • the adequacy of preventative actions taken to reduce cyber-related risks in the context of the industry in which they operate and risks to that security.

The Guidance specifically states that if a company outsources functions that have material cybersecurity risks, the company should provide a description of those functions and how the company addresses those risks. The Guidance also appears to recommend that companies use secure logging, which becomes challenging when functions are outsourced to the cloud.

Since researchers recently found flaws in Amazon Web Services that they believe exist in many cloud architectures and that enable attackers to gain administrative rights and access to all user data, in this Part Three and in Part Four of this article we’ll discuss how you can evaluate the security of a cloud service and the contractual terms you should consider (or try to insert) in your cloud contracts.

Evaluating Security Compliance

ISO 27001
One of the best known information security management standards is ISO 27001. According to ISO:

“[ISO 27001] specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within the context of the organization’s overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof.”

The cloud service provider you select should be certified as being compliant with ISO 27001. Instead of being certified as compliant, some providers’ standard form contracts will say things like, “Supplier will meet the requirements of the ISO 27001 standard” or “Supplier will conform to the ISO 27001 standard,” neither of which are the same as being certified as compliant. Cloud service providers should represent and warrant that they are certified as compliant with the ISO 27001 standard and that they will remain certified during the term of the agreement. However, that certification is only the first step in the customer’s understanding of the supplier’s security posture.

The ISO 27001 certification means that a company has implemented the controls it has selected for its environment, but it doesn’t necessarily provide an opinion on the quality of those controls. Customers need to review a service provider’s Statement of Applicability (“SoA”), as well, to understand a supplier’s information security objectives and associated controls. Some service providers are reluctant to share their SoA, claiming that it contains sensitive security information that the company does not disclose. From a customer perspective, this should not be an acceptable answer. Without understanding the service provider’s objectives and associated controls, the customer can neither assess the security value of the ISO 27001 certification nor determine whether the cloud service being evaluated could create a material risk that should be disclosed pursuant to the Guidance.

CSA CCM
More recently, the Cloud Security Alliance has been developing tools to assist cloud service providers in being secure and cloud customers in evaluating the security of the services they’re receiving. Among other things, CSA has developed the “Cloud Controls Matrix.”

The Cloud Security Alliance Cloud Controls Matrix (CCM) is specifically designed to provide fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider. The CSA CCM provides a controls framework that provides a more detailed understanding of security concepts and principles that are aligned to the Cloud Security Alliance guidance in 13 domains. The CCM is designed to tie into other industry-accepted security standards, regulations, and controls frameworks, such as the ISO 27001/27002, ISACA COBIT, the Payment Card Industry Data Security Standards, the various NIST security standards, and others.

Customers should verify that the service provider has incorporated the CCM into its information security management system. The CCM also provides an excellent tool for evaluating a cloud service provider’s security controls.

As mentioned above, there are other security standards besides the ISO 27001 certification. In addition to the ones already mentioned, an organization called Shared Assessments (www.sharedassessments.org) is working on standardizing and improving the efficiency of service provider controls assessments.

Disclosing Cybersecurity Risks and Incidents

The SEC Guidance increases the tension between cloud service providers, who would prefer not to disclose to customers any known risks to their environment or cyber-incidents that occur unless they have to, and customers, who need to know about such risks and incidents to determine whether they impact their reporting obligations.

Companies contracting with cloud service providers for any functions that could create a material risk to the company, either due to the type or quantity of data being held by the cloud provider or the functions being performed by the cloud provider, need to have a frank conversation with the service provider regarding the company’s needs for purposes of disclosure and the supplier’s policies regarding disclosure of cyber risks and incidents. Customers may even need to know about incidents that do not affect their own data because the fact that such an incident occurred may require the customer to disclose the risk as part of its reporting obligations or may cause the customer to take prophylactic steps that should also be disclosed. Among other things, cloud customers need to understand the supplier’s policies and procedures around cyber incidents, including how the supplier responds to requests from law enforcement for information.

The procedures for notifying the customer of cyber-related risks and incidents should be clearly spelled out in the contract and in relevant sections of the supplier’s policies and procedures (which the supplier should not be able to change without notifying the customer).

Having looked at some of the ways you can evaluate a cloud provider’s security prior to signing a contract and some of the potential issues created between customers and cloud providers by the SEC Guidance, in Part Four we’ll look at how you can monitor the supplier’s security during the term.


With cloud services now obtaining as much press as the fallout from Kim Kardashian’s wedding, it seems safe to say that clouds are likely to be in the business forecast for the foreseeable future.

A strong answer to every IT infrastructure manager’s prayers, cloud computing can provide a scalable, on-demand combination of hardware, software and services, while also helping fulfill corporate and social mandates for becoming greener.

The people over at Carbon Disclosure Project decided to commission a study into the potential impact of cloud computing on large US businesses. Released in July 2011, the report was independently produced by Verdantix and sponsored by AT&T.

Not surprisingly, the study shows “that by 2020, large U.S. companies that use cloud computing can achieve annual energy savings of $12.3 billion and annual carbon reductions equivalent to 200 million barrels of oil – enough to power 5.7 million cars for one year.”


What is surprising is the incredibly thoughtful nature of the free, 23-page report (aptly named “Cloud Computing – The IT Solution for the 21st Century”). Not only is it an easy read, but it offers:

  • Terrific insight into the characteristics, types of services and deployment models of clouds
  • A crisp explanation of the differences between dedicated IT, private clouds and public clouds
  • Analysis that is based on at least 10 name-brand, multi-national companies (e.g., Aviva, Boeing, Novartis, State Street) that have invested in cloud computing
  • The logic as to why adopting a cloud model makes sense
  • A financial analysis of the costs of the various models in response to a hypothetical (but realistic) loss of operational support for an HR application within one year
  • The green benefits of clouds, including a carbon emissions model for CO2 reductions
  • A glossary of cloudy and cloud-related terms

While not a silver bullet, for the right applications, cloud computing can offer dramatic savings of both time (think in terms of multiple weeks for new servers to be provisioned to minutes) and money (think in terms of limited or no upfront capital costs and a pay-for-what-you-use billing model).


Hot on the heels of the UK Information Commissioner’s approval of First Data’s binding corporate rules (BCRs), Viviane Reding, the Vice President of the European Commission and EU Justice Commissioner has signalled reform of the BCR scheme aimed at making BCRs even more effective. BCRs are a way of ensuring compliance with the complexities of European data protection law – they are particularly relevant to multinationals with business operations located in the EEA who need to transfer personal data to affiliates in jurisdictions outside of the EEA.

In a speech given to the International Association of Privacy Professionals’ (IAPP) inaugural Europe Data Protection Congress in Paris on 29 November 2011, Reding announced her plans as part of upcoming revisions to the EU data protection framework. Reding’s proposed reforms will be built around three principles: simplification, consistent enforcement and innovation. Above all, Reding proposes reform “compatible with small innovative companies’ endeavours to operate on a global scale” so that companies of all sizes, operating across all business models, will be able to take advantage of BCRs.

Simplification. Under Reding’s proposal the BCR approval process would be streamlined: approval by one Data Protection Authority (DPA) would result in automatic recognition by DPAs in all other member states, without the mutual consultation that currently operates across the 19 participating DPAs. This should help to speed up the approval process and reduce the burden on the applicant. Further, once BCRs are approved by a DPA, there would be no need for additional national authorisation prior to transfer, as is currently required in some member states (but not others, such as the UK).

Consistent Enforcement. Reding outlines a vision of a more consistent approach to data protection and enforcement across Europe. DPAs can expect a levelling of regulations and enforcement powers on a consistent basis, putting companies which operate across European borders on a level playing field. Some DPAs will see an increase in their enforcement powers as a consequence. And BCRs would become directly binding within companies and with respect to third parties, meaning that they could be enforced through DPAs or directly by data subjects through the courts (as she says, there’s a clue in the name – binding means legally binding).

Innovation. The subtitle to Reding’s speech “unleashing the potential of the digital single market and cloud computing” is a signpost for what is perhaps the most interesting and forward thinking part of her speech, where she states that the boundaries of traditional methods of regulation need to be pushed to enable European business to compete globally, including by embracing new technology (such as the cloud). Key here is Reding’s critique of the geographic restrictions of current regulation: “Data protection laws that apply only within a given territory just do not work in an era where information flows are global: personal data is stored in one country, effectively processed in another and the data subject is located in a completely different country.” The new BCRs will instead apply to “all internal and extra-EU transfers of any entity in a group of companies”. Establishment of BCRs by a corporate group will enable “one single document that governs the privacy policy of the whole group instead of a variety of different – and not always consistent – contracts.” The rules would also extend the use of BCRs to data processors – indeed all kinds of business models including cloud computing – whereas currently only data controllers may use them.

Of course, this is all fairly blue-sky stuff and it will be interesting to see whether the implemented BCRs match the aim of a simpler and less burdensome set of rules governing the transfer of data. It is encouraging to see a regulator thinking in such an enlightened manner and, although the detail remains to be worked out, Reding has clearly signalled her intention to make the adoption and use of BCRs significantly less complex and a more cost-efficient way of facilitating intra-group transfers of personal data. “I encourage companies of all size to start working on their own binding corporate rules!”, she says. That said, companies considering embarking on the BCR journey might take a moment to pause for breath whilst the detail of these reforms unfolds; for those already embarked, it will be interesting to see if and how the lead DPAs and the Article 29 Working Party decide to respond to the proposed reforms.


14 November 2011 saw First Data Corporation become the 11th entity to have binding corporate rules (BCRs) approved by the UK’s Information Commissioner’s Office (ICO).
First Data Corporation is a global electronic commerce and payment processing company. As a payment processor, secure handling of data is at the heart of First Data’s business. First Data has business operations in 35 countries and serves more than 6 million merchant locations, thousands of card issuers and millions of consumers worldwide. First Data is the first payment processor to have achieved BCR approval. Time will tell, but while it maintains this distinction, this may give it a significant advantage over its competitors at a time when data privacy issues, including some recent high profile data breaches and regulatory settlements, are never far from the news and the handling of personally identifiable data continues to be subject to a high level of scrutiny by regulators across the globe.

According to First Data’s Chief Executive Officer Jonathan J. Judge: “Data privacy is fundamental to the success of our business, and we’re deeply committed to protecting the information entrusted to us by our clients and employees alike. We have high standards for data privacy, and this recognition from exacting European regulators demonstrates our global leadership in data protection compliance.”

BCRs allow a data controller to transfer personal data from the European Economic Area (EEA) to affiliates located outside the EEA in compliance with the eighth data protection principle and Article 25 of the Data Protection Directive (95/46/EC). BCRs are particularly relevant to multinational companies with operations located within the EEA who regularly need to transfer personal data (whether customer data, employee data or otherwise) to diverse affiliates located outside the EEA. BCRs do not provide a basis for transfers made outside a company’s corporate group (e.g., in connection with outsourced data processing or under a data sharing agreement).

Approval given by one of the data protection authorities (DPAs) of the 19 participating EEA countries (the lead authority – in First Data’s case, the UK’s ICO) binds the other DPAs under the principle of mutual recognition – if the lead authority is satisfied that the BCRs put in place adequate safeguards within the meaning of Article 26(2) of the Directive, the other participating DPAs should have confidence in its decision and accept its findings without further scrutiny or comment. Each application will have already been circulated to the other DPAs for comments under what is referred to as the co-operation procedure.

Seeking BCR approval is no light matter. As First Data found out, it requires a significant commitment of resources and time; however, for some organisations, BCRs may offer a better solution than the use of the European Commission-approved model contract clauses. This is particularly true for multinational companies with complex structures, where hundreds of contracts can be required to cover transfers between all affiliates, resulting in a significant administrative burden in terms of keeping contracts up to date and in step with changes to the corporate structure. The US Safe Harbor regime also has its limits. For example, businesses in some sectors not subject to the jurisdiction of the Federal Trade Commission or the Department of Transportation cannot use the Safe Harbor. This includes banks and other financial service providers, and telecom providers. And Safe Harbor only works where the transfer of personal data is from an EU-based data controller to a data controller in the US.

The Article 29 Working Party has published papers on Binding Corporate Rules including a model application form, a BCR framework and BCR FAQs. Making use of these materials will help to speed up the process. However, it may still take up to 12 months from the start of the co-operation procedure. The key to a successful application is in demonstrating that adequate safeguards within the meaning of Article 26(2) of the Directive are in place. Although the application form drives this result, there is a significant undertaking in terms of providing supporting information, including details of the organisation’s privacy function, confirmation of the binding nature of the BCRs throughout its group, and provision of supporting privacy principles, data security and related policies, training plans, etc. Organisations without the necessary privacy infrastructure in place will struggle to meet the Working Party’s requirements.

The ICO reports a great deal of interest in BCR applications to date, including requests made under the UK’s Freedom of Information Act. The ICO recommends that, since an application will likely contain confidential information, such information be clearly identified and marked as commercially sensitive. Other entities that have had BCRs approved include General Electric, Koninklijke Philips Electronics, the Hyatt Hotel Corporation, Accenture, JPMorgan Chase and BP. First Data’s BCR approval is the third in 2011 – Spencer Stuart Management Consultants and CareFusion Inc. also achieved recognition this year, which may suggest the start of an upward trend in their adoption.

Viviane Reding, Vice President of the European Commission and EU Justice Commissioner has just announced plans to reform the system of binding corporate rules (BCRs) as part of the upcoming revision of the EU data protection framework. Expect a SourcingSpeak blog post on this shortly.


The holiday shopping season in the U.S. started in earnest on Black Friday (or even Thursday for some stores) and online shopping celebrates today with “Cyber Monday.”

Contrary to the popular belief that Black Friday is the day retailers go from being in the “red” to being in the “black” for the year, according to Snopes.com the name Black Friday was actually coined as a derisive term applied by police and retail workers to the day’s plethora of traffic jams and badly behaved customers. The popularity of Cyber Monday shows that the problems of high traffic and bad behavior aren’t limited to the brick-and-mortar environment anymore.

According to this article from eweek.com,

“Worries about ‘denial-of-service outages are the name of the game for online retail organizations during the heavy holiday shopping season,’ Adam Powers, CTO of Lancope, told eWEEK.

Some can be inadvertent, driven by high demand from shoppers. Powers described Target’s launch of the Missoni clothing line earlier this year as a ‘poster child for a legitimate oversubscription DoS,’ noting that high demand for Missoni merchandise ‘brought’ Target ‘to its knees.'”

Online retailers and brick-and-mortar companies with e-commerce websites need to make sure they can handle the increased traffic expected during the holiday season – particularly on days like “Cyber Monday.” To deal with the potential volume, they can turn to cloud-based services to add capacity and prevent the site from crashing, but as we’ll discuss below, the availability commitments made by many cloud services create their own risks.

Companies don’t only have to worry about benign customer traffic. Denial of service attacks could come from entities trying to sabotage a retailer’s site during this period for a number of reasons:

  • Hacktivists might try to take down a prominent site to take advantage of the increased media attention during the holiday season or to make a point to a brand they don’t like;
  • Less scrupulous competitors might hope that customers who can’t access a site will jump to their site;
  • Criminals might try to blackmail a site, demanding payment from the retailer to make a DoS attack stop.

According to the National Retail Federation, retailers earn 25% to 40% of their annual revenue during the 61-day holiday period of November and December.

For online retailers whose websites are expected to be available 24×7, that means each hour (especially peak hours) could be worth a meaningful percentage of the retailer’s annual revenue – putting a tangible value on each minute of downtime.

It’s not only online retailers who have to worry about outages. Even without a denial-of-service event, retailers whose systems are provided by a cloud-based service provider are at risk. Cloud service providers’ availability SLAs are frequently as low as 95% per month. That could mean up to 36 hours per month of unscheduled downtime before any SLA failure is triggered.

Since most cloud providers use the “no harm, no foul” SLA model, under which time only counts as “downtime” if you call in the issue, a brick-and-mortar retailer open 12 hours a day, 7 days a week during the holiday season could experience up to a total of 3 business days of downtime in each month of the season before the cloud provider even fails a 95% availability SLA. If that maximum downtime were reached during both November and December, it could put as much as 4% of a brick-and-mortar retailer’s annual revenue at risk. For online retailers, where the average value of each hour is lower due to 24×7 operations, the risk is lower but still substantial.
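The arithmetic above can be checked with a short calculation. This is a sketch: the 95% SLA, the 12-hour retail day, and the 40% holiday revenue share come from the figures cited in this post, while the assumption that revenue is spread evenly across business hours is a simplification of ours.

```python
# Quantify downtime risk under a 95% monthly availability SLA.
# Figures (95% SLA, 12-hour retail day, 40% of annual revenue over the
# 61-day Nov-Dec holiday period) come from the discussion above; the
# even spread of revenue across business hours is a simplifying assumption.

HOURS_IN_MONTH = 30 * 24          # ~720 hours in a 30-day month
sla = 0.95                        # typical cloud availability SLA

# Downtime permitted before the SLA is even breached
allowed_downtime = (1 - sla) * HOURS_IN_MONTH      # ~36 hours/month

# Brick-and-mortar retailer: open 12 hours/day, 7 days/week.
# Worst case: all downtime hours fall within business hours,
# i.e., about 3 "business days" of 12 hours each.
downtime_days = allowed_downtime / 12

# Revenue at risk: up to 40% of annual revenue over the 61-day
# holiday season, spread evenly across 12-hour business days.
holiday_share = 0.40
holiday_business_hours = 61 * 12                   # 732 hours
two_month_downtime = 2 * allowed_downtime          # ~72 hours
revenue_at_risk = holiday_share * two_month_downtime / holiday_business_hours

print(f"Allowed downtime: {allowed_downtime:.0f} h/month")
print(f"Worst-case business days lost per month: {downtime_days:.1f}")
print(f"Annual revenue at risk over Nov-Dec: {revenue_at_risk:.1%}")
```

Run as written, this reproduces the figures in the text: roughly 36 hours of permissible downtime per month, 3 lost business days, and about 4% of annual revenue exposed.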

No service level credit will make up for those losses and the limitation of liability in cloud contracts will probably preclude any other recovery from the cloud service provider.

The eweek.com article also notes,

“‘Retailers don’t have to just worry about making sure their sites are up and capable of handling the ‘influx of shoppers,’ but that the payment data being collected remain secure,’ Mandeep Khera, CMO of LogLogic, told eWEEK. Merchants who collect credit card information have to ensure that their databases are secure so that attackers who try to break in don’t waltz off with payment information. Ensuring they are following all 12 PCI requirements would help retailers protect customer credit card data, according to Khera.”

which brings us to an excellent article from Ericka Chickowski at Dark Reading. She notes that for many organizations,

“the holiday shopping season isn’t just a time for chocolate fudge — it’s also time for fudging on the security rules and mindset laid out by PCI guidelines. According to Branden Williams, global CTO of marketing at RSA, the Security Division of EMC and a member of the PCI Board of Advisers, most retail outfits of all sizes have already entered a network freeze period during which no changes of any type can be made to avoid even the whisper of complications that could cause downtime. That’s well and good from a business standpoint, but the truth is that vulnerabilities that need patching and mitigation don’t take a raincheck during the high shopping season, he warns.

‘We’ve already entered the network freeze for most of these companies, so no changes to network components, system components, or applications are going to occur for the next month-and-a-half, until the middle of January. Nobody wants to get in the way of payments from going through,’ Williams says. ‘Even though I understand it, it still amazes me because it impacts some of the decision-making criteria about how severe a vulnerability might be. When I see a patch that comes out, theoretically if I’m doing this right for PCI purposes, I’m doing a detailed analysis of what the patch is and a risk assessment of what that means for the organization. I would hope that something that looks like a severe vulnerability would not be ignored in favor of the freeze.'”

The practice of freezing IT and not implementing security updates may be part of the reason that, according to a recent study by Verizon, only 21% of businesses that store credit and debit card data maintain compliance with Payment Card Industry (PCI) regulations in between their mandatory annual audits.

The percentage of retailers’ revenue at risk during the holiday season makes a focus on IT critical. The bank robber Willie Sutton is often (erroneously) quoted as saying that the reason he robbed banks is that’s where the money is. Even if a company doesn’t have to deal with downtime due to a cloud service provider or its own IT issues, or a denial of service (intentional or not), criminals are looking to exploit the volume of online transactions during this holiday season and the number of retailers who are not PCI DSS compliant.

Online retailers need to balance the risk of downtime against the revenue and reputation risks associated with a major data breach. This season is not the time for IT to be on a “freeze” – it is time for IT to redouble its efforts to maintain both uptime and security compliance.


Do you transfer personal data from Europe to the US? Do you use cookies on a website aimed at European customers? Do you send marketing emails to Europe? Do you otherwise “process” data in Europe? Do you really have consent to process personal data? If any of these questions strike a chord with you, then you should certainly note recent trends in the EU regarding the concept of “consent,” not least the news from Germany that Facebook is to be prosecuted (and potentially fined up to $400,000) over its facial recognition software feature and for failure to properly obtain consents.

This issue of what constitutes proper consent has been coming to the boil in 2011.

A recent Opinion published by the Article 29 Working Party (the grouping of data protection authorities from each EU state – the “Working Party”) looked again at the concept of “consent,” which, subject to certain exceptions, is required from individuals before such activities are carried out. Adopted on 13 July 2011, the Opinion aims to provide a thorough analysis of the concept of consent as currently used in the European Data Protection Directive 95/46/EC and the e-Privacy Directive 2002/58/EC.

Germany’s Hamburg Data Protection Authority (DPA) recently announced that it will start proceedings against Facebook over the company’s facial recognition feature and photo tagging. This has further highlighted what a problem the issue of consent can be in the EU. The DPA, like many other EU enforcers, has been losing patience with companies who don’t seem willing to comply with the black-letter requirements, particularly around consent, and what many have “got away with” in the past will now likely generate trouble and possible exposure to increasingly large fines. Dismissing Facebook’s arguments that a checkbox element amounted to compliance, the DPA is reported as saying that further negotiation is “pointless” and that it will now look to enforce compliance with fines of up to 300,000 euros (over $400,000).

Just to top it all, we have also had confirmation this past week that proposals to overhaul the DP Directive itself should be forthcoming early in 2012.

This post will look at the complex issue of consent and see if the recent Opinion has at least managed to shed some light.

Consent clarified?

One of the Opinion’s main aims is to clarify the existing legal requirements for obtaining consent, given this is such a key issue which crops up time and again.

The Working Party’s Opinion will be persuasive in the eyes of the various European privacy regulators when making enforcement decisions, so it is worthwhile for companies, including US companies, doing business in Europe to pay close attention to its detail.

The requirement for consent
In a nutshell, under the Data Protection Directive, personal data can only be “processed” (i.e., collected, stored, amended, transferred, deleted, etc.) in Europe in fairly limited circumstances. One legal basis that gives a data controller the right to process personal data, however, is “unambiguous consent of the data subject.”

Further, the processing of “sensitive” personal data (medical, religious, etc.) requires “explicit consent” (unless some narrow legal grounds apply).

The Data Protection Directive goes on to define consent as “any freely given, specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed.”

The e-Privacy Directive also introduces the notion of consent, requiring the “consent” of individuals, for example, before they are sent electronic marketing communications such as emails, or before cookies are placed on their hard drives.

It can be seen, therefore, that understanding the varying approaches to the issue of consent is critical. The Opinion seeks to clarify how companies can obtain consent correctly to avoid a number of pitfalls which surround this issue. In particular, it explores in detail what “any … indication of his wishes,” “freely given,” “specific,” “informed,” “unambiguous” and “explicit” mean in practice.

“Any … indication of his wishes”
The Working Party considers that an “indication” is “any kind of signal, sufficiently clear to be capable of indicating a data subject’s wishes and to be understandable by the data controller.”

In other words, consent cannot be inferred from a lack of action. This would appear, therefore, to be the death knell for the use of pre-ticked boxes, which are a common way in which companies attempt to obtain consent.

“Freely given”
“Freely given” consent means that “there must be no risk of deception, intimidation or significant negative consequences for the data subject if he/she does not consent,” says the Working Party.

It follows, therefore, that it would not be possible for a US company to tell an employee in Europe that he or she must consent to his or her data being transferred to the US or else he or she will not get paid (consent to a transfer being one of the ways in which an extra-European transfer of data can legitimately be made). Some other workaround would be required to ensure the consent was valid and/or the transfer could be made.

“Specific”
The Working Party goes on to consider that “blanket consent without determination of the exact purposes does not meet the threshold” in the context of the meaning of “specific” consent.

When seeking consent, therefore, rather than inserting the processing information in the general conditions of a contract, for example, specific consent clauses separated from the general terms and conditions should be considered.

“Informed”
Consent is “informed,” when the information provided is sufficient to guarantee that individuals can make well-informed decisions about the processing of their personal data.

The Working Party considers that, first, the way that information is given must ensure the use of appropriate language so that individuals understand what they are consenting to and for what purposes. So for instance, the use of overly complicated legal or technical jargon would not meet legal requirements.

Second, the information provided to individuals should be clear and sufficiently conspicuous so that users cannot overlook it. It should not be “hidden” on a website, for example.

“Unambiguous consent”
According to the Working Party, “unambiguous consent calls for the use of mechanisms to obtain consent that leaves no doubt as to the individual’s intention to provide consent.”

In practical terms, this requirement obliges companies to adopt mechanisms to seek a “permanent” record of the consent such as an email record.

“Explicit consent”
The Working Party considers that “explicit” consent is similar to “unambiguous” consent and it means “an active response, oral or in writing, whereby the individual expresses his/her wish to have his/her data processed for certain purposes.”

It is considered, as above, that explicit consent cannot be obtained by the presence of a pre-ticked box, for example, and that consent should be recordable.

So what does all this mean for US companies (and those located elsewhere) doing business in Europe?
It is crucial that US companies (and those located elsewhere) first consider fully whether their activities in Europe with regard to the processing of personal data will require the consent of individuals before data is processed.

Consent will generally be required for a very wide range of activities, for example, when placing cookies on potential customers’ hard-drives in Europe, transferring names and other details to the US from Europe or selling a mailing list to a third party.

Companies should then bear the following in mind where consent is required for activities covered by the Data Protection Directive or the e-Privacy Directive:

  • Consent must be provided before the processing of personal data starts.
  • Consent cannot be inferred from a lack of action – it is far safer to obtain a signature, request that a box be ticked, etc.
  • No negative consequences must be attached to a failure to give consent.
  • Beware of “blanket” consents – it is safer to separate the different consents required in any given scenario (e.g., pop-up boxes to seek consent).
  • Keep the language clear and simple when seeking consent – avoid legal jargon.
  • Keep a record of all consents obtained.
  • Individuals who have consented should be provided with a method by which they can withdraw their consent.

As noted above, whilst the Working Party Opinion is not law per se, it will be taken seriously into consideration by the national regulators in Europe who do have teeth to bite those who trip over the rules on consent or those who blatantly flout them.

US companies, and those doing business elsewhere, are urgently encouraged, therefore, to audit their data processing activities in Europe and to monitor these activities regularly to ensure they do not fall victim to increasingly tough sanctions for breach of European data protection laws relating to consent. Aside from the freshly announced action in Germany against Facebook, we also have a fairly new enforcer in the UK who has recently been given significantly stronger powers, including the power to issue fines of over $750,000 on the spot, per offence. Conducting a fresh review of your data processing and transfer activity has never been more timely or well-advised.


In Part One of this article, we looked at the Securities and Exchange Commission (SEC) Division of Corporation Finance’s recent release – CF Disclosure Guidance: Topic No. 2 – Cybersecurity (the “Guidance”), which is intended to provide guidance to companies on whether and how to disclose the impact of the risk and cost of cybersecurity incidents (both malicious and accidental) on a company.

In Part Two we’ll look at the specific advice provided by the Guidance regarding specific reporting regulations and how it might apply to some recent cyber-incidents.

Management’s Discussion and Analysis of Financial Condition and Results of Operations

The next section of the Guidance discusses the way that companies should address cybersecurity risks and cyber incidents under the reporting rules associated with Management’s Discussion and Analysis of Financial Condition and Results of Operations (“MD&A”) under Item 303 of Regulation S-K and Form 20-F, Item 5.

According to the SEC, the standard for discussion of cyber incidents in a company’s MD&A is the same as for non-cyber events. Thus, if the costs or other consequences associated with a cyber incident, or the risk associated with potential incidents, represent a material event, trend, or uncertainty that is reasonably likely to have a material effect on a company’s results of operations, liquidity, or financial condition or would cause reported financial information not to be necessarily indicative of future operating results or financial condition, then those costs, consequences or risks must be disclosed and discussed by the company.

For example, if intellectual property is stolen in a cyber attack, as was the case in the RSA attack, and the effects of the theft are reasonably likely to be material to the affected company, the Guidance suggests that the company should describe the stolen IP and the effect of the attack on its results of operations, liquidity, and financial condition and whether the attack would cause reported financial information not to be indicative of future operating results or financial condition. Since RSA is offering to replace all of the RSA SecurID tokens that could be affected by the information stolen in the RSA attack, at a potential cost of up to $52 million, that could rise to the level of materiality. Similarly, if it is reasonably likely that the hacking attack will lead to a material reduction in revenues or a material increase in cybersecurity protection costs, including those related to litigation, the SEC wants the company to discuss these possible outcomes, including the amount and duration of the expected costs.

Alternatively, if a hacking attack or some other cyber incident did not result in harm to a company, but it prompted the company to materially increase its cybersecurity protection expenditures, the SEC wants the company to disclose those increased expenditures. However, the Guidance is careful to note that discussions of increased cybersecurity spending do not require disclosure of information that would make it easier to attack the company.

Other Disclosures

The Guidance goes through other disclosure requirements and provides examples of when a company might have to disclose information about a cyber incident.

If a cyber incident (or multiple incidents) materially affects a company’s products, services, relationships with customers or suppliers, or competitive conditions, the company should provide disclosure in the company’s “Description of Business” as required by Item 101 of Regulation S-K and Form 20-F, Item 4.B. In determining whether to include disclosure, the SEC recommends that companies consider the impact on each of their reportable segments. For example, if a company has a new product in development and learns of a cyber incident that could materially impair the future viability of the product, the company should discuss the incident and the potential impact to the extent the impairment to the future of the product would be considered material. As in the previous section, the impact on RSA of the loss of intellectual property associated with the SecurID token could reach the level of materiality.

Similarly, if a company or any of its subsidiaries is a party to a litigation that involves a cyber incident, the company may need to disclose information regarding the litigation in its “Legal Proceedings” disclosure, just as it would any other litigation as required by Item 103 of Regulation S-K. For example, if a significant amount of customer information is stolen, as was the case in the Epsilon and RSA attacks, and the loss results in material litigation, the Guidance recommends that the affected company should disclose the name of the court in which the proceedings are pending, the date instituted, the principal parties, a description of the factual basis alleged to underlie the litigation and the relief sought.

Finally, the Guidance notes that risk mitigation and cyber incidents could impact a company’s financial statements, and the SEC has provided examples to help companies make sure costs are given the appropriate accounting treatment. The SEC notes, for example, that after a cyber incident companies might try to mitigate the business damage by providing customers with incentives to maintain the business relationship, which should be handled in accordance with ASC 605-50, Customer Payments and Incentives. Similarly, cyber incidents may result in losses from asserted and unasserted claims, including those related to warranties, breach of contract, product recall and replacement, and indemnification of counterparty losses from their remediation efforts, all of which should be handled in accordance with ASC 450-20, Loss Contingencies.

From a more strictly accounting perspective, cyber incidents could also result in diminished future cash flows, requiring the affected company to consider the impairment of certain assets, including goodwill, customer-related intangible assets, trademarks, patents, capitalized software or other long-lived assets associated with hardware or software, and inventory. According to the SEC, pursuant to FASB ASC 275-10, Risks and Uncertainties:

“[Company] may not immediately know the impact of a cyber incident and may be required to develop estimates to account for the various financial implications. [Companies] should subsequently reassess the assumptions that underlie the estimates made in preparing the financial statements. A [company] must explain any risk or uncertainty of a reasonably possible change in its estimates in the near-term that would be material to the financial statements. Examples of estimates that may be affected by cyber incidents include estimates of warranty liability, allowances for product returns, capitalized software costs, inventory, litigation, and deferred revenue.”

If a cyber incident is discovered after a company’s balance sheet date but before the company actually issues its financial statements, the SEC recommends that companies should consider whether disclosure of a recognized or non-recognized subsequent event is necessary. If the cyber incident constitutes a material non-recognized subsequent event pursuant to ASC 855-10, Subsequent Events, the company’s financial statements should disclose the nature of the incident and an estimate of its financial effect, or they should include a statement that such an estimate cannot be made.

Disclosure Controls and Procedures

The Guidance is written at a fairly high level and does not prescribe any particular technologies or practices. However, there is an interesting statement at the end of the document:

“To the extent cyber incidents pose a risk to a [company’s] ability to record, process, summarize, and report information that is required to be disclosed in Commission filings, management should also consider whether there are any deficiencies in its disclosure controls and procedures that would render them ineffective. For example, if it is reasonably possible that information would not be recorded properly due to a cyber incident affecting a [company’s] information systems, a [company] may conclude that its disclosure controls and procedures are ineffective.”

In other words, when determining whether a company’s disclosure controls and procedures are effective under Item 307 of Regulation S-K, management should consider how vulnerable those systems are to cyber incidents and whether the company can conclude in good faith that its disclosure controls and procedures are “effective.” It may be that disclosure controls and procedures can be considered “effective” only if they include a monitoring system, itself protected from cyber attacks, that can recognize when an incident has occurred – which raises the question of whether anything short of secure logging would be “effective.” On a local network, logging is relatively easy, but once multi-tenant cloud solutions are incorporated into the environment, logging becomes much more challenging. More generally, cloud providers have been reluctant to share information about their security efforts, as well as any risks or failures that they are not obligated to disclose.
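The Guidance does not prescribe any particular logging technology, but one common building block for the kind of tamper-evident logging discussed above is a hash chain: each log entry carries a MAC computed over both the entry and the previous entry’s MAC, so an attacker who alters or deletes a past entry breaks every link that follows. The sketch below is purely illustrative – the class name and structure are ours, not anything from the Guidance – and assumes the MAC key is kept somewhere an intruder on the logging host cannot reach.

```python
import hashlib
import hmac
import json

class HashChainedLog:
    """Append-only log in which each entry commits to its predecessor,
    so after-the-fact tampering breaks the chain (illustrative sketch)."""

    def __init__(self, secret: bytes):
        self._secret = secret          # MAC key; must be stored out of the attacker's reach
        self._entries = []             # list of (record, mac_hex) pairs
        self._last_mac = b"genesis"    # seed value for the first link

    def append(self, record: dict) -> str:
        # Canonical serialization so verification recomputes the same bytes.
        payload = json.dumps(record, sort_keys=True).encode()
        mac = hmac.new(self._secret, self._last_mac + payload,
                       hashlib.sha256).hexdigest()
        self._entries.append((record, mac))
        self._last_mac = mac.encode()
        return mac

    def verify(self) -> bool:
        # Walk the chain from the seed; any modified, reordered, or
        # deleted entry causes a MAC mismatch.
        last = b"genesis"
        for record, mac in self._entries:
            payload = json.dumps(record, sort_keys=True).encode()
            expected = hmac.new(self._secret, last + payload,
                                hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, mac):
                return False
            last = mac.encode()
        return True
```

This is a local-trust sketch; production systems typically anchor the chain externally (e.g., shipping the latest MAC to a separate system) so that a compromised host cannot simply rebuild the whole chain – which is exactly the property that becomes harder to establish in a multi-tenant cloud.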

In Parts Three and Four we’ll talk about how you can assess whether your cloud service provider is providing a secure solution and some of the things you should look for in your cloud services contract.

On October 13 the Securities and Exchange Commission (SEC) Division of Corporation Finance released CF Disclosure Guidance: Topic No. 2 – Cybersecurity (the “Guidance”), which is intended to provide guidance to companies on whether and how to disclose the impact of the risk and cost of cybersecurity incidents (both malicious and accidental) on a company.

The Guidance is a reminder that companies should think about cybersecurity and data breach incidents when deciding how to fulfill their obligations under the SEC’s existing disclosure requirements. Up to this point, the market’s focus has been on how US law requires disclosure of data breaches affecting specific types of personal information. Other security incidents became public knowledge only through unofficial disclosures or through their visible effects (e.g., a denial of service attack). Now, the SEC has made it clear that the risks associated with cyber incidents, the costs of mitigating those risks, and the consequences of a cyber incident may rise to the level of materiality that would require disclosure to investors and regulatory authorities.

Although the Guidance is not, in itself, a rule or regulation, companies that ignore it do so at their peril.

From the Guidance:

“The federal securities laws, in part, are designed to elicit disclosure of timely, comprehensive, and accurate information about risks and events that a reasonable investor would consider important to an investment decision. Although no existing disclosure requirement explicitly refers to cybersecurity risks and cyber incidents, a number of disclosure requirements may impose an obligation on registrants to disclose such risks and incidents. In addition, material information regarding cybersecurity risks and cyber incidents is required to be disclosed when necessary in order to make other required disclosures, in light of the circumstances under which they are made, not misleading. Therefore, as with other operational and financial risks, registrants should review, on an ongoing basis, the adequacy of their disclosure relating to cybersecurity risks and cyber incidents.” [Emphasis added]

Evaluation of a company’s own cybersecurity profile is hard enough, but it’s made even more difficult in a world where a significant portion of a company’s services are outsourced and cloud-based.

In Parts One and Two of this article we’ll look at the Guidance and how it applies to companies in general. In Parts Three and Four we’ll look at ways to evaluate and document the security of your cloud-based service providers.

Although the SEC’s language focuses on cyber attacks, many of those same consequences would apply to an accidental incident. Given the potential adverse consequences of a cyber incident, the Guidance states, “as with other operational and financial risks, registrants should review, on an ongoing basis, the adequacy of their disclosure relating to cybersecurity risks and cyber incidents.”

The remainder of the Guidance focuses on how companies should disclose and discuss cyber incidents in the context of various reporting obligations.

Risk Factors
To determine whether your company is required to disclose a particular cyber-related risk factor in accordance with the Regulation S-K Item 503(c) requirements, the SEC has said that companies should evaluate their cybersecurity risks and take into account all available relevant information, including:

  • prior cyber incidents and the severity and frequency of those incidents;
  • the probability of cyber incidents occurring;
  • threatened attacks of which they are aware, which could include things like the hacker group Anonymous’ potential threats to attack Facebook;
  • the quantitative and qualitative magnitude of those risks, including potential costs and other consequences resulting from misappropriation of assets or sensitive information, corruption of data or operational disruption;
  • the adequacy of preventative actions taken to reduce cyber-related risks, in the context of the industry in which they operate and the risks to that industry’s security.

Tying these to the industry in which a company operates might also mean that a company needs to consider the recent US Department of Homeland Security report that raises the possibility that members of Anonymous are actively looking for ways to attack critical infrastructure. At the same time, however, the Guidance states that risk disclosure must adequately describe the nature of the material risks and how each risk affects the company. According to the SEC, companies should not present risks that could apply to any company and should avoid generic risk factor disclosure. Depending on your particular facts and circumstances, and to the extent material, appropriate disclosures may include:

  • Discussion of aspects of your business or operations that give rise to material cybersecurity risks and the potential costs and consequences;
  • If your company outsources functions that have material cybersecurity risks, description of those functions and how you address those risks (which, for cloud-based services, will be discussed in Parts Three and Four);
  • Description of cyber incidents your company has experienced that are individually, or in the aggregate, material, including a description of the costs and other consequences;
  • Risks related to cyber incidents that may remain undetected for an extended period; and
  • Description of relevant insurance coverage.

According to the SEC, a company might have to disclose known or threatened cyber incidents to place the discussion of cybersecurity risks in appropriate context. The SEC provides the following example, “if a [company] experienced a material cyber attack in which malware was embedded in its systems and customer data was compromised, it likely would not be sufficient for the registrant to disclose that there is a risk that such an attack may occur. Instead, as part of a broader discussion of malware or other similar attacks that pose a particular risk, the registrant may need to discuss the occurrence of the specific attack and its known and potential costs and other consequences.” In this context, if predictions from this Information Week article regarding the malware discovered in 2010 on the NASDAQ Director’s Desk platform are correct, it will be interesting to see how companies might disclose the risks associated with that cyber attack.

In Part Two we’ll look at the rest of the Guidance and its specific recommendations for handling disclosure of cyber-related risks and incidents under the various reporting regulations.