29 September 2014

Hidden threats

In a recent survey conducted by KPMG to map the exposure of Swedish organisations to cyber threats, it was found that 93% of the participating organisations had been breached and that data was being exfiltrated from 69% of them.

Technology from FireEye was used to analyse both inbound and outbound network communications of 14 organisations, public-sector bodies as well as listed companies, with an average size of 5 000 employees.
Out of 15 586 security incidents recorded during the analysis period, 49% were related to unknown threats.
For 11 of these organisations, callbacks were initiated, within minutes of the infection, by hosts that had previously been identified as infected by a malware object or browser exploit.

It was found that each organisation was on average subject to 43 security incidents per day. Organisations also averaged two newly infected devices and 30 data exfiltrations per day.

Reducing the time spent discovering and responding to an ongoing attack is essential for organisations that want to narrow their window of exposure to unknown threats.
The study report can be found here.



13 June 2012

Trust Cloud Computing By Understanding Risks

It is no surprise that newly started businesses no longer build their IT infrastructure in a closet.
Even though it was still common just a few years ago to have servers standing on the floor of a cleaning cupboard, among mops and dustpans, start-ups nowadays begin their activity directly in the cloud.
With limited resources, it is rational to focus on core business instead of having to acquire and maintain hardware-based solutions and hire technical staff whose mission is to support the delivery of IT services to the business.
There is no doubt that adopting a hosted collaboration platform for email, calendar and contacts is much more cost-effective than deploying an in-house Exchange or Notes platform. The same applies to Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), shared storage and backup services.

When it comes to building a full-scale three-tier infrastructure for a lab, demo or even production environment, it takes no more than 15 minutes to deploy such a topology in the cloud.
Besides, with the boom of low-cost cloud service providers (CSPs) and open-source virtualisation platforms, e.g. OpenStack, you can now find cheap or even free Infrastructure-as-a-Service (IaaS) offers that are technically reliable.
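As a sketch of how little scripting such a deployment takes (the CloudClient class below is a hypothetical stand-in for a real IaaS SDK such as openstacksdk, not an actual API):

    # Hypothetical sketch: spinning up a small three-tier topology (web / app / db)
    # on an IaaS cloud. CloudClient stands in for a real provider SDK; the point
    # is that the whole stack fits in a short, repeatable script.
    class CloudClient:  # hypothetical stand-in for a real IaaS SDK
        def create_network(self, name): ...
        def create_server(self, name, image, flavor, network): ...

    cloud = CloudClient()
    net = cloud.create_network("demo-net")
    tiers = {
        "web": {"image": "ubuntu-12.04", "flavor": "small"},
        "app": {"image": "ubuntu-12.04", "flavor": "medium"},
        "db":  {"image": "ubuntu-12.04", "flavor": "large"},
    }
    for tier, spec in tiers.items():
        cloud.create_server(f"demo-{tier}", spec["image"], spec["flavor"], net)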
  
Many companies gain in IT maturity by learning from their past mistakes and adopting guidelines, processes and procedures to standardise the delivery of IT services accordingly.
There are numerous security standards and regulatory frameworks that can help implement sound IT governance in a contained IT environment, within a defined security perimeter. Yet using multiple cloud services from diverse CSPs makes the task of securing data much harder.
Moreover, there are many additional risks related to hosting data in the cloud, such as loss of governance, isolation failure between cloud customers, compliance risks, data leakage, incomplete data deletion, management interface compromise, malicious insiders and lock-in.

The more cloud services a company uses, the more management interfaces there are to deal with to control parameters such as identity and access management, performance monitoring, log analysis, provisioning and rights management.

There is still a lack of standards and regulations for cloud computing; it is therefore essential to look at best practices to preserve information confidentiality, integrity and availability in the cloud as appropriately as possible.
Key concerns with cloud computing relate to identity (i.e. secure identity federation, user provisioning and strong authentication), infrastructure (i.e. integrity of the cloud stack, data geolocation and system hardening) and compliance.
To address those concerns, a number of compensating controls can be implemented.

The first step is to use encryption for data-at-rest, which is actually offered by 75% of CSPs today. However, relying on the CSP's own encryption mechanism is not necessarily recommended if you are worried about privacy: keep in mind that 90% of all data breaches in the cloud are caused by data centre employees having access to customers' data. It is therefore good practice to use a third-party encryption provider instead, such as Porticor, to mitigate the above-mentioned malicious-insider and geolocation risks, and key-management risks in particular.
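As a minimal sketch of what client-side encryption looks like in practice (using Python's cryptography library; key handling is simplified and the record content is made up), the idea is that only ciphertext ever reaches the CSP:

    # Minimal sketch: encrypt data before it leaves the organisation, so the
    # CSP only ever stores ciphertext. Assumes the 'cryptography' package;
    # key storage, rotation and escrow are out of scope here.
    from cryptography.fernet import Fernet

    # The key is generated and kept by the data owner (e.g. on-premise or with
    # a third-party key-management service), never handed to the CSP.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    record = b"customer-id=4711;notes=confidential"
    ciphertext = fernet.encrypt(record)      # what gets uploaded to cloud storage
    plaintext = fernet.decrypt(ciphertext)   # only possible for the key holder
    assert plaintext == record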

The next measure is to implement a centralised Identity and Access Management (IAM) solution, which provides standardised mechanisms to federate user accounts and access rights management across multiple cloud services. Such mechanisms are provided, for instance, by Intel Cloud SSO.
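To illustrate the federation side, here is a minimal sketch of the kind of token check a cloud service performs when it trusts a central identity provider; it uses the PyJWT library, and the issuer, audience and key values are illustrative placeholders, not the interface of any particular SSO product:

    # Minimal sketch: a cloud service validating a signed token issued by a
    # central identity provider (federated SSO). Uses PyJWT; issuer, audience
    # and key are placeholders for illustration only.
    import jwt  # PyJWT

    def validate_sso_token(token, idp_public_key):
        """Return the verified claims, or raise if the token is invalid or expired."""
        claims = jwt.decode(
            token,
            idp_public_key,
            algorithms=["RS256"],           # only accept the expected algorithm
            audience="crm.example.com",     # this service's identifier at the IdP
            issuer="https://sso.example.com",
        )
        return claims  # e.g. {"sub": "alice", "roles": ["sales"], "exp": ...}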

Public cloud services have the characteristic of being widely accessible, which means that they are likely to be subject to attacks from the internet.
Gathering and correlating log data is useful for efficiently monitoring, troubleshooting, auditing and analysing users' activity as well as the usage and performance of managed services. Cloud-based Security Information & Event Management (SIEM) solutions, e.g. CloudAccess SIEM technology, allow you to centralise log data in one place and offer a common management interface to supervise incidents.
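As a toy illustration of the consolidation step (source and field names are invented, and no particular SIEM product's format is implied), normalising entries from different services into one schema is what makes a single alert rule possible:

    # Toy sketch: normalising log entries from different cloud services into one
    # common schema so that a single SIEM query or alert rule covers all of them.
    def normalise(source, raw):
        """Map a provider-specific log record onto a common event format."""
        if source == "iaas":
            return {"ts": raw["timestamp"], "user": raw["user_id"],
                    "action": raw["event"], "source": source}
        if source == "saas_crm":
            return {"ts": raw["time"], "user": raw["actor"],
                    "action": raw["operation"], "source": source}
        raise ValueError(f"unknown log source {source!r}")

    events = [
        normalise("iaas", {"timestamp": "2012-06-13T08:01:22Z",
                           "user_id": "alice", "event": "login_failed"}),
        normalise("saas_crm", {"time": "2012-06-13T08:01:40Z",
                               "actor": "alice", "operation": "export_contacts"}),
    ]
    # With one schema in place, correlation and alerting become straightforward:
    print([e for e in events if e["user"] == "alice"])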

Considering the different cloud service models - i.e. IaaS, Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) - the key security consideration with cloud computing is that the lower down the stack the CSP stops, the greater the responsibility the consuming organisation has to protect its assets.
For example, in an IaaS delivery model, the CSP's responsibility consists of maintaining the physical support infrastructure (facilities, rack space, power, cooling, cabling, etc.), the security and availability of the physical infrastructure (servers, storage, network bandwidth, etc.) and the host systems (hypervisor, virtual firewall, etc.); everything above that remains the customer's responsibility.
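As a rough illustration of a typical shared-responsibility split (a common rule of thumb, not wording from any CSP's contract):

    # Rough illustration of a typical shared-responsibility split: the customer
    # is responsible for every layer above the point where the CSP stops.
    RESPONSIBILITY = {
        "IaaS": {"csp": ["facilities", "hardware", "network", "hypervisor"],
                 "customer": ["operating system", "middleware", "application", "data"]},
        "PaaS": {"csp": ["facilities", "hardware", "network", "hypervisor",
                         "operating system", "middleware"],
                 "customer": ["application", "data"]},
        "SaaS": {"csp": ["facilities", "hardware", "network", "hypervisor",
                         "operating system", "middleware", "application"],
                 "customer": ["data", "user access"]},
    }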


Independently of which cloud service model and delivery model an organisation utilises, the confidentiality and integrity of the data remain entirely the responsibility of its owner, i.e. the organisation itself.

Using adequate compensating controls will help an organisation mitigate the risks related to migrating data to the cloud.
However, it is never worth spending $1,000 to protect information whose value is $100.
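A back-of-the-envelope check using the classic annualised loss expectancy formula (ALE = SLE x ARO) makes the point concrete; all figures below are invented:

    # Back-of-the-envelope check of whether a control is worth its cost, using
    # the classic annualised loss expectancy (ALE = SLE x ARO). Figures invented.
    asset_value = 100_000                    # value of the information
    exposure_factor = 0.4                    # fraction of value lost per incident
    sle = asset_value * exposure_factor      # single loss expectancy = 40 000
    aro = 0.5                                # expected incidents per year
    ale_without_control = sle * aro          # 20 000 per year

    control_cost = 15_000                    # yearly cost of the compensating control
    ale_with_control = ale_without_control * 0.2   # assume it removes 80% of the risk

    benefit = ale_without_control - ale_with_control   # 16 000 per year
    print(f"annual benefit {benefit:.0f} vs cost {control_cost}")  # worthwhile only if benefit > cost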


Quentin Authelet
Risk consultant, KPMG

17 March 2012

Lessons learned from PCI-DSS

From an audit perspective, data is generally considered safe as long as the surrounding perimeter remains secured. Would this statement still be valid in a distributed or "cloud" environment with virtual boundaries?
This article aims to underline that accurate information security inevitably requires content awareness.
What is particularly interesting about PCI-DSS (the Payment Card Industry Data Security Standard) is that it focuses on the data itself, whether it is being used, transferred or stored.
There is an array of processes in the PCI standard that an IT auditor could definitely benefit from while performing general audits, risk assessments or gap analyses, leveraging the ISO 27000 standard family, SOX (Sarbanes-Oxley) or ISAE 3402.

PCI in short
PCI-DSS is probably the most detailed standard related to information security. Although PCI addresses a very specific type of data, i.e. cardholder information, it applies to a wide range of entities involved in payment card processing.
What the payment card industry has managed to achieve through its data security standard is to define and adopt a guideline with which everyone in the chain of business has to comply, including merchants, processors, acquirers, issuers and service providers, as well as all other entities that store, process or transmit cardholder data.

The crown jewels
For companies concerned by PCI, the crown jewel is card-related information, which implies that cardholder data must be clearly identified and differentiated from other kinds of data.
That is what classification is all about: identifying and categorizing information according to its level of confidentiality and its criticality for the business.
Every organization has some sort of intellectual property to protect from falling into the wrong hands, e.g. application source code, PII (personally identifiable information), passwords or patient information. But how many companies out there actually implement data classification?
When you don't know what information to protect, how can you tell if sensitive data is safe or not?

Processes and procedures
The PCI standard scopes what information to protect; it also requires identifying where it is located, who has the authority to handle it and how it is accessed and transferred. It then defines the proactive and detective controls to implement in order to keep that data safe.
Beyond data classification, the safeguards described in the standard include identity management, access control, threat detection, application vulnerability testing, network segmentation and encryption.

The question that you should constantly ask yourself while engaged in an assessment (PCI or otherwise) is whether data remains secure in a changing context.
From a policy enforcement point of view, you can rarely take accurate security measures based on a single attribute, such as "who" is accessing the data. You also need to consider exactly "what" data is concerned, "where" it is destined to or originates from, "how" it is transmitted and through "which" channel of communication.
That is the reason why policy enforcement should be consolidated, instead of having multiple controls each looking at an independent source without any correlating intelligence: a firewall filters based on source/destination, a directory service identifies users or resources, an IDS (intrusion detection system) looks at communication flows.
An efficient way of enforcing multiple data security policies from a single management point is DLP (data loss prevention) technology. DLP looks at data in its context and makes policy decisions according to different pre-defined and/or customized rules, e.g. PCI, SOX or PII.
DLP has the advantage of being both host-based and network-based and can discover, detect and prevent regulatory irregularities and abuses for data-in-use, data-in-motion as well as data-at-rest.
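As a toy example of the kind of content-aware rule a DLP engine evaluates (real products combine many such rules with contextual attributes such as who, where and how), here is a simple detector for candidate card numbers based on a digit pattern plus the Luhn checksum:

    # Toy content-aware rule: flag text that contains a plausible payment card
    # number (digit pattern + Luhn checksum). Real DLP rule sets are far richer.
    import re

    def luhn_ok(number):
        digits = [int(d) for d in number][::-1]
        total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
        return total % 10 == 0

    def contains_pan(text):
        for match in re.finditer(r"\b(?:\d[ -]?){13,19}\b", text):
            candidate = re.sub(r"[ -]", "", match.group())
            if 13 <= len(candidate) <= 19 and luhn_ok(candidate):
                return True
        return False

    print(contains_pan("order ref 12345"))                     # False
    print(contains_pan("card 4111 1111 1111 1111 exp 12/14"))  # True (test PAN)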

Needless to say, logs are of great value when performing auditing and forensic analysis, and from a general perspective, routine log analysis is beneficial for identifying security incidents, policy violations, fraudulent activity and operational problems.
The argument for consolidation, mentioned above, applies to monitoring too. Since it is a requirement to track all manipulation of cardholder data, log consolidation really makes sense when it comes to the PCI standard.
It would be rather inefficient for an auditor to separately review all the mandatory logs without cross references. For example, if you were just monitoring access logs, you would not learn much about what happens to the data after it has been accessed.
Log consolidation can be achieved through the use of SIEM (security information and event management) systems. SIEM analyzes the data from all the different log sources, correlates events among the log entries, identifies and prioritizes significant events, and initiates responses to events if desired.
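As a toy illustration of that cross-referencing (the log formats are invented), joining file-access events with subsequent outbound transfers shows what a single log source cannot:

    # Toy illustration of SIEM-style correlation: an access log alone says little,
    # but joined with a transfer log it shows what happened to the data after it
    # was accessed. The log formats below are invented for the example.
    from datetime import datetime, timedelta

    access_log = [  # from the application / file server
        {"ts": datetime(2012, 3, 17, 9, 0), "user": "bob", "object": "cardholder_export.csv"},
    ]
    transfer_log = [  # from the web proxy / mail gateway
        {"ts": datetime(2012, 3, 17, 9, 4), "user": "bob",
         "dest": "personal-webmail.example", "bytes": 8_500_000},
    ]

    WINDOW = timedelta(minutes=15)
    for a in access_log:
        for t in transfer_log:
            if t["user"] == a["user"] and timedelta(0) <= t["ts"] - a["ts"] <= WINDOW:
                print(f"ALERT: {a['user']} accessed {a['object']} and sent "
                      f"{t['bytes']} bytes to {t['dest']} shortly afterwards")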

As part of the PCI-DSS compliance effort, IAM (identity and access management) facilitates the enforcement of role based access control by restricting access to data by business need-to-know. 
The basic concept behind SoD (segregation-of-duties) is that no user or group of users should be in a position of perpetrating and concealing errors or fraud in the normal course of their duties.
Implementing a proper SoD will prevent one individual from having both access to assets and responsibility for maintaining the accountability of those assets.
IAM offers a set of processes that consolidate and systematise control of access to resources. Playing a major role in PCI-DSS compliance, these processes consist of SSO (single sign-on), user provisioning and policy management.
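A minimal sketch of need-to-know access checks combined with an SoD rule (role and permission names are invented) could look as follows:

    # Minimal sketch of role-based, need-to-know access control plus a
    # segregation-of-duties (SoD) rule. Role and permission names are invented.
    ROLE_PERMISSIONS = {
        "payment_operator": {"create_payment"},
        "payment_approver": {"approve_payment"},
        "auditor": {"read_logs"},
    }
    SOD_CONFLICTS = [("payment_operator", "payment_approver")]  # must not be held together

    def can(user_roles, permission):
        return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

    def sod_violations(user_roles):
        return [pair for pair in SOD_CONFLICTS if set(pair) <= set(user_roles)]

    alice = {"payment_operator", "payment_approver"}
    print(can(alice, "create_payment"))   # True
    print(sod_violations(alice))          # conflict to flag during user provisioning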

PCI-DSS also requires encryption of certain card related information, while stored, processed, or transmitted. Encryption of cardholder data is one of the most effective means of preventing cardholder information disclosure and reducing the risk of fraud.
Protecting information at its core is a good business practice. 
Generally speaking, whether considering cardholder information or other classified details, there is simply no excuse for not encrypting sensitive data.
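Alongside encryption, PCI-DSS also expects the PAN to be masked whenever it is displayed; a toy masking helper following the commonly cited first-six/last-four display limit:

    # Toy helper for PAN masking when card numbers must be displayed: PCI-DSS
    # limits what may be shown (at most the first six and last four digits).
    # Stored PANs must still be rendered unreadable (encryption, truncation, ...).
    def mask_pan(pan):
        digits = "".join(c for c in pan if c.isdigit())
        return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

    print(mask_pan("4111 1111 1111 1111"))  # 411111******1111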

Summary
Overall, compliance with PCI is estimated to reduce an organization's likelihood of suffering a data breach by 50 percent.
As part of any information security audit, whether a PCI compliance assessment, SOX, ISO 27K or ISAE 3402, an IT auditor should keep in mind the remediation measures that are part of the PCI standard for mitigating or avoiding risks.
Considering today's fast-paced adoption of software, platform and infrastructure as a service, where data is implicitly hosted in the cloud, processes such as data classification, policy enforcement consolidation, encryption, log consolidation and identity and access management are relevant approaches for protecting information in a changing context.

References
Point-to-Point Encryption Technology and PCI-DSS Compliance - a draft by the PCI Security Standards Council.
PCI Forensic Investigator - a PCI SSC program guide.
Getting PCI-DSS Compliance Right: How Identity Management Can Help Secure Information Access - an Oracle white paper.
Payment Card Industry Compliance Report - a study by Verizon.



Quentin Authelet
Risk consultant, KPMG