Traditionally, organizations have focused IT security primarily on perimeter defense, building walls to block external threats from entering the network. While this is still important, it is not enough. Cybercriminals regularly breach perimeter defenses, and data frequently lives outside those defenses, in the cloud and elsewhere, so organizations need a data-centric security strategy that protects data wherever it is. With today’s proliferation of data, evolving global and regional privacy regulations, growing cloud adoption, and advanced persistent threats, data-centric security enables organizations to stay in control of their data regardless of location while rendering it unreadable to data thieves. But to be effective, this protection must happen automatically, without relying on user intervention.
This white paper outlines the challenges of data security in this age of data proliferation. It also provides strategies to discover and classify your critical data and apply data-centric security to it.
Data Proliferation, Increased Regulations, and Better Cybercriminals
Many legacy data security architectures were built on the assumption that data would live in a data center and be consumed on-premises. The traditional IT environment was controlled by IT from end to end: IT owned and operated the infrastructure, security, and applications, and in turn had immense visibility into and control over both data and users. All access to data and applications passed through layers of perimeter security, such as firewalls, next-generation firewalls, VPNs, anti-virus, and intrusion prevention systems.
However, for the modern organization these checkpoints no longer exist. No matter how strong the perimeter around the data center is, the security it delivers is merely conceptual, because:
1. Perimeter security cannot scale for the movement and proliferation of data
The widespread adoption of cloud services, big data environments, and IoT technologies means organizations are moving huge amounts of data very rapidly, often to third party infrastructures and partners. This presents a host of challenges:
• Diverse data forms: Organizations must protect structured, semi-structured, and unstructured data alike.
• Perimeter security choke points: Routing traffic through the perimeter adds latency and performance bottlenecks that violate service level agreements (SLAs), so users often access cloud services directly, bypassing those controls.
• Insiders everywhere: Insiders are no longer just your employees within your perimeter. Your data is now in the hands of contractors, service providers, and other third parties. These “insiders” are individuals you didn’t vet, can’t monitor, and don’t control.
2. Operational complexity and regulation
Movement of data to the cloud, containers, big data technologies, and disparate tools from multiple vendors add to complexity. With increasingly blurred security perimeters, organizations are challenged to afford, implement, and manage consistent, unified policies to distributed IT resources. Every organization has a mix of legacy and new platforms.
Explosive data growth is further complicated by the increasing number of global and regional privacy regulations with differing compliance requirements. To effectively comply, organizations can no longer rely on siloed and legacy approaches to secure their data.
All of this adds up to today’s data environments becoming increasingly complex. So, it comes as no surprise that organizations perceive operational complexity as the top barrier to deploying data security. Chief Information Security Officers (CISOs) and Chief Data Officers (CDOs) increasingly recognize the need for comprehensive and integrated data security solutions that provide strong protections for sensitive data regardless of where it is stored or used.
Because legacy data security architectures do not address many of the characteristics of the modern data-centric world, they cannot protect organizations against sophisticated data breaches coming from increasingly determined attackers. If today’s CISOs and CDOs want to break the reactionary cycle of measures and countermeasures, they must take a completely new approach to security.
Three-Point Strategy for Protecting Sensitive Data in your Organization
Legacy security architectures have failed often and dramatically because they reflect outdated views of how organizations interact with their data. Data security today needs to recognize not only that data is the organization’s most valuable asset, but also that it is proliferating exponentially.
Data-centric security protects the data itself rather than just the endpoints, networks, and applications it moves between. Consequently, the data itself is secure, so it can move as much as the organization needs it to without increased risk. Instead of slowing down progress and inhibiting the proliferation of data, data-centric security empowers the organization to make the most of its data wherever it’s stored and used.
A data-centric security approach must be woven into the DNA of the organization. This holistic approach is based on Thales’ experience working with hundreds of enterprise CISOs, CDOs, CIOs, and architects on the frontline of data security and protection, as well as best practices required by numerous regulations and industry standards. To adopt this approach to data security, organizations need to do the following:
1. Discover and classify your sensitive data
Sensitive data sprawls across the enterprise, the cloud and well beyond. Typically, IT security has limited visibility into where data is stored and who has access to it. Distributed data risks range from breaches to compliance violations. Start by identifying where the most sensitive data assets reside in your on-premises data center, and then move to your extended environments, such as cloud and hosted services. Begin by searching your storage and file servers, applications, databases, and virtual machines. Find data across the organization, wherever it exists, and classify its sensitivity and importance based on internal policies and external regulations.
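To make the discovery step concrete, the sketch below shows the kind of pattern scan that discovery tools perform under the hood. The patterns, file types, and function names here are illustrative assumptions, not any vendor’s actual implementation; production scanners use validated detectors (for example, Luhn checks for card numbers) and cover far more data types.

```python
import re
from pathlib import Path

# Illustrative patterns only; real discovery tools detect many more
# data types and validate matches before reporting them.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_file(path: Path) -> dict:
    """Return counts of each sensitive-data pattern found in one file."""
    text = path.read_text(errors="ignore")
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

def discover(root: str) -> dict:
    """Walk a directory tree and report files containing sensitive data."""
    findings = {}
    for path in Path(root).rglob("*.txt"):
        hits = scan_file(path)
        if any(hits.values()):
            findings[str(path)] = hits
    return findings
```

The same scan would then be repeated on schedule across file servers, databases, and cloud storage, feeding the classification step that follows.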
Discovering, identifying, and classifying your sensitive data is the critical first step in this process, but it also needs to be repeatable and agnostic of technology or geography. Today’s data discovery and classification solutions provide visualized dashboards and drill-downs that help you get a clear understanding of what kind of sensitive data you have, where it is located, and its risk score. The risk scores aggregate various parameters, such as protection level, number of elements found, location, amount of sensitive data, etc., and allow organizations to identify the sensitivity of data objects, such as files and databases. Businesses can then protect data and mitigate risks, for example, by prioritizing remediation or making educated decisions about third-party data sharing or cloud migration.
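As a rough illustration of how such a risk score might aggregate the parameters mentioned above, here is a toy scoring function. The weights, scales, and category names are illustrative assumptions, not any product’s actual formula.

```python
import math

def risk_score(protected: bool, elements_found: int,
               location: str, sensitivity: str) -> float:
    """Return a 0-100 risk score for one data object (file, database, ...).
    All weights below are illustrative assumptions."""
    sensitivity_weight = {"low": 1, "medium": 2, "high": 4}[sensitivity]
    location_weight = {"on_prem": 1.0, "cloud": 1.5, "third_party": 2.0}[location]
    # Volume contributes logarithmically so huge stores don't dominate.
    volume = math.log10(elements_found + 1)
    raw = sensitivity_weight * location_weight * volume
    if not protected:
        raw *= 2  # unprotected data doubles the risk
    return min(100.0, raw * 5)
```

A score like this lets teams rank remediation work: an unprotected store scores higher than a protected one, and data held off-premises scores higher than the same data on-premises.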
2. Protect your sensitive data
Ideally, to protect the sensitive data itself, you establish a baseline encryption strategy across your organization, which mitigates data leakage and breach disclosure risks.
With your data discovered and classified, you can determine the risk each data set adds to your business and prioritize how and where to implement access controls and obfuscation security mechanisms, such as file-level encryption with granular access controls and tokenization with dynamic data masking. This means protecting the data by making it more difficult for unauthorized users to access and by rendering it unreadable and useless if it is stolen or leaked.
Currently, encryption is one of the most popular and effective data security methods used by organizations. Data encryption translates data into another form, ciphertext, so that only authorized users can access the data as clear text. While encryption transforms data using a specific algorithm, tokenization protects sensitive data by substituting non-sensitive data for it. Tokenization creates an unrecognizable tokenized form of the data that maintains the format of the source data. The tokenized data can also be stored in the same size and format as the original data, so storing it requires no changes to database schemas or processes. If the data being stored does not have this kind of structure (for example, text files, PDFs, or MP3s), tokenization is not an appropriate form of obfuscation. Instead, file-system-level encryption would be appropriate; it changes the original block of data into an encrypted version of the data.
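The format-preserving property of tokenization can be illustrated with a minimal vault-style tokenizer. This is a sketch under stated assumptions, not production code: real deployments use format-preserving encryption (for example, NIST FF1) or a hardened, access-controlled vault, and the class and method names here are hypothetical.

```python
import secrets

class TokenVault:
    """Toy vault-style tokenizer: replaces a card number with a random
    token of the same length and format, keeping the mapping server-side.
    Illustrative only; real systems use NIST FF1 or a hardened vault."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._value_to_token:   # return a stable mapping
            return self._value_to_token[pan]
        # Same length, digits only; last four kept for usability.
        token = "".join(secrets.choice("0123456789")
                        for _ in pan[:-4]) + pan[-4:]
        self._token_to_value[token] = pan
        self._value_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]
```

Because the token is the same length and character class as the original card number, it drops into existing database columns and application logic unchanged, which is exactly the schema-preservation benefit described above.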
When determining which data encryption solution type will best meet your requirements, there are several considerations. At a high level, data encryption types can be broken out by where they are employed in the technology stack. There are four levels in the technology stack in which data encryption is typically employed: disk, file system, database, and application. In general, the lower in the stack encryption is employed, the simpler and less intrusive the implementation will be. However, the number and types of threats these data encryption approaches can address are also reduced. On the other hand, by employing encryption higher in the stack, organizations can typically realize higher levels of security and mitigate more threats.
3. Control encryption keys
The security of cryptographic processes depends on the security of the cryptographic keys used to encrypt the data. If the keys used to encrypt or tokenize data are stolen along with the encrypted or tokenized data, the data is not secure, because it can be deciphered and read in plain text. For encryption and tokenization to successfully secure sensitive data, the cryptographic keys themselves must be secured, managed, and controlled by your organization, not by a third party or cloud provider.
As organizations deploy ever-increasing numbers of siloed encryption solutions, they find themselves managing inconsistent policies, different levels of protection, and escalating costs. The simplest path through this maze is to transition to a centralized key management model. Encryption key management involves administering the full lifecycle of cryptographic keys and protecting them from loss or misuse. Keys have a life cycle: they are created, live useful lives, and are retired. Key lifecycle management includes generating, using, storing, distributing, archiving, and deleting keys.

[Figure: encryption in the technology stack (application, database, file system, disk), trading off security against deployment complexity, with centralized key management spanning cloud storage, databases, storage & archive, application & web servers, file servers, and virtual machines.]

Some of the benefits of centralized key management are:
• Unified key management and encryption policies
• System-wide key revocation
• Reduced risk of human error in setting user and administrative permissions
• High availability and scalability
• FIPS 140-2 validated key security
• Cost savings through automation
• Consolidated audit information
• Simplified backup and recovery
• Enhanced security through comprehensive separation of duties
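The key lifecycle described above (generate, use, archive, destroy) can be sketched as a small state model. This is an illustrative toy, with hypothetical class names and a simplified set of states; real key managers follow the lifecycle states in NIST SP 800-57 and keep key material inside an HSM rather than in application memory.

```python
import secrets
from datetime import datetime, timedelta, timezone
from enum import Enum

class KeyState(Enum):
    ACTIVE = "active"
    ARCHIVED = "archived"    # retained only to decrypt older data
    DESTROYED = "destroyed"

class ManagedKey:
    """Toy model of one key's lifecycle; states simplified for illustration."""
    def __init__(self, rotation_days: int = 90):
        self.material = secrets.token_bytes(32)   # 256-bit key
        self.created = datetime.now(timezone.utc)
        self.expires = self.created + timedelta(days=rotation_days)
        self.state = KeyState.ACTIVE

    def archive(self):
        self.state = KeyState.ARCHIVED

    def destroy(self):
        self.material = None                      # render key unrecoverable
        self.state = KeyState.DESTROYED

class KeyManager:
    """Centralized store: issue, rotate, and revoke keys by id."""
    def __init__(self):
        self._keys = {}   # key_id -> list of versions, newest last

    def create(self, key_id: str) -> ManagedKey:
        key = ManagedKey()
        self._keys.setdefault(key_id, []).append(key)
        return key

    def current(self, key_id: str) -> ManagedKey:
        return self._keys[key_id][-1]

    def rotate(self, key_id: str) -> ManagedKey:
        self.current(key_id).archive()   # old key kept for decryption
        return self.create(key_id)

    def revoke_all(self):
        for versions in self._keys.values():
            for key in versions:
                key.destroy()
```

Centralizing these operations in one manager is what makes system-wide revocation and unified policy, listed above, possible: a single `revoke_all` reaches every key version, instead of one revocation procedure per silo.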