The Practical Executive’s Guide to Data Loss Prevention

The Problem

There has been much confusion in the marketplace regarding data loss prevention (DLP) controls. There are numerous contributing factors, most notably a general lack of understanding in the vendor community about how data security works or what constitutes risk to a business. Impractical processes were established, operational bottlenecks ensued, and the ongoing threat of data loss and theft persisted. As a result, organizations that want to protect their confidential data and comply with laws and regulations are often skeptical and unsure where to turn. Some have been burned by unsuccessful implementations.

The important thing to realize is that it’s not the technology behind DLP controls that ultimately determines your success; it’s the methodology and execution strategy of your vendor that governs both your experience and results. This whitepaper provides guidance and clarity on the following:

⦁ It explains important distinctions and advises on how to assess a potential vendor.

⦁ It provides valuable insight into data-breach trends.

⦁ It offers an easy-to-follow nine-step process for implementing and executing a data protection strategy in a manner that is practical, measurable, and risk-adaptive.

⦁ It offers numerous “practical best practices” to help you avoid common pitfalls and eliminate most of the operational challenges that plague DLP implementations.

A Starting Point

All DLP controls should fulfill the first two objectives in the following list. However, a more advanced DLP solution will also be equipped with the third capability.

⦁ They provide the ability to identify data.

→ Data-in-Motion (traveling across the network)

→ Data-in-Use (being used at the endpoint)

→ Data-at-Rest (sitting idle in storage)

→ Data-in-the-Cloud (in use, in motion, at rest)

⦁ They identify data as described or registered.

→ Described: Out-of-the-box classifiers and policy templates help identify types of data. This is helpful when looking for content such as personally identifiable information (PII).

→ Registered: Data is registered with the system to create a “fingerprint,” which allows full or partial matching of specific information such as intellectual property (IP).

⦁ They take a risk-adaptive approach to DLP.

→ Risk-adaptive DLP sets advanced data loss prevention solutions apart from other DLP tool sets. Derived from Gartner’s continuous adaptive risk and trust assessment (CARTA) approach, risk-adaptive DLP adds flexibility and proactivity to DLP. It autonomously adjusts and enforces DLP policy based on the risk an individual poses to an organization at any given point in time.

To illustrate how the first two common capabilities work, a DLP control is told:

→ What to look for (e.g., credit card numbers)

→ The method for identifying the information (described/registered)

→ Where to look for it (e.g., network, endpoint, storage, cloud)

What happens after a DLP control identifies the information depends on a) the risk tolerance of the data owner, b) the response options available when data loss is detected, and c) whether the solution is risk-adaptive.
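To make the first two capabilities concrete, below is a minimal sketch in Python. It is purely illustrative and not any vendor’s API: a “described” classifier (a credit card pattern plus a Luhn checksum) inspects content, and the response depends on the data owner’s risk tolerance. The pattern, names, and response labels are assumptions made for illustration.

# Illustrative sketch only; not any vendor's API. A "described" classifier
# (credit card pattern + Luhn check) inspects content, and the response
# depends on the data owner's risk tolerance.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # hypothetical described classifier

def luhn_valid(number: str) -> bool:
    # Standard Luhn checksum, used here to cut obvious false positives.
    digits = [int(d) for d in reversed(number) if d.isdigit()]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def inspect(content: str, risk_tolerance: str) -> str:
    # Returns a response action for a piece of data-in-motion content.
    hits = [m.group() for m in CARD_PATTERN.finditer(content) if luhn_valid(m.group())]
    if not hits:
        return "allow"
    # What happens next depends on the data owner's risk tolerance.
    return "block" if risk_tolerance == "low" else "quarantine-and-notify"

print(inspect("Card: 4111 1111 1111 1111", risk_tolerance="low"))  # -> block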

From Vision to Implementation

Although all DLP controls provide similar capabilities, it’s important to understand that not all vendors have the same vision for how DLP helps to address the problem of data loss. Therefore, your first step is to understand the methodology and execution strategy of each vendor that you are considering.

By asking a vendor, “What’s your methodology?” you are really asking, “What’s your vision for how this tool will help solve the problem of data loss?” This is an important yet rarely asked question; the answer allows you to understand a vendor’s vision, which in turn enables you to identify its unique capabilities and the direction its roadmap is likely to head. For decision makers, knowing why vendors do what they do is much more relevant to your success and long-term happiness than knowing what they do.

A vendor’s methodology also heavily influences its execution, or implementation, strategy. For example, if one vendor’s methodology starts by assessing data-at-rest, and another’s starts by assessing data-in-motion using risk-adaptive controls, then their execution strategies differ greatly. How a vendor executes DLP controls matters because it impacts both your total cost of ownership (TCO) and your expected time-to-value, which are crucial for making the right purchase decision and for properly setting expectations with stakeholders.

An important note: You should avoid applying one vendor’s methodology to another’s technology. The methodology defines and drives a vendor’s technology roadmap, so by mixing the two aspects you risk investing in a technology that won’t meet your long-term needs.

Measurable and Practical DLP

If you’ve attended a conference or read a paper on DLP best practices, you are probably familiar with the metaphor, “don’t try to boil the ocean.” It means that you can’t execute a complete DLP program in one fell swoop. This is not a useful best practice because it doesn’t help you figure out what to do and when. In some respects, “don’t boil the ocean” sounds more like a warning than a best practice.

Unfortunately, published best practices aren’t always practical. A lack of resources, financial or otherwise, and other organizational issues often leave best practices unfollowed, and therefore effectively useless. There’s far greater value in practical best practices, which take into consideration the cost, benefits, and effort of following them, and which can be measured to determine whether you and your organization can or should adopt them.

In order for your DLP control to be measurable and practical in managing and mitigating the risk of data loss, there are two key pieces of information that you must know and understand:

⦁ To be measurable, you have to know and apply the risk formula for data loss. Although similar to other risk models, the risk formula for data loss does have one substantial difference, which we explain below.

⦁ To be practical, you must understand where you are most likely to experience a high-impact data breach and use the 80/20 rule to focus your attention and resources.

The Risk Formula for Data Loss

The basic risk formula that most of us are familiar with is:

Risk = Impact x Likelihood

The challenge with most risk models is determining the likelihood, or probability, that a threat will happen. This probability is crucial for determining whether to spend money on a threat-prevention solution, or to forego such an investment and accept the risk.

The difference with the risk formula for data loss is that you are not dealing with the unknown. It acknowledges the fact that data loss is inevitable and usually unintentional. Most importantly, the risk formula allows risk to be measured and mitigated to a level that your organization is comfortable with.

Therefore, the metric used for tracking reduction in data risk and ROI of DLP controls is the rate of occurrence (RO).

Risk = Impact x Rate of Occurrence (RO)

The RO indicates how often, over a set period of time, data is being used or transmitted in a manner that puts it at risk of being lost, stolen, or compromised. The RO is measured before and after the execution of DLP controls to demonstrate by how much risk was reduced.

For example, if you start with an RO of 100 incidents in a two-week period, and are able to reduce that amount to 50 incidents in a two-week period after implementing DLP controls, then you have reduced the likelihood of a data-loss incident (data breach) by 50%.
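As a worked illustration of the formula (a sketch in Python, not part of any DLP product), the calculation from the example above looks like this:

# Measure the rate of occurrence (RO) over the same window before and
# after DLP controls, then report the percentage reduction in risk.
def risk_reduction(ro_before: int, ro_after: int) -> float:
    return 100.0 * (ro_before - ro_after) / ro_before

print(risk_reduction(ro_before=100, ro_after=50))  # -> 50.0 (% reduction)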

One important consideration is that if one of the DLP solutions you are comparing has risk-adaptive technology, it is likely to show a smaller RO. This is because risk-adaptive DLP is far more accurate at identifying risky user interactions with data, producing fewer false positives and a lower overall RO. This presents an advantage over traditional DLP solutions; however, it also makes comparing the reduction in risk a bit trickier.

To account for this, it is recommended that each incident produced by the non-risk-adaptive technology be reviewed to verify that it is not a false positive. Keep in mind that just because the identified data matches the DLP rule created, it does not necessarily mean that the data is a violation of policy. The intent and context around the data-loss incident must also be inspected to ensure that the incident is in fact a true positive.
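A hypothetical sketch of that verification step: count only the incidents confirmed as true positives, so the adjusted RO of a traditional tool can be compared fairly against a risk-adaptive one. The review results below are invented for illustration.

# Count only incidents verified as true positives (intent and context
# confirm a real policy violation) to produce a comparable, adjusted RO.
def adjusted_ro(review_results: list) -> int:
    # review_results: one bool per reviewed incident, True if verified.
    return sum(review_results)

reviewed = [True, False, True, True, False]  # made-up review outcomes
print(adjusted_ro(reviewed))  # -> adjusted RO of 3 verified incidents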

The 80/20 Rule of DLP

In addition to identifying RO, it’s important to discover where your organization is most likely to experience a high-impact data breach. To do this you need to study the latest breach trends and then use the 80/20 rule to determine where to start your DLP efforts. A recent study has made this information readily available.

According to a 2018 study by the Ponemon Institute, 77% of data breaches are caused by internal employees, in the form of accidental exposure and compromised user credentials.

To truly have an effective program for protecting against data loss, you have to feel confident about your ability to detect and respond to data movement through web, email, cloud, and removable media.

This is where a risk-adaptive DLP solution can provide an advantage. Traditional DLP solutions often struggle to identify items such as broken business processes or irregular activity, both of which can lead to significant data loss. Risk-adaptive DLP understands the behavior of individual users and compares it to that of their peer groups, quickly and autonomously tightening DLP controls when activity is not in line with the end user’s job function. This proactive approach can reduce the risk of accidental data loss and exposure.
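A conceptual sketch of how such a control might behave, assuming a simple peer-group baseline; real risk-adaptive products use far richer behavioral models, and the thresholds below are invented for illustration:

# Compare a user's activity with a peer-group baseline and tighten the
# DLP response as the deviation grows. Purely conceptual; thresholds are
# invented and not drawn from any product.
from statistics import mean, pstdev

def enforcement(user_events: float, peer_events: list) -> str:
    baseline = mean(peer_events)
    spread = pstdev(peer_events) or 1.0  # guard against zero spread
    z = (user_events - baseline) / spread
    if z < 1.0:
        return "monitor"          # in line with the user's job function
    if z < 3.0:
        return "warn-and-audit"   # unusual: add friction, keep evidence
    return "block"                # far outside peer behavior: enforce

peers = [5.0, 7.0, 6.0, 5.0, 7.0]  # made-up weekly file-transfer counts
print(enforcement(40.0, peers))    # -> block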
