How SOC Teams Can Master Collaboration Security with Both Efficiency and Risk Management

There’s an enormous increase in the number of collaboration tools being used in day-to-day business operations.

The market is growing rapidly for good reason: these tools are key to succeeding in today’s accelerated business environment. However, in addition to all of the benefits they bring, they can also introduce new security risks.

Security Operations Center (SOC) teams need a better way to secure these tools without sacrificing efficiency or adding to alert fatigue. Efficiency is becoming increasingly important, especially during the economic downturn. The good news is that there are ways to find the balance between these needs.

The challenges of collaboration security

Part of the problem is that these SaaS tools are scattered all over. No longer is there a traditional perimeter, which can make them harder to protect. They contain a massive amount of data – some of it sensitive – in a variety of different formats.

Complicating the situation is that most organizations don’t have a full understanding of all the sensitive information that lives in these tools; there tends to be a lack of comprehension, visibility, or both. There’s also typically no effort to map and classify the sensitive data in these apps. That leaves companies defending in a generic way – not knowing what they are defending against.

What’s happening is that many organizations are simply putting their collective heads in the sand rather than addressing this important security issue. These “unknown unknowns” are a major problem; not knowing or understanding your level of risk means you’re essentially ignoring it.

Some organizations turn to data loss prevention (DLP) and data security solutions, but they too often have a reputation for being inefficient, which hinders their use. There’s no efficient way to map or classify data in DLP.

Many such solutions generate far too many false positives, which further bogs down the SOC teams that must deal with these alerts. While estimates vary, false positives are believed to account for nearly half of all alerts most organizations grapple with, and they are a major contributor to SOC analyst burnout. These false positives are really the outcome of the inefficiency of maintaining detection logic: knowing what traffic and activity should be allowed in and out. The security team needs to know what’s good or bad, but they typically can’t, because they lack the context. That context comes from knowing what sensitive data is in the tool, whether it needs to be shared externally, who needs permission to see or send it, and so on.

What’s needed is a way to use these tools that doesn’t sacrifice efficiency but still meets compliance regulations.

Getting a handle on the problem

There are two options: you either choose to ignore it and expose your entire business to an enormous risk, or you do something about it. If all these collaboration channels are open externally and internally, there’s an element of risk involved. But at the same time, if you do use data security tools, you might be running the risk of slowing down your business, because many of them aren’t efficient.

If you don’t know the data itself and the people who should be interacting with it, you’re never going to be able to know if you’re sending sensitive information to the right parties or whether actions related to this information are legitimate.

For the SOC team especially, the challenge with data security is that not only do you have to maintain the tool, you have to ensure it brings value to the security program overall. The SOC will need to use the data security tool first and foremost to prevent sensitive information from leaking outside the organization or being accessed by people who shouldn’t have access to it.

This requires constant tweaking – an intensive process that adds to the workload of what are often already overburdened teams; there is a lot of legwork involved. You also need to build and maintain the logic behind it, and all of this is a great deal of work.

Balancing risk and efficiency

Key to balancing risk and efficiency is understanding what’s legitimate and what’s not in terms of information being shared, through which channels, and with which individuals. To begin with, you need to formulate a baseline of normal behavior. This will outperform any rule-based tool, because it learns as you go.
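To make the idea of a behavioral baseline concrete, here is a minimal sketch of how one might flag deviations from a learned norm instead of matching static rules. All names and the data are hypothetical; a real system would learn from live activity logs, not a hard-coded history.

```python
from statistics import mean, stdev

def build_baseline(history):
    # Baseline of "normal": mean and standard deviation of each user's
    # daily count of externally shared files (hypothetical metric).
    return {user: (mean(counts), stdev(counts)) for user, counts in history.items()}

def is_anomalous(baseline, user, todays_count, threshold=3.0):
    # Flag activity that deviates strongly from the learned baseline
    # rather than matching a fixed rule.
    if user not in baseline:
        return True  # no history at all is itself worth a look
    mu, sigma = baseline[user]
    if sigma == 0:
        return todays_count > mu
    return (todays_count - mu) / sigma > threshold

# Hypothetical week of daily external-share counts per user.
history = {
    "alice": [2, 3, 2, 4, 3, 2, 3],
    "bob":   [0, 1, 0, 0, 1, 0, 0],
}
baseline = build_baseline(history)
print(is_anomalous(baseline, "alice", 3))   # within alice's normal range
print(is_anomalous(baseline, "bob", 25))    # far above bob's baseline
```

The point of the sketch is the shape of the logic: the threshold adapts per user, so a count that is routine for one person can still be an anomaly for another.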

What’s needed is a contextual engine, one that can complement existing tools in the SOC as opposed to being a rip-and-replace situation. With a contextual engine, you can do remediation workflows in existing SOC tools to maximize efficiency. A context-based tool understands the relationship between individuals and platforms and therefore can assign a rationale for an activity before creating an alert. This in turn dramatically reduces false positives.
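As a rough illustration of how a context check can suppress a false positive before it ever becomes an alert, consider the sketch below. The event fields, addresses, and labels are all hypothetical; the "known collaborations" set stands in for the relationship context a real engine would learn automatically.

```python
def should_alert(event, known_collaborations, sensitive_labels):
    # Context-based check: only raise an alert when sensitive data
    # moves between parties with no established working relationship.
    if event["label"] not in sensitive_labels:
        return False  # non-sensitive content: no alert needed
    pair = (event["sender"], event["recipient"])
    if pair in known_collaborations:
        return False  # an established relationship supplies the rationale
    return True  # sensitive data, no known relationship: worth an alert

# Hypothetical relationship context learned from past activity.
known = {("alice@corp.example", "legal@partner.example")}
sensitive = {"contract", "pii"}

routine = {"sender": "alice@corp.example",
           "recipient": "legal@partner.example", "label": "contract"}
unusual = {"sender": "bob@corp.example",
           "recipient": "stranger@mail.example", "label": "pii"}

print(should_alert(routine, known, sensitive))  # False: context explains it
print(should_alert(unusual, known, sensitive))  # True: no rationale found
```

A rule-based tool would fire on both events because both involve sensitive data leaving the organization; the context check silences the first one, which is exactly the false-positive reduction described above.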

Efficiency and risk management converge

As collaboration platforms have proliferated, they’ve often been adopted without a full understanding of the security risks that can come with them. These tools and platforms are handling loads of data, much of it sensitive in nature. IT teams need a way to deal with this risk without ignoring efficiency. 

The existing approach just doesn’t work for today’s landscape – it doesn’t scale, and it’s not built for today’s collaborative environment. There’s simply too much information coming in, and from too many different places. The only way to fix this is with an API-based approach, using machine learning to map, classify, assess and remediate.
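The map, classify, assess and remediate flow can be sketched as a simple pipeline. Everything here is illustrative: the asset fields are invented, and the keyword classifier is a placeholder for the machine-learning model the article envisions.

```python
def map_assets(api_items):
    # Map: enumerate items pulled via a SaaS tool's API (hypothetical shape).
    return [item for item in api_items if item.get("id")]

def classify(asset):
    # Classify: naive keyword labeling; an ML model would replace this.
    text = asset.get("text", "").lower()
    if "ssn" in text or "passport" in text:
        return "pii"
    if "contract" in text:
        return "legal"
    return "general"

def assess(asset, label):
    # Assess: combine sensitivity with exposure into a risk score.
    exposure = 2 if asset.get("shared_externally") else 1
    weight = {"pii": 3, "legal": 2, "general": 0}[label]
    return weight * exposure

def remediate(asset, score, threshold=4):
    # Remediate: return an action instead of a raw alert.
    return "revoke_external_link" if score >= threshold else "allow"

items = [
    {"id": "f1", "text": "Team lunch photos", "shared_externally": True},
    {"id": "f2", "text": "Employee SSN list", "shared_externally": True},
]
for asset in map_assets(items):
    label = classify(asset)
    print(asset["id"], remediate(asset, assess(asset, label)))
```

The value of structuring it this way is that each stage can improve independently: swap the keyword classifier for a trained model, or tune the risk weights, without touching the rest of the pipeline.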

About the Author

Oz Wasserman, head of product at Reco, is a long-time security professional with deep technical experience from the Israeli Defense Forces (IDF), FireEye and Abnormal Security. Oz and the Reco team are building the next generation of data security platforms.

Featured image: ©Gorodenkoff