Autonomics in Security Operations: What Is Possible with Cyber Security Automation?

By Rajat Mohanty

July 21, 2016

The Increased Need for Automation in Cyber Security

Many industry reports project a growing gap between the demand for and the supply of security skills. In the future, millions of people will be needed for day-to-day security operations, and there is a concern that we will not be able to find them. Even the smallest SOC needs 10-12 people, and when you consider the number of SOCs that all enterprises, governments and vendors would need, that figure runs into the millions. But, to quote a Danish saying, “prediction is difficult, especially about the future.” The future may not be about more people in your SOC but about more autonomics in your SOC. It may come down to being prepared with cyber security automation. At least that’s what I think.


My view originates from the simple historical perspective that productivity always increases in every human endeavor. From the Stone Age to the agrarian era to industrialization to the information era of today, it has always been a story of doing more with less effort as time progresses. The security industry is relatively new and probably hasn’t focused much on productivity gains so far, although we do have the continuing challenge of building new technology to stop new threats. The truth is, managing day-to-day security operations was not a big priority in the past. The priority was stopping the threats. 

Give us a challenge and we will rise to meet it. With security operations becoming a top priority for every organization today, it is a given that the industry will move towards building SOC and cyber security automation platforms. Several such platforms are already here, but the question is: how far can we automate?

The Possibilities within Automation

That answer depends on what activities are conducted in modern security operations. Since there is no common taxonomy for this, I tried putting what I have seen in Paladion’s own SOCs into a bubble graph.

[Figure: Bubble graph of security operations tasks, sized by effort and colored by complexity]

The size of each bubble shows the time and effort that goes into a task, and the color denotes its complexity. In security, complexity is the main enemy to tackle when we try to automate tasks. For example, you can easily automate the tasks around reporting and trending. Every SOC needs to produce reports for a variety of stakeholders, including ad hoc requests. And while reporting takes a lot of time, it is not a complex task; it can therefore be automated and integrated into a cyber security automation system with little difficulty.
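To make the reporting example concrete, here is a minimal sketch in Python of what automated report generation could look like. It assumes alerts are exported as a JSON file with hypothetical "severity" and "category" fields; a real platform would pull the same data from its SIEM or case management APIs.

# Minimal sketch of automated SOC reporting. The alert file and its
# "severity"/"category" fields are hypothetical stand-ins for a SIEM export.
import json
from collections import Counter
from datetime import date

def weekly_summary(alert_file: str) -> str:
    with open(alert_file) as f:
        alerts = json.load(f)          # expects a list of alert records
    by_severity = Counter(a["severity"] for a in alerts)
    by_category = Counter(a["category"] for a in alerts)
    lines = [
        f"SOC weekly report - {date.today().isoformat()}",
        f"Total alerts: {len(alerts)}",
        "By severity: " + ", ".join(f"{k}={v}" for k, v in by_severity.most_common()),
        "Top categories: " + ", ".join(f"{k}={v}" for k, v in by_category.most_common(5)),
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(weekly_summary("alerts_last_week.json"))

Once a script like this is scheduled, an ad hoc report request becomes a matter of changing a filter rather than spending analyst hours.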

But how do you automate alert analysis? In today’s thinking, only an analyst can figure out whether an alert is serious enough to investigate further for threats or whether it should be dropped. For that matter, consider hunting: the current paradigm is that it is an even more complex task than alert analysis, and that only a skilled security person with knowledge of data science can do it. So how can these processes be integrated into a cyber security automation system?

True productivity gains in security operations will not come from automation alone; they will also require reducing complexity by mimicking human cognitive processes.

 

Security Autonomics - Marrying cyber security automation with cognition

A generic cyber security automation system of this kind not only processes an input into an output based on rules, but also pulls in data from a variety of sensors, makes judgments on that sensory data against a repository of knowledge, and keeps learning with experience.

[Figure: A generic autonomic system - sensing, decision making against a knowledge repository, and learning from experience]
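As an illustration only, the loop described above can be reduced to a small skeleton; the class and method names below are hypothetical and simply mirror the sense / decide / learn cycle.

# Illustrative skeleton of the sense / decide / learn loop described above.
# The sensor and knowledge-base interfaces are hypothetical.
class AutonomicSecuritySystem:
    def __init__(self, sensors, knowledge_base):
        self.sensors = sensors            # e.g. log collectors, threat feeds, asset inventory
        self.knowledge = knowledge_base   # rules, models, and recorded past outcomes

    def sense(self):
        # Pull fresh data from every registered sensor.
        return {sensor.name: sensor.collect() for sensor in self.sensors}

    def decide(self, observations):
        # Judge the observations against the knowledge repository.
        return self.knowledge.evaluate(observations)

    def learn(self, observations, decision, outcome):
        # Fold the analyst-confirmed outcome back into the knowledge base.
        self.knowledge.update(observations, decision, outcome)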

Autonomics in security operations can exist at three levels:

Level 1 - Using sensory data and adding decision making to cyber security automation

This level is about having full contextual information on assets, vulnerabilities, attackers, the network, and controls, and then using historical data along with that context to arrive at a score for each alert. The scoring is the know-how part, which will vary with each organization but can also be built into a platform.

So, a platform that constantly collects a variety of context data and has a model for scoring can do the job of alert triaging, and only the triaged alerts will go to an analyst. In this model, machines can do thousands of alert analyses without the need for human involvement.
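A hedged sketch of what such context-driven scoring might look like is below; the weights and context fields (asset criticality, vulnerability match, threat-intel hit, historical true-positive rate) are illustrative assumptions and would differ for every organization.

# Illustrative alert scoring and triage. The weights and context fields
# are hypothetical; each organization would tune its own model.
def score_alert(alert, context):
    score = 0.0
    score += 0.3 * context.get("asset_criticality", 0.5)                    # 0..1
    score += 0.3 * (1.0 if context.get("vuln_matches_exploit") else 0.0)    # exposed to this attack?
    score += 0.2 * (1.0 if context.get("source_in_threat_intel") else 0.0)  # known bad source?
    score += 0.2 * context.get("historical_true_positive_rate", 0.1)        # how often this alert was real
    return score

def triage(alerts, context_lookup, threshold=0.6):
    # Only alerts scoring above the threshold reach a human analyst.
    return [a for a in alerts if score_alert(a, context_lookup(a)) >= threshold]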

Level 2 - Models and algorithms for decision making beyond a rule-based system

Scoring an alert based on a variety of sensory and historical data is a simple decision-making system. Complexity arises when the context data is incomplete and a machine needs to extrapolate or predict it. An example would be vulnerability data that is not available for some assets under attack, where the machine needs to predict the vulnerabilities based on other similar assets (profiling) or patching history (Bayesian probabilistic models).
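As a rough illustration of that kind of extrapolation, the sketch below blends the rate observed on similar assets (profiling) with the asset’s patch lag into a single probability; the smoothing and the 70/30 blend are arbitrary assumptions, not a production model.

# Rough sketch of predicting missing vulnerability data for an asset.
# The smoothing and the 0.7/0.3 blend are arbitrary illustrative choices.
def estimate_vuln_probability(similar_vulnerable, similar_total,
                              days_since_last_patch, patch_cycle_days=30):
    # Profiling prior: fraction of similar assets carrying the vulnerability,
    # with weak Beta(1, 1) style smoothing to avoid 0 or 1 extremes.
    prior = (similar_vulnerable + 1) / (similar_total + 2)
    # Patch-lag factor: the further past the usual patch cycle, the higher the odds.
    lag_factor = min(days_since_last_patch / patch_cycle_days, 2.0) / 2.0
    return 0.7 * prior + 0.3 * lag_factor

# e.g. 12 of 40 comparable servers are vulnerable, asset last patched 45 days ago
print(estimate_vuln_probability(12, 40, 45))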

Similarly, for hunting, the machine executes a data-science-driven model based on triggers (for example, checking for malware beaconing when an AV alert fires), and a hunter only looks at the output of such models.
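To illustrate, a trigger-driven beaconing hunt could be as simple as the sketch below: when an AV alert fires for a host, its outbound connections are checked for near-constant intervals to any single destination. The thresholds and the (timestamp, destination) input format are assumptions made for the example.

# Minimal beaconing check run as a hunt triggered by an AV alert on a host.
# Input format and thresholds are illustrative assumptions.
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(connections, min_events=6, max_jitter_ratio=0.1):
    # connections: iterable of (timestamp_seconds, destination) for one host
    by_dest = defaultdict(list)
    for ts, dest in connections:
        by_dest[dest].append(ts)
    beacons = []
    for dest, times in by_dest.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        # Near-constant intervals (low jitter relative to the average) look beacon-like.
        if avg > 0 and pstdev(gaps) / avg <= max_jitter_ratio:
            beacons.append((dest, avg))
    return beacons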

Level 3 - Learning from experience

The Holy Grail for machine learning in security is to immediately know from alerts that an attack is actually happening or has succeeded and to take the right countermeasure to stop or eradicate it. This level of autonomics may not be possible in security systems, because many factors determine whether an alert qualifies as an attack. The same alert that was considered an attack or incident in the past need not be an attack today if the asset has changed, or if the controls or even the attack pattern has changed.

Unfortunately, there are too many contextual parameters to model reliable and consistent supervised machine learning at that level. However, the subtasks can be modeled so that a machine learns from human analysts. For example, for a certain type of alert, analysts might pull a specific set of data from packets, logs, or endpoints and carry out a certain type of analysis. The hope is that the machine can learn to perform this analysis automatically and present the results to human analysts. Similar things can be done for countermeasure deployment. These developments will lead to new changes and adjustments in security automation and autonomics.
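One modest way to sketch that idea: record which data pulls and analyses human analysts actually perform per alert type, and suggest the most common sequence the next time that alert type appears. The class, step names, and alert types below are hypothetical placeholders.

# Toy example of learning analyst workflows from observed history.
# Alert types and step names are hypothetical placeholders.
from collections import defaultdict, Counter

class PlaybookLearner:
    def __init__(self):
        self.history = defaultdict(Counter)

    def record(self, alert_type, steps):
        # steps: ordered list such as ["pull_email_headers", "extract_urls", ...]
        self.history[alert_type][tuple(steps)] += 1

    def suggest(self, alert_type):
        # Return the most frequently observed step sequence for this alert type, if any.
        seen = self.history.get(alert_type)
        return list(seen.most_common(1)[0][0]) if seen else []

learner = PlaybookLearner()
learner.record("phishing", ["pull_email_headers", "extract_urls", "check_threat_intel"])
learner.record("phishing", ["pull_email_headers", "extract_urls", "check_threat_intel"])
print(learner.suggest("phishing"))   # -> ['pull_email_headers', 'extract_urls', 'check_threat_intel']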

What are your views on cyber security automation and autonomics? Let me know in the comments below.

Several of these concepts have already been translated into results in our AIsaac platform, and we are experimenting with many more such ideas. You can contact a Paladion Solution Expert for a demo of these features here.




About

Rajat Mohanty

Rajat Mohanty is the Co-founder, Chairman of the Board of Directors, and Chief Executive Officer of Paladion Networks. He has been Paladion’s Chairman and CEO since the inception of the company in July 2000.
