Malware often uses beacons to communicate with command and control (C&C) servers. Beacons are very strong indicators of compromise (IOCs), and any time spent improving your detection methods and success rates is time well spent. Academic research into beacon detection, along with products and services, has increased in the last few years. One specific set of techniques that we would like to mention is the use of entropy to detect beacons in your network's data flows. A web proxy will usually contain all the information you need to build a baseline of normal traffic flow; you're then left with the task of comparing live traffic against that baseline. You might have some false positives, but you can immediately flag them and avoid them thereafter.
C&C heartbeat detection techniques have gained a lot of traction, and one of the most exciting techniques in that area is using entropy to detect the heartbeats. Entropy can be seen as the amount of disorder that exists in something. Your network flow monitoring systems might provide you with entropy information, or your IT staff can write a script that computes it.
For example, by calculating the entropy of a group of packets, you can flag connections that have unusually low entropy. By correlating that information with host information and with the duration and frequency of the connections, it is possible to detect C&C heartbeats.
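As a minimal sketch of the idea, the script below computes the Shannon entropy of packet payloads and flags the low-entropy ones. The 2.5 bits-per-byte threshold is purely illustrative; repetitive heartbeat payloads tend to score low, while encrypted or compressed traffic scores close to 8 bits per byte.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 for empty input)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_low_entropy(payloads, threshold=2.5):
    """Return the indices of payloads whose entropy falls below the
    threshold.  The threshold is an illustrative starting point; tune
    it against your own baseline traffic."""
    return [i for i, p in enumerate(payloads) if shannon_entropy(p) < threshold]
```

In practice you would feed this the payloads of a single connection and then correlate flagged connections with their duration and frequency, as described above.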
Lateral movement on a breached network is a difficult proposition. Any malware intending to avoid detection has to be smart about how it moves on that network, how often, and at what times. That is great news for the security staff, as it affords them the opportunity to detect that lateral movement based on several factors. One almost-sure IOC is malware attempting to connect to a host that either doesn't exist or is simply a honeypot that no one in the organization, other than information security and IT staff, is meant to interact with.
The same principle as lateral movement in the Host Access model applies here. Using NetFlow data, it is indeed easy to detect connection attempts to ports that do not exist or should not be interacted with. As with most tasks in network flow anomaly detection, separating true positives from false positives will require a minimal amount of trial and error. However, once those false positives have been flagged and marked as such, MDR staff should be in a position to accurately detect suspicious connections to non-existent ports. You are not limited to any particular technology or software when it comes to detecting such anomalies. The sky is the limit.
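A sketch of the idea, assuming flow records have already been reduced to (source, destination, port) tuples and that you maintain a per-host service inventory (the hosts and ports below are made up):

```python
# Assumed per-host service inventory: which ports each host legitimately serves.
ALLOWED_PORTS = {
    "db01": {5432},
    "web01": {80, 443},
}

def suspicious_flows(flows):
    """Flag flows targeting hosts or ports outside the service inventory.

    flows: iterable of (src_host, dst_host, dst_port) tuples, a simplified
    stand-in for real NetFlow records."""
    alerts = []
    for src, dst, port in flows:
        if dst not in ALLOWED_PORTS or port not in ALLOWED_PORTS[dst]:
            alerts.append((src, dst, port))
    return alerts
```

A honeypot simply becomes an entry that is absent from (or empty in) the inventory, so any flow toward it is flagged.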
Another heavy lifter in the category of clear IOCs is assets behaving strangely. The moment you see a server that stores accounting data connecting to a machine in Ukraine well outside of business hours is the moment you should go into full alert mode. Some assets are never meant to communicate with certain other assets, some are never meant to connect to a host outside the company intranet, and some only connect during the day or the night. Establishing a baseline to compare against is where most of the hard work will go. Once that is done, it's smooth sailing.
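Once the baseline exists, the comparison itself is simple. The sketch below assumes the baseline maps each asset to its allowed peers and business-hour window; the asset names and hours are invented for illustration.

```python
from datetime import datetime

# Assumed baseline: asset -> (allowed peers, (first allowed hour, last allowed hour)).
BASELINE = {
    "acct-srv": ({"backup-srv", "erp-srv"}, (8, 18)),
}

def check_connection(asset, peer, ts: datetime):
    """Return the list of reasons this connection deviates from the
    baseline (empty list means nothing unusual)."""
    reasons = []
    if asset in BASELINE:
        peers, (start, end) = BASELINE[asset]
        if peer not in peers:
            reasons.append("unknown peer")
        if not (start <= ts.hour < end):
            reasons.append("outside business hours")
    return reasons
```

An accounting server reaching an unknown address at 3 a.m. trips both checks at once, which is exactly the "full alert mode" scenario above.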
Volumetric anomaly detection is another fantastic tool for detecting network anomalies. Applied to network data flows, it allows the detection of worms, botnets, (D)DoS attacks, and port scans, just to name a few. Once again, the governing idea here is that we have a baseline of "normal" network flows that we can compare live traffic against. That baseline would obviously have to be based on historical data, which is in turn collected by network monitoring software.
There exist various tools that automate as much of the process as possible, allowing analysts to be alerted to incidents requiring immediate attention.
When applied to network connections, volumetric anomaly detection will detect and flag connections whose properties deviate too much from a "normal" baseline. Your baseline would contain information about the connections that normally take place on your networks: hosts, ports, timestamps, protocols, and so on. Any information your monitoring solutions can collect about connections will help you define what should be expected. Once again, it does not really matter what tools and technologies are used to collect the data, as long as the data is valid.
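One simple way to compare live traffic against historical data is a z-score test per flow key. This is a sketch, not a full solution: the flow keys, the z-threshold of 3, and the "no baseline" policy are all illustrative choices.

```python
import statistics

def volumetric_alerts(history, live, z_threshold=3.0):
    """Flag live volumes that deviate too far from their history.

    history: dict mapping a flow key (e.g. (host, port)) to a list of
    historical per-interval counts; live: dict of current counts.
    Keys with no usable history are flagged as "no baseline"."""
    alerts = []
    for key, count in live.items():
        samples = history.get(key)
        if not samples or len(samples) < 2:
            alerts.append((key, count, "no baseline"))
            continue
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples) or 1e-9  # avoid division by zero
        z = (count - mean) / stdev
        if abs(z) > z_threshold:
            alerts.append((key, count, f"z={z:.1f}"))
    return alerts
```

A flow key that was never seen before is itself interesting, which is why the sketch treats a missing baseline as an alert rather than silently skipping it.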
Web proxies are quite useful when attempting to identify watering hole attacks and the assets that were affected. Careful analysis of your web proxies' logs will constitute a big part of the job, but it will give you the information you need: the list of websites that company employees regularly visit, how they interact with them, how long they stay on them, and the types and sizes of the files they download, at what times, and how often.
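A minimal sketch of turning proxy log entries into such a per-site profile, assuming the logs have already been parsed down to (site, bytes downloaded, hour of day) tuples:

```python
from collections import defaultdict

def build_site_profile(log_entries):
    """Aggregate simplified proxy log entries into a per-site baseline.

    log_entries: iterable of (site, bytes_downloaded, hour) tuples,
    a stand-in for the richer fields a real proxy log provides."""
    profile = defaultdict(lambda: {"visits": 0, "bytes": 0, "hours": set()})
    for site, nbytes, hour in log_entries:
        p = profile[site]
        p["visits"] += 1
        p["bytes"] += nbytes
        p["hours"].add(hour)
    return dict(profile)
```

A sudden change in a regularly visited site's profile, such as new download sizes or file types, is the signal a watering hole attack tends to leave behind.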
Any enterprise of a certain size will have codified a list of technologies, resources, and assets that its employees are not allowed to use, or are discouraged from using, while at work. In particular, access to certain websites should trigger alarms the moment it takes place. Indeed, access to prohibited websites can be an IOC once we've pruned out the false positives: either an employee is violating internal policy by accessing a forbidden resource, or you have malware or hackers on your network. Any host accessing a forbidden website at a time when no one is at work should definitely be investigated. Setting up alarms for anomalous behavior around forbidden resources will reveal IOCs.
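Such an alarm can be sketched in a few lines. The denylist, the working hours, and the severity labels below are all assumptions for illustration:

```python
# Assumed denylist and working hours; adapt to your own policy.
FORBIDDEN = {"badsite.example", "torrent.example"}
WORK_HOURS = range(8, 19)

def access_alerts(accesses):
    """Flag accesses to forbidden sites, escalating off-hours hits.

    accesses: iterable of (host, site, hour) tuples."""
    alerts = []
    for host, site, hour in accesses:
        if site in FORBIDDEN:
            severity = "high" if hour not in WORK_HOURS else "medium"
            alerts.append((host, site, severity))
    return alerts
```

The off-hours escalation encodes the observation above: a forbidden access when no one is at work is far more likely to be malware than a bored employee.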
If there ever was a domain where data acquisition and analysis are useful, it's data exfiltration. Once malicious actors have compromised a network and navigated to where the target data is, they still have to exfiltrate the data to a remote server. That's a very strong IOC. We again meet the familiar pattern of building a baseline from historical data (packet transmission rates, number of connections, frequency of connections, host and port combinations) so that we can compare specific events against it. Malware and malicious actors will do their best to exfiltrate the data while avoiding detection; they'll try to egress via allowed protocols such as HTTP, HTTPS, or even DNS.
Packet transfer is another sub-context where volumetric anomaly detection shines. It should allow you to flag irregularities in packet transfer rates and frequencies. I'm sure you'll agree that getting dozens of DNS queries at odd times from a host that shouldn't be operational is a clear IOC. Unfortunately, volumetric anomaly detection isn't a science, yet. However, tremendous resources and energy have been poured into it, and a simple search of Google's online patent database will reveal dozens of patents granted in the field. Whether you're developing in-house tools or using off-the-shelf solutions, volumetric anomaly detection is a field you want to keep an eye on.
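The DNS example can be sketched with a sliding window over query timestamps. The 60-second window and the cap of 20 queries are illustrative thresholds; DNS tunneling tools typically exceed any reasonable cap by a wide margin.

```python
from collections import defaultdict

def dns_burst_alerts(queries, window=60, max_per_window=20):
    """Flag hosts whose DNS query count within any sliding window
    exceeds the cap.

    queries: iterable of (host, timestamp_seconds) pairs; thresholds
    are illustrative and should be tuned against your baseline."""
    per_host = defaultdict(list)
    for host, ts in queries:
        per_host[host].append(ts)
    alerts = []
    for host, times in per_host.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Shrink the window until it spans at most `window` seconds.
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 > max_per_window:
                alerts.append(host)
                break
    return alerts
```

Combined with the asset baseline from earlier, "a burst of DNS queries from a host that should be asleep" becomes a cheap, high-signal alert.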
Certain types of network behavior just lend themselves to being profiled. Assuming you have a baseline of attack/association profiles (created from aggregated historical data), you can set up your monitoring solution to flag and alert you when weird or unknown associations are detected. The hard work will be fine-tuning your database of known attack/association profiles. There's no silver bullet; those profiles will have to be built by collecting data and analyzing it. Various solutions can automate part of the process, but you'll still need a human who understands the broader context and who can separate true positives from false positives.
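One simple representation of such a profile database treats each profile as a set of feature/value pairs and matches by set containment. The profile names and features below are invented for illustration:

```python
# Assumed profile database: profile name -> required (feature, value) pairs.
PROFILES = {
    "port-scan": {("many_dst_ports", True), ("single_src", True)},
    "brute-force": {("single_dst_port", True), ("many_failures", True)},
}

def match_profile(observed_features):
    """Return the names of known profiles fully contained in the
    observed feature set, or flag an unknown association."""
    hits = [name for name, feats in PROFILES.items()
            if feats <= observed_features]
    return hits or ["unknown-association"]
```

The "unknown-association" fallback is the interesting part: anything that matches no known profile is exactly the weird association you want a human to look at.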
Any piece of software is created according to certain assumptions. An attacker will not have access to those assumptions, which means you can watch for behaviors that, should they occur, are clear indicators of compromise. One size does not fit all, and your IT department and MDR staff will have to cooperate in order to agree on what the assumptions are and how to detect whether they are being violated. That's where your web application firewall (WAF) comes in. It should collect data about all attacks, inbound connections, and interactions with your application in general. Analyzing that data will allow you to flag behaviors that are unknown or forbidden.
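Once IT and MDR staff have agreed on the assumptions, they can be encoded as named checks over request data. The particular assumptions below (allowed methods, a path shape, a body-size cap) are made up for the sketch:

```python
import re

# Assumed application invariants agreed between IT and MDR staff.
ASSUMPTIONS = [
    ("method", lambda r: r["method"] in {"GET", "POST"}),
    ("path", lambda r: re.fullmatch(r"/(api|static)/[\w./-]*", r["path"]) is not None),
    ("body_size", lambda r: r["body_size"] <= 1_000_000),
]

def violated_assumptions(request):
    """Return the names of the assumptions this request violates."""
    return [name for name, check in ASSUMPTIONS if not check(request)]
```

A request that violates several assumptions at once is precisely the "behavior the software was never designed for" that this section describes.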
Inbound connections are usually well screened: they're either allowed or denied. If that were all there was to it, we could all go home and enjoy a good old cup of tea. Unfortunately, things are not that simple, and for a very good reason. A skilled piece of malware or a skilled malicious actor will make sure to get into your network by hiding in legitimate traffic. They'll use the right protocols and do their best to be invisible. However, some attacks will succeed while others fail. Building a compendium of attack profiles will allow you to detect them, and alert the right people, the next time they happen. It will also give you the opportunity to discern which types of attacks were successful and which weren't. That data will in turn permit you to separate what works against your infrastructure from what doesn't.
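The compendium's success/failure bookkeeping can start as simply as this sketch, which tallies past incidents per attack signature (the signature names are hypothetical):

```python
from collections import defaultdict

def attack_stats(events):
    """Tally past incidents per attack signature.

    events: iterable of (signature, succeeded) pairs; the resulting
    counts show which attack types work against your infrastructure
    and which do not."""
    stats = defaultdict(lambda: {"success": 0, "failure": 0})
    for sig, ok in events:
        stats[sig]["success" if ok else "failure"] += 1
    return dict(stats)
```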