Webinar: Security of remote work | Incident Response, or how to quickly detect and neutralise a threat 

What should you do if you suspect a data leak? Incident management specialists discuss procedures, actions and the available tools. Transcript of the webinar: Incident Response, or how to quickly detect and neutralise a threat.

Security of remote work: Incident Response, or how to quickly detect and neutralise a threat (17.06.2020)

Sławomir Pyrek, Project Manager, EXATEL S.A.
Robert Dąbroś, Senior Presales System Engineer, Fidelis Cybersecurity

— Sławomir Pyrek —

The topic of today’s meeting is incident detection and handling. Over the past few months, we have witnessed dynamic changes in the functioning of the IT environment caused by the coronavirus pandemic. These changes affect many organisations. The first such change is that we have started working remotely on an unprecedented scale, which has specific implications for cyber security. Firstly, most of the communication that used to take place internally, within the organisation, now takes place over the Internet, which obviously increases the risk of disclosure of sensitive data. Additionally, not all internal IT security systems monitor user activities. When we are inside an organisation and go online, our traffic passes through a whole series of security systems that monitor our activities and neutralise damaging criminal activity. When we work remotely, we are definitely more exposed to malicious activity.

Another problem that may arise with remote work is keeping things up to date: the systems and applications we use, or even anti-virus software, because not all monitoring and management systems cover home workstations. This results in an increased risk of security breaches. Application access issues are also common, which translates into the use of potentially malicious applications, websites and file format converters, and this can result in sensitive data being exposed.

04:35
In order to make remote working possible, organisations had to rapidly expand their remote access systems. Obviously, this involved purchasing and implementing these systems at a very fast pace, which made configuration errors under time pressure possible. Such errors are exactly what can allow our internal systems to be penetrated. The next phenomenon we observe is the very widespread use of remote communication tools. They have their vulnerabilities. Of course, their developers probably do their best to remove these vulnerabilities on an ongoing basis, but this still creates the possibility of data loss or a breach of security systems.

The next thing that comes with switching to remote work is the quick reconfiguration of access to internal company resources. This access was organised differently when employees were present directly on company premises. There are also changes in the circulation of sensitive documents; in this area there is likewise a risk of errors, be it configuration errors or errors in document circulation processes, which may result in the leak of sensitive data. And, of course, forced modifications to the systems in use. We probably did not have time to examine in full detail how changes or modifications to the systems in use affected vulnerability levels or configuration error rates. We also note the use of the coronavirus pandemic as a kind of bait by criminals, e.g. phishing campaigns redirecting to malware-serving websites. These campaigns often use Covid as their backdrop. At the beginning of the pandemic, with shortages in supplies of masks, gels or protective gear, there were a great many campaigns offering to sell the missing goods. At the moment, in turn, we are encountering a very large number of offers of financial assistance or help in finding a job. These campaigns are very intense and can infect the computers we use at home.

08:05
Of course, we use cloud applications more extensively, which also has some consequences, e.g. higher probability of disclosing access data or propagation of malware via cloud systems.

The effect of all these activities related to purchases and changes to IT environments is also an increased workload on staff – both IT and security staff.

— Robert Dąbroś —
You have put it perfectly. Remote work has changed a lot in terms of infrastructure, access methods and other issues you listed. Is there anything in the threats themselves that surprises us, that forces us to take a different approach?

— Sławomir Pyrek —
No, I would rather say that the portfolio of threats is the same: the techniques and methods of attack have not changed. However, because sensitive elements such as workstations are more exposed and, above all, because we have left the fortress that our companies have been so far, moving beyond the defensive walls formed by our security systems, we are more vulnerable to these actions. Of course, the issue of Covid crops up in the methods of attack, but the portfolio of threats is unchanged; we are still dealing with all the threats we have known so far, and criminals continue to develop them intensively.

— Robert Dąbroś —
In other words, business as usual, we must continue to be prepared for the worst and react appropriately.

— Sławomir Pyrek —
Yes, react appropriately, but under slightly altered conditions. We have a little more work, or more to think about and to reconfigure. What do we need to prevent the negative effects of IT change? Nothing has really changed here either compared to the time before the pandemic. People are still the most vital element. Of course, we need some technological solutions to support us, and some organisational framework for the whole process of threat detection, recovery and future prevention. As for the users, we should take care to increase their awareness of threats: make them aware of phishing campaigns and of risky actions such as using unknown sites or opening e-mails from unknown people. This results in much less work for security staff; every increase in end user knowledge translates exponentially into less work for cyber security services. On the other hand, we need suitably qualified personnel who will be responsible for detecting and responding to threats. These people obviously need to be equipped with effective tools. The response to threats should be structured as a process, with appropriate procedures, and ideally these procedures should be repeatable. We must be prepared for worst-case scenarios and, if one develops, not have to wonder what to do at that particular moment. There simply will not be time for that. It is useful to have a defined course of action so that each person responsible for handling an incident knows what to do at any given time. Of course, support from a trusted partner is very important, too. Our resources are most often limited: we do not have unlimited cyber security personnel or people equipped with every bit of knowledge available, so it is good to have a trusted partner who can help you in critical situations, support the recovery process, explain how the attack happened, and ensure the system is restored to its pre-incident state.

13:42
The most critical element in detecting and responding to threats is time. A typical attack scenario, following the classic recipe, is to start by gaining a foothold in the organisation under attack, and preliminary infiltration of its network. This is followed by what we call a consolidation phase, which usually involves additional software being downloaded from the criminals’ respective management centres. This software then begins to scan, or sample, our internal network to find vulnerabilities and spread through the network. Then comes the stage of identifying the resources that are being targeted, and then stealing the data or altering it. In such a classic example, when we do not care about security, the time needed to detect a threat is very long. At the moment, it is assumed (depending on the source) that it takes companies between 146 and 240 days on average to find an attacker in their internal network. Of course, when an organisation does not take care of security, this time can extend to over 300 days.

In such cases, it is most often third parties that inform the organisation it has been penetrated. This is the most disastrous scenario.

An important goal of our actions is to shorten the time needed both to detect a threat and to neutralise it, to the point where the company is protected from theft of, or unauthorised operations on, its data. In practice this means detecting an attack at the moment the network is infiltrated, when communication with the criminals’ command centres takes place, or at the stage of spreading through the network. As we are not able to deal with this problem with the power of our own minds alone, we need tools.

16:55
First, when it comes to monitoring and responding to threats, we need to be fairly comprehensive in monitoring the behaviour of our users, end devices (laptops, mobile phones), the systems that serve applications, and network traffic. Monitoring should be done at a very detailed technical level, because certain elements of attacks are only noticeable at that depth. Therefore, if we are monitoring the network, it is a good idea to monitor it down to layer seven, so that we have network protocol decoders there. It is good if these decoders work regardless of the port number the traffic is routed to. We should be able to observe objects sent over the network, as well as decode the transmitted objects, so that we know whether a given file really is a PowerPoint presentation or merely a file with a PowerPoint extension that in fact contains a script or an executable. When it comes to workstations, it is worth having the ability to monitor operations on processes, the registry and file systems, because this is where the effects, or the modus operandi, of criminals are quite clearly visible.
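
To make the object-decoding idea concrete, here is a minimal Python sketch (not Fidelis code; the signatures and expected extensions are illustrative) that compares a file’s declared extension with its magic bytes, so an “xls” that is really an executable or a script stands out:

```python
# Minimal sketch: flag files whose content type does not match their extension.
import os

MAGIC_SIGNATURES = {
    b"MZ": "windows_executable",          # PE files (.exe, .dll)
    b"PK\x03\x04": "zip_container",       # OOXML (.docx/.xlsx) is also a ZIP
    b"\xd0\xcf\x11\xe0": "ole_compound",  # legacy Office (.doc/.xls)
    b"%PDF": "pdf",
    b"#!": "script",
}

EXPECTED = {
    ".xlsx": {"zip_container"},
    ".xls": {"ole_compound"},
    ".pdf": {"pdf"},
}

def detect_type(path: str) -> str:
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, label in MAGIC_SIGNATURES.items():
        if header.startswith(magic):
            return label
    return "unknown"

def extension_mismatch(path: str) -> bool:
    """True when the content does not match what the extension claims."""
    ext = os.path.splitext(path)[1].lower()
    expected = EXPECTED.get(ext)
    if expected is None:
        return False  # no expectation defined for this extension
    return detect_type(path) not in expected

# Usage (hypothetical file name): extension_mismatch("Income_Statement.xls")
# returns True if the "spreadsheet" is actually an executable or a script.
```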

Of course, the system must be equipped with, or it must be possible to connect it to, threat intelligence systems that provide information on how to detect malicious operations and the tactics used by attackers. When it comes to responding to suspicious behaviour, such a system should have powerful tools that allow us to detect a significant volume of malicious activity. Most often, such systems are equipped with databases of IoCs, or indicators of compromise. It is important for this subsystem to be precise, so that the verdicts are accurate and give us maximum information about the transmitted objects or network traffic and the malicious activity indicators found.

20:11
We must also have a system that can operate on historical data. In the case of zero-day threats, we (or the system) may miss a given malicious activity, but once it has been defined and described and IoC tags have been created for it, the system must review this historical data once again and check whether any past event could have resulted in a breach of our security. Of course, because we assume that monitoring will be done globally and at a low technological level, we will be operating on very large data sets. Such systems have to cope with such operations, ideally in close to real time.
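
A minimal sketch of that retrospective idea, assuming a simple record layout for stored metadata and an externally supplied IoC feed (neither reflects the actual Fidelis data model):

```python
# Minimal sketch: re-scan stored network/endpoint metadata against newly
# published IoCs, so a zero-day missed at the time is still surfaced later.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable

@dataclass
class MetadataRecord:
    timestamp: datetime
    host: str
    dest_ip: str
    file_sha256: str

def retro_sweep(records: Iterable[MetadataRecord],
                bad_ips: set[str],
                bad_hashes: set[str]) -> list[MetadataRecord]:
    """Return historical records matching indicators learned after the fact."""
    return [r for r in records
            if r.dest_ip in bad_ips or r.file_sha256 in bad_hashes]

# Usage: feed records exported from the collector plus today's IoC feed; any
# hit means a host talked to infrastructure, or handled a file, that was only
# later classified as malicious, and should be investigated.
```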

A very important element that such a system should have is the ability to draw conclusions, that is, to learn the criminals’ modus operandi, abbreviated as TTP: tactics, techniques and procedures. Knowing these methods gives us the opportunity to make our systems more resilient or fine-tune them, so that in the future a given type of incident does not happen again, or its probability is reduced.

21:51
In this webinar we would like to invite you to take a closer look at the Fidelis Elevate solution, which meets the conditions I mentioned earlier. It is a solution that provides automatic threat detection and gives us the ability to respond to threats. It consists of three modules: a network traffic analysis module (Fidelis Network), a server and workstation behaviour analysis module (Fidelis Endpoint) and a decoy-based attacker detection module (Fidelis Deception).

What advantages does this system offer us? It helps us see all the operations performed over the network, both at the workstation level and when connecting to cloud applications. We get very thorough network traffic analysis: layer seven and, of course, regardless of the port number. What’s more, the analysis goes very deep into the objects transmitted through the network, i.e. if a file is sent over the network, that file is also analysed in terms of its structure, checking whether there is embedded code, scripts or pieces of malicious software inside. The system automatically detects such threats and responds by blocking or quarantining e-mails containing harmful content. Policies can be configured here, from simply warning of suspected malicious activity, to blocking or quarantining traffic. We can also respond to threats on the workstation. We monitor operations on files, the registry and processes, and we check what operations are performed on connected flash drives, but we also have the ability to react: we can automatically cut such a station off from the network, isolate it, run a memory dump, or execute scripts. The system can also create artificial objects that serve as bait for the attacker, which allows us to see what means the attacker will use to try to break through the security. This gives us knowledge of the techniques and tactics the attacker is using. The effect of using this system is a reduction of the risk of our data being compromised, as well as a significant decrease in incident response time.

25:31
In the network layer, the system’s executive components are sensors. These sensors are of different types: sensors for network traffic, operating in either inline or out-of-band mode; sensors dedicated to mail traffic, which also operate in several modes, e.g. MTA or blind copy. We also have the option of enabling web sensors that communicate with proxy servers via ICAP and monitor HTTP/HTTPS or FTP traffic. On the endpoint side, the executive components are agents. Agents are installed on workstations and servers; an agent is an executable that enforces policies which monitor low-level operations on servers or workstations, but which also allow reactive operations, such as disconnecting the computer from the network, executing scripts, taking dumps, or other actions. The data collected by these network probes and by agents on workstations and servers is deposited as metadata on the collectors. There is a separate collector for endpoints and a separate one for the network, so we have a historical metadata base here. The data on the network and endpoint collectors can also be searched. And, of course, there are the management components: CommandPost for the Fidelis Network systems, Endpoint Services, and the management of the trap system. The Endpoint Services component serves policies and collects the results of requests executed on workstations. The system is connected to the developer’s systems, which is important because the developer continuously updates the IoC database and information on known threats, both in the network part and in the endpoint part, on an ongoing basis. Regardless of this, the system can still be integrated with other data sources. I have not yet mentioned the Decoy Server, the component responsible for constructing the trap system. It is a system that allows us to detect the attackers’ behaviour.

— Robert Dąbroś — 30:06
I have a sample here called FTCode. The sample is actually an income statement, an XLS file. We did a little bit of processing on this sample to make it more obvious what is going on. Normally, this ftcode_safe panel is part of the Excel file and, from the user’s perspective, when you open such a file nothing terrible happens and there is nothing interesting inside. However, the result is what’s in the bottom box: the Excel file ends up with the ftcode extension and a new file named READ_ME_NOW appears. And now, all the visibility that Sławek was talking about, everything that was going on during the execution of this Excel file, can be seen here in some of the screenshots. This is an example of metadata, or host visibility artifacts, at the level of running processes. Here we have a PowerShell call, launched as a consequence of opening this file, with options chosen so that the user does not see that this interaction is taking place. We can trace it graphically. The fact that PowerShell was launched from Excel and ran so many further tasks is suspicious. In the next screenshot you can see a very interesting thing: this is the stage when the sample gets implanted, or starts getting implanted. Here you can see how the backup directory is deleted and the system state backup is deleted as well. You cannot restore the previous state. The system works, but we no longer have any option of going back to the last saved correct configuration and so on. From this point the encryption begins; our sample has been tamed a bit, so it doesn’t encrypt 190 file formats but only looks for xls files, and only in the directory where it is running. As a consequence, we obtain files with a new extension: ftcode.

If we don’t have a backup, we don’t. If we have a backup on a network share and we have it connected, there is a very good chance that ftcode in the next step would also go to those network shares and encrypt our backup as well. So you have to be careful.

Now we have the information that you have to pay $500 for data recovery. Sometimes we have to pay because the data cannot be recovered otherwise; sometimes someone manages to crack those keys, get in and retrieve them; and sometimes government agencies shut down the server where the keys are kept and the data can no longer be recovered at all. So the visibility we have been talking about is crucial if we are to do anything.

What we learned when we analysed this sample was, first of all, that an Office-type tool executed another system tool, in this case Excel launching PowerShell, and that was one of the behavioural rules we could use to detect that something was wrong. I can’t imagine everyone in the office having Excel open on a daily basis and running PowerShell from inside it; that would be a complete aberration. The second rule is the execution of PowerShell itself with options such as IEX, DownloadString or Invoke-Expression, which are not used on a daily basis; these are system tools and should be used in a predictable way. Another rule is Shadow Copy Deleted; you can see the options. Another one is bcdedit: this was the stage where the sample got implanted and the backup directories were removed, and removing them is not typical of the user. If it happens, something is wrong. The last option, Unusual File Rename, fires if something performs a mass change on multiple files and adds some strange extensions; this is also a moment when we should be wondering what is going on. We have rules here that could detect something like this. Note that I keep stressing how much visibility we have into what is going on at the process level in the system, and how we can react when we have a detection. Without such data it would be tough to do any incident response, because it would be difficult to do anything other than restore the station from backup.
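
A minimal sketch of how such behavioural rules could be expressed, assuming a simplified, hypothetical endpoint event format (this is not the Fidelis rule language):

```python
# Minimal sketch: the behavioural rules above as predicates over process events.
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    parent_image: str            # e.g. "EXCEL.EXE"
    image: str                   # e.g. "powershell.exe"
    command_line: str
    renamed_files: int = 0       # files renamed by this process in a short window

def office_spawns_shell(e: ProcessEvent) -> bool:
    office = {"EXCEL.EXE", "WINWORD.EXE", "POWERPNT.EXE"}
    shells = {"powershell.exe", "cmd.exe", "wscript.exe"}
    return e.parent_image.upper() in office and e.image.lower() in shells

def suspicious_powershell(e: ProcessEvent) -> bool:
    markers = ("iex", "downloadstring", "invoke-expression", "-enc")
    cl = e.command_line.lower()
    return e.image.lower() == "powershell.exe" and any(m in cl for m in markers)

def backup_tampering(e: ProcessEvent) -> bool:
    cl = e.command_line.lower()
    return ("vssadmin" in cl and "delete shadows" in cl) or "bcdedit" in cl

def mass_rename(e: ProcessEvent, threshold: int = 100) -> bool:
    return e.renamed_files >= threshold

RULES = [office_spawns_shell, suspicious_powershell, backup_tampering, mass_rename]

def evaluate(e: ProcessEvent) -> list[str]:
    """Return the names of all rules triggered by a single event."""
    return [rule.__name__ for rule in RULES if rule(e)]
```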

— Sławomir Pyrek — 36:00
The part you showed clearly concerns the endpoint side. It presented, in general terms, what we can see and what we are able to detect using the endpoint module. However, in the context of remote working: will this system allow us to work just as effectively when we work from home? Is there an option to connect our home computer online to this management system in real time?

— Robert Dąbroś —

There’s a component called the gateway that runs in the demilitarised zone; all communication with it is encrypted, so all computers connected to the Internet, wherever they are, remain manageable, and we have the same visibility and the same remediation options. Now, what could we add to the whole toolkit, which is the whole of Elevate, to get high visibility at the network level? Fidelis Network sensors are highlighted here. We could hold an hour-long presentation about these elements alone, on how to actually install them and what these components can do, but here is a simple example: a sensor and a rule that sends any Office document or e-mail carrying a nested script of any type to quarantine, so that such a sample never makes it to the station. Obviously, the sensors also perform broad network analysis and visualise the flows for us. Sensors are responsible for many things, and everything they see is deposited in the form of artifacts in the collector, which is on the left, for us to use when we do incident response. That is, all network traffic, even traffic in which there was no detection, is deposited in the form of artifacts that we can examine on subsequent days. The whole system, both endpoint and network, lives by feeds, ready-made policy sets that we can download and that analytically describe an event, such as PowerShell being nested in Excel, or some compressed JavaScript, and so on. Of course, there is an integrated Malware Detection Engine, which pulls all this data together and uses it analytically to provide a high level of detection and visibility as well.
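
As a rough illustration of the “nested script” idea (not the actual sensor rule), a modern Office attachment can be checked for an embedded VBA macro container before it is delivered:

```python
# Minimal sketch: OOXML files are ZIP archives; macro-enabled ones carry a
# vbaProject.bin part, which is reason enough to divert the message to quarantine.
import zipfile

def has_embedded_macro(path: str) -> bool:
    """True if a modern Office file contains a VBA macro container."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as z:
        return any(name.lower().endswith("vbaproject.bin") for name in z.namelist())

# Usage: run over mail attachments before delivery; a hit routes the message
# to quarantine instead of the user's inbox.
```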

38:47
I was talking about metadata. Everything that the network sensors see, whether they generated an alert or not, gets recorded in a huge database that we can store for up to 365 days. In fact, there are no technical obstacles to keeping it longer, and probably no licensing obstacles either. This large base of metadata, or network artifacts, may mean nothing today, but tomorrow we can detect things in it: we can check a host’s previous actions if necessary, see that something happened, that a given file was transported from point A to point B, or from a station to a server, and so on. It is all very easily verifiable and we can see a lot of useful things here.

And, of course, Fidelis uses that metadata for several things. First, to build what we call the cyber terrain. When we understand what we’re defending, meaning what hosts are on what network, how they communicate within specific VLANs or between VLANs, what the data flows are, how hosts communicate with the Internet, what protocols they use, and so on, we are in a better position to prepare for the detection and handling of an incident, if there is one, because we have historical data and current data and are able to check everything. We use this to detect anomalies in the network based on our understanding of the terrain, i.e. a new server appears and uses a new protocol, or a new SSL fingerprint appears on a newly opened port, or a host communicates with a given geolocation and does a large upload there, and so on. This is all a result of collecting this metadata in the collector. In addition, the system can of course use the sandbox: it can decide that something should be put in the sandbox because, for example, it was compiled an hour ago, is a small file and was sent by e-mail from a completely untrusted, strange address. It should be placed in the sandbox to see what the sample is. Detection is also provided by a feature called Deception, our interactive decoy system. Deception traps are emulators of real objects; based on this understanding of the network, or of the terrain, they can create elements which are perfectly in line with reality, in a specific VLAN, with specific services and specific operating systems, in order not to stand out, but to encourage the attacker to start their lateral movement adventure with our decoy systems rather than with the real ones. When an opponent is in the network, then, unless we have a Deception-class system, they see what is in that network and each of their next steps brings them closer to their goal. That is, whatever they touch is one of the real systems, just another opportunity to take the next step and finally perform the exfiltration. However, once a Deception-class system is implemented, the picture changes completely. We change the attack surface, which becomes virtually extended and, most importantly, in a non-exploitable form, because these emulators are just emulators. When such an attacker takes the next step, such as port scanning, they will immediately come across our decoys. The decoys obviously change the network image so that, from the network perspective, there are a lot of fake components to attract the attacker. This is one of the big components of detectability, even at this late stage.
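
A toy sketch of the terrain-baseline idea, with an assumed flow format (host, port, protocol, bytes uploaded); it is not how Fidelis models the terrain, but it shows the kind of anomaly being described:

```python
# Minimal sketch: alert when a host uses a port/protocol pair it has never used
# before, or suddenly uploads far more than its learned average.
from collections import defaultdict

class TerrainBaseline:
    def __init__(self, upload_factor: float = 10.0):
        self.known_services = defaultdict(set)  # host -> {(port, protocol)}
        self.avg_upload = defaultdict(float)    # host -> rough mean bytes per flow
        self.flows_seen = defaultdict(int)
        self.upload_factor = upload_factor

    def learn(self, host: str, port: int, proto: str, bytes_out: int) -> None:
        """Feed a known-good flow into the baseline (running mean update)."""
        self.known_services[host].add((port, proto))
        n = self.flows_seen[host] = self.flows_seen[host] + 1
        self.avg_upload[host] += (bytes_out - self.avg_upload[host]) / n

    def check(self, host: str, port: int, proto: str, bytes_out: int) -> list[str]:
        """Compare a new flow against the baseline and return any alerts."""
        alerts = []
        if (port, proto) not in self.known_services[host]:
            alerts.append(f"{host}: new service {proto}/{port}")
        if self.flows_seen[host] and bytes_out > self.upload_factor * self.avg_upload[host]:
            alerts.append(f"{host}: upload of {bytes_out} bytes far above baseline")
        return alerts
```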

42:36
Here is a great lesson in the TTPs that Sławek was talking about: any exploit dropped on one of the decoys ends up being sandboxed, and we can immediately see the results, what the tool is and what it is doing, giving us the same visibility as when this is done at the endpoint level.

This is what it looks like: here we see a host that has a given number of services emulated. In the charts on the right, green represents real hosts, while grey represents emulated hosts, so you can see exactly how the system attracts specific activity to these services. At first glance they are indistinguishable. When we connect to the network, we can’t tell the difference, but the moment we touch these things, they immediately alert. Any interaction, a login attempt, a successful login: those are all critical alerts and we see them.
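
In spirit, a decoy can be as simple as the following throwaway sketch (nothing like the real Fidelis emulators; the banner and port are made up): a fake service where any connection at all is treated as a high-severity signal, since no legitimate user has a reason to touch it:

```python
# Minimal sketch of a single TCP decoy: present a fake banner and alert on
# every connection, regardless of what the client does afterwards.
import datetime
import socket

def run_decoy(bind_addr: str = "0.0.0.0", port: int = 2222) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((bind_addr, port))
        srv.listen()
        while True:
            conn, (src_ip, src_port) = srv.accept()
            with conn:
                conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # fake service banner
                # Any touch of the decoy is an alert in its own right.
                print(f"[{datetime.datetime.now().isoformat()}] "
                      f"ALERT decoy:{port} touched by {src_ip}:{src_port}")

# run_decoy()  # blocks; in practice one decoy per emulated service and VLAN
```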

We can run PlayBooks from anything we detect, whether at the endpoint, network or Deception level. Just as here: we have the Emergency Host Isolation PlayBook. One of the first stages is Network Isolation, which means we isolate the host; we still have access to it ourselves and can perform any subsequent task, look at it, test it and download data, while it is no longer able to communicate or infect anything new. Here we immediately read other things such as autorun entries, user accounts, administrators, and we do memory analysis so that we know what is going on in memory, and of course we continue our adventure with the diagnosis of such a host. Here we see “show me open sessions to network resources”; memory acquisition, meaning retrieve the memory for forensic analysis; show me scheduled tasks, autorun and all the other things we might be interested in in order to make the right response. On the one hand we collect information to see what was going on, and of course we also have information in the collectors, but we want a little bit more, precisely in order to do the response. And that response can vary. Please note that here we have the Process Kill By Search option, which means we kill a process that we have identified as harmful. We can do the same thing in an automated response, of course, but here we have a situation where the analyst sits down, makes decisions and takes the next steps: deleting keys from the registry, deleting a file, and so on. As I am sure you remember, in our example there was nothing to go back to. The only option (not listed here in the table) would be to restore from backup, but we do not want someone who is at home, with work to do, to have to bring their laptop in for us to restore; we want to do it remotely. If there is such an option, that’s great: a script can restore the operating system from backup, because we want to get it back up and running quickly.
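
The ordering matters more than the individual steps: contain first, then collect evidence, then remediate. A minimal sketch of the same idea, with hypothetical placeholder functions standing in for the real EDR calls:

```python
# Minimal sketch of an "Emergency Host Isolation"-style playbook as an ordered
# list of steps; the action functions are placeholders, not a real API.
from typing import Callable

def run_playbook(host: str, steps: list[tuple[str, Callable[[str], bool]]]) -> None:
    for name, action in steps:
        ok = action(host)
        print(f"{host}: {name} -> {'ok' if ok else 'FAILED'}")
        if name == "network_isolation" and not ok:
            break  # never collect evidence from a host we could not contain

# Placeholder actions standing in for real endpoint-agent calls.
def network_isolation(host: str) -> bool: return True
def memory_acquisition(host: str) -> bool: return True
def list_open_sessions(host: str) -> bool: return True
def list_scheduled_tasks(host: str) -> bool: return True
def kill_malicious_process(host: str) -> bool: return True

EMERGENCY_HOST_ISOLATION = [
    ("network_isolation", network_isolation),
    ("memory_acquisition", memory_acquisition),
    ("list_open_sessions", list_open_sessions),
    ("list_scheduled_tasks", list_scheduled_tasks),
    ("kill_malicious_process", kill_malicious_process),
]

# run_playbook("LAPTOP-042", EMERGENCY_HOST_ISOLATION)  # hypothetical hostname
```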

45:26
One of the last slides concerns recovery: we have already identified the threat and got rid of it, and we want to check whether the system is healthy. For example, is the Windows firewall status correct, are the antivirus settings properly updated, are the passwords and user names compliant, and so on. Here we have full control over the station; we can literally do whatever we want with it, and on the right you can see the so-called “live consoles”. We have chosen the situation where those Excel files are already encrypted and the user says: “I have them there on the server, please copy them back for me, because I need to finish my report right now.” Here we have a File System viewer: we connect remotely to this machine, copy back the files that were encrypted, and the user can carry on with their work. This is the Live Console. We have unrestricted access to the operating system, the file system and the processes, so we can do a little more threat hunting, for example searching whether the processes we have identified here as malicious occur on other stations, and so on. The point is to make sure that this station is clean, and that all other stations that may have similar symptoms are also clean, and that we do not have this malware nested in our resources. That is what this visibility is for: so that when we decide on the responses, we can be sure they are the right ones and we are no longer in danger, and so that the time to detection and time to removal is reduced to a minimum and no further exfiltration occurs.
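
The fleet-wide check at the end can be pictured as a simple sweep; a sketch with an assumed inventory format (hostname mapped to the SHA-256 hashes of its running processes), not the actual Fidelis query interface:

```python
# Minimal sketch: after cleaning one station, sweep the rest of the fleet for
# the same malicious process hashes to confirm the infection has not spread.
def hunt(fleet_inventory: dict[str, set[str]],
         bad_hashes: set[str]) -> dict[str, set[str]]:
    """Return, per host, the subset of its process hashes known to be malicious."""
    return {host: hashes & bad_hashes
            for host, hashes in fleet_inventory.items()
            if hashes & bad_hashes}

# Example with made-up data: any non-empty result means another station needs
# the same isolation and clean-up treatment.
# hunt({"PC-01": {"abc...", "def..."}, "PC-02": {"123..."}}, {"def..."})
```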

We’ve done our homework: we’ve had detection, we’ve had metadata, we’ve looked at what was going on, who was involved and what for. We’ve covered the network and endpoint parts of incident response. What’s cool is that if we have such a tool and feed it with input, then as a result of its operation we can change a lot: add more data to the analytics so that the system sees more, can do more and can detect earlier, and so we are able to get the job done. It’s a sort of internal threat intel; that’s probably the best way to put it.

 

— Questions — 48:20

“Does this system have a sandbox? Does it use a sandbox?”

— Robert Dąbroś —

The system comes with a sandbox by default, in cloud form, with a limit of a thousand URL/file detonations a day under the free service. You can of course buy further packages to increase the number of detonations, and there is also an on-prem option. It will run at the capacity it can handle, roughly estimated at 20,000 executions per day, and sometimes it can handle more. The architecture of the tool is very flexible.

“How is this system licensed?”

— Robert Dąbroś —

Fidelis Network is licensed by throughput, measured across all the sensors connected to the system and aggregated over one hour. Averaging over an hour evens out peaks, so the resulting traffic figure is really relatively small. Then there is the data retention time: 30, 60, 180 or 365 days, and that’s it. As for Endpoint, each endpoint installed, whether on Windows Server, a workstation, a Mac or Linux, is one license. As for the Deception system, here we have a choice: we can license by the number of VLANs or by the number of users in the organisation, whichever is more favourable.

— Sławomir Pyrek — 50:14

As Robert said, we could have several separate presentations on each of these topics, because the system is highly developed and the number of features is really impressive. What can we offer you as EXATEL? We can of course offer support in implementing the system, starting with an analysis of needs; we can run a proof of concept at your site, and we have devices dedicated to test deployments. Of course, we prepare the technical design, implement the solution and provide post-implementation support for these systems. Very importantly, we support the client’s personnel, e.g. in tuning and designing policies and in integrations with third-party solutions. Also very important is that our specialists from the Security Operations Centre can review the alerts generated by the system, or review the metadata stored by the system, whether with respect to endpoints or to the network module. We also have an in-house SOC that operates on a 24/7/365 basis. One of the tasks it performs is incident response monitoring and support. We use the Fidelis platform on a daily basis and are therefore quite familiar with it. We also have quite substantial experience analysing alerts that pop up in Fidelis, as well as in many other systems.

 

I strongly encourage you to contact us, to get acquainted with our offer and, especially, to get to know the Fidelis technology in depth.

 


Sławomir Pyrek, Project Manager – Sales Support Team, EXATEL
Robert Dąbroś, Senior Presales System Engineer, Fidelis Cybersecurity