Learn the methods and tools that will allow you to detect anomalies quickly. The webinar was conducted as part of the JST track of the Remote Work Security series. Webinar transcript: What will you gain by monitoring your corporate network?
Security of remote work: What will you gain by monitoring your corporate network? (09.06.2020)
Paweł Deyk, Project Manager, EXATEL S.A.
Klaudyna Busza-Kujawska, Senior Presales Engineer
— Paweł Deyk —
EXATEL has been providing high-quality telecommunications services and ensuring network security for its customers for many years. Service quality and stability are truly in our DNA. Our clients include organisations from many important sectors of the economy – such as the energy and banking industries – as well as telecommunications operators. We spare no effort to make our clients feel secure in entrusting the provision of their services to a trusted partner.
— Klaudyna Busza-Kujawska —
Although Flowmon Networks is a Czech company, it operates on the international market and has customers in more than forty countries worldwide. Flowmon has been on the market since 2007 and provides the ability to view and monitor what is happening inside the corporate network, on the LAN and WAN, and in network communications. On the one hand, we offer tools dedicated to network departments that allow for diagnostics and monitoring of network and application performance, and on the other hand, we perform behavioural analysis of this data, so we can detect any irregularities, anomalies and undesired behaviour. These are important issues from the point of view of security departments, so it can be said that we provide solutions that serve the common goal of both these worlds: a properly operating, available and secure network infrastructure.
06:16
— Paweł Deyk —
One can say that our companies have similar goals, so we work together well. Let’s move on to what is the focus of our webinar series – the transition from office work to home office. Starting with what the situation was before the pandemic: most of the users in our company were using local resources via the LAN. These were of course application servers and database servers; very often we also had a dedicated link to our Data Center. Everything was happening within our network and our whole working environment was prepared for this traffic. Of course, some connections to the Internet were made too, but that traffic was distributed more or less evenly.
The announcement of the pandemic changed the whole environment a bit. Most of the employees were forced to work via the Internet (because they worked in very different locations), so we had to switch to emergency mode, which was most often home office. What changed? First, the entire volume of traffic on the internal network dropped drastically (from user computers, for that matter). However, to access local resources, Internet users must use a VPN, i.e. a virtual private network, which causes a significant load on the devices that establish such VPN sessions. It doesn’t matter whether a user wants to connect to Data Center resources, to internal servers or to the Internet – each time they have to pass through one interconnection point in our network, which is simply heavily loaded. Thus, various problems typically related to network performance arise. The first problem is that there is a large number of remote users in relation to the number of local users (the previous ratio was the opposite).
09:45
— Klaudyna Busza-Kujawska —
The second problem associated with users moving to remote work is the load on the VPN concentrator. It’s not just about the number of sessions you have guaranteed in your VPN licenses; it’s also about the amount of network traffic the concentrator can analyse and pass through for each user. There may be issues here related to how many simultaneous users are connected via VPN, and it also depends on what kind of traffic and how much load these users generate, because surely it won’t be the same for each of them. Hence the problem of how to analyse the communication of each user and verify whether they all have the same good and efficient connection to the network.
— Paweł Deyk —
Another important aspect is that the new situation forced new network configurations. There were often new devices that needed access to the network. There might have been some changes to the network configuration itself, so there was a good chance that there were also misconfiguration issues.
This was all done in a big hurry so as not to interrupt the company’s standard workflow.
— Klaudyna Busza-Kujawska —
There were also new – though not necessarily new to the market – work tools. The way meetings were conducted and various conferences were organised also changed. Everything shifted to video conference type tools. As a result, even though these tools have been on the market for a really long time, we are now probably familiar with all of them. We already know the tools we need to connect and conduct a conference online, share a desktop or transfer control. Users, including those inside the company, began to organise their meetings in a different way than before, when they were held on-site in a conference room. It also means that, on the one hand, the traffic has changed for us: new forms of communication appeared, and their connection efficiency should also be verified. If you have your own video conference systems, it would be a good idea to verify their performance to see if they can handle all of your in-house meetings.
— Paweł Deyk —
Performance should also be measured for our business systems, whether internal or web applications, which are critical in today’s business – there’s no doubt about that. This performance needs to be measured in order to estimate whether users can use those systems without any problem, and whether there are any complications with the network or with the application itself. This is worth keeping an eye on as it’s a pretty common problem. The existing traffic pattern is changing and problems may arise that were not previously present.
14:07
— Klaudyna Busza-Kujawska —
One more challenge connected to the changes that have taken place in network communication is making appropriately sized and well-thought-out purchases, intended at the very least to upgrade the tools associated with VPN connections or to broaden the Internet bandwidth. Verification is important. If you have already made some purchases – a lot of companies made various orders and changes inside the network at the very beginning of the pandemic – it is worth verifying how those relate to the current situation. Will we have sufficient resources if we find ourselves in a similar situation in the future, when the amount of communication in the network changes, the applications change and new users come? At some point, we should plan to properly develop the tools we already have.
— Paweł Deyk —
It is important to know what is going on in our network, also in terms of employee activity, and I am not talking here about some huge system of supervision – the employees themselves may want to know how they are doing in these new conditions and whether they are able to meet all the requirements set by their employer. I assume not so many employees have worked outside the office before, so it’s worth monitoring what their work looks like. It’s also worth considering what you can change based on this information. Some employees may work less efficiently without the support and contact with a boss. One should observe how these employees adapt to new conditions.
The use of company equipment in a way other than in accordance with company policy is also a significant issue. It seems to me that when we take equipment home, we are much more likely to feel an inner temptation to use it for some sort of entertainment. Moreover, it may also be that we are not organisationally equipped to verify the work quality and time of our employees. Of course, this also applies to those employees who tend to overwork themselves, where the boss should look at them sternly and ask them to take a break. It’s hard to separate work from leisure time these days, so I think this can be a useful tool, too. Another issue is the use of private equipment for business purposes. In many cases this could not have been arranged otherwise and it is necessary to use private equipment to connect to company resources.
Another group of issues worth discussing is monitoring unusual user behaviour. First of all, this is a new environment, so we are more likely to get distracted and, when visiting different websites, make mistakes that we wouldn’t make otherwise. Cybercriminals have increased their activity during the pandemic, so we have been extremely vulnerable in recent times to various phishing campaigns or malware. It’s worth increasing your vigilance, and tools that show any suspicious activity or anomalies can be helpful. Remember that our home networks are not as well secured as corporate networks. We simply can’t provide that level of security, so admins need to look more carefully at how connections from home users are handled. And one more, quite general rule – it is worth supplementing the operation of signature-based systems (firewalls, IPSs, antiviruses), which are not able to detect all new threats. Their operation is based on what is already known, so it makes sense to supplement your tools with solutions that don’t rely on signatures.
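To make the idea of non-signature, behaviour-based detection a little more concrete, here is a minimal Python sketch (not Flowmon code; the record format, baseline length and threshold are purely illustrative assumptions) that flags a host whose outbound traffic suddenly deviates from its own historical baseline:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical per-host history of outbound bytes per 5-minute window.
history = defaultdict(list)  # host IP -> list of past byte counts

def update_and_check(host: str, bytes_out: int, threshold: float = 3.0) -> bool:
    """Flag a host if its current traffic exceeds mean + threshold * stdev
    of its own past behaviour (a toy stand-in for behavioural analysis)."""
    past = history[host]
    anomalous = False
    if len(past) >= 10:  # need some baseline first
        mu, sigma = mean(past), stdev(past)
        anomalous = bytes_out > mu + threshold * max(sigma, 1.0)
    past.append(bytes_out)
    return anomalous

# Example: a host that normally sends ~1 MB per window suddenly sends 50 MB.
for value in [1_000_000] * 12 + [50_000_000]:
    if update_and_check("10.0.0.7", value):
        print("Anomalous outbound volume for 10.0.0.7:", value)
```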
21:00
— Klaudyna Busza-Kujawska —
So far Paweł has outlined what issues and challenges come with users moving primarily to working remotely. What to do in this case? First and foremost, really monitor how remote users communicate with various internal company resources. If an infected home host connects via VPN to the network, it immediately has access to whatever can be found inside (devices, servers), so it is also possible to perform reconnaissance or move laterally and infect other devices. Therefore, it is important to monitor what is happening inside the network. If you want to use Flowmon specifically for internal network monitoring, the primary component will be the Flowmon Collector, which collects and stores data. This is a central point that admins can access and where they have tools for data analysis and reporting. Speaking of collecting information: if devices in your network such as routers, firewalls and other equipment are able to generate flows and deliver them via a NetFlow, JFlow or sFlow protocol to the collector, this information is really enough for us to find out which IP address communicates with which address, on which ports, how much data was sent and what kind of communication took place. If, on the other hand, we would like to obtain information regarding network performance, i.e. connection setup time, retransmission time, number of retransmitted packets, server response time or network latency, then a sensor is needed. The sensor is a separate device that collects packets, analyses and processes them and sends the results to the collector. Because the probe needs to see the packets to be able to perform this analysis, we usually connect it to a SPAN/mirror port, and besides this network performance analysis it also adds additional information from layer seven. This way you get a lot of additional elements.
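To show roughly what kind of information these two data sources carry, here is a schematic sketch; the field names are assumptions for illustration, not Flowmon’s actual schema. A plain flow record from a router or firewall covers addresses, ports and volumes, while a probe-enriched record adds performance metrics and layer-seven details:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowRecord:
    # Basic fields any NetFlow/IPFIX exporter (router, firewall) can provide
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str        # e.g. "TCP", "UDP"
    bytes_sent: int
    packets: int

@dataclass
class EnrichedFlowRecord(FlowRecord):
    # Extra fields a packet-level sensor/probe can add (illustrative only)
    server_response_time_ms: Optional[float] = None
    network_latency_ms: Optional[float] = None
    retransmitted_packets: Optional[int] = None
    http_hostname: Optional[str] = None   # layer-seven information

# Example: a plain flow vs. a probe-enriched flow
plain = FlowRecord("10.0.0.5", "192.0.2.10", 51544, 443, "TCP", 18_400, 42)
rich = EnrichedFlowRecord("10.0.0.5", "192.0.2.10", 51544, 443, "TCP",
                          18_400, 42, server_response_time_ms=35.2,
                          network_latency_ms=12.8, retransmitted_packets=1,
                          http_hostname="intranet.example.com")
print(plain, rich, sep="\n")
```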
Both the collector and the sensor can be installed as hardware appliances or as virtual machines, depending on how you want to use them. You may have thought that when you are interested in network performance, you will need a lot of such sensors distributed at various points. Not necessarily. In fact, it is often only one or two sensors – if, for example, we have two Data Centers – placed in the right spots. So, please don’t be intimidated by the extra hardware – it is probably just one collector and one or two sensors, sometimes more, but it depends on the network architecture. Other Flowmon components are functional modules that are installed on the collector. Hardware-wise, these are the two devices, and the rest are modules to be installed and licensed to run on the collector, depending on whether we additionally need application analysis, packet recording, behavioural analysis (i.e. anomaly detection) or a module detecting volumetric attacks.
The Monitoring Center module is embedded, so anyone with a virtual or hardware collector will get this module without any additional licences.
— Paweł Deyk —
This module is primarily used to visualise the collected data using various types of visualisation – charts, tables, statistics for specific users. There is a sizable set of predefined views available without any additional configuration. Of course, you can also define your own profiles, alarms and thresholds at which statistics or alarms will be displayed.
The second important module is the Flowmon Anomaly Detection System (ADS) which, of course, is used to detect anomalies. We are talking about various anomalies, which include specific attacks such as port scanning or dictionary attacks, traffic anomalies (anomalies in the DNS protocol, DHCP, etc.) or network security anomalies (here, communication with known malware sites is detected). Additionally, we can monitor traffic to unwanted applications. By default, this covers communication using TOR networks, P2P or downloading torrents. There are, of course, more such definitions.
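As a rough illustration of how an event like port scanning can be spotted from flow data alone, here is a toy heuristic (this is not the actual ADS logic; the threshold and addresses are assumptions): a single source contacting an unusually large number of distinct ports on one host within a time window gets flagged.

```python
from collections import defaultdict

# (src_ip, dst_ip, dst_port) tuples taken from flow records in one time window
flows = [
    ("10.0.0.9", "10.0.1.20", p) for p in range(20, 140)   # scanner-like host
] + [
    ("10.0.0.5", "10.0.1.20", 443),                         # normal client
]

def detect_port_scans(flows, port_threshold: int = 100):
    """Flag sources that touch an unusually large number of distinct
    destination ports on a single host within the window."""
    ports = defaultdict(set)
    for src, dst, dport in flows:
        ports[(src, dst)].add(dport)
    return [(src, dst, len(p)) for (src, dst), p in ports.items()
            if len(p) >= port_threshold]

print(detect_port_scans(flows))  # -> [('10.0.0.9', '10.0.1.20', 120)]
```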
28:37
— Klaudyna Busza-Kujawska —
It is a system based on behavioural analysis, i.e. analysis of the behaviour of devices in the network. This is important, as it is complementary to all signature tools that analyse packets. All you need is a fleet of network devices, switches and routers. A sensor is a useful additional tool, but it is not required for behavioural analysis. All you need to do is start sending flows from your network devices and have a collector with ADS. Then the behaviour of the devices inside the network will be analysed, complementing the tools you probably already have: firewalls, IPS or antivirus software.
— Paweł Deyk —
In addition to these anomalies, you can also detect all sorts of problems like lags or interrupted updates. This is also important for day-to-day work. As an important element from the security administrators’ point of view, all of these detected events can be fed into our SIEM system, and we can export them without much of a problem.
This is important because it allows these events to be correlated, for example, with signature-based systems.
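As a sketch of what forwarding such an event to a SIEM can look like in practice (the appliance does this through its own configuration; the syslog-style message, fields and addresses below are assumptions for illustration only):

```python
import json
import socket
from datetime import datetime, timezone

def send_event_to_siem(event: dict, siem_host: str = "127.0.0.1",
                       siem_port: int = 514) -> None:
    """Send a detected event to a SIEM as a syslog-style UDP message.
    Replace siem_host/siem_port with your SIEM collector's address."""
    timestamp = datetime.now(timezone.utc).isoformat()
    message = f"<134>{timestamp} flowmon-ads {json.dumps(event)}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), (siem_host, siem_port))

# Example event resembling an ADS detection (illustrative fields only)
send_event_to_siem({
    "type": "SSH_DICTIONARY_ATTACK",
    "source_ip": "203.0.113.7",
    "target_ip": "10.0.2.15",
    "priority": "critical",
})
```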
What does internal network monitoring give us? First and foremost, we need to make sure that our network is operable and efficient. We need to know that network and application resources are used in a proper and efficient way. We need to know how to detect unwanted activity in our devices and have the ability to quickly diagnose it. Moreover, we need monitoring to ensure stable operation of our company as well as cost optimisation. We may suddenly purchase a lot of solutions that seem essential today, and by the time employees return to the office, it may turn out that this purchase was rather excessive. To sum up, it is important to monitor our network regularly with the use of new tools.
32:12
DEMO:
— Paweł Deyk —
We’ll tell you a bit about how this system works in practice. First, we have a Dashboard which is a module that primarily displays the most important information that an administrator needs to have at their fingertips. Here we can set specific widgets that will be shown in this view.
— Klaudyna Busza-Kujawska —
Before you get to the views, I’d like to say that each user can have their own set of these tabs and charts, both of communication over time and of other data. The view is refreshed every five minutes, so you always have up-to-date data here – and you can go deeper into the analysis of it. The Dashboard, on the other hand, is the main view where we see the key data. It is easy to switch views between time ranges. We can see what certain data looked like in the last hour, but also diagrams covering weeks, months, etc. So, it is a convenient tool to compare how it was and how it is at this point.
— Paweł Deyk —
For each view, of course, you can go into more detail by clicking on “more info”. As Klaudyna has already mentioned, we can very quickly switch the range of statistics monitoring. Moreover, we can show them in terms of traffic not only in megabits per second but also packets and flows. For each of these five-minute time slots, we can display detailed statistics, while the entire chart is drawn with all of the statistics we select below for all channels. These views can of course be scaled and changed in scope and range if we want to look at any of the elements in detail.
— Klaudyna Busza-Kujawska —
This is a view prepared for outbound and inbound communication – the company’s traffic from and to the Internet and users connecting through the VPN – informing how much traffic they generate and how much traffic goes to them. This configuration is not embedded, because each company will have a different definition of VPN traffic and a different address range. Consequently, this is a view that each of you can configure yourself. You might as well prepare views of data related to your critical infrastructure or some critical servers. If we have a sensor, the performance metrics will be drawn independently. If we have all traffic channels enabled, then we will get an average value. If you select only, e.g., VPN out, you will only get measurement values related to the communication element you are currently analysing.
— Paweł Deyk —
Additionally, if you want to limit the view to a particular IP address or subnet, or define additional conditions, you can use the group filter, which will simply recalculate the results taking into account the conditions of this field. One of the more interesting things on this dashboard, which comes in handy in the home office, is a chart related to the number of active devices over time. It is a quick preview, but you can also expand the information about the active devices. We can view information about what operating systems their users have and detect OSs that are not allowed in our company but for some reason appear on the list (so we can suspect that these are private devices). The same goes for hardware manufacturers – this is information based on MAC addresses. You can view a list of the most popular equipment manufacturers in our network. Of course, if we have a company policy with a requirement that all company laptops are made by a particular manufacturer, and a completely different one turns up on the list, then we might suspect it is a private device.
— Klaudyna Busza-Kujawska —
This chart that Paweł showed us, with the number of concurrent users over time, can be defined in many different ways. You can verify this in the number of active MAC addresses (how this was distributed over time), but you can also see the number of IP addresses that are used within a specific subnet. This can also be useful to verify the DHCP pool we have prepared for a particular subnet: how saturated it is and whether we still have spare capacity there or not. Additionally, this chart can also be prepared for authenticated users. Information about authenticated users must be provided to Flowmon separately, because this data is available neither in flows nor in packets. We don’t have the User ID there, we only have IP and MAC addresses; therefore, if this data is provided from a system that authenticates users, such as AD (Active Directory), ISE (Identity Services Engine) or RADIUS, then in addition to the IP address we will also have the User ID and information (which can even be verified) on how the IP addresses of a particular user changed.
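A minimal sketch of the DHCP pool check described above: counting how many distinct active addresses fall inside a given subnet and comparing that to the pool size. The subnet, pool size and observed addresses are made-up examples:

```python
import ipaddress

def dhcp_pool_usage(active_ips, subnet: str, pool_size: int):
    """Return how many distinct active IPs fall inside the subnet
    and what share of the DHCP pool that represents."""
    net = ipaddress.ip_network(subnet)
    in_pool = {ip for ip in active_ips if ipaddress.ip_address(ip) in net}
    return len(in_pool), len(in_pool) / pool_size

# Example: addresses observed in flows over the last 5 minutes
seen = ["10.10.20.15", "10.10.20.16", "10.10.20.99", "192.168.1.4"]
used, ratio = dhcp_pool_usage(seen, "10.10.20.0/24", pool_size=200)
print(f"{used} addresses in use, pool {ratio:.0%} saturated")
```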
— Paweł Deyk —
It is also possible to preview the users authorised e.g. via our AD, and here we have a preview from the last day visible on the dashboard. When expanded, we can see a pie chart and more detailed information. This can include information about the time a user first logged on to the network that day, how much traffic they generated, how much data they sent and how many flows they generated.
— Klaudyna Busza-Kujawska —
When you right click on any User ID, under the button itself you can see that you have the option to drill down – if you select TOP10 Statistics by, you can see, for example, Conversation IP – IP or Port, just like we have here. What happens if we choose Port? This means that we will see (for this user) which ports communication was performed on and what they were connecting with. We can put this together in such a way that we can see the user and the ports or the user and their IP address. This is already a matter of configuration, but such a drill down is an available option.
— Paweł Deyk —
I want to show you one more feature related to the statistics you might find interesting. This is another demo dashboard which can be viewed as a suggestion for our users who will implement such a solution. In the Top Hostnames we can verify which hostnames have been queried by users. We can display a detailed view and show additional statistics in the context for a given hostname.
As we also wanted to show what monitoring and anomaly detection looks like, we’ll move on to Flowmon ADS. Here we have a view of recent network events that have been categorised as anomalies and we have divided them into different categories. Those categories are associated with methods of detecting anomalies. Here we have, for example, port scanning and we can expand the list and see all the events that have been so categorised. Additionally, we can immediately see how many IP addresses caused such events. We can develop such basic statistics for specific source addresses as well. We can display a lot of information like this, not only for scans but also for many other anomalies, such as dictionary attack. We can very quickly detect it here and see the contextual data that will be important in order to enhance security.
— Klaudyna Busza-Kujawska —
There are blue lines above the event description which show when a certain event occurred. Of course, if it’s an ongoing event – which may turn out not to be an anomaly at all but simply a misconfigured part of the system – then we’re going to see it happen all the time. On the timeline here we can see when these events occurred. This makes it easy to see how often they happen – and this data is shown independently for each IP address.
— Paweł Deyk —
Of course, just like in case of the Monitoring Center, we can change the time ranges, in different perspectives, from different sources, for specific addresses, so it’s really easy and quick to get detailed information. For starters, we have many anomaly detection methods – such as BitTorrent traffic, the possibility to define blacklists of specific IP addresses that users should not connect to, DHCP anomaly detection, DNS anomaly detection and DOS attack detection.
These methods do not mean at all that only one type of event is detected. There are several different types of events categorised under DNS anomalies. These include using a DNS server that is not on our DNS server list. If a company’s network security policy states that only internal DNS servers should be used – and not, for example, Google’s – then when someone uses Google’s server, it will immediately be flagged as an anomaly. And it doesn’t apply to servers only. There are a lot of aspects that are verified there, e.g. the number and size of connections or the size of packets that are used – so all sorts of anomalies related to DNS communication. This is the case for each of these methods.
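To illustrate the first of these DNS checks, here is a toy sketch of flagging DNS queries sent to servers outside an approved internal list. The addresses and the approved list are assumptions; in the real product this is configured in the ADS method rather than written as code:

```python
# Flow records reduced to (src_ip, dst_ip, dst_port, protocol)
flows = [
    ("10.0.0.21", "10.0.0.53", 53, "UDP"),    # internal DNS - fine
    ("10.0.0.34", "8.8.8.8",   53, "UDP"),    # external resolver - anomaly
    ("10.0.0.21", "192.0.2.80", 443, "TCP"),  # not DNS at all
]

APPROVED_DNS = {"10.0.0.53", "10.0.0.54"}     # company's internal DNS servers

def unauthorized_dns(flows, approved=APPROVED_DNS):
    """Return flows where a client queried a DNS server not on the list."""
    return [f for f in flows
            if f[2] == 53 and f[1] not in approved]

print(unauthorized_dns(flows))  # -> the query to 8.8.8.8
```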
— Paweł Deyk —
We can also determine our own patterns for defining these anomalies. Here, for example, you can see a pattern related to Dropbox traffic handling, and there is also information on how this query is structured. Another thing is that we can define such a pattern for specific types of malware such as WannaCry. We know what the typical traffic model of such software looks like, so we can define it for ourselves as well. We can of course create a lot of those patterns, for different profiles and channels.
— Klaudyna Busza-Kujawska —
In fact, these custom patterns are especially useful if you wish to customise them for your own network; WannaCry is used here only as an example and is an integrated BPATTERN element. These are network behaviour patterns of various types of malware. We can’t show here exactly what they look like because it’s a demo and you don’t have access to the configuration, but if you have such a collector and you have administrator privileges, you will be able to access these details and the analysis and configuration of each of these methods. You’ll see there all the configured elements such as WannaCry and many other types of malware and ransomware that are already described. Then you don’t have to create such templates yourself; they will be automatically downloaded from our platform. As long as gold support (i.e. manufacturer support) is active, these new patterns will keep appearing.
— Paweł Deyk —
We also have the blacklists I mentioned earlier, which can be imported from external sources or added manually if needed. We can also create perspectives in which, for example, we determine the significance of given types of anomalies. We can customise the items we need according to the specific legal or compliance requirements of our company. This is a plus that can come in handy, for example, for operators of essential services within the National Cybersecurity System.
— Klaudyna Busza-Kujawska —
Perspectives are not only a simple way to prioritise detected events and anomalies; with their use we can also specify the attack vector. If there’s an SSH dictionary attack from servers on servers, it’s a critical priority for us, and if there’s an SSH dictionary attack from the LAN on servers, or from LAN on LAN, it’s a high priority. In each of these perspectives we can specify exactly from which part of the network to which part of the network the attack was made and what priority will be assigned to the event. We can tailor them to the given organisation and, on the basis of these perspectives, for example receive e-mail notifications only about critical events while all other notifications are sent to the SIEM.
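A schematic sketch of the idea behind such a perspective: the same event type gets a different priority depending on which network segment the attack comes from and which it targets. The segments, rules and priorities below are invented purely for illustration:

```python
import ipaddress

SEGMENTS = {
    "SERVERS": ipaddress.ip_network("10.0.2.0/24"),
    "LAN":     ipaddress.ip_network("10.0.1.0/24"),
}

# (event type, source segment, destination segment) -> priority
RULES = {
    ("SSH_DICTIONARY", "SERVERS", "SERVERS"): "critical",
    ("SSH_DICTIONARY", "LAN",     "SERVERS"): "high",
    ("SSH_DICTIONARY", "LAN",     "LAN"):     "high",
}

def segment_of(ip: str) -> str:
    """Map an IP address to the name of the segment it belongs to."""
    for name, net in SEGMENTS.items():
        if ipaddress.ip_address(ip) in net:
            return name
    return "OTHER"

def priority(event_type: str, src_ip: str, dst_ip: str) -> str:
    return RULES.get((event_type, segment_of(src_ip), segment_of(dst_ip)),
                     "low")

print(priority("SSH_DICTIONARY", "10.0.2.8", "10.0.2.9"))   # critical
print(priority("SSH_DICTIONARY", "10.0.1.5", "10.0.2.9"))   # high
```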
— Paweł Deyk —
There are also fields that are connected to the configuration of notifications or actions taken. We can get e-mail notifications, record traffic in the Traffic Recorder module or run some scripts, for example in order to block traffic.
There’s also an APM (Application Performance Monitoring) module that might come in handy. It is used to monitor the performance of our web or database applications. We can quickly check the overall condition of our systems using the APM index, which will show 100 when all transactions meet the SLA we have set, so we can quickly verify our app – if that number is much less than a hundred, there is something wrong with our application. This allows you to find out whether it is a network problem or an internal problem (with the application structure) and verify which element of our system introduces latency or transmission errors.
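As a back-of-the-envelope illustration of the APM index idea (the exact formula used by the product may differ; this simply treats the index as the share of transactions within the SLA, scaled to 100):

```python
def apm_index(transaction_times_ms, sla_ms: float) -> float:
    """Return 0-100: the percentage of transactions that met the SLA."""
    if not transaction_times_ms:
        return 100.0
    within = sum(1 for t in transaction_times_ms if t <= sla_ms)
    return 100.0 * within / len(transaction_times_ms)

# Example: SLA of 500 ms; one slow transaction drags the index below 100
times = [120, 180, 240, 950, 300]
print(apm_index(times, sla_ms=500))  # -> 80.0
```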
55:47
— Questions —
“What is the easiest way to trace the actions of a particular address?”
Klaudyna: This can be done in several ways and it depends on what is more convenient for you. Let’s go back to ADS and open a randomly detected event. Now, next to the IP address in the top right you can see “Related Events”. If particular communication for an IP address is detected, we can see where else this IP address was the source of an event or where it was the target, as it can actually be a continuation of an action performed on a larger scale. Then we get a list where the address appears and, if we click on what we’re interested in, we get a preview again. We can check the details of each of these events and see each flow and each connection associated with that detection.
But there is another way. Let’s copy the IP address and go back to the dashboard. This is a convenient method as well because it shows immediately what was happening over time. Here we also have a list and we can display the events chronologically, not the way it was presented before. We paste the IP address and apply the changes. Now we can see filtered events related to this specific address, as well as information about when and in what type of events it was involved and where the address was the source of one. So, if we want to see in which cases the address was targeted (maybe as a part of some action on a larger scale), then we’ll check the event list. If we want to see what types of activity were performed using the address and specify the period in which they occurred, this option will be the most convenient one. These grey graphs show us a certain trend in the activities taken compared to an earlier period of the same length. We also have some information on the right about whether there was an increase or decrease. If we’re reviewing the last 24 hours, we know that when there is an up arrow, it means that there have been new events in comparison to the previous period. If there is a green down arrow, it means there have been some events but the situation has improved and nothing new is showing up. I think this is the easiest way to track what was going on with a particular IP address.
59:31
“Can you create your own anomaly definitions such as web communications containing a download URL?”
Klaudyna: Yes, absolutely. This is exactly what Paweł showed you with the custom pattern. Check the example of WannaCry, where there was simply some type of communication with a specific hostname. We can also create such a definition here. It doesn’t matter whether it’s in the URL or in the Host Name, but it has to include the word “download”, which means we have to select the “like” operator, and then that communication can be caught. If you want to present it as an anomaly, then you will know that this type of communication took place.
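A minimal sketch of what such a “contains download” condition boils down to when applied to layer-seven metadata a probe could extract; the field names and records are illustrative and do not reflect the actual pattern syntax:

```python
# HTTP metadata a sensor might attach to flows (illustrative records)
http_flows = [
    {"src": "10.0.0.12", "host": "files.example.com", "url": "/download/setup.exe"},
    {"src": "10.0.0.15", "host": "intranet.example.com", "url": "/news"},
    {"src": "10.0.0.18", "host": "download.example.org", "url": "/"},
]

def match_download(flows):
    """Flag flows whose hostname or URL contains the word 'download'
    (the 'like' condition mentioned in the answer)."""
    return [f for f in flows
            if "download" in f["host"].lower() or "download" in f["url"].lower()]

for hit in match_download(http_flows):
    print("Matched custom pattern:", hit["src"], hit["host"] + hit["url"])
```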