The author of the article is Dariusz Grabowski, Lead SDN Developer at EXATEL.
In 2019, researchers from Palo Alto's Unit 42 found that a specific Docker image was running on more than 2,000 machines. It seemed to be just a regular CentOS image, like many on Docker Hub. However, there was something unusual about it: it was mining the Monero cryptocurrency in the background. And it wasn't doing so for the owners of the machines it ran on. The cybercriminals had cleverly wrapped their crypto-mining scripts in a Docker image and published it in a public repository, and over 10,000 users downloaded it, completely unaware of its content.
More than half of the Docker images available on Docker Hub have vulnerabilities or contain malware. How can we be sure that a downloaded image will not harm the company? How can you detect a dangerous situation and nip it in the bud? You'll find out in a moment, but first let's take a closer look at these threats.
Docker is convenient, but is it also secure?
Different repositories on Docker Hub
Docker Hub is a public repository of Docker images. It contains millions of objects: different versions of applications and distributions, available to everyone. You will find three types of images there. The first are Official Images, built and published directly by Docker; among them are the most popular Linux distributions (Ubuntu, Debian, Alpine) and popular applications (Nginx, Postgres, Traefik, and many others). The second type are images from Verified Publishers, i.e. images provided by trusted companies, usually the vendors of specific software (e.g. Oracle, Elastic or IBM). The last group are images provided through private user accounts; these usually include less frequently used applications or systems. Special care should be taken when downloading images:
- analyse the shared Dockerfile
- verify the software provider
- check the popularity of the image – choose the ones that have been downloaded millions of times.
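The checks above can be done from the command line before a container is ever started. A minimal sketch (nginx and the tag are used purely as examples):

```shell
# Pull a specific, pinned tag instead of relying on the implicit "latest".
docker pull nginx:1.25

# Review how the image was built, layer by layer; this roughly
# reconstructs the Dockerfile instructions that produced it.
docker history --no-trunc nginx:1.25

# Check metadata such as the entrypoint, exposed ports and environment.
docker inspect --format '{{json .Config}}' nginx:1.25
```

Comparing the layer history against the Dockerfile published on Docker Hub is a quick way to spot images whose content does not match their description.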
Instructions for running the image
Another source of danger is the usage instructions published on the page of a particular image. Such instructions are helpful when you are not familiar with the software and are running it for the first time; they usually boil down to calling the docker run command with specific parameters. These parameters can be dangerous to our system. Running a container in privileged mode (--privileged), adding capabilities (--cap-add), or mounting system directories (-v /var, /etc, /usr) can break the separation between the host system and the container. Because the Docker daemon is usually started with administrator privileges, container processes can gain almost unlimited access to the host machine. This can lead to system damage, infection, or leakage of sensitive information.
How can we protect ourselves against such a threat? As in the previous case, it is worth first verifying the software provider. Another solution is to run the image in an isolated environment, such as a virtual machine, and verify its behaviour manually.
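To illustrate the difference, the commands below contrast a run line of the kind to treat with suspicion with a more restrictive alternative (the image name is a placeholder):

```shell
# Dangerous: full privileges, extra capabilities, host /etc mounted
# into the container. A malicious entrypoint can modify the host.
docker run --privileged --cap-add SYS_ADMIN -v /etc:/etc some/image

# Safer: drop all capabilities, make the container filesystem
# read-only, and mount only an unprivileged data directory.
docker run --cap-drop ALL --read-only -v "$PWD/data:/data" some/image
```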
The third source of threats are vulnerabilities in the applications embedded in an image. Applications such as Kibana are usually based on another image with a popular Linux distribution (Debian or Alpine). Such an image therefore contains not only the application of interest, but also many different libraries and binaries, often not even related to the application in use. They may contain bugs that allow an attacker to escalate privileges to root or cause a critical error in the container. These types of bugs are called vulnerabilities.
One solution to this problem is to use distroless images. Such images contain no package manager, no shell, and few of the basic programs we normally expect in Linux. This significantly reduces the number of binaries for which vulnerabilities may exist. Unfortunately, using this type of image is not always possible due to the nature of the application. In that case, it is worth using one of the existing vulnerability scanners.
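As a sketch of the distroless approach, a multi-stage build can compile the application in a full toolchain image and copy only the binary into a distroless base (the Go program and paths here are illustrative):

```dockerfile
# Build stage: full toolchain image with compilers and package manager.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: distroless image with no shell or package manager,
# so there is almost nothing for a scanner to flag.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```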
What to use for vulnerability detection?
There are many vulnerability scanners on the market. They all use public databases such as nvd.nist.gov, but they use different subsets of those databases and therefore differ in the number of vulnerabilities reported. The vulnerabilities found for popular images are presented in the chart below.
Clair is the most popular open-source vulnerability scanner on GitHub. It runs in a client-server architecture, which causes some inconvenience in continuous integration (CI) pipelines, and its use requires going through a complex installation process. There are solutions that slightly simplify installation and improve usage in CI, such as clair-local-scan. However, it requires remembering to constantly update the vulnerability database, which is delivered as a standalone image.
Grype is an open-source tool from Anchore. Its special feature is the ability to scan all intermediate image layers: Grype unpacks the image, decomposes it, and points out all known vulnerabilities in the applications it finds. The project has received 809 stars on GitHub. The listing from a sample run is shown in the animation below.
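A typical invocation, assuming grype is installed locally, looks like this (the image is an example):

```shell
# Scan a public image; grype pulls it, unpacks the layers and matches
# installed packages against its vulnerability database.
grype nginx:1.25

# Return a non-zero exit code if anything of high severity or worse
# is found, which is convenient for failing a CI job.
grype nginx:1.25 --fail-on high
```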
Docker also provides its own mechanism for vulnerability detection. Unfortunately, it is a paid service, available only to registered users.
Trivy is developed by Aquasec and has earned 8.6k stars on GitHub. The application can run standalone or in client-server mode. Trivy scans not only the packages installed in the container, but also the dependencies of the applications built into it. This lets us check whether the dependencies used in our projects have any known security vulnerabilities; it covers dependency managers such as Bundler, Composer, npm, yarn, etc.
Trivy can also analyse Dockerfile, Kubernetes, and Terraform configuration files, making it a true Swiss Army knife among vulnerability tools. It detects a similar number of issues to Clair, yet is easy to configure, which makes it a frequent choice in continuous integration environments. An example run can be seen in the animation below.
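For reference, the two Trivy commands discussed later in this article look like this (the image name and paths are examples):

```shell
# Scan an image for vulnerabilities in OS packages and app dependencies.
trivy image nginx:1.25

# Scan configuration files (Dockerfile, Kubernetes, Terraform)
# in the current directory for misconfigurations.
trivy config .

# Generate a JSON report, e.g. for a CI parser to pick up.
trivy image --format json --output report.json nginx:1.25
```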
Based on the above, Trivy seems to be the best tool on the market. In the next chapter, you will learn how to integrate it into your continuous integration pipeline using Jenkins.
How to add Trivy to the continuous integration pipeline?
Trivy can be used in several ways. The authors provide the option of installing it with a package manager, via an installation script, or as a Docker container. For CI/CD pipelines, the last option is the most convenient.
To use Trivy in Jenkins, simply install the “Warnings Next Generation” plugin, which contains parsers for the reports of many static analysis tools. Here we will focus on two of Trivy’s features:
- testing the Dockerfile configuration
- testing vulnerabilities in an image built on a Dockerfile
Examining the configuration verifies that the Dockerfile follows best practices: for example, whether the user has been switched from root to a non-privileged one, whether the “latest” tag is avoided, whether dangerous ports are not exposed, and so on. Vulnerability testing, on the other hand, verifies all the packages installed in the image and their versions, which lets us check whether a critical bug has been made public for a particular version.
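As a sketch, a Dockerfile that passes these configuration checks might look like this (the base image, user name and application files are illustrative):

```dockerfile
# Pin a specific base image version instead of "latest".
FROM alpine:3.19

# Install only what the application needs.
RUN apk add --no-cache python3

# Create and switch to a non-root user.
RUN adduser -D appuser
USER appuser

WORKDIR /app
COPY app.py .

# Expose only the application port, nothing else.
EXPOSE 8080
CMD ["python3", "app.py"]
```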
Below is a sample of the described functionality. After the image is built, it is checked using Trivy (trivy image…). A JSON report is generated as a result, and this report is parsed by Jenkins using the recordIssues step.
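A minimal sketch of such a pipeline, assuming the Warnings Next Generation plugin and a Trivy installation on the agent (the image tag and file names are illustrative, not taken from the original listing):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Scan image') {
            steps {
                // Generate a JSON vulnerability report for the built image.
                sh 'trivy image --format json --output trivy-report.json myapp:${BUILD_NUMBER}'
                // Parse the report with the Warnings Next Generation plugin.
                recordIssues(tools: [trivy(pattern: 'trivy-report.json')])
            }
        }
        stage('Scan Dockerfile') {
            steps {
                // Fail the pipeline on Dockerfile misconfigurations.
                sh 'trivy config --exit-code 1 Dockerfile'
            }
        }
    }
}
```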
Dockerfile verification is done with “trivy config…”. This command can also generate a report in JSON format; unfortunately, the Jenkins parser is not designed for this type of report. Hopefully, such functionality will be added soon. For now, if the Dockerfile contains any issue, the pipeline will fail. The run listing is shown in the graphic below.
The widespread use of containers has also drawn the attention of cybercriminals. We update many system packages every day, and every update carries the risk of introducing a new vulnerability into the system. In the spirit of DevSecOps, systems that automatically detect vulnerabilities early in development should be used. This allows issues to be resolved more quickly and reduces the risk to the entire company.