Docker is an open platform for sysadmins and developers to build, ship, and run distributed applications. Applications can be assembled quickly and easily from reusable, portable components, eliminating the siloed approach between development, QA, and production environments.
Individual components can be microservices coordinated by a program that contains the business-process logic (an evolution of SOA, Service-Oriented Architecture). They can be deployed independently and scaled horizontally as needed, giving the project flexibility and efficient operations. This is of great help in DevOps.
At a high level, Docker is built from:
– Docker Engine: a portable, lightweight runtime and packaging tool
– Docker Hub: a cloud service for sharing applications and automating workflows
There are more components (Machine, Swarm), but they are beyond the basic overview I’m giving here.
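The Engine/Hub split can be seen in a single round trip: pull a public image from Docker Hub, then run it locally with the Engine. This is only a sketch; `busybox` is just a small example image, and the commands are guarded in case the docker CLI is not installed.

```shell
# Docker Engine + Docker Hub round trip. "busybox" is an example image,
# not something the text above requires.
if command -v docker >/dev/null 2>&1; then
  docker pull busybox                       # fetch the image from Docker Hub
  MSG=$(docker run --rm busybox echo "hello from a container" 2>/dev/null \
        || echo "docker run failed")        # run it with the local Engine
else
  MSG="docker CLI not found"
fi
echo "$MSG"
```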
Containers are lightweight, portable, isolated, self-sufficient “slices of a server” that contain any application (often they contain microservices).
They deliver on the full DevOps goal:
– Build once… run anywhere (Dev, QA, Prod, DR).
– Configure once… run anything (any container).
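A minimal sketch of “build once… run anywhere”: the same image, unchanged, runs in Dev and QA, with only runtime configuration differing. All names here (`demo-app`, `APP_ENV`) are illustrative, and the docker steps are guarded in case the CLI is absent.

```shell
# Build one image, then run it in two "environments" by injecting
# configuration at run time instead of rebuilding.
mkdir -p demo-app
cat > demo-app/Dockerfile <<'EOF'
FROM busybox
CMD echo "running in $APP_ENV"
EOF

if command -v docker >/dev/null 2>&1; then
  docker build -t demo-app demo-app          # build once
  docker run --rm -e APP_ENV=dev demo-app    # run in Dev
  docker run --rm -e APP_ENV=qa  demo-app    # run in QA: same image, new config
fi
```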
Docker Features
– Multi-arch, multi-OS
– Stable control API
– Stable plugin API
– Resiliency
– Signing
– Clustering
Docker:
– Is easy to install
– Will run anything, anywhere
– Gives you repeatable builds
Deploy efficiently
– Containers are lightweight
– A typical laptop runs 10–100 containers easily
– A typical server can run 100–1000 containers
High level approach
It’s a lightweight VM:
– own process space
– own network interface
– can run stuff as root
– can have its own /sbin/init (different from the host)
How does it work?
Isolation with namespaces: pid, mnt, net, uts, ipc, user
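These namespaces can be poked at without Docker at all. For instance, `unshare -U -r` (from util-linux) creates a new user namespace and maps the current user to root inside it, which is exactly the trick that lets a container “run stuff as root” safely. Guarded, since unprivileged user namespaces may be disabled on some systems.

```shell
# User-namespace demo, no Docker needed: inside the new namespace the
# current (unprivileged) user is mapped to uid 0.
if command -v unshare >/dev/null 2>&1 && unshare -U -r true 2>/dev/null; then
  NS_UID=$(unshare -U -r id -u)   # uid as seen inside the new namespace
else
  NS_UID="unsupported"
fi
echo "uid inside new user namespace: $NS_UID"
```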
docker run -i -t \
  --net=none \
  --lxc-conf='lxc.network.type=veth' \
  --lxc-conf='lxc.network.ipv4=172.16.21.112/16' \
  --lxc-conf='lxc.network.ipv4.gateway=172.16.255.254' \
  --lxc-conf='lxc.network.link=br0' \
  --lxc-conf='lxc.network.name=eth0' \
  --lxc-conf='lxc.network.flags=up' \
# docker attach [CONTAINER ID]
# ps axufww
USER     PID %CPU %MEM    VSZ  RSS TTY STAT START TIME COMMAND
root       1  0.0  0.0  14728 1900 ?   S    02:17 0:00 /bin/bash
root      83  0.0  0.0 177340 3860 ?   Ss   02:20 0:00 /usr/sbin/httpd
apache    85  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
apache    86  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
apache    87  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
apache    88  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
apache    89  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
apache    90  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
apache    91  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
apache    92  0.0  0.0 177340 2472 ?   S    02:20 0:00  \_ /usr/sbin/httpd
root      93  0.0  0.0  16624 1068 ?   R+   02:20 0:00 ps axufww

# ifconfig
eth0      Link encap:Ethernet  HWaddr ...
          inet addr:172.16.21.112  Bcast:172.16.255.255  Mask:255.255.0.0
          inet6 addr: fe80::a46d:79ff:fe20:ea7e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1668 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:222716 (217.4 KiB)  TX bytes:468 (468.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
# docker ps
CONTAINER ID  IMAGE                 COMMAND      CREATED            STATUS            PORTS  NAMES
7baceac4e139  mohan/centos6:latest  "/bin/bash"  25 seconds ago     Up 25 seconds
8a6311dbdbb0  mohan/centos6:latest  "/bin/bash"  About an hour ago  Up About an hour
Compute efficiency
Almost no overhead:
– processes are isolated, but run straight on the host
– CPU performance = native performance
– memory performance = a few % shaved off for (optional) accounting
– network performance = small overhead; can be reduced to zero
Docker can help developers
Inside my container:
– my code
– my libraries
– my package manager
– my app
– my data
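The list above maps almost line for line onto a Dockerfile. A sketch, one instruction per item; the image name, paths, and the httpd example are illustrative only (echoing the centos6/httpd output shown earlier):

```shell
# Write an illustrative Dockerfile: each instruction corresponds to one
# item of "inside my container". Nothing here is built or run.
mkdir -p myapp
cat > myapp/Dockerfile <<'EOF'
# base image: ships my package manager (yum)
FROM centos:6
# my libraries / dependencies
RUN yum install -y httpd
# my code and my app
COPY src/ /var/www/html/
# my data: kept in a volume, outside the image
VOLUME /var/lib/appdata
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF
echo "wrote myapp/Dockerfile"
```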
Locking Down and Patching Containers
A regular system often contains software components that aren’t required by its applications. In contrast, a proper Docker container includes only the dependencies that the application requires, as explicitly prescribed in the corresponding Dockerfile. This decreases the vulnerability surface of the application’s environment and makes it easier to lock down. The smaller footprint also decreases the number of components that need to be patched with security updates.
When patching is needed, the workflow is different from a typical vulnerability management approach:
– Traditionally, security patches are installed on the system independently of the application, in the hope that the update doesn’t break the app.
– Containers integrate the app with its dependencies more tightly and allow the container’s image to be patched as part of the application deployment process.
– Rebuilding the container’s image (e.g., “docker build”) allows the application’s dependencies to be updated automatically.
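As a sketch of this patch-by-rebuild workflow: `docker build --pull` refreshes the base image to its latest (patched) tag, and `--no-cache` re-runs every build step so package updates are actually picked up rather than served from the layer cache. The image name and trivial Dockerfile are illustrative, and the build is guarded in case the docker CLI is absent.

```shell
# Patching as part of the deployment process: rebuild with a refreshed
# base image and no cached layers.
mkdir -p patch-demo
cat > patch-demo/Dockerfile <<'EOF'
FROM busybox
EOF

if command -v docker >/dev/null 2>&1; then
  docker build --pull --no-cache -t patch-demo:rebuilt patch-demo
fi
```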
The container ecosystem changes the work that ops might traditionally perform, but that isn’t necessarily a bad thing.
Running a vulnerability scanner and distributing patches the traditional way doesn’t quite work in this ecosystem. What a container-friendly approach should entail is still unclear, but it promises the advantages of requiring fewer updates, bringing dev and ops closer together, and defining a clear set of software components that need to be patched or otherwise locked down.
Security Benefits and Weaknesses of Containers
Application containers offer operational benefits that will continue to drive the development and adoption of the platform. While the use of such technologies introduces risks, it can also provide security benefits:
– Containers make it easier to segregate applications that would traditionally run directly on the same host. For instance, an application running in one container has access only to the ports and files explicitly exposed by another container.
– Containers encourage treating application environments as transient, rather than as static systems that exist for years and accumulate risk-inducing artifacts.
– Containers make it easier to control what data and software components are installed, through the use of repeatable, scripted instructions in setup files.
– Containers offer the potential for more frequent security patching by making it easier to update the environment as part of an application update. They also minimize the effort of validating compatibility between the app and the patches.
Not all is peachy in the world of application containers, of course. The security risks that come to mind when assessing how and whether to use containers include the following:
– The flexibility of containers makes it easy to run multiple instances of applications (container sprawl), which indirectly leads to Docker images that exist at varying security patch levels.
– The isolation provided by Docker is not as robust as the segregation established by hypervisors for virtual machines.
– The use and management of application containers is not yet well understood by the broader ops, infosec, dev, and audit communities.