Docker containers are a form of “lightweight” virtualization. They allow a
process or process group to run in an environment with its own file system,
somewhat like chroot jails, and also with its own process table, users and
groups, and, optionally, virtual network and resource limits. For most purposes,
the processes in a container think they have an entire OS to themselves and do
not have access to anything outside the container (unless explicitly granted).
This lets you precisely control the environment in which your processes run,
allows multiple processes with completely different (even conflicting)
requirements to run on the same (virtual) machine, and significantly increases
isolation and security.
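For example, a quick way to see this isolation from the command line (using the public alpine image as a convenient, minimal example) is to list processes inside a throwaway container; it sees only its own process table, not the host’s:

```sh
# Inside the container, `ps` lists only the container's own processes
# (here just the `ps` command itself), not everything running on the host.
docker run --rm alpine ps aux
```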
In addition to containers, Docker makes it easy to build and distribute images that wrap up an application with its complete runtime environment.
For more information, see What are containers and why do you need them? and What Do Containers Have to Do with DevOps, Anyway?.
The difference between the “lightweight” virtualization of containers and the “heavyweight” virtualization of VMs boils down to this: for the former, virtualization happens at the kernel level, while for the latter it happens at the hypervisor level. In other words, all the containers on a machine share the same kernel, and code in the kernel isolates the containers from each other, whereas each VM acts like separate hardware and has its own kernel.
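This is easy to observe: the kernel version reported inside a container matches the host’s (again using the public alpine image purely for illustration):

```sh
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # the same kernel version inside a container
```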
Containers are much less resource intensive than VMs because they do not need to be allocated exclusive memory and file system space or have the overhead of running an entire operating system. This makes it possible to run many more containers on a machine than you would VMs. Containers start nearly as fast as regular processes (you don’t have to wait for the OS to boot), and parts of the host’s file system can be easily “mounted” into the container’s file system without any additional overhead of network file system protocols.
On the other hand, isolation is less guaranteed. If you are not careful, you can oversubscribe a machine by running containers that need more resources than the machine has available (this can be mitigated by setting appropriate resource limits on containers). While container security is an improvement over that of normal processes, the shared kernel means the attack surface is greater and there is more risk of leakage between containers than there is between VMs.
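As a small illustration of the last two paragraphs (the paths, image, and limit values here are arbitrary), both host-directory mounts and resource limits are just options to `docker run`:

```sh
# Mount a host directory into the container read-only (no network file
# system needed) and cap the container's memory and CPU so it cannot
# oversubscribe the host.
docker run --rm \
    -v /srv/data:/data:ro \
    --memory=512m \
    --cpus=1.5 \
    alpine ls /data
```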
For more information, see Docker containers vs. virtual machines: What’s the difference? and DevOps Best Practices: Immutability.
There are, broadly, two areas where containers fit into your devops workflow: for builds, and for deployment. They are often used together, but do not have to be.
Containers can be integrated into your DevOps toolchain incrementally. Often it makes sense to start with the build environment, and then move on to the deployment environment. What follows is a broad overview of the steps for a simple approach, without delving too deeply into technical details or covering all the possible variations.
Many CI/CD systems now include built-in Docker support or easily enable it through plugins, but `docker` is a command-line application which can be called from any build script even if your CI/CD system does not have explicit support.
1. Create a `Dockerfile` for your build containers, based on an existing Docker image (a `Dockerfile` is the specification used to build an image). If you already use a configuration management tool, you can use it within the `Dockerfile`. Always specify precise versions of base images and installed packages so that image builds are consistent and upgrades are deliberate.
2. Build the image with `docker build` and push it to the Docker registry using `docker push`.
3. Create a `Dockerfile` for the application that is based on the build image (specify the exact version of the base build image). This file builds the application, adds any required runtime dependencies that aren’t in the build image, and tests the application. A multi-stage `Dockerfile` can be used if you don’t want the application deployment image to include all the build dependencies (a sketch of these files follows below).

It is best to also integrate building the build image itself into your devops automation tools.
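As an illustrative sketch of those steps (the base image, package versions, registry host, tags, and build commands below are all hypothetical, not a prescription), the build image might be defined and published like this:

```dockerfile
# Dockerfile for the build image: pin the base image and tool versions so
# that image builds are reproducible and upgrades are deliberate.
FROM debian:12.5
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential=12.9 \
 && rm -rf /var/lib/apt/lists/*
```

```sh
# Build the build image and push it to your registry.
docker build -t registry.example.com/myteam/build-image:1.0 .
docker push registry.example.com/myteam/build-image:1.0
```

The application can then use a multi-stage `Dockerfile` that compiles and tests inside the pinned build image but copies only the resulting artifact into a slim runtime image:

```dockerfile
# Stage 1: build and test using the exact build-image version.
FROM registry.example.com/myteam/build-image:1.0 AS build
WORKDIR /src
COPY . .
RUN make && make test

# Stage 2: runtime image without the build dependencies.
FROM debian:12.5-slim
COPY --from=build /src/bin/my-app /usr/local/bin/my-app
CMD ["my-app"]
```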
This can be easier if your CD tool has support for Docker, but that is by no means necessary. We also recommend deploying to a container orchestration system such as Kubernetes in most cases.
Half the work has already been done, since the build process creates and pushes an image containing the application and its environment.
1. Use `docker run` on the application server with the image and tag that were pushed in the previous section (after stopping any existing container). Ideally your application accepts its configuration via environment variables, in which case you use the `-e` argument to specify those values depending on which stage is being deployed. If a configuration file is used, write it to the host file system and then use the `-v` argument to mount it to the correct path in the container.
2. If deploying to a container orchestration system such as Kubernetes, update the deployment to use the newly pushed image and tag (with `kubectl set image`, a Helm chart, or better yet, a kustomization). A sketch of both options follows below.
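A rough sketch of those two options (the container name, image tag, paths, and environment variables are hypothetical):

```sh
# Without an orchestrator: replace the running container with the new image,
# passing configuration via environment variables and a mounted config file.
docker stop my-app && docker rm my-app
docker run -d --name my-app \
    -e APP_ENV=production \
    -v /etc/my-app/config.yaml:/app/config.yaml:ro \
    registry.example.com/myteam/my-app:1.2.3

# On Kubernetes: point the existing Deployment at the new image tag.
kubectl set image deployment/my-app my-app=registry.example.com/myteam/my-app:1.2.3
```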
Once deployed, tools such as Prometheus are well suited to Docker container monitoring and alerting, but this can be plugged into existing monitoring systems as well.
FP Complete has implemented this kind of DevOps workflow, and significantly more complex ones, for many clients and would love to count you among them! See our Devops Services page.
For more information, see How to secure the container lifecycle and Containerizing a legacy application: an overview.