Popularized by Docker, containerization is a mechanism for packaging an application together with its full set of system dependencies to create a reproducible environment. Containerization is similar to using a virtual machine, but has some distinct advantages:
- It’s easier to share system resources (especially RAM) with the host OS
- It requires less overall CPU to run (since it shares a kernel with the host)
- It starts up faster
- It’s typically easier to share files and network interfaces with the host
The downside of containerization is that it’s not a complete silo like virtualization, since you’re still sharing some of the host OS’s components. That said, for many common workflows containerization provides more than enough functionality.
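For a concrete feel, here's a minimal session, assuming Docker is installed (the choice of image is arbitrary):

```
# Pull the ubuntu:22.04 image if needed and start an interactive shell
# in a fresh container; --rm deletes the container when the shell exits
$ docker run --rm -it ubuntu:22.04 bash

# Inside the container, this prints the *host's* kernel version:
# the container shares the host kernel instead of booting its own
root@container:/# uname -r
```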
This article talks at a high level about some of the workflows you'd want to consider containerization for. We have many more detailed blog posts as well; we'll link to some of them below.
Deployment
The most common use of containers is deployment. In a non-container world, deployment typically involves significant configuration of server machines, potentially with configuration management tools like Chef and Puppet. This leaves lots of room for your servers to end up in an unexpected state.
With container-based deployment, your CI pipeline defines a complete formula for creating a pristine image from scratch. Your CI pipeline can further test this image to make sure it’s working as expected. Then, instead of reconfiguring your servers with updated system libraries, static assets, and an executable, you can atomically swap out which image is running.
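As a sketch of what such a formula can look like, here's a minimal multi-stage Dockerfile; the application name, base images, and paths are hypothetical stand-ins:

```
# Build stage: compile the application in an image with the full toolchain
# (Rust is just an example; the same pattern works for other compiled languages)
FROM rust:1.70 AS build
WORKDIR /src
COPY . .
RUN cargo build --release

# Runtime stage: copy only the compiled binary into a slim base image,
# leaving the build toolchain out of the deployed artifact
FROM debian:bookworm-slim
COPY --from=build /src/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

Your CI pipeline would then build and tag this image (for example, with the commit hash) and push it to a registry for deployment.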
Orchestration tools like Kubernetes make it easy to configure a cluster to run multiple copies of your images for scalability and fault tolerance, and to perform red/black deployments to upgrade your entire cluster to a new version. They also make it possible to roll back to a previous version of the image in case a bug is discovered.
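To sketch what that looks like, here's a minimal Kubernetes Deployment manifest; the application name, labels, and registry URL are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # hypothetical application name
spec:
  replicas: 3            # run three copies for scalability and fault tolerance
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:v2   # hypothetical registry and tag
```

Upgrading the cluster means pointing the Deployment at a new image tag, and `kubectl rollout undo deployment/myapp` rolls back to the previous revision.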
Some of our blog posts on deploying with containers:
- Deploying Postgres based Yesod web application to Kubernetes using Helm
- Deploying Haskell Apps with Kubernetes
- Deploying Rust with Docker and Kubernetes
Development
Let’s say you’re working on an application. It depends on some system libraries, uses some code generation tools, and needs a locally running database. Getting this set up manually on Linux can be a bit of a pain, but not too bad. Now consider some pain points:
- Which distribution did you get it set up on? What if another team member wants to use a different distro?
- What if another project needs different versions of the tools or system libraries?
- What if you need to work on Windows or OS X?
Setting up development environments can be a time-consuming prospect. When onboarding new team members, it can represent a significant delay. And we often end up in a situation where a new team member pushes some code that “works for them” but breaks on someone else’s machine.
Containers can be a solution to this. Instead of installing all the appropriate tools on your operating system, the typical workflow is:
- A DevOps team member sets up a CI job to build a Docker image with all necessary tools and libraries
- The project’s CI build uses this Docker image for building and testing the project
- On your local machine, you perform builds inside a Docker container using the same image used on CI
Docker’s built-in support for bind mounting and sharing network interfaces makes this kind of workflow convenient. And build tools like Stack can automate this process even more.
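For example, a local build inside the shared image might look like the following, where the image name is a hypothetical stand-in for whatever your CI job pushes:

```
# Bind mount the project directory into the container and run the build
# there, using the exact image the CI pipeline uses
$ docker run --rm -it \
    -v "$(pwd)":/src \
    -w /src \
    registry.example.com/myapp-build:latest \
    make test
```

With Stack, adding `docker: enable: true` to your stack.yaml makes Stack perform builds inside a Docker container automatically.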
Learn more
Interested in learning more about how to use containerization on your team? Learn more about FP Complete’s training services.