FP Complete has been working with containerization (or OS-level virtualization) since before it was popularized by Docker. What follows is a brief history of how and why we got started using containers, and how our use of containerization has evolved as new technology has emerged.
Our first foray into containerization started at the beginning of the company, when we were building a web-based integrated development environment for Haskell. We needed a secure and cost-effective way to compile and run Haskell code on the server side. While giving each active user their own virtual machine with dedicated CPU and memory would have satisfied the first requirement (security), it would have been far from cost effective. GHC, the de facto standard Haskell compiler, is notoriously resource-hungry, so each VM would have had to be quite large (it’s not uncommon to need 4 GB or more of RAM to compile a fairly straightforward piece of software). We needed a way to share CPU and memory resources between multiple users securely, and to shift load around a cluster of virtual machines to keep usage balanced and prevent one heavy user from degrading the experience of other users on the same VM. This sounds like a job for container orchestration! Unfortunately, Docker didn’t exist yet, let alone Kubernetes. The state of the art for Linux containers at the time was LXC, which was mostly a collection of shell scripts that helped with using the Linux kernel features that underlie all Linux container solutions, but at a much lower level than Docker. On top of this we built everything we needed: distribution of “images” consisting of a base filesystem plus an overlay for local changes, isolated container networks, and the ability to shift load based on VM and container utilization. In other words, we built many of the things Docker and Kubernetes do now, but tailored specifically to our application’s needs.
When Docker came on the scene, we embraced it despite some early growing pains, since it was much easier to use and more general-purpose than our “bespoke” system, and we thought it likely that it would soon become a de facto standard, which is exactly what happened. For internal and customer solutions, Docker allowed us to create much more nimble and efficient deployment solutions that satisfied the requirement for immutable infrastructure. Prior to Docker, we achieved immutability by building VM images and spinning up virtual machines, a much slower and heavier process than building a Docker image and running it on an already-provisioned VM. Docker also allowed us to run multiple applications on a single VM, isolated so that they could not interfere with one another.
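To give a concrete sense of how lightweight that workflow is, here is a minimal sketch of a Dockerfile for a Haskell web service. This is not our actual configuration; the base image tags, the executable name, and the port are hypothetical, and the details of a real build would differ.

```dockerfile
# Hypothetical multi-stage build for a Haskell web service.
# Image tags, executable name, and port are illustrative only.
FROM haskell:9.4 AS build
WORKDIR /src
COPY . .
# Compile the project and copy the resulting binary to /out
RUN stack build --copy-bins --local-bin-path /out

# Ship only the compiled binary on a small runtime image
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libgmp10 && rm -rf /var/lib/apt/lists/*
COPY --from=build /out/my-web-app /usr/local/bin/my-web-app
EXPOSE 8080
CMD ["my-web-app"]
```

Building this with `docker build` produces an immutable image that any Docker-equipped VM can start with `docker run`, with no need to bake and boot a fresh VM image for each release.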
Finally, Kubernetes arrived. While it was not the first orchestration platform, it was the first that wholeheartedly standardized on Docker containers. Once again we embraced it, despite some early growing pains, due to its ease of use, multi-cloud support, fast pace of improvement, and the backing of a major company (Google). We once again bet that Kubernetes would become the de facto standard, which is again exactly what happened. With Kubernetes, instead of having to think about which VM a container will run on, we can have a cluster of general-purpose nodes and let the orchestrator worry about what runs where. This lets us squeeze yet more efficiency out of our resources. Thanks to its ease of use and built-in support for common rollout strategies, we can give developers the ability to deploy their apps directly, and since it is so easy to tie into CI/CD pipelines, we can drastically simplify automated deployment processes.
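As a rough illustration of those built-in rollout strategies, here is a hypothetical Deployment manifest (the names, image, replica count, and port are invented for the example, not taken from a real project). Kubernetes’ rolling-update strategy replaces pods gradually, so a new version can go out without downtime.

```yaml
# Hypothetical Deployment; names, image, and counts are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep full capacity during the rollout
      maxSurge: 1         # add at most one extra pod at a time
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: registry.example.com/my-web-app:1.0.0
          ports:
            - containerPort: 8080
```

A CI/CD pipeline can then roll out a new version with a single command such as `kubectl set image deployment/my-web-app my-web-app=registry.example.com/my-web-app:1.0.1`, and Kubernetes handles the gradual replacement of pods.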
Going forward, we continue to keep up with the latest developments in containerization and are constantly evaluating new and alternative technologies to stay at the forefront of DevOps.