At the start of a new year, it is traditional to summarize what we have learned over the past year and apply those lessons to the coming year. This past year has been an outlier in so many ways. Hence, the usual platitudes about lessons learned may seem out of place. Nonetheless, people in a leadership position do not have the luxury of ignoring the difficult situation we all face; they have an urgent responsibility to move their organization to safety in these very trying times.
This article will focus on one specific resource that organizational leadership must manage extraordinarily well: information technology. Every year, pundits point out that IT is more important than ever before. This past year drove home how true this tired cliché really is. Organizational leaders don’t need to hear this repeated. They want and need to hear from us technologists the answer to “how do we make this most vital tool truly effective?”
Before providing an answer, we need to define what effective IT means. To do that, let’s look at some of the problems IT faced during the 2020 pandemic:
To meet these challenges, IT programs and projects that might previously have taken months or even years to implement had to be completed in weeks or days. This was necessary to ensure that all organizational operations, both inward- and outward-facing, continued to function at all, let alone smoothly.
Hence, in 2021 effective IT means IT that can build applications that are:
- scalable instantly and reliably,
- adaptable to rapidly changing requirements and deployable almost instantaneously, and
- secure and protective of privacy.
These measures of effectiveness are not new. What is new is that they are no longer a “nice to have” goal but a matter of organizational survival.
The good news is that how to achieve this type of effectiveness is something that technologists have been talking about for the past decade. The rest of this article will show you how to stop talking about effective IT in your organization and start doing it now.
Let’s start with “scalable instantly and reliably.” By now, most of us have heard the metaphor that our IT applications need to be cattle, not pets. If we want extremely high reliability and scalability with minimum stress on IT resources, we don’t want our IT staff fiddling around, wasting time figuring out why an app stopped working on one particular pet VM. We want them to be able to immediately kill the non-functioning “cattle” and redeploy an exact replica to get the application right back up. That is precisely what containerized applications allow us to do.
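To make the metaphor concrete, here is a minimal sketch using the official Kubernetes Python client to create a Deployment of identical, replaceable replicas. The application name, container image, and namespace are placeholder assumptions; the point is that the cluster, not a person, keeps the herd at full strength.

```python
# A minimal sketch using the official Kubernetes Python client.
# The image, labels, and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()          # use the cluster credentials in ~/.kube/config
apps = client.AppsV1Api()

labels = {"app": "web-frontend"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,                # keep three identical "cattle" running at all times
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="web-frontend",
                    image="registry.example.com/web-frontend:1.0.0",
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# Kubernetes now owns the herd: delete any one pod and an identical
# replica is created from the same image, with no manual diagnosis.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Kill any one of those pods (kubectl delete pod <pod-name>) and Kubernetes immediately replaces it with an identical copy built from the same image; nobody has to nurse a sick pet back to health.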
Remember we said we want minimum stress on IT resources? That means we don’t want IT staff redeploying a new container every time an old one goes down. We want our systems to be self-healing; in other words, we want redeployments of failed applications to happen automatically. Even more than that, we want multiple containers running the exact same application to be deployed automatically to handle spikes in demand, and we want that deployment to scale back down automatically after peak demand passes to conserve compute resources. Better still, we want the underlying infrastructure resources themselves to scale up and down automatically to handle the peaks and valleys of demand.
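Continuing with the hypothetical web-frontend Deployment sketched above, the self-healing and autoscaling behavior is itself just configuration. The probe path, thresholds, and replica limits below are illustrative assumptions, not recommendations.

```python
# A sketch of the self-healing and autoscaling pieces for the hypothetical
# "web-frontend" Deployment created earlier.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
autoscaling = client.AutoscalingV1Api()

# Self-healing: a liveness probe lets Kubernetes detect a hung container
# and restart it automatically; nobody has to notice, diagnose, or redeploy.
probe_patch = {
    "spec": {"template": {"spec": {"containers": [{
        "name": "web-frontend",
        "livenessProbe": {
            "httpGet": {"path": "/healthz", "port": 8080},
            "initialDelaySeconds": 5,
            "periodSeconds": 10,
        },
    }]}}}
}
apps.patch_namespaced_deployment(
    name="web-frontend", namespace="default", body=probe_patch)

# Autoscaling: a HorizontalPodAutoscaler grows and shrinks the replica count
# with demand, here between 2 and 10 pods, targeting ~70% CPU utilization.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-frontend"),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```

Scaling the machines underneath the pods is normally delegated to the cloud provider’s cluster autoscaler or its managed equivalent, rather than handled in application code.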
Kubernetes is the tool that manages container “cattle,” and it can do all of the above and more. Despite several alternatives, whether in the cloud or on-premises, Kubernetes has become the de facto standard for container orchestration. All the major cloud and software vendors now fully support Kubernetes, precisely because it is not “owned” by any of them. This vendor neutrality, along with its rapid adoption, has driven enormous growth in the ancillary tooling available for Kubernetes.
In sum, containerizing applications and having Kubernetes orchestrate these containers is now the baseline tool requirement for effective IT. You can read more about the advantages of Kubernetes here.
Let’s now turn to the next measures of effective IT, “adaptable to rapidly changing requirements and deployable almost instantaneously.” We’ve all heard about the magic of continuous integration/continuous deployment (CI/CD). We are told that CI/CD is a technique developed by the millennial generation of tech giants. It allows organizations to continuously add new features to applications while remaining confident that the new and/or improved features work as defined without interfering with the rest of the application’s functioning.
While almost always lumped together, the two halves of CI/CD play different roles. The role of CI is integration, which in practice means integration testing: add some code and confirm that everything works just as before, except for the new functionality, which also works as described. Once CI has confirmed that we have a well-tested version of the application, it automatically packages that version up and stores it with a stamp indicating which version of our code it represents. The role of CD is deployment: getting our packaged application out into the world where it can be used.
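Stripped of any particular CI product, the CI half boils down to three steps: test, package, stamp. The sketch below expresses them as a plain Python script with placeholder names and commands; real CI servers describe the same steps declaratively in their own configuration formats, but the logic is the same.

```python
# A hypothetical sketch of the CI half: test, package, and stamp the result.
# The registry name is a placeholder; pytest, git, and docker stand in for
# whatever test runner and packaging tools a given project actually uses.
import subprocess

def run(*cmd: str) -> str:
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

def ci(image_repo: str) -> str:
    # 1. Integration: everything must still work, including the new code.
    subprocess.run(["pytest", "-q"], check=True)
    # 2. Package: build an immutable container image from the tested code,
    #    stamped with the commit it was built from.
    version = run("git", "rev-parse", "--short", "HEAD")
    tag = f"{image_repo}:{version}"
    subprocess.run(["docker", "build", "-t", tag, "."], check=True)
    # 3. Store: push the stamped artifact to a registry, ready for CD to deploy.
    subprocess.run(["docker", "push", tag], check=True)
    return tag

if __name__ == "__main__":
    print("published", ci("registry.example.com/web-frontend"))
```

The returned tag is the handoff point to CD: it names exactly which tested version of the code is ready to be deployed.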
Deployment can happen in many ways and in many places for different purposes, so it is useful to separate CI from CD. This lets us use a CD system tailored to the specific needs of our deployment environment, which, for our current purposes, is Kubernetes. There are multiple options, but the two most widely used CD tools for Kubernetes have joined together to create a CD system known as Argo CD. Best-of-breed CI/CD systems are based on the following critical DevOps principles:
Argo CD uses Git repositories to implement all these principles, so it is known as a GitOps tool. You can learn much more about Argo CD and its many advantages here.
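As a concrete illustration of the GitOps idea, here is a minimal, hypothetical sketch of registering an application with Argo CD through the Kubernetes API, assuming Argo CD is already installed in the conventional argocd namespace. The repository URL, path, and names are placeholders.

```python
# A minimal sketch of an Argo CD Application, created via the Kubernetes API.
# The Git URL, path, and names below are placeholders.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "web-frontend", "namespace": "argocd"},
    "spec": {
        "project": "default",
        # The Git repository is the single source of truth for what should run.
        "source": {
            "repoURL": "https://github.com/example-org/web-frontend-deploy.git",
            "targetRevision": "main",
            "path": "k8s",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "default",
        },
        # Argo CD keeps the cluster in line with the repository: drift is
        # corrected (selfHeal) and manifests removed from Git are pruned.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

custom.create_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1",
    namespace="argocd", plural="applications", body=app)
```

From that point on, promoting a change means merging a commit: Argo CD continuously compares the cluster against the Git repository and brings the cluster back in line whenever the two drift apart.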
The last measure of effective IT is not something new. Everyone knows security and privacy are critical. What is new is the level of threat organizations face when the whole world is interconnected. The recent SolarWinds episode demonstrates that IT is now literally the front line for warring nations.
Businesses don’t have the luxury of disconnecting from the internet — but security and privacy access controls can make using online systems difficult and even unpleasant. The result is that organizational staff often take shortcuts to avoid the barriers and save themselves time. Unfortunately, these shortcuts then serve as vectors of attack for hackers.
The key lesson is that making security easy to use is critically important in making security effective. FP Complete has created a security tool for Kubernetes (and other platforms) that makes security much easier to implement and use. Among its key features:
You can learn more about our authentication tool here.
In our shortlist of measures that define effective IT in 2021, we left out all the standard measures that have long been best practice, for example:
By now, it should be evident that meeting these measures will require you to add even more tools to your Kubernetes cluster. We all know from experience that choosing the right tools and integrating many different tools is extremely time-consuming and expensive. We seem to be stuck in an unsolvable paradox: on the one hand, we need a multi-headed hydra of a tool to implement effective IT in our organizations; on the other hand, building that tool is so time-consuming that we would end up failing to be effective in exactly the way we need to be.
Wouldn’t it be nice if there were an off-the-shelf tool with all the “batteries included”? We need a tool that lets us easily deploy Kubernetes clusters both in the cloud and on-premises, and that includes and integrates the many tools required to meet all the measures of effective IT we’ve discussed so far. Fortunately, there is such a tool: FP Complete’s Kube360. You can learn more about it here.