Our UDig Software Engineering team has always been a cloud-first engineering group, but while inventorying our internal application and cloud usage we were disappointed by our spend and over-provisioning of EC2 instances on AWS. Through research and experience we knew that if we could containerize our applications with Docker, we'd be able to consolidate servers and maximize their output. Before we go any deeper, let's look at why this is the case.

First off, you've likely heard from teams leveraging Docker that it is much faster than virtual machines (VMs) or other setups. While this may be true for some applications, what I think users are anecdotally recognizing is the efficiency of Docker and how it allows applications to use more of the horsepower their servers carry. One key reason is that, unlike virtual machines that are each allocated fixed CPU and memory on a piece of hardware, Docker doesn't require these limits up front. Docker allows the containers that need the most horsepower to draw more when they need it, while containers not in use or between runs release their resources and idle, consuming little to nothing on the server. This fundamental shift in virtualization is one of the many benefits of Docker and containerized applications.
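This elastic behavior can be seen in a Compose file. The sketch below uses hypothetical service and image names; by default both containers share the full host, and resource limits are applied only where you opt in:

```yaml
# Hypothetical docker-compose.yml: containers share host resources by default;
# an optional limit caps only the service you choose to constrain.
services:
  api:
    image: example/api:latest      # hypothetical image name
    deploy:
      resources:
        limits:
          cpus: "2.0"              # opt-in hard ceiling for this container
          memory: 512M
  batch-job:
    image: example/batch:latest    # hypothetical image name
    # no limits set: bursts when running, idles near zero between runs
```

Note the contrast with a VM, where the CPU and memory allocation is reserved whether the workload is busy or idle.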

So back to our UDig story and our journey to use Docker to containerize all our internal applications, services and batch jobs. The first part of our process was to catalog all our applications and capabilities and ensure that Docker was a viable home for them to live and run. Upon completing this exercise we had a list of applications mapped with any call-outs that were of concern or required further investigation. Fortunately for us, all our applications could make the transition; only a couple needed adjustments to optimize them for running within containers. With this information in hand we were able to quickly map out an approach and begin setting up proper container repos and CI/CD pipeline builds, and ultimately deploying our applications to their new home. With this effort, we took 10 applications and consolidated them from 6 instances down to a single instance. The new instance was set up with more horsepower, but it cost less than half our previous spend on the multiple servers. All in all, our total AWS spend was cut in half.
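The post doesn't name our CI provider, but as a sketch of what a container build pipeline can look like, here is a minimal GitHub Actions workflow (the registry path and repo name are hypothetical):

```yaml
# Hypothetical CI workflow: build the image on each push to main and
# push it to a container registry.
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/example/app:latest   # hypothetical image path
```

Once images land in a registry like this, deploying to the consolidated host is a matter of pulling and running the new tag.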

How Docker can leverage your hardware more efficiently than VMs becomes easier to understand once you review the diagram below. By simply removing the guest operating system from the equation, we allow the Docker engine to handle all sub-processes (applications) with maximum efficiency. Containers share the entire machine's resources, and administrators can prioritize containers over one another to ensure the most critical services are available and responsive when you need them most. This brings up another point: resiliency. By building containers with a health check, Docker can automatically respawn hung or failed processes, allowing developers to handle auto-recovery scenarios and more.
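The health check mentioned above is declared right in the image. A minimal sketch, assuming an HTTP service that answers on port 80:

```dockerfile
# Hypothetical Dockerfile fragment: Docker polls this command on a schedule
# and marks the container unhealthy after three consecutive failures.
FROM nginx:alpine
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost/ || exit 1
```

Paired with a restart policy (for example, `docker run --restart=on-failure`), an unhealthy or crashed container can be brought back without an operator intervening.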


Stack comparison between Virtual Machines (left) and Docker (right) 

Benefits of Docker 

Docker simplifies your deployment strategies through the ease of: 

Automation 

Docker allows for the use of many administrative tools. Pick a flavor, but we'll recommend Kubernetes. Kubernetes allows for remote administration of Docker clusters, nodes, networks, volumes, containers and more; basically, everything within the Docker ecosystem can be managed by a Kubernetes admin. Another huge benefit is Kubernetes control platforms such as Kublr, Portainer and Google Kubernetes Engine on GCP. These platforms allow for quick creation of clusters within your favorite cloud provider, enabling almost instant access to a production-ready environment.
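Kubernetes administration is declarative: you describe the desired state and the cluster keeps reality matched to it. A minimal sketch of a Deployment, with hypothetical names and image:

```yaml
# Hypothetical Kubernetes Deployment: run three identical copies of a service
# and replace any that die.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: api
          image: example/api:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Applying this manifest (`kubectl apply -f deployment.yaml`) is all the "deployment automation" a basic rollout needs.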

Scalability 

Docker has tremendous scaling abilities: at the container resource level, in the number of container instances per node, and even across clusters. This means your administrators can fine-tune environments to provide maximum benefit to users and clients.
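In Kubernetes, replica-level scaling can even be automated. A sketch of a HorizontalPodAutoscaler, assuming a Deployment named `example-api` (hypothetical) already exists:

```yaml
# Hypothetical autoscaler: grow from 2 to 10 replicas as average CPU
# utilization crosses 70%, and shrink back when load drops.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```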

Concurrency 

As mentioned above under scalability, running containers across clusters (read: geographic regions) enables fail-safe measures if any one node in a cluster fails.

Recovery 

If it hasn't become clear yet, Docker's ability to automate all aspects of an environment allows for rapid recovery strategies when multi-region outages occur. Moreover, the ability to leverage health checks and even set default restart parameters for each container or service ensures total control over how your environments handle anything from hung processes to regional power outages.
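At the single-host level, those restart parameters and health checks live in the Compose file. A sketch, with hypothetical service names and an assumed `/healthz` endpoint:

```yaml
# Hypothetical Compose fragment: restart on failure, and probe the service
# so a hung process is detected, not just a crashed one.
services:
  worker:
    image: example/worker:latest     # hypothetical image
    restart: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]  # assumed endpoint
      interval: 30s
      timeout: 5s
      retries: 3
```

In Kubernetes the equivalent is a liveness probe on the container spec, which causes the kubelet to restart a container that stops answering.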

Now What? 

If we haven't sold you on Docker yet, we may never. But for those wondering how to get started, it begins with a proper application assessment and readiness exercise. In this effort we identify the environment requirements for each application; whether it is Java, Python, PHP or something else, we want to ensure Docker can handle the framework. The good news: apart from the legacy .NET Framework, which requires a Windows-only image, almost every major web technology can run within Docker. This is great news and another reason we're seeing such high adoption rates with Docker.
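For most of those stacks, the readiness work ends in a short Dockerfile built on an official base image. A sketch for a Python web app (the gunicorn entry point `app:app` is an assumption; Java, PHP and others have equivalent official images):

```dockerfile
# Hypothetical Dockerfile for a Python web app on an official base image.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
# Assumes gunicorn is in requirements.txt and the app object lives in app.py
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```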

Choosing to leverage Docker is a cultural technology change, one we think can benefit organizations of any size through its simplicity, ease of management and enterprise-grade capabilities. Our Software Engineering teams build and deploy critical business applications with Docker all the time. If you're struggling to embrace this new world of hosting and DevOps, we have professionals capable of demystifying the process and helping you modernize applications with minimal ramp-up.

Andrew Duncan
