The Ways of the Past
If you've been living outside of the production burn cycle, you may not know about the fad of app containers and container orchestration. If you have been in it, though, you may have forgotten why we use them in the first place.
In this post, continuing the #StrikingABalance series, we will explore how we would run a service on legacy infrastructure, shining a light on the toil that app containers hide and the genuine simplicity that container orchestration brings.
Container orchestration is an answer to a problem created by the introduction of app containers. App containers are an excellent way of producing reproducible artifacts, but with their use comes the need for something to run those artifacts. While running one locally is quite trivial, especially with tools like docker-compose, running a container image in any form on the cloud most certainly is not. The reason is fairly easy to describe but not so easy to implement: you need a way to make your deployment ephemeral and idempotent.
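To see how trivial the local case is, here is a minimal docker-compose sketch; the service name, image, and port are placeholders for illustration:

```yaml
# docker-compose.yml -- a minimal local run of a single app container.
# "myapp" and the registry/image names are hypothetical.
services:
  myapp:
    image: registry.example.com/myapp:1.0.0
    ports:
      - "8080:8080"
    restart: unless-stopped
```

A single `docker-compose up -d` brings the service up and keeps it restarting. That convenience is exactly what evaporates the moment you target a cloud VM.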
Let's take the approach of running an app container without container orchestration.
The first thing you have to account for is how your app container is going to run. Let's say you will use a VM instance to run the container. Yet before you can even answer how you will set up that instance, you need to figure out what is going to run the container image consistently. The easiest approach here would probably be to use an init daemon: a simple systemd service unit file to keep the app container running can suffice, letting you retrieve logs and check the status of the service fairly quickly.
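A minimal sketch of such a unit file, assuming Docker as the container runtime and `myapp` as a placeholder service and image name:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
# Clear any stale container, then run the image in the foreground
# so systemd can supervise it and restart it on failure.
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp registry.example.com/myapp:1.0.0
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

With this in place, `systemctl status myapp` and `journalctl -u myapp` give you the quick status and log access mentioned above.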
Now, back to how the app container's runtime will function. You need to provision the VM instance before you can reach the stage of running the app container under systemd. A simple Bash script could work here, but remember, the end goal is something idempotent and ephemeral. If your VM instance shuts down, you need a way to get the setup back exactly as it was before, and if you introduce changes, you need proper configuration drift resolution. Writing an idempotent Bash script is non-trivial; the sight of one could probably make a grown man cry.
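To give a taste of what "idempotent" means in practice, here is a sketch of the pattern such a script must repeat for every single resource it touches: check the current state, act only on a difference, and report whether anything changed. The file names and the systemd usage in the comment are illustrative, not a real deployment:

```shell
#!/usr/bin/env bash
set -euo pipefail

# ensure_file SRC DEST: copy SRC over DEST only when the contents differ.
# Prints "changed" or "unchanged" so the caller knows whether to react
# (for example, reloading systemd after a unit file actually changes).
ensure_file() {
  if cmp -s "$1" "$2" 2>/dev/null; then
    echo unchanged
  else
    cp "$1" "$2"
    echo changed
  fi
}

# Illustrative usage (not run here):
#   if [ "$(ensure_file myapp.service /etc/systemd/system/myapp.service)" = changed ]; then
#     systemctl daemon-reload && systemctl restart myapp.service
#   fi
```

Now imagine writing that check-then-act dance, by hand, for packages, users, firewall rules, and every file on the box. That is the script that makes grown men cry.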
Almost certainly, the second complexity introduced is a configuration manager like Ansible, Chef, or Salt to cover the provisioning of the instance. Take note: once you've completed your provisioning, you are only halfway to your end goal of running an app container. The next question is how you are going to deliver the app container image to your runtime. The options can grow quite large, but to keep it simple, while skipping massive chunks of the implementation details, you can create a continuous delivery pipeline to run your configuration manager.
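With a configuration manager, the check-then-act pattern comes for free. A sketch of what the provisioning half might look like in Ansible, where the host group, package, and unit file names are all placeholders:

```yaml
# provision.yml -- idempotent by construction: each module checks
# state before changing anything, so re-runs converge on the same result.
- hosts: app_servers
  become: true
  tasks:
    - name: Install the container runtime
      ansible.builtin.package:
        name: docker.io
        state: present

    - name: Install the systemd unit for the app container
      ansible.builtin.copy:
        src: myapp.service
        dest: /etc/systemd/system/myapp.service
      notify: restart myapp

  handlers:
    - name: restart myapp
      ansible.builtin.systemd:
        name: myapp
        state: restarted
        daemon_reload: true
```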
The continuous delivery pipeline would run the configuration manager, which fetches your container image, applies the systemd unit file, and starts up the service. One of the requirements to make this systemd runtime work is a container registry, which means yet another VM instance running said registry. You will also need a CI/CD service, as hinted before, if you don't already have one.
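The delivery half could then be wired up in whatever CI/CD service you run. A sketch in GitHub Actions syntax, where the registry, inventory, and playbook names are hypothetical:

```yaml
# .github/workflows/deploy.yml -- on every push to main, build and push
# the image, then let the configuration manager converge the VM onto it.
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the container image
        run: |
          docker build -t registry.example.com/myapp:${GITHUB_SHA} .
          docker push registry.example.com/myapp:${GITHUB_SHA}
      - name: Converge the VM
        run: ansible-playbook -i inventory.ini provision.yml
```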
Lastly, you need to manage all of the VM instances you spun up for that one single app container you wanted to run on the cloud. You are now managing an entire operating system to run that single app container, plus a fleet of VM instances to support its runtime. The complexity doesn't stop there: you also need to track many other concerns, like SSH privileges, security group isolation, system resource management, and service management.
The past way worked when we required only a few instances to run our services. It becomes completely unmanageable when you have a full fleet to run. Container orchestration lets you take all of the complexity described here and fold it into a single, standard structure defining how an app container should run, allowing for better abstractions while keeping the complexity built up before at a hopeful minimum.
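For contrast, everything above (the unit file, the provisioning script, the delivery glue) collapses under an orchestrator into one declarative definition. A sketch in Kubernetes terms, with placeholder names and image:

```yaml
# deployment.yaml -- the orchestrator owns scheduling, restarts, and
# drift resolution; you only declare the desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0.0
          ports:
            - containerPort: 8080
```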