
Terraform is not always the right hammer

As a staff-level support engineer, one of my responsibilities is to empower my teammates to better reproduce customer environments.

CircleCI does offer an on-prem solution, CircleCI Server, which ships as a Helm chart.

Beyond a Kubernetes cluster (e.g., AWS EKS), you would also need to provision external object stores (e.g., AWS S3), IAM entities (e.g., AWS IAM users/roles), and so on.
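
To give a rough idea of what those external pieces look like, here is a minimal Terraform sketch of an S3 bucket plus an IAM user that can reach it. The bucket name, user name, region, and policy below are hypothetical placeholders; an actual CircleCI Server installation needs more than this.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

# Object store for the installation (bucket name is a placeholder)
resource "aws_s3_bucket" "server_data" {
  bucket = "example-circleci-server-data"
}

# IAM user the application would use to reach the bucket (also a placeholder)
resource "aws_iam_user" "server" {
  name = "example-circleci-server"
}

resource "aws_iam_user_policy" "server_s3" {
  name = "example-circleci-server-s3"
  user = aws_iam_user.server.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
      Resource = [
        aws_s3_bucket.server_data.arn,
        "${aws_s3_bucket.server_data.arn}/*",
      ]
    }]
  })
}
```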

My original goal was to provision everything within one Terraform module. By everything, I mean the EKS cluster, the S3 bucket, the Helm release, and so on.
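
For context, that all-in-one approach would have meant declaring the Helm release right next to the infrastructure it runs on, along these lines. The release name, namespace, chart repository, and values file are placeholders, not CircleCI's actual chart coordinates:

```hcl
provider "helm" {
  kubernetes {
    # Assumes a kubeconfig for the EKS cluster already exists on this machine.
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "circleci_server" {
  name             = "circleci-server" # placeholder release name
  namespace        = "circleci-server"
  create_namespace = true

  repository = "https://example.com/charts" # placeholder chart repository
  chart      = "circleci-server"            # placeholder chart name

  values = [file("${path.module}/values.yaml")]
}
```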

However, as the design progressed, I realized this was not ideal in several ways:

  1. Helm releases are about application deployments, while Terraform applies are about infrastructure deployments. Piggybacking a [Helm release]() onto infrastructure changes felt odd to me. (This article, specifically anti-pattern 4, describes the conflict better than I can.)
  2. Terraform's philosophy requires resources managed by Terraform to be managed strictly within Terraform. As an administrator, this means any update I want to make to my Helm release has to again go through the Terraform module. The documented notes on upgrades suggest that any detected drift can produce unintended changes if the administrator is not careful.
  3. I still want to define my EKS cluster with eksctl and YAML. There is indeed an eksctl provider for Terraform, but it nonetheless requires the eksctl binary to be present on the host machine; a minimal example of such a cluster definition is sketched right after this list.
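
For reference, this is the sort of cluster definition I would rather keep as plain eksctl YAML than wrap in Terraform. The cluster name, region, and node sizing are hypothetical:

```yaml
# Hypothetical cluster definition, applied with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: circleci-server-test   # placeholder cluster name
  region: us-east-1            # placeholder region

managedNodeGroups:
  - name: default
    instanceType: m5.xlarge
    desiredCapacity: 3
```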

The various discussions on Reddit (1, 2) also convinced me that it was better to avoid shoehorning all the setup into one Terraform module.

I ended up splitting up the setup such that:

  1. the EKS cluster is defined with eksctl and YAML,
  2. the supporting AWS resources (the S3 bucket, IAM entities, and so on) live in their own Terraform modules, and
  3. the CircleCI Server Helm release is managed with Helm directly.

This means the administrator now has to manage four or more Terraform modules instead of one. However, I find this split easier to manage and reason about.
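
To make that concrete, the layout I have in mind looks roughly like this (the directory names are purely illustrative):

```
cluster.yaml            # eksctl ClusterConfig, applied with eksctl directly
terraform/
  object-storage/       # S3 bucket(s), its own module and state
  iam/                  # IAM users/roles and policies
  ...                   # further modules, one per concern
helm/
  values.yaml           # Helm values, installed/upgraded with helm directly
```

Each Terraform directory carries its own state, so a change to one concern does not produce an unexpected plan against the others.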

#terraform #helm #kubernetes #iac
