LocalDev on a budget: Integration tests!

I've spent quite a bit of time thinking about what I want to accomplish with this blog. After a lot of back and forth and mental gymnastics with myself, I think the purpose here is to demystify this work and present options that give us all space to avoid cloud/saas vendors with expensive plans and/or suspect privacy policies. This stuff can be cheap, easy, and ethical, and I'd like to demonstrate how.

In recent weeks, I've been playing around with iterative software development. What I mean here is I want to build a comprehensive project where I can run a local and isolated test suite on my host machine (i.e. my laptop) over and over while I iterate on some code I'm writing. This is usually referred to as the “inner loop” of software development feedback (vs the “outer loop” where, once the code is committed to a forge, a continuous integration system runs a sequence of checks). The inner loop never leaves the developer's host machine and should be easy to run, quick to finish, and return the results to the developer directly. Programs like air and skaffold exist to serve this loop, and there are quite a few options out there already (usually tied to a particular programming language).

For my use-case, I am writing a little cli in golang that sends some text to an encrypted matrix channel. To be honest, I don't think of myself as much of a software developer. I've spent most of my career in various ops roles like sysadmin and devops/sre, so I never really got comfortable with unit tests in the traditional sense. They serve a great function in validating the business logic in the code, but for actually ensuring the software can run successfully in a larger infrastructure ecosystem, I'm generally disappointed with what unit tests bring to the table. I do like software testing frameworks and how they make it fairly easy to manage a series of tests inside a project. So what I'd like to do is use one of these frameworks but focus on providing real integration tests that run against real resources, all while being ephemeral: light enough to destroy, recreate, and run repeatedly on a developer's host machine. Pretty tall order, eh?

For example, if I'm building software that accesses a database, then I want my tests to run against that real database. For my use-case, I want my tests to run against a fully functioning matrix server. Docker really helps here because most vendors these days already provide dockerized versions of their apps, and it doesn't take much effort to run things in docker, nor to start, destroy, and restart them. And if a developer can identify the basic needs of the app, then they can focus on running only the infrastructure that directly touches it. A backend service isn't necessarily going to require the apps that power a frontend service, so we don't need to consider those when building our test infrastructure.

Another cool thing about using docker here is that docker's image builds are powered by a daemon called buildkitd. On the surface, buildkit is an engine for building container images. It takes advantage of image layering to provide parallelism and caching, and it models a sequence of commands as a dag (directed acyclic graph) to calculate the most performant order to run them in. While buildkit technically exists to build container images, it can also be used to run arbitrary commands from inside a container. This means buildkit can provide an environment, on the user's host machine yet isolated from it, that can be destroyed and rebuilt fairly quickly. Instead of using a series of shell scripts or makefiles to manage resources that run directly on your host machine, you can use buildkit to manage the entire lifecycle of your test resources.

Okay, so how do we do this then? Buildkit is still fairly new so there aren't a lot of client-side options available (and I'm not quite ready to learn buildkit's llb language just yet). So far, I've found dagger and earthly. I spent some time playing around with dagger but found their use of cue as the scripting language kind of frustrating. Earthly, on the other hand, includes an intuitive language (it's very similar to the language used to build a Dockerfile). It was pretty easy for me to get started with earthly and once I did, I never looked back. Note: it's a bit beyond the scope of this article to explain earthly and its syntax but rest assured, they have excellent documentation on their website.
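
To give a taste of the syntax (a minimal sketch, not the actual Earthfile from my trix repo), an earthly target reads like a named Dockerfile recipe:

VERSION 0.6

build:
  FROM golang:1.19-bullseye
  WORKDIR /src
  COPY . .
  RUN go build -o trix .
  SAVE ARTIFACT trix

Running earthly +build on the host executes each step inside buildkit and saves the resulting binary as an artifact.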

To put all of this together, I used earthly with golang's testing package and two custom docker images I built myself. (All of this code lives in my trix repo at the time of this writing)

Golang Testing Package: I take advantage of the TestMain function to create setup and teardown code blocks to configure my matrix host before the tests run against it. The general syntax looks something like

func TestMain(m *testing.M) {
	setUp()
	retCode := m.Run()
	tearDown()
	os.Exit(retCode)
}

where the setUp() and tearDown() functions are defined in my test file and m.Run() executes the tests. For my purposes, the setup function logs my admin user into the matrix host, creates the necessary rooms, joins the admin user to those rooms, then listens for new messages. The tests themselves use golang's os/exec package to run my newly built client binary as a non-admin bot user. The tests pass when my admin user successfully decrypts the test messages and verifies the messages are identical to the ones sent by the bot user.
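
A rough sketch of what one of these tests looks like (the subcommand, flags, and the waitForMessage helper are hypothetical stand-ins for my actual code, and the usual testing and os/exec imports are assumed):

func TestBotSendsMessage(t *testing.T) {
	msg := "hello from the bot"

	// run the freshly built client binary as the non-admin bot user
	cmd := exec.Command("./trix", "send", "--user", "bot", "--pass", "bot", msg)
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("client failed: %v\n%s", err, out)
	}

	// setUp() left the admin user listening for new messages;
	// waitForMessage blocks until the admin decrypts the next one
	if got := waitForMessage(); got != msg {
		t.Errorf("admin saw %q, want %q", got, msg)
	}
}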

Earthly: Before the golang test suite runs, my Earthfile builds the client binary from the current state of my codebase on my host machine. After that, it will bootstrap my test matrix host and execute my test suite. In order to manage the matrix host, earthly uses docker in docker to run container images. The advantage to using the somewhat Inception-like idea of a docker daemon executing inside another docker daemon is that we have a completely isolated environment that gets destroyed at the end of each test run. And earthly makes putting this all together fairly easy with its WITH DOCKER code block

WITH DOCKER --pull betch/trixtest:latest
  RUN docker run -d -p 8008:8008 betch/trixtest:latest > /dev/null && \
    go test -v
END

The above fires up a new matrix host listening on port 8008 and then executes the golang test suite. I'm only running one container here because the matrix host uses an embedded sqlite database. But the earthly config can be altered to start multiple docker containers (and even use a docker compose file), as sketched below.
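
Earthly's WITH DOCKER can take a compose file directly; something like this (file name hypothetical, since my project doesn't need it yet):

WITH DOCKER --compose docker-compose.yml
  RUN go test -v
END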

Trixtest custom docker image: Trixtest is my matrix test host. I have a separate Earthfile which I use to build it. Trixtest is based on the official matrix synapse docker image. My additions include initializing the host configuration files and adding the admin and non-admin users. To do this, I again take advantage of earthly's WITH DOCKER code block.

WITH DOCKER --pull matrixdotorg/synapse:latest
  RUN docker run --rm -v $(pwd):/data -e SYNAPSE_SERVER_NAME=trix.meh -e SYNAPSE_REPORT_STATS=no matrixdotorg/synapse:latest generate && \
    CT=$(docker run -d -v $(pwd):/data -p 8008:8008 matrixdotorg/synapse:latest) && sleep 5 && \
    docker exec ${CT} register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml -u trix -p trix -a && \
    docker exec ${CT} register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml -u bot -p bot --no-admin
END
SAVE ARTIFACT ./

With the above, I am running through the instructions from the README of the matrix synapse container image. But I am also mounting the matrix host's data directory to the local filesystem used by earthly. Once generated, the configuration files are saved to the local filesystem and I can set them as earthly artifacts, meaning I can reference them later on in my Earthfile. This is handy for the part where I build the final trixtest image because I can use the earthly COPY command to add these files to my final image.
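
That last part boils down to something like this (a sketch: +config stands in for whichever target holds the SAVE ARTIFACT above, and the image tag is illustrative):

trixtest:
  FROM matrixdotorg/synapse:latest
  # pull the generated homeserver config out of the +config target
  COPY +config/ /data
  SAVE IMAGE betch/trixtest:latest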

Godnd custom docker image: Godnd is my shortened way of saying “golang plus docker in docker”. And that's what it is. It's a custom container built from the official golang container image that also runs earthly's script for installing a docker daemon. This is the container image I use by default in my Earthfiles. If I don't need a specific image to run a set of commands, I'll use one of my custom godnd images (I have one per golang minor version). At the time of this writing, I'm using the official golang images based on debian bullseye and not alpine. This is only because the matrix encryption libraries are written in C (at some point in the future, I'll get this all working in alpine).
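
The Earthfile target for godnd amounts to something like this (a sketch: the tag is made up, and I've swapped earthly's install script for a plain apt install of docker since the exact script isn't the point here):

godnd:
  FROM golang:1.19-bullseye
  # a docker daemon inside the image is what lets WITH DOCKER blocks run containers
  RUN apt-get update && apt-get install -y docker.io
  SAVE IMAGE betch/godnd:1.19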

The sequence winds up looking something like....

codebase on my laptop -> earthly cli -> earthly runs commands in the godnd docker image -> apt installs the libolm C library -> go build compiles the trix client binary -> earthly runs the trixtest docker image -> go test runs the golang test suite

And that's kind of it, really. I put all these things together and they all get executed in the correct order when I run my single earthly command. I can make small code changes, re-run that one command, and get pretty speedy feedback directly in my laptop's terminal. This is especially true when running the earthly command over and over, thanks to buildkit's excellent caching of target steps: if buildkit doesn't detect a change in the files for a step, it will use the existing cache, which can significantly cut down the time spent waiting for results.

I've also expanded my earthly configs to include more ops-like areas. For example, I added steps to run security tests using gosec for code and anchore for my custom docker images. In the future, I will likely add steps for pruning older images from my container repositories. And because earthly uses buildkit under the hood, its design makes it fairly easy to offload the actual processing work to a more powerful remote host. Earthly provides its own buildkit docker container but can also work with a remote docker daemon setup. This is pretty handy for me because I am now working off of a four-year-old Dell XPS 13. The laptop is freakin' adorable and light and very very pink but has nowhere near the power of one of the newer macbooks available. But since not all of us are going to be able to rush out and get a new mac, a nice alternative is to use a cloud vendor to turn a more performant docker daemon on and off when you need it to run your earthly configs (I'll write up an article on this in the near future using cloud vendors that don't rhyme with blamazon, roogle, or mockosoft).
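
In the simplest case, and assuming docker's standard ssh transport plus a reachable build box (the host name here is made up), offloading can be as little as pointing docker at the remote machine before invoking earthly:

DOCKER_HOST=ssh://me@big-build-box earthly +test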

#localDev #tests #integration #budget #earthly #golang #buildkit #docker #continuous #software #development #dev #webdev #tech #matrix #cli

