Written on Wed 01 April 2020. Posted in Tutorials | Richard Walker

I've recently discovered a pattern for developing software in a repeatable and consistent manner. The 3 Musketeers is an approach for testing, building and deploying applications or configuration management from anywhere, the same way.

I have deep understanding of and experience with tools such as Puppet and Ansible, which made me cautious yet also curious. The most interesting, and for me initially the most questionable, element is how the pattern uses containers to run ad-hoc commands.

My exploration may have broadened my array of tooling approaches, but is it worth the effort? Let's find out.

The first hurdle with 3 Musketeers is that it is built around Docker and docker-compose. By using these tools, the approach can be "platform-independent". I don't care about catering for Windows; in reality, client computing should be a moot point. Have a Linux management instance in your cloud to work from. That way, so long as you can SSH to it, you're all set. Moreover, let's remind ourselves that containers are Linux. So why try to run containers on a Windows platform anyway?

I also had a look at substituting Docker with Podman. The sticking point is podman-compose, which, at the time of writing, doesn't cut it for this guide. On the one hand, that's a shame; on the other, it might provide a clue to the validity of the 3 Musketeer approach. I can't help but be sceptical of using Docker containers to run ad-hoc shell commands. Containers are for packaging and running applications.


AWS EC2 instance

Without getting into the detail, this guide assumes you have a new Red Hat Enterprise Linux 8 EC2 instance. At the time of writing, I used the SSD Volume Type, ami-0a0cb6c7bcb2e4c51, of type t2.micro, which is eligible for the free tier.


The first thing you'll need is a new Git repository. Go ahead and create one on your GitHub/GitLab. I created [email protected]:richardwalkerdev/buildkite-demo.git and initialised it with a .buildkite/pipeline.yml file.

mkdir buildkite-demo
cd buildkite-demo 
mkdir .buildkite
vi .buildkite/pipeline.yml

steps:
  - label: ":demo: Demo"
    name: "demo"
    command: echo "Buildkite demo"
git init
git add .buildkite/
git commit -m "first commit"
git remote add origin [email protected]:richardwalkerdev/buildkite-demo.git
git push -u origin master


To tackle this head-to-head, we'll deal with the shared component first, which is Buildkite. Sign up and log in to Buildkite at https://buildkite.com/login. Create a new organisation (I've called mine "Demo"), and a screen providing instructions to set up your first agent will greet you.


Selecting the RH/CentOS environment provides the commands to install the Buildkite agent on your EC2 instance. I've included them here for completeness, but use the details supplied by Buildkite because they will contain your private agent token.

sudo sh -c 'echo -e "[buildkite-agent]\nname = Buildkite Pty Ltd\nbaseurl = https://yum.buildkite.com/buildkite-agent/stable/x86_64/\nenabled=1\ngpgcheck=0\npriority=1" > /etc/yum.repos.d/buildkite-agent.repo'
sudo yum -y install buildkite-agent

Configure your agent token:

sudo sed -i "s/xxx/evbqbxx6ku0sxfu4meg5evbqbxx6ku0sxfu4meg5/g" /etc/buildkite-agent/buildkite-agent.cfg
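
If you'd like to see what that sed substitution does before running it against the real config, here is a safe illustration against a throwaway file. The file path and token value below are dummies, not the real agent config:

```shell
# Illustration with a dummy file and a dummy token: sed swaps the
# "xxx" placeholder for the real token in the agent config file.
printf 'token="xxx"\nname="%%hostname-%%n"\n' > /tmp/buildkite-agent.cfg
sed -i 's/xxx/MY-AGENT-TOKEN/g' /tmp/buildkite-agent.cfg
grep '^token' /tmp/buildkite-agent.cfg
# token="MY-AGENT-TOKEN"
```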

And enable and start the agent:

sudo systemctl enable buildkite-agent 
sudo systemctl start buildkite-agent

Once you start the agent your Buildkite web console should automatically respond with the following message:


Selecting "Manage your pipelines" takes you to a page to create a new pipeline.


When you create the pipeline, information on setting up GitHub Webhooks is displayed, and it's worth following so that the pipeline triggers every time you push changes to the repository.

Next, you'll need to add a new "API Access Token", found under your Buildkite personal settings in the web console. Add a new token, selecting the correct organisation access. For demonstration and development purposes, I picked all the available permissions.

A new token is revealed; keep it safe!

From a terminal, export the token:

 export TOKEN=3ff68aafe7279328e218f7eb2fdcf7b62bb18e77

For the manual curl smoke test to work, you'll also need to add an SSH key. On your EC2 instance, switch to the buildkite-agent user, generate a key, and add the public key to your GitHub account.

sudo su -
su - buildkite-agent
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub

With all this now in place, we can test the pipeline using curl to trigger it:

curl -H "Authorization: Bearer $TOKEN" "https://api.buildkite.com/v2/organizations/demo-16/pipelines/demo/builds" \
  -X "POST" \
  -F "commit=HEAD" \
  -F "branch=master" \
  -F "message=First build :rocket:"


Great, that's all for Buildkite. It's easy to see that pipeline.yml includes steps which run commands; these commands can be ad-hoc shell commands, bash scripts, make or ansible-playbook, for example.

Two approaches

Buildkite seems like a helpful pipeline tool, elegant and concise without the bloat of Jenkins. Fair enough. The next section will cover the 3 Musketeer approach using Docker and make to run a bash script which writes some output to a file. The last part will repeat the same exercise using Ansible, and then I can conclude my opinion on what approach makes more sense.

Docker and Make

The example I'll use is the task of executing a bash script that writes something to a file on the EC2 instance. It's a simple task, yet it demonstrates that the Docker approach becomes a little more involved due to the requirement of binding a volume to a mount point.

The following steps walk you through the 3 Musketeer pattern.

Install Docker

Docker and docker-compose are not officially supported on RHEL 8, but we can still install Docker via the CentOS repository.

Add repository

sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo

Install docker-ce

sudo dnf install docker-ce --nobest -y

Start and enable the docker service

sudo systemctl start docker
sudo systemctl enable docker

Check the version

docker --version
Client: Docker Engine - Community
 Version:           19.03.8

Add user to group

You need to make sure your user, in this case the buildkite-agent user, is a member of the docker group:

usermod -aG docker buildkite-agent

To be sure the buildkite-agent user picks up its new environment, that the docker service starts up correctly, and to eliminate possible pipeline errors such as:

ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

You can restart the buildkite-agent or reboot your EC2 instance.

systemctl restart buildkite-agent



Install docker-compose

As mentioned previously, I would have used Podman and podman-compose, but I couldn't get podman-compose to work in the same way as docker-compose does in the 3 Musketeer examples.

Download docker-compose:

sudo curl -L https://github.com/docker/compose/releases/download/1.25.4/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
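
As an aside, the backtick substitutions in that URL simply expand to your kernel name and machine architecture, which you can preview before downloading:

```shell
# uname -s gives the kernel name and uname -m the machine architecture,
# forming the release asset name the curl command above will fetch.
echo "docker-compose-$(uname -s)-$(uname -m)"
```

On the RHEL 8 EC2 instance used here, this expands to docker-compose-Linux-x86_64.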

Make it executable:

sudo chmod +x /usr/local/bin/docker-compose

Check version:

docker-compose --version
docker-compose version 1.25.4, build 8d51620a

Custom alpine image

The alpine container image is a minimal Docker image based on Alpine Linux, with a complete package index and only 5 MB in size, making it ideal for these ad-hoc commands. Due to its minimalism, we'll have to build a custom version of it for our requirements.

On the EC2 instance, switch user to buildkite-agent and create a working directory for this exercise.

su - buildkite-agent
mkdir docker
cd docker
vi Dockerfile
FROM alpine:latest
RUN apk add --no-cache bash
RUN apk add --no-cache curl
RUN mkdir -p /opt/demo
RUN mkdir -p /opt/app
VOLUME /opt/demo
VOLUME /opt/app
WORKDIR /opt/app

And build the image:

docker build -t demo/alpine-custom:latest .

The resulting image can be observed:

docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
demo/alpine-custom   latest              783b85d31e16        29 minutes ago      8.95MB

Final preparation

Finally, we'll need the /opt/demo/ directory created on the EC2 instance so Docker can bind it to a mount point.

mkdir /opt/demo

The EC2 instance will also need make installed, if not already present.

dnf install make

The 3 Musketeer pipeline

To recap, we now have a Buildkite pipeline attached to a GitHub repository and configured to trigger whenever new changes are committed and pushed. When the pipeline runs, it uses its agent on the EC2 instance to clone the repository and execute a series of steps on the EC2 instance.

With the 3 Musketeer approach, the Buildkite steps hand off to the make command, which itself has corresponding steps or tasks. Each of those make steps/tasks uses docker-compose to execute a bash command, in this case running a script. We've primed an alpine image, meeting our requirements, for docker-compose to use.
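
To make the chain concrete, here is a minimal local simulation of that hand-off, with the Docker layer omitted and throwaway paths under /tmp (all names here are illustrative, not part of the real pipeline): the pipeline step would invoke make, make runs the script, and the script writes a file.

```shell
# Simulated hand-off: pipeline step -> make target -> bash script -> output file.
mkdir -p /tmp/3m-sim/scripts
cd /tmp/3m-sim

# The script the make target will run.
printf '#!/bin/bash\necho "hello from the script" > /tmp/3m-sim/out.txt\n' > scripts/demo.sh
chmod +x scripts/demo.sh

# A one-target Makefile; the recipe line must be indented with a real tab.
printf 'demo:\n\t./scripts/demo.sh\n' > Makefile

make demo
cat /tmp/3m-sim/out.txt
```

In the real pipeline, docker-compose sits between make and the script, so the script runs inside the alpine container rather than directly on the host.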


The image depicts the essential components. We'll need a .buildkite/pipeline.yml, and a .env file if you wish to consume environment variables defined via the Buildkite web console (probably a bad idea; I'll cover that in the Ansible section and conclusion). We'll also need a Makefile and a docker-compose.yml, which defines a service specifying the alpine image and required volumes.

We can now add those necessary files to our git repository and get the pipeline to perform a step using this pattern.


We already added this file. Let's add a step:

vi .buildkite/pipeline.yml
steps:
  - label: ":demo: Demo"
    name: "demo"
    command: echo "Buildkite demo"

  - label: ":demo: Demo using make and docker-compose"
    name: "demo-make-docker-compose"
    command: make demo-make-docker-compose


Next, we'll include a Buildkite environment variable; under the pipeline settings I added foo=bar. To exploit this, add a .env file:

vi .env
# Read in from Buildkite Environment Variables under Pipeline Settings


The Makefile includes the steps/tasks that make runs, as referenced in pipeline.yml.

Make sure you use tabs, not spaces, in the Makefile. Make is strict about this: every action line of every rule must be indented with a tab character, and four spaces do not make a tab. Only a tab makes a tab.

If you have this issue you'll see an error like:

Makefile:4: *** missing separator.  Stop.
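
Since tabs and spaces look identical in most editors, a quick way to check is to make the whitespace visible. This illustration uses a throwaway file rather than your real Makefile:

```shell
# cat -A renders tabs as ^I, so a correctly indented recipe line
# shows up as "^Iecho ok" rather than plain spaces.
printf 'demo:\n\techo ok\n' > /tmp/Makefile.tabcheck
cat -A /tmp/Makefile.tabcheck

# Recipe lines indented with spaces instead of a tab can be hunted
# down with grep (GNU grep's -P enables \t and friends).
grep -nP '^ +\S' /tmp/Makefile.tabcheck || echo "no space-indented lines"
```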

Create the file as follows:

vi Makefile

COMPOSE_SERVICE='alpine-custom'

demo-make-docker-compose:
	docker-compose run --rm $(COMPOSE_SERVICE) ./scripts/demo.sh

COMPOSE_SERVICE refers to a docker-compose service we'll add next, and demo-make-docker-compose: defines the step referenced by the line command: make demo-make-docker-compose in pipeline.yml.


As mentioned, the Makefile references COMPOSE_SERVICE='alpine-custom', which points to a service defined in docker-compose.yml, as follows:

vi docker-compose.yml
version: '3.4'
services:
  alpine-custom:
    build: .
    image: demo/alpine-custom
    command: bash
    env_file: .env
    volumes:
      - type: bind
        source: /opt/demo
        target: /opt/demo
      - type: bind
        source: .
        target: /opt/app
    working_dir: /opt/app

The crucial observations in this file are the service name alpine-custom, the image, env_file and volumes. The volume /opt/app is mounted at the same location where Buildkite clones the Git repository and executes scripts; thus the same path is also defined as working_dir. I added a second volume, /opt/demo, to demonstrate controlling the destination of the output from the bash script.


The Makefile step uses docker-compose to spin up the alpine image and execute a bash script. We'll add that script to our repository now.

mkdir scripts
vi scripts/demo.sh
#!/bin/bash
echo "Demo using the 3 Musketeers pattern" > /opt/demo/3musketeer-demo.txt
echo "The value of foo = ${foo}" >> /opt/demo/3musketeer-demo.txt
chmod +x scripts/demo.sh


Your git repository should now contain the following structure and files:

├── .buildkite
│   └── pipeline.yml
├── .env
├── Makefile
├── docker-compose.yml
└── scripts
    └── demo.sh

With all this in place, and assuming its content is correct, when you commit and push to Git the Buildkite webhook should trigger a build. If successful, you should have a new file /opt/demo/3musketeer-demo.txt with the following contents:

Demo using the 3 Musketeers pattern
The value of foo = bar

Hopefully, this has all made sense; there are quite a few moving parts.


Next, we repeat the objective of executing a bash script to create a file on our EC2 instance, this time using Ansible. There are several approaches to using Ansible, but at least Ansible comes with mature best practices. This guide is not an in-depth Ansible how-to.

I've changed my mind. I just said I was going to repeat the task of executing a bash script, and you might say that would be a fair comparison. While it is feasible to run ad-hoc commands with Ansible, on reflection it seemed dumb and pointless: ad-hoc commands are quick and easy, but they are not reusable, and Ansible will issue warnings suggesting you use the equivalent modules instead.


Ansible can be installed via an official Red Hat subscription, or the Open Source version can be installed using the Python package installer (pip). Here, we'll use the latter.

Install Python

dnf install python3 python3-pip

Install Ansible

pip3 install ansible
ansible --version
    ansible 2.9.6

As the buildkite-agent user, test that you can run an Ansible command:

su - buildkite-agent
ansible localhost -m setup

Configure Ansible

For efficiency on your EC2 instance, enable passwordless sudo for the buildkite-agent user account, which involves editing the sudoers file and adding the user to the wheel group. Comment out the existing %wheel line and uncomment the NOPASSWD line:

# %wheel  ALL=(ALL)     ALL
%wheel  ALL=(ALL)       NOPASSWD: ALL

And add buildkite-agent to the wheel group:

usermod -aG wheel buildkite-agent

Restart the agent for the change to take effect:

systemctl restart buildkite-agent

Add Ansible configuration and compositions

Next, it's time to add some Ansible to our Git repository, and a new step to the Buildkite pipeline.

Firstly, let's demonstrate the most minimalistic approach possible using Ansible modules.

In your Git repository, add the following configuration file:

vi ansible.cfg
[defaults]
inventory = inventories/local
roles_path = roles
host_key_checking = False
retry_files_enabled = False

and create these two directories:

mkdir -p inventories/local
mkdir -p playbooks/local

Add hosts file under inventories/local:

vi inventories/local/hosts
localhost

And add the following playbook:

vi playbooks/local/demo_ansible.yml
- name: demo playbook
  hosts: localhost
  connection: local
  become: yes

  tasks:
    - name: Add a line to a file
      lineinfile:
        path: /opt/demo/ansible-demo.txt
        line: Demo using Ansible from Git
        create: yes

    - name: Add another line to file using environment variable
      lineinfile:
        path: /opt/demo/ansible-demo.txt
        line: "The value of foo = {{ lookup('env','foo') }}"
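
The tasks above add lines to a file idempotently: Ansible's lineinfile module appends a line only when it is absent, so re-running the play changes nothing. A rough shell equivalent of that behaviour, using an illustrative path under /tmp:

```shell
# Append a line only if it's not already in the file; running this
# twice still leaves exactly one copy of the line (idempotency).
FILE=/tmp/lineinfile-demo.txt
LINE="Demo using Ansible from Git"
touch "$FILE"
grep -qxF "$LINE" "$FILE" || echo "$LINE" >> "$FILE"
grep -qxF "$LINE" "$FILE" || echo "$LINE" >> "$FILE"   # second run adds nothing
grep -cx "$LINE" "$FILE"
# 1
```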

Add Buildkite step

Finally, all we need now is to add the step to the Buildkite pipeline:

vi .buildkite/pipeline.yml
  - label: ":demo: Demo using Ansible"
    name: "demo-ansible"
    command: ansible-playbook playbooks/local/demo_ansible.yml

Run the pipeline

Once again, with all this in place, and assuming the content is correct, you can commit these new Ansible files, and the pipeline will trigger. For this step, we added three files, two of which are in subdirectories. (The following listing excludes the previous 3 Musketeer files, which are still required for the pipeline to run all its steps.)

├── ansible.cfg
├── inventories
│   └── local
│       └── hosts
└── playbooks
    └── local
        └── demo_ansible.yml



I have worked hard to remain neutral throughout this article. I'm happy to say I do like Buildkite and will likely use it for my website deployments; it makes light work of setting up webhooks in GitHub and running Ansible on target hosts.

As for 3 Musketeers? I do not understand why anyone would want to work with such an involved and convoluted process. Shell scripting is challenging to maintain due to the lack of any opinionated standards. Using make to hand off to docker-compose to then run bash commands and scripts on a Linux host seems utterly pointless as far as I'm concerned.

Don't forget, the Ansible compositions here can evolve to manage AWS resources and cloud infrastructure, deploy applications and provide complete configuration management of target hosts across as many environments as you wish to define.

Ansible provides a clean, easy-to-read approach, both during composition and during runs, making debugging easier. It also has a whole structure and hierarchy for maintaining environment variables and parameters. I'm not sure adding environment variables directly into the Buildkite web console is a bright idea; this needs to be in version control, possibly somewhere central and externalised, maintaining one version of the truth.

The only counter-argument I can see is that using Docker makes it cross-platform. However, you still need to install Docker, docker-compose and make as prerequisites, and if you're targeting Linux hosts in the cloud, then the value of using Docker is truly lost on me. Not to mention, Ansible supports a broad range of platforms.

Ansible is a far more sensible route, and it's a damn sight more beautiful to use. Why do you need 3 Musketeers when you only need one? Ansible is our d'Artagnan, who can ditch the other three.