Continuous integration/Docker

As of August 2017, the CI system is experimenting with using Docker containers to run tests.

Overview
The Docker images we use in CI are currently run by Jenkins. We hope that in the future, this will run on a Kubernetes cluster instead.

Administrative tasks are currently handled solely by Jenkins. The containers you create should be self-sufficient and leave nothing behind except logs. The behaviour of the container should rely only on environment variables provided by Jenkins.
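As a minimal sketch, a self-sufficient entrypoint takes all of its configuration from the environment; the ZUUL_* variable names below are illustrative of what Jenkins provides, not an exact list, and run_tests is a hypothetical helper:

```shell
# Illustrative entrypoint: behaviour is driven solely by environment
# variables that Jenkins would set; nothing is read from local state.
run_tests() {
    # Fail fast if a required variable was not provided by Jenkins.
    : "${ZUUL_PROJECT:?must be provided by Jenkins}"
    echo "Testing ${ZUUL_PROJECT} at ${ZUUL_REF:-refs/heads/master}"
}

# Jenkins would export these before starting the container:
ZUUL_PROJECT=mediawiki/core run_tests
```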

The Docker images for CI are stored on docker-registry.wikimedia.org under the releng namespace.

These images are created from Dockerfiles in the integration/config repository.

Jenkins Agent
To create an additional Jenkins node that can run Docker-based Jenkins jobs:

 * Create a new VM instance in Horizon with a name following the pattern 'integration-slave-docker-100X'.
 * Wait for the first puppet run to complete and log in.
 * Run the following to finish switching to the integration puppet master:

sudo rm -fR /var/lib/puppet/ssl
sudo mkdir -p /var/lib/puppet/client/ssl/certs
sudo puppet agent -tv
sudo cp /var/lib/puppet/ssl/certs/ca.pem /var/lib/puppet/client/ssl/certs
sudo puppet agent -tv

 * Add the 'role::ci::slave::labs::docker' class to the instance in Horizon.
 * For larger instance types ( and  ) specify   for the   parameter.
 * Run a final puppet update: sudo puppet agent -tv
 * Pull an initial set of docker images onto the host (using latest tags) to avoid doing this in test runs:

sudo docker pull docker-registry.wikimedia.org/releng/castor:latest
sudo docker pull docker-registry.wikimedia.org/releng/quibble-stretch:latest
sudo docker pull docker-registry.wikimedia.org/wikimedia-stretch:latest

 * Add the slave in the Jenkins UI.

Build images locally
docker-pkg is a Python 3 program that is used to build Docker images, via Jinja templating.

Installing docker-pkg

 * Clone the code from  and install via pip3:
 * Clone the integration/config project:

At this point, the docker-pkg command should be available. By default, it will build all images you don't yet have cached in your local Docker installation. To get started, you have three options:


 * 1) Run it normally and let it build the latest version of all images in the CI system (this may take several hours and uses roughly 40 GB of disk space).
 * 2) Download the latest versions of these images from wikimedia.org instead (if your connection is 20 MB/s or better, this is probably faster; it downloads and stores roughly 40 GB).
 * 3) Use the    option to only build a subset of images, for example just the image you're changing or testing. The GLOB parameter for this option is matched against the full reference name of the image, e.g. "docker-registry.wikimedia.org/releng/node10-test:0.3.0". As such, the value   would not match any image, but   would.
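Shell case patterns follow the same fnmatch-style rules, so the matching behaviour described above can be checked locally. This is a sketch; match is a hypothetical helper, not part of docker-pkg:

```shell
# The GLOB is matched against the image's full reference name.
ref="docker-registry.wikimedia.org/releng/node10-test:0.3.0"

match() {
    # Echo yes/no depending on whether $1 matches the glob pattern $2.
    case "$1" in
        $2) echo yes ;;
        *)  echo no ;;
    esac
}

match "$ref" "node10-test"      # no: the glob must cover the whole reference
match "$ref" "*node10-test*"    # yes
```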

Run the build
Running docker-pkg will scan the dockerfiles folder for changelog files. For each one, it will find the last version tag in the changelog, and if that version is not present in your local Docker installation, it will start building it.
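Conceptually, the scan works like this. This is a simplified sketch assuming each image directory carries a Debian-style changelog; the paths and the version-extraction one-liner are illustrative, not docker-pkg's actual code:

```shell
# A fake image directory with a Debian-style changelog:
mkdir -p /tmp/dockerfiles-demo/node10-test
cat > /tmp/dockerfiles-demo/node10-test/changelog <<'EOF'
node10-test (0.3.0) wikimedia; urgency=medium

  * Bump node to 10.x

 -- Example Author <author@example.org>  Tue, 01 Aug 2017 00:00:00 +0000
EOF

# The latest version is the parenthesised field of the first entry; an image
# is (re)built only when this tag is absent from the local Docker cache.
latest=$(head -n1 /tmp/dockerfiles-demo/node10-test/changelog \
         | sed 's/.*(\(.*\)).*/\1/')
echo "$latest"   # 0.3.0
```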

Example output:

Step 0: scanning dockerfiles
Will build the following images:
 * docker-registry.wikimedia.org/releng/ci-stretch:0.1.0
 * docker-registry.wikimedia.org/releng/operations-puppet:0.1.0
 * docker-registry.wikimedia.org/releng/ci-jessie:0.3.0

Step 1: building images
=> Building image docker-registry.wikimedia.org/releng/ci-stretch:0.1.0
=> Building image docker-registry.wikimedia.org/releng/operations-puppet:0.1.0
=> Building image docker-registry.wikimedia.org/releng/ci-jessie:0.3.0

Step 2: publishing
NOT publishing images as we have no auth setup

Build done!
You can see the logs at ./docker-pkg-build.log

Troubleshooting notes
If you don't see any output between "Step 1: building images" and "Step 2: publishing", check the following:


 * "git status" in integration/config should show a change to the changelog file for the image you want to build
 * Look for any errors in "docker-pkg-build.log"
 * Make sure that you ran "docker-pkg -c dockerfiles/config.yaml dockerfiles" and not "docker-pkg -c dockerfiles/config.yaml dockerfiles/path-to-image"; docker-pkg will figure out which images to build by detecting modifications to the changelog.

Download images
Download any missing images from wikimedia.org to your local installation (source):

$ cd integration/config
$ ack -o -h -s 'docker-registry.*:[.\d]+' jjb/ | sort | uniq | xargs -n1 docker pull
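The pipeline above extracts every fully-qualified image reference from the job definitions and pulls each one. Its extraction step can be sketched with grep on a fabricated jjb snippet (the file contents below are made up for illustration):

```shell
# Fabricated excerpt of a jjb job definition:
mkdir -p /tmp/jjb-demo
cat > /tmp/jjb-demo/sample.yaml <<'EOF'
    docker_image: docker-registry.wikimedia.org/releng/node10-test:0.3.0
    docker_image: docker-registry.wikimedia.org/releng/quibble-stretch:0.1.2
    docker_image: docker-registry.wikimedia.org/releng/node10-test:0.3.0
EOF

# -o: print only the matching part, -h: omit filenames; sort | uniq
# de-duplicates before the refs would be fed to `xargs -n1 docker pull`.
grep -ohE 'docker-registry.*:[.0-9]+' /tmp/jjb-demo/sample.yaml | sort | uniq
```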

Manage local images
List local images:

$ docker images

Remove local images from wikimedia.org (source):

$ docker rmi $(docker images --format '{{.Repository}}:{{.Tag}}' | grep 'wikimedia.org')

Publishing docker-pkg images
The change to the Dockerfile in the integration/config repository should first be merged in Gerrit.

After that, deploy it to the CI infrastructure. To do this, ensure you have Fabric installed. We currently still use Fabric 1; package managers like apt-get or Homebrew likely offer Fabric 2, which is incompatible with our fabfile.py. Fabric 1 is also incompatible with Python 3. If you have python2 and its associated pip installed, use it to install Fabric 1. If you have python2 without pip (as on macOS) but do have python3 available, install the fabric3 package instead. The fabric3 package is a port of Fabric 1 to Python 3.

In the integration/config directory, run the Fabric deploy task. This connects to the  server and instructs it to build newer versions of the Docker images there.

Test a container locally
Use the steps below to test a Docker image locally. This can be an unpublished image you've built locally with docker-pkg, or one that was pulled from the wikimedia.org registry.

Note that the below uses URLs for the names of the images, but these refer to the images you have locally (either built or pulled); they do not need to have been deployed or uploaded yet. You can list the images you have locally using the docker images command.

$ cd my-gerrit-project
$ mkdir -m 777 cache log
$ docker run \
    --rm --tty \
    --volume /"$(pwd)"/log://var/lib/jenkins/log \
    --volume /"$(pwd)"/cache://cache \
    --volume /"$(pwd)"://src \
    docker-registry.wikimedia.org/releng/node10-test:0.3.0

Debug a container locally
The  script can be used to run docker in this way.

The default behaviour for docker run is to start the container and execute the entrypoint/cmd specified in the Dockerfile. To inspect the container instead, specify --interactive to make it interactive, and override the entrypoint with a shell (such as /bin/bash). For example:

$ cd my-gerrit-project/
$ docker run \
    --rm --tty \
    --interactive --entrypoint /bin/bash \
    docker-registry.wikimedia.org/releng/node10-test:0.3.0

nobody@5f4cdb0ab167:/src$ env
LC_ALL=en_US.UTF-8
LANG=en_US.UTF-8
CHROMIUM_FLAGS=--no-sandbox
PWD=/src
HOME=/nonexistent
NPM_CONFIG_CACHE=/cache
XDG_CACHE_HOME=/cache
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Test a container in CI
Once the new container image is pushed to the Docker registry, it should be tested on one of the  machines. As of August 2017 there are four such machines.

To test:

 * 1) ssh to one of the   machines and   to the   user.
 * 2) Create a new directory and an environment file that contains the information passed from Jenkins in the form of   variables.
 * 3) Run the new docker container with the environment file and ensure that it runs correctly.
 * 4) If everything works as anticipated, update JJB with the Dockerfile version that has been pushed to the Wikimedia Docker registry.
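Steps 2 and 3 can be sketched as follows. The ZUUL_* variable names and the quibble-stretch image are illustrative assumptions; substitute whatever your job actually consumes:

```shell
# Step 2: a scratch directory plus an env file mimicking what Jenkins passes.
mkdir -p /tmp/ci-docker-test
cat > /tmp/ci-docker-test/env.list <<'EOF'
ZUUL_PROJECT=mediawiki/core
ZUUL_REF=refs/heads/master
EOF

# Step 3 (requires Docker on the host, hence commented out here):
# docker run --rm --tty --env-file /tmp/ci-docker-test/env.list \
#     docker-registry.wikimedia.org/releng/quibble-stretch:latest
grep -c '=' /tmp/ci-docker-test/env.list   # 2 variables defined
```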