Continuous integration/Docker

Support for Docker containers was added to Wikimedia CI in August 2017. These containers run from a cluster of persistent Jenkins agents in Wikimedia Cloud Services. We hope that in the future, these images will run from a Kubernetes cluster instead.

Overview
Each Docker image is designed to be self-sufficient, expecting nothing on disk, and expecting nothing to remain on disk afterward. Containers and their filesystem are destroyed after each run (except for the designated logs directory, which is preserved as the Jenkins build artefact for a few days). The behaviour of a container may only vary based on the environment variables provided by Jenkins.
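This environment-driven contract can be sketched as a minimal entrypoint script. The variable names ZUUL_PROJECT and LOG_DIR below are illustrative assumptions, not the actual interface:

```shell
#!/bin/sh
# Minimal sketch of an entrypoint whose behaviour is driven entirely by
# environment variables injected at run time; only the log directory
# survives the run as a Jenkins build artefact.
: "${ZUUL_PROJECT:=unknown/project}"  # assumed Zuul-style variable from Jenkins
: "${LOG_DIR:=./log}"                 # assumed path of the preserved logs directory
mkdir -p "$LOG_DIR"
echo "building $ZUUL_PROJECT" | tee "$LOG_DIR/build.log"
```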

Administrative tasks must be handled solely by Jenkins, outside the container.

The Docker images for CI are published under the releng namespace at https://docker-registry.wikimedia.org/.

The source code for each Dockerfile resides in the integration/config.git repository.

Build images locally
We use docker-pkg to build Docker images, with Jinja for additional templating.

Installing docker-pkg
docker-pkg is a Python 3 program that can be cloned from operations/docker-images/docker-pkg and installed via pip3. Also clone the integration/config repository, which contains the Dockerfiles. At this point, the docker-pkg command should be available in your terminal.

Download existing images
You can significantly speed up the building of images by first downloading the latest existing version of the image you are interested in changing (or, if creating a new image, its parent image). Building all images from scratch may take several hours and consume around 40 GB of disk space.

The following command will download the latest versions of WMF CI's Docker images from docker-registry.wikimedia.org. Note that it naturally skips images you have previously pulled or built:

$ cd integration/config
$ ack -o -h -s 'docker-registry.*:[.\d]+' jjb/ | sort | uniq | xargs -n1 docker pull

Alternatively, you can download individual images like so:

$ docker pull docker-registry.wikimedia.org/releng/node20:latest

If the image in question has a debug script, you can also run that, which will download the image as needed.
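The `ack` pipeline above simply extracts image references from the JJB configuration and feeds them to `docker pull`. The same idea, shown with `grep` on a made-up sample file (the file name and contents are illustrative):

```shell
# A made-up sample of what JJB job definitions might reference (illustrative).
cat > sample-jjb.yaml <<'EOF'
image: docker-registry.wikimedia.org/releng/node20:0.1.0
image: docker-registry.wikimedia.org/releng/node20:0.1.0
image: docker-registry.wikimedia.org/releng/composer:2.0.1
EOF
# Extract each unique image reference, one per line -- these are the
# candidates that would be fed to "docker pull".
grep -oE 'docker-registry[^ ]*:[0-9.]+' sample-jjb.yaml | sort -u
```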

Build the images
To build one or more images, run docker-pkg with the --select option. The option takes a glob parameter that applies to the full reference name of the image (URL and version). docker-pkg will scan the dockerfiles folder; for each image, it will find the last version tag in its changelog file, and if that version is not present in your local Docker registry, it will start building it from the Dockerfile.

For example, a glob that includes a specific version (such as '*quibble-stretch:0.1.0') would build only the image whose full reference matches it.

You can also use this to build a larger set of related images: for example, the glob '*quibble*' (note the absence of a colon) would rebuild all images that have new versions in your working copy and contain "quibble" in their name.
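The glob semantics can be illustrated with shell pattern matching (an approximation; docker-pkg's exact matching rules may differ):

```shell
ref='docker-registry.wikimedia.org/releng/quibble-stretch:0.1.0'

# A glob without a colon matches any version of images containing "quibble"...
case "$ref" in
  *quibble*) echo 'matched by *quibble*' ;;
esac

# ...while appending ':0.1.0' restricts the match to that exact version.
case "$ref" in
  *quibble*:0.1.0) echo 'matched by versioned glob' ;;
esac
```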

Alternatively, to build the entire catalog, omit the --select option like so:

$ docker-pkg -c dockerfiles/config.yaml dockerfiles

Example output:

Step 0: scanning dockerfiles
Will build the following images:
 * docker-registry.wikimedia.org/releng/ci-stretch:0.1.0
 * docker-registry.wikimedia.org/releng/operations-puppet:0.1.0
 * docker-registry.wikimedia.org/releng/ci-jessie:0.3.0

Step 1: building images
=> Building image docker-registry.wikimedia.org/releng/ci-stretch:0.1.0
=> Building image docker-registry.wikimedia.org/releng/operations-puppet:0.1.0
=> Building image docker-registry.wikimedia.org/releng/ci-jessie:0.3.0

Step 2: publishing
NOT publishing images as we have no auth setup

Build done!
You can see the logs at ./docker-pkg-build.log

Could not find a suitable TLS CA certificate bundle
The following error is known to affect macOS:

500417 ERROR - Could not load image in integration/config/dockerfiles/…: Could not find a suitable TLS CA certificate bundle, invalid path: /etc/ssl/certs/ca-certificates.crt (builder.py:244)
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 228, in cert_verify
    "invalid path: {}".format(cert_loc))
OSError: Could not find a suitable TLS CA certificate bundle, invalid path: /etc/ssl/certs/ca-certificates.crt

To work around this issue, run the following in your Terminal window before running a docker-pkg command:

$ export REQUESTS_CA_BUNDLE=
$ docker-pkg …

No images are built
If you don't see any output between "Step 1: building images" and "Step 2: publishing", docker-pkg did not find any images with unbuilt newer versions. Review the following:


 * "git status" in integration/config should show a change to the changelog file for the image you are updating.
 * Make sure that the name used in the changelog file is correct, and matches your intended image name.
 * Look for any errors in "docker-pkg-build.log".
 * Make sure that you ran "docker-pkg -c dockerfiles/config.yaml dockerfiles" and not "docker-pkg -c dockerfiles/config.yaml dockerfiles/path-to-image"; docker-pkg will figure out which images to build by detecting modifications to the changelog.
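For reference, docker-pkg changelog files follow the Debian changelog format (an assumption here; confirm against an existing image's changelog for the exact conventions, including the distribution name). A hypothetical entry for an image named my-image:

```
my-image (0.2.0) wikimedia; urgency=medium

  * Add example dependency

 -- Your Name <you@example.org>  Thu, 01 Aug 2019 00:00:00 +0000
```

The image name on the first line must match the image's directory and intended reference name, which is the mismatch the second checklist item above guards against.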

Manifest not found
When adding a new version of an image while also incrementing the versions of dependent images, you may encounter the following error:

Build failed: manifest for example/my-parent-image:0.4.0 not found

This happens because docker-pkg (by default) fetches parent images from the wikimedia.org registry and only rebuilds the child locally. If you are updating a parent from 0.3.0 to 0.4.0 and also incrementing the child images' versions, the parent will build fine, but docker-pkg will then fail to build the children, despite the newer parent version existing locally.

To mitigate this, pass --no-pull, like so:

$ docker-pkg --no-pull -c dockerfiles/config.yaml dockerfiles

Adjusting an image
Sometimes you need to edit an image, e.g. to add a new dependency or to update an existing one.

To do this, make your changes to the image's Dockerfile template, and then run the following command:

$ docker-pkg -c dockerfiles/config.yaml --info update \
      --reason "Briefly explain your change" \
      --version NewImageVersion \
      ImageName \
      dockerfiles/

This will add a properly-formatted entry to the changelog of the image you're changing, and of all dependent images. You can then build the images locally to check that they build correctly, and run the resulting image to check that it works as intended. Once you're happy with your fix, bundle the changes into a git commit for review and merging.

Manage local images
List local images:

$ docker images

Remove all local images pulled from wikimedia.org:

$ docker rmi $(docker images --format '{{.Repository}}:{{.Tag}}' | grep 'wikimedia.org')

Deploy images
Deploying a change to the CI Dockerfiles requires shell access to the Docker registry on contint1001.wikimedia.org (shell group: contint-docker); ask the Release Engineering team for help.

The change to integration/config repository should first be merged in Gerrit.

After that, deploy it to the CI infrastructure by running the deployment command from the integration/config directory. This connects to the contint server and instructs it to build the newer versions of the Docker images there.

Test an image locally
Use the steps below to test a Docker image locally. This can be an unpublished image you've built locally with docker-pkg, or one that was pulled from the wikimedia.org registry.

Note that the steps below use registry URLs for the names of the images, but these refer to the images you have locally (either built or pulled); they do not need to have been deployed or uploaded yet. You can list the images you have locally using the docker images command.

$ cd my-gerrit-project
$ mkdir -m 777 cache log
$ docker run \
    --rm --tty \
    --volume /"$(pwd)"/log://var/lib/jenkins/log \
    --volume /"$(pwd)"/cache://cache \
    --volume /"$(pwd)"://src \
    docker-registry.wikimedia.org/releng/node10-test:0.3.0

Debug an image locally
The debug-image script can be used to run a RelEng Docker image locally:

$ cd integration-config/dockerfiles
$ ./debug-image node10-test
nobody@bfee2a999b20:/src$

The default behaviour of docker run is to start the container and execute the entrypoint/cmd specified in the Dockerfile. To inspect the container instead, specify --interactive to make it interactive, and override --entrypoint with a shell (such as /bin/bash). For example:

$ cd my-gerrit-project/
$ docker run \
    --rm --tty \
    --interactive --entrypoint /bin/bash \
    docker-registry.wikimedia.org/releng/node10-test:0.3.0
nobody@5f4cdb0ab167:/src$
nobody@5f4cdb0ab167:/src$ env
CHROMIUM_FLAGS=--no-sandbox
XDG_CACHE_HOME=/cache

Test an image in CI
Once the new image is pushed to the Wikimedia Docker registry, it should be tested on one of the permanent CI agent machines. As of August 2017 there are 4 such machines.

To test:

 * 1) SSH to one of the agent machines and switch to the user that Jenkins jobs run as.
 * 2) Create a new directory and an environment file that contains the information passed from Jenkins in the form of environment variables.
 * 3) Run the new Docker image with the environment file and ensure that it runs correctly.
 * 4) If everything is working as anticipated, update JJB with the Dockerfile version that has been pushed to the Wikimedia Docker registry.
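Steps 2 and 3 can be sketched as follows; the variable names, values, and image are hypothetical, with real values coming from the Jenkins job:

```shell
# Step 2: a scratch directory plus an environment file mimicking what
# Jenkins would inject (variable names and values are hypothetical).
mkdir -p image-test && cd image-test
cat > env.txt <<'EOF'
ZUUL_PROJECT=mediawiki/core
ZUUL_URL=https://gerrit.wikimedia.org/r
EOF

# Step 3 would then run the image with that environment, e.g.:
#   docker run --rm --env-file env.txt \
#       docker-registry.wikimedia.org/releng/quibble-stretch:latest
grep -c '=' env.txt
```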

Jenkins Agent
To create an additional Jenkins node that can run Docker-based Jenkins jobs:

 * Create a new VM instance in Horizon with a name following the pattern 'integration-agent-docker-100X'.
 * Wait for the first Puppet run to complete and log in.
 * Run the following to finish switching to the integration puppet master:

sudo rm -fR /var/lib/puppet/ssl
sudo mkdir -p /var/lib/puppet/client/ssl/certs
sudo puppet agent -tv
sudo cp /var/lib/puppet/ssl/certs/ca.pem /var/lib/puppet/client/ssl/certs
sudo puppet agent -tv

 * Add the 'role::ci::slave::labs::docker' class to the instance in Horizon.
 * For larger instance types, set the role parameter appropriate to the instance size.
 * Run a final Puppet update: sudo puppet agent -tv
 * Pull an initial set of Docker images onto the host (using latest tags) to avoid doing this during test runs:

sudo docker pull docker-registry.wikimedia.org/releng/castor:latest
sudo docker pull docker-registry.wikimedia.org/releng/quibble-stretch:latest
sudo docker pull docker-registry.wikimedia.org/wikimedia-stretch:latest

 * Add the agent in the Jenkins UI.