Continuous integration/Docker

As of August 2017, the CI system is experimenting with using Docker containers to run tests.

Overview
There is currently no Kubernetes infrastructure on which to run containers for testing, nor is there a timeline for creating a Kubernetes cluster. As such, administrative tasks are handled solely by Jenkins. As a result, our containers should be self-sufficient and tidy: a container should leave behind nothing but logs and rely on nothing but the environment variables provided by Jenkins.

Currently, the canonical repository for CI Docker images is on Docker Hub. There is nothing private in these images, and Docker agents cache images locally, so actual reliance on Docker Hub is minimal.

Slave creation
 * Create a new instance in Horizon with a name following the pattern 'integration-slave-docker-100X'
 * Wait for the first puppet run to complete and log in
 * Run the following to finish switching to the integration puppet master:
   sudo rm -fR /var/lib/puppet/ssl
   sudo mkdir -p /var/lib/puppet/client/ssl/certs
   sudo puppet agent -tv
   sudo cp /var/lib/puppet/ssl/certs/ca.pem /var/lib/puppet/client/ssl/certs
   sudo puppet agent -tv
 * Add the 'role::ci::slave::labs::docker' class to the instance in Horizon
 * Run a final puppet update: 'sudo puppet agent -tv'
 * Pull an initial set of Docker images onto the host (using 'latest' tags) to avoid doing this during test runs:
   sudo docker pull wmfreleng/ci-src-setup:latest
   sudo docker pull wmfreleng/operations-puppet:latest
   sudo docker pull wmfreleng/mediawiki-phpcs:latest
   sudo docker pull wmfreleng/mediawiki-phan:latest
 * Add the slave in the Jenkins UI

Container Creation
Containers hosted under the wmfreleng Docker Hub account are built from the integration/config repository. As of August 3rd, 2017, rebuilding containers and pushing new images to Docker Hub is all done manually.

Each project wishing to use Docker should create a subdirectory in the integration/config repository; the resulting image is named after that subdirectory. Docker projects use multi-stage builds to cache resources: the first Dockerfile stage should be used to fetch resources that will be used inside the final container. This keeps the container that is used by the Jenkins agents small. The Dockerfile inside a project's subdirectory can (and should) use multiple stages to cache dependencies and speed up build times.
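As a rough illustration of the multi-stage pattern described above (the base image, repository URL, and paths are placeholders, not taken from integration/config), a Dockerfile might look like:

```dockerfile
# Hypothetical sketch of a multi-stage build; names and URLs are placeholders.

# Stage 1: fetch resources so they are cached between builds.
FROM debian:jessie AS fetch
RUN apt-get update && apt-get install -y --no-install-recommends git ca-certificates
# Repository URL is a placeholder for whatever the final container needs.
RUN git clone https://example.org/some/tool.git /srv/tool

# Stage 2: the small final image actually run by the Jenkins agents.
FROM debian:jessie
COPY --from=fetch /srv/tool /srv/tool
ENTRYPOINT ["/srv/tool/run.sh"]
```

Only the final stage ships to the agents; the earlier stage acts as a cache for fetched resources, which is what keeps the published image small. (Multi-stage builds require Docker 17.05 or newer.)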

Building Containers
This process can be done by anyone with access to the wmfreleng Docker Hub account (anyone on the WMF Release Engineering team). The build process results in locally tagged images. By convention we try not to use the 'latest' tag for Docker images, but assign them specific versions. Assigning versions is a manual process, left to the discretion of the person building the images. The use of versions allows us to test the most recent version of a container before it is made live for use in testing.


 * Clone the integration/config repository
 * Run the image build script
 * Tag and push the resulting images
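The tag-and-push step might look like the following sketch. The image name and the date-based version scheme are illustrative assumptions, not a convention mandated by the repository; the docker commands are shown commented out because they require Docker and push credentials for the wmfreleng account.

```shell
# Pick an explicit version rather than 'latest' (scheme is illustrative).
VERSION="v$(date +%Y.%m.%d)"
echo "chosen tag: ${VERSION}"

# Tag and push the locally built image (requires Docker and Docker Hub
# credentials for the wmfreleng account; image name is a placeholder):
# docker tag mediawiki-phpcs:local "wmfreleng/mediawiki-phpcs:${VERSION}"
# docker push "wmfreleng/mediawiki-phpcs:${VERSION}"
```

Because the version is chosen by hand, a dated tag like this makes it easy to see at a glance which build a job is pinned to.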

Testing new containers
Once the new container is pushed to Docker Hub it should be tested on one of the integration-slave-docker machines. As of August 2017 there are four such machines.

To test a new image:


 * SSH to one of the integration-slave-docker machines and switch to the appropriate user
 * Create a new directory and an environment file that contains the information Jenkins would pass in the form of environment variables
 * Run the new Docker container with the environment file and ensure that it runs correctly
 * If everything is working as anticipated, update JJB with the Dockerfile version you pushed to Docker Hub
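The manual test run above can be sketched as follows. The variable names, values, and image tag are placeholders, not the exact set Jenkins provides; substitute the values your job actually receives. The docker command is commented out because it requires Docker on the host.

```shell
# Create a scratch directory and an env file mimicking what Jenkins provides.
# Variable names and values here are placeholders.
WORKDIR="$(mktemp -d)"
cat > "${WORKDIR}/env" <<'EOF'
ZUUL_PROJECT=example/project
ZUUL_BRANCH=master
EOF

# Run the candidate image with that environment (image tag is a placeholder):
# docker run --rm --env-file "${WORKDIR}/env" wmfreleng/mediawiki-phpcs:vX.Y.Z
```

Using `--env-file` keeps the test close to how Jenkins invokes the container, since the container is supposed to rely on nothing but those environment variables.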