Extension:WikiLambda/Development environment

This page is a detailed guide to setting up a fully functional local installation of Wikifunctions, either for use or for development purposes.

Environment overview
The Wikifunctions architecture includes the WikiLambda extension as well as the back-end services: function-orchestrator and function-evaluator.

There are different levels of environment complexity that we can work with:


 * A local MediaWiki installation with the WikiLambda extension, with the back-end services running remotely. You can follow the MediaWiki-Docker instructions for WikiLambda.
 * A local MediaWiki+WikiLambda installation (same as the previous point), with the back-end services running locally in docker containers.
 * A local MediaWiki+WikiLambda installation with the back-end services running on a local Kubernetes cluster. This is the most complex environment setup and is only advised if you need to replicate a production-like environment.

Using registry images
You can locally run containers built from images of our back-end services that have already been merged and pushed to the Wikimedia docker-registry. This allows you to run functions locally without having to clone our back-end repositories and build the images yourself.

Copy the contents of the  block in WikiLambda's   file to the analogous   block in your   file. Replace the  entries in the stanza you just copied with the latest builds from the Docker registry for the orchestrator and the evaluator.

Then run the set of containers. Once everything is up, you should be able to list your containers and see all of them with a running status.
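The exact commands were lost from this page; with the standard MediaWiki-Docker setup they are usually the following (a sketch — check the MediaWiki-Docker instructions for your checkout):

```
# Build and start all the containers defined in docker-compose.yml and the override file
docker compose up -d

# List the containers and check that they all have a running status
docker compose ps
```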

Finally, get the full URL of your function orchestrator by inspecting this data again and gathering the relevant details. If your MediaWiki checkout is called "core", the container name (and hence the URL) will most likely include that prefix, but make sure it matches the name actually generated by Docker Compose.

Edit the LocalSettings.php file in your MediaWiki installation folder and add:
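The snippet itself was lost from this page. A minimal sketch, assuming the orchestrator location variable from WikiLambda's documentation and the container name derived above (both names are assumptions — verify them against your own setup):

```
// Hypothetical example -- verify the variable and container name for your install.
$wgWikiLambdaOrchestratorLocation = 'core-function-orchestrator-1:6254';
```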

Test your installation
You can automatically test your installation by editing your local copy of  to remove , and running the PHPUnit test suite as described in the MediaWiki install instructions, or by using the following command:

You can manually evaluate a function call by navigating to , selecting a function from the wiki, and choosing your inputs. If successful, the function response will be presented, having traversed the orchestrator and the evaluator to be run in one of the code executors.

You can also visit http://localhost:8080/wiki/Special:ApiSandbox and try out one or more tests as follows:


 * In the  drop-down menu, select
 * Switch from the  to the   section on the left sidebar
 * Click on {} Examples, and select any of the listed examples
 * Click the blue  button
 * In the Results box, look for the word "success".

🎉 Congratulations! 🎉

Using local images
If you want to modify and run our back-end services, you will have to clone our repositories and build the images locally.

1. Requirements

 * Ensure that you are using Docker Compose version 2.x or above. If you have 1.x you will need to upgrade or, if using Docker Desktop, you may be able to enable the "Use Docker Compose V2" preference.
 * Install Blubber.

2. Clone the repositories
Clone the back-end services locally. You can clone both of them, or only the one that you wish to alter.

For the function-orchestrator service:

For the function-evaluator service:
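The clone commands were lost here; the services live in Wikimedia Gerrit, so they are along these lines:

```
git clone https://gerrit.wikimedia.org/r/mediawiki/services/function-orchestrator
git clone https://gerrit.wikimedia.org/r/mediawiki/services/function-evaluator
```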

3. Build the local images using blubber
Blubber is an abstraction for container build configurations that outputs Dockerfiles. To build local images for our services we need to run Blubber and then build the images using the output Dockerfile. We will then use the newly created image names to tell our MediaWiki docker-compose where to build the services from.

To build the function-orchestrator docker image, go into the repo root directory and run the Blubber build. To build the function-evaluator docker image, do the same as for the orchestrator; just remember to alter the image name. Afterwards, listing your docker images should show both newly built images, tagged as latest.
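The build commands were lost here; following the same Blubber pattern used in the logging section later on this page, they are roughly:

```
# In the function-orchestrator repo root:
blubber .pipeline/blubber.yaml development | docker build -t local-orchestrator -f - .

# In the function-evaluator repo root:
blubber .pipeline/blubber.yaml development | docker build -t local-evaluator -f - .

# Both images should now appear, tagged as latest:
docker images | grep local-
```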

4. Build the containers
Alter the docker-compose.override.yml file in your MediaWiki installation directory to change the image from which your back-end service(s) are built. The image field will need to be set to the image name given in step 3. Do this for both services, or only for the one you wish to alter.

For example, if you want to use both locally built images, your docker-compose.override.yml file should be:
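The file contents were lost from this page; reconstructed from the commented-out stanza that appears in the Kubernetes notes later on this page, it would be roughly:

```
version: '3.7'
services:
  function-orchestrator:
    image: local-orchestrator:latest
    ports:
      - 6254:6254
  function-evaluator:
    image: local-evaluator:latest
    ports:
      - 6927:6927
```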

You are now ready to test your installation following the steps above.

Logging from the back-end services
While developing or modifying any of the back-end services, you might want to log messages and use them for debugging. You can do that simply by using console.log or console.error anywhere in the code, but to see these outputs, you must rebuild the project, reinitialize the docker containers, and view docker logs.

For example, after adding statements in  or in its submodule , run blubber in the function-orchestrator root directory. Once the image is rebuilt, restart your MediaWiki docker containers, then view the logs.

To log exceptions from the python executor (function-evaluator/executors/python3/executor.py), do the same: rebuild the function-evaluator with blubber and reinitialize the MediaWiki docker containers.
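The restart and log commands were lost here; with docker compose they are typically as follows (service names as defined in your docker-compose.override.yml):

```
# Recreate the containers so the rebuilt image is picked up
docker compose up -d --force-recreate function-orchestrator

# Follow the orchestrator logs
docker compose logs -f function-orchestrator
```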

Testing the back-end services
To test function-orchestrator you can use npm:

To test function-evaluator you will need to use the docker variants that run in CI:
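The commands were lost here; for the orchestrator the standard npm flow presumably applies (an assumption — check the repository's package.json scripts):

```
# In the function-orchestrator repo root:
npm install
npm test
```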

Service deployment to production-like Kubernetes
For expert development, you might need to replicate a production-like environment on your local machine using Kubernetes. For that you will need to install:


 * Docker: https://docs.docker.com/get-docker/
 * Minikube: https://minikube.sigs.k8s.io/docs/start/
 * Helm: https://helm.sh/docs/intro/install/

You will also need to clone the following repositories:


 * operations/deployment-charts


 * releng/local-charts

As with the Docker setup, this environment also allows us to create the service containers either from the official images pushed to the registry or from images built locally. Let's review both options.

Using registry images
Local-charts is a tool to run a MediaWiki ecosystem in Minikube using the Helm charts from the deployment-charts repo. First, run the installation script from the repo's root directory and start minikube.

Edit the file  and add the following at the end of the list of dependencies:

If the back-end service charts are not available in the ChartMuseum, you can also point to local charts by replacing the repository with the relative path of the function-orchestrator and function-evaluator charts from the deployment-charts repo.

Now create a  file from the example file  and edit it. Add the newly added services to the global.enabled section, and set all the other services to false, as we won't be needing them.

You are now ready to start the Kubernetes cluster in minikube. Once your services are deployed, test that everything went well. 🎉 Success! 🎉
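As a sketch of the resulting values file, based on the description above (the exact service names and structure are assumptions — check them against the example file):

```
global:
  enabled:
    function-orchestrator: true
    function-evaluator: true
    # ...and set every other listed service to false, as we won't be needing them
```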

Create new service chart
operations/deployment-charts/README.md

If you want to create a new chart, use the create_new_service.sh script, test it, and upload a change to Gerrit. Then wait for a review.

Running create_new_service.sh for function-orchestrator:

port > 6254
name > function-orchestrator
image label > wikimedia/mediawiki-services-function-orchestrator

Running create_new_service.sh for function-evaluator:

port > 6927
name > function-evaluator
image label > wikimedia/mediawiki-services-function-evaluator

Changes in the charts:

function-orchestrator:
 * main_app.version: ff7fb9f7ccdd9d9f9e635ccbc0269ae76cd828b9
 * main_app.readiness_probe.httpGet.path: /_info
 * tls.public_port: 4970

function-evaluator:
 * main_app.version: fffdeacd512acc72dc7f73b1feaf988dcfed198a
 * main_app.readiness_probe.httpGet.path: /_info
 * tls.public_port: 4971

Now you are ready to run the services using releng/local-charts.

Test charts using local-charts
Local-charts tutorial: https://wikitech.wikimedia.org/wiki/Local-charts/Tutorial

Follow this to test the deployment chart for function-orchestrator: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial

# Deploy with the example values:
make deploy values=values.example.yaml

# Or with values.yaml:
cp values.example.yaml values.yaml
make deploy
minikube ip
kubectl get svc

# Let's name our release:
make deploy release=wikifunctions

Delete all:

# Without naming the release:
helm del default

# With the name:
helm del wikifunctions

To make changes and update the deployment, do:

# For an unnamed release:
make update

# For a named release:
make update release=wikifunctions

Test the deployment:

minikube ip
> 192.168.58.2

kubectl get svc
> NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
> function-orchestrator-default  NodePort    10.111.200.0                 6254:30642/TCP   18s

curl 192.168.58.2:30642/_info
> {"name":"function-orchestrator","version":"0.0.1","description":"A Wikifunctions service to orchestrate WikiLambda function executors","home":"http://meta.wikimedia.org/wiki/Abstract%20Wikipedia"}

This configuration allows us to deploy the function-orchestrator and function-evaluator images pushed to the Wikimedia registry, but what if we want to test local changes without having to merge and push to the remote registry?

This way we can:

 * Use local images of function-orchestrator and function-evaluator, which we can alter, deploy and test inside the pods
 * Use deployment-charts to edit config parameters and add new environment variables to the services
 * Configure our locally running MediaWiki installation, altering the config variables so that we point at the services running in Kubernetes instead of the ones running in docker containers
 * [QUESTION] Can the function-orchestrator service deployed inside Kubernetes make GET requests to the MediaWiki installation running on docker on the host machine?

To use local images for the services instead of the ones in the registry, modify deployment-charts  for each service.

For development purposes:

docker:
  # registry: docker-registry.wikimedia.org
  registry: localhost:5000
  pull_policy: IfNotPresent

and:

main_app:
  # image: wikimedia/mediawiki-services-function-orchestrator
  # version: ff7fb9f7ccdd9d9f9e635ccbc0269ae76cd828b9
  # we use:
  image: local-orchestrator
  version: latest

And the same changes for function-evaluator.


Use local docker

https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube

As the README describes, you can reuse the Docker daemon from Minikube with eval $(minikube docker-env).

So to use an image without uploading it, you can follow these steps:

 * Set the environment variables with eval $(minikube docker-env)
 * Build the image with the Docker daemon of Minikube (e.g. docker build -t my-image .)
 * Set the image in the pod spec like the build tag (e.g. my-image)
 * Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image.

Important note: you have to run eval $(minikube docker-env) on each terminal you want to use, since it only sets the environment variables for the current shell session.

# From the function-orchestrator directory:
cd function-orchestrator

# Set environment variables so that minikube and the host share the same docker daemon:
eval $(minikube docker-env)

# Build the image with the Docker daemon of minikube:
blubber .pipeline/blubber.yaml development | docker build -t local-evaluator -f - .

# Or, if the Dockerfile has already been created, just do:
docker build -t local-evaluator .
Now, can we deploy local images using deployment-charts? No: it requires that we specify the registry URL. So:

# Okay, let's create a local registry:
docker run -d -p 5000:5000 --name registry registry:2

# And tag our local images:
docker image tag local-orchestrator localhost:5000/local-orchestrator
docker image tag local-evaluator localhost:5000/local-evaluator


Success! Both services are responding:

 * Evaluator: curl 192.168.58.2:31318/_info
 * Orchestrator: curl 192.168.58.2:30741/_info

# Now let's look at the container logs. Find the names of the containers that we want to log:
docker ps -a | grep node | grep function-*

# And log:
docker logs -f <container-name>

So that we don't have to re-tag images every time we generate them, and change the image tag in deployment-charts, we are going to edit the deployment-charts template so that helm always creates a new container for function-orchestrator whenever we run an update.

For this, we have added the following in the function-orchestrator  template:

spec:
  template:
    metadata:
      annotations:
        # FIXME: Remove 'rollme', development only: force roll every time we do helm update
        rollme:

And we can set this variable in :

function-orchestrator:
  config:
    development: true

Setting the development value to true forces helm update to always roll the function-orchestrator image; setting it to false rolls the image only if the chart or values have changed. This is useful for development, where we don't want to keep changing the tags and related parameters in deployment-charts, but want to be able to make local changes to function-orchestrator, create the local image, tag it, and redeploy it inside minikube.

Example of a cURL request for testing:

curl 192.168.58.2:30316/1/v1/evaluate -X POST -d '{ "zobject": { "Z1K1": "Z7", "Z7K1": "Z885", "Z885K1": "Z502" }, "doValidate": true}' -H 'Content-Type: application/json'

Can we connect from minikube to localhost? https://stackoverflow.com/questions/55164223/access-mysql-running-on-localhost-from-minikube

Yes: minikube directly creates two host names, minikube and host.minikube.internal, which we can use from INSIDE the Kubernetes cluster. So, in the function-orchestrator variables in deployment-charts:

config:
  public:
    FUNCTION_EVALUATOR_URL: http://minikube:31318/1/v1/evaluate/
    WIKI_URL: http://host.minikube.internal:8080/w/api.php

And WikiLambda will need to have the URL of the orchestrator set accordingly:

Finally, we need to be able to make requests to that IP from the MediaWiki Docker Compose setup. If we have minikube running with docker (we should), we will have a network called minikube already created:

docker network ls

NETWORK ID    NAME           DRIVER    SCOPE
f9c9960881a6  bridge         bridge    local
b7e0ca8c4fcd  core_default   bridge    local
00911e6f166e  host           host      local
c9521850bca9  minikube       bridge    local
2c7f44adeb7d  none           null      local

We can inspect the network data with:

docker network inspect minikube

where we can see which containers are attached to this network. We need our MediaWiki docker containers to connect to that network directly, for which we edit the  file in our  directory and add the following:

version: '3.7'

# We can also comment out the previously used services here,
# because we are going to use the kubernetes ones from now on:
#
# services:
#   function-orchestrator:
#     image: local-orchestrator:latest
#     ports:
#       - 6254:6254
#   function-evaluator:
#     image: local-evaluator:latest
#     ports:
#       - 6927:6927

# Make the containers connect to the minikube network by default:
networks:
  default:
    name: minikube

Once the containers are run again, we can inspect the minikube network again, and we should see our mediawiki containers as part of the Containers map:

[
  {
    "Name": "minikube",
    "Id": "c9521850bca9ec76afc26dd27eaf4df1d8a7a24a91fa79aecf58721c9fb11250",
    "Created": "2022-01-26T13:30:06.035450274+01:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [ { "Subnet": "192.168.58.0/24", "Gateway": "192.168.58.1" } ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": { "Network": "" },
    "ConfigOnly": false,
    "Containers": {
      "1db44ceb36e8...": { "Name": "core-mediawiki-1", "IPv4Address": "192.168.58.5/24", ... },
      "71e83dc2f6f2...": { "Name": "minikube", "IPv4Address": "192.168.58.2/24", ... },
      "cae02884e78e...": { "Name": "core-mediawiki-jobrunner-1", "IPv4Address": "192.168.58.4/24", ... },
      "ff33fc0cc029...": { "Name": "core-mediawiki-web-1", "IPv4Address": "192.168.58.3/24", ... }
    },
    ...
  }
]

From inside any of these containers, we should be able to successfully ping the IPs of the others.
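To pull just the name-to-IP mapping out of `docker network inspect` output, you can pipe it through a short filter. A sketch, run here against an embedded sample of the structure above:

```shell
# Abridged sample of the structure returned by `docker network inspect minikube`
SAMPLE='[{"Name":"minikube","Containers":{"1db4":{"Name":"core-mediawiki-1","IPv4Address":"192.168.58.5/24"},"71e8":{"Name":"minikube","IPv4Address":"192.168.58.2/24"}}}]'

# Print a "name ip" line for every container attached to the network
echo "$SAMPLE" | python3 -c '
import json, sys
for c in json.load(sys.stdin)[0]["Containers"].values():
    print(c["Name"], c["IPv4Address"])
'
```

On a live setup you would pipe `docker network inspect minikube` into the same filter instead of the sample.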

How to test?

 * Go to mediawiki/core and run the mediawiki web containers with 
 * Go to releng/local-charts and run the services on kubernetes: 
 * See the function-orchestrator logs
 * Get the IP and port of the function orchestrator
 * Go to mediawiki/core/LocalSettings.php and make sure that the orchestrator path is correct
 * Go to the API sandbox and make a call to wikilambda_function_call

Expected outcomes:

 * You should see logs being printed with the API call
 * The API Sandbox should receive a successful response

Things to solve:

 * [x] Function-orchestrator to communicate with function-evaluator
 * [x] Function-evaluator to return stuff to function-orchestrator
 * [x] Function-orchestrator to know how to access mediawiki installation
 * [x] Mediawiki to be able to communicate with function-orchestrator
 * [ ] Service requests?
 * https://phabricator.wikimedia.org/project/profile/1305/
 * https://phabricator.wikimedia.org/T297314
 * [ ] Function-evaluator to be read-only
 * [ ] Function-evaluator to not have network access
 * [ ] Label function-evaluator and function-orchestrator (latest? stable?)
 * function-evaluator/orchestrator:./pipelines/config.yaml define tag latest
 * https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial#Publishing_Docker_Images
 * Need to add publish pipeline to integration/config:jjb/project-pipelines.yaml
 * Any other configuration needed for integration/config:zuul/layout.yaml ???
 * Currently ${setup.tag} https://wikitech.wikimedia.org/wiki/PipelineLib/Reference#Setup
 * [x] Change info output from function-evaluator

Production TODO

 * [x] Create production services helm charts
 * [ ] Ask SRE to set up a service proxy for it
 * Rationale: https://phabricator.wikimedia.org/T244843
 * Proxies setup: operations/puppet.git:/hieradata/common/profile/services_proxy/envoy.yaml
 * [ ] Set wikifunctions-evaluator and wikifunctions-orchestrator hostnames/ips
 * mediawiki-config/wmf-config/ProductionServices.php:109
 * these are the common services for all clusters
 * Kubernetes: Add a new service:
 * https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
 * [ ] Service ports
 * Ensure the service has its ports registered at: Service ports
 * https://wikitech.wikimedia.org/wiki/Kubernetes/Service_ports
 * [ ] Create deployment user/tokens in the puppet private and public repos
 * hieradata/common/profile/kubernetes/deployment_server.yaml edit

Overall steps for production deployment

 * [ ] Read https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
 * [ ] Register the public ports that your services will use in https://wikitech.wikimedia.org/wiki/Service_ports
 * [ ] Use operations/deployment-charts/create_new_service.sh script to generate the chart(s) for your new service(s)
 * Follow instructions from the section above Create new service chart
 * Test your charts using minikube, kubectl and helm
 * Can use releng/local-charts to test
 * Add function orchestrator and function evaluator to values.yaml
 * Change mariadb repository URL (bitnami is a possibility)
 * Do
 * Follow instructions from the section above Test charts using local-charts

Useful resources
TODO: Filter and order reference material.
TODO: Take reference material to a more generalist help document.

Wikimedia clusters:

 * https://wikitech.wikimedia.org/wiki/Clusters
 * Core services: eqiad and codfw
 * Edge caching: esams, ulsfo, eqsin, drmrs

Beta Cluster:
Wikifunctions URL: https://wikifunctions.beta.wmflabs.org/wiki/MediaWiki:Main_Pag


 * In Cloud VPS, Beta Cluster is deployment-prep
 * Project page: https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep
 * Deployment prep: https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/Overview
 * URL https://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page
 * . .beta.wmflabs.org
 * Logs: Various server logs are written to the remote syslog server deployment-mwlog01 in /srv/mw-log
 * Logs in production: https://wikitech.wikimedia.org/wiki/Logs

Cloud Services:

 * Glossary: https://wikitech.wikimedia.org/wiki/Help:Glossary
 * General landing page https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS
 * How to access our instances: https://wikitech.wikimedia.org/wiki/Help:Accessing_Cloud_VPS_instances (Beta Cluster is deployment-prep)
 * FAQ for the Web control system, Horizon: https://wikitech.wikimedia.org/wiki/Help:Horizon_FAQ

Accessing Cloud Services:

 * Created another ssh key with Key<> pass
 * Saved this key into .ssh/wikitech, Wikitech settings and Gerrit settings

Production services:

 * How to add a new service: https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
 * Example helm chart: https://gerrit.wikimedia.org/r/plugins/gitiles/operations/deployment-charts/+/refs/heads/master/charts/chromium-render/values.yaml

About Horizon

 * Official tool for managing OpenStack deploys
 * The node definitions for a VPS instances are configured via OpenStack Horizon user interface
 * https://wikitech.wikimedia.org/wiki/Help:Horizon_FAQ
 * https://horizon.wikimedia.org/project/
 * Access credentials (same as wikitech)
 * Genoveva Galarza
 * Tech<>
 * 2FA for wikitech

About Puppet

 * https://wikitech.wikimedia.org/wiki/Puppet
 * Puppet is our configuration management system.
 * Puppet is not being used as a deployment system at Wikimedia
 * Public puppet repo https://gerrit.wikimedia.org/r/p/operations/puppet
 * Puppet hiera: https://wikitech.wikimedia.org/wiki/Puppet_Hiera
 * Configuration variables for puppet to be stored outside of manifests
 * Hiera is a powerful tool to decouple data from code in puppet.
 * Rules:
 * The code should be organized in modules, profiles and roles, where
 * Modules should be basic units of functionality (e.g. "set up, configure and run HHVM")
 * Profiles are collection of resources from modules that represent a high-level functionality ("a webserver able to serve mediawiki"),
 * Roles represent a collective function of one class of servers (e.g. "A mediawiki appserver for the API cluster")
 * Any node declaration must only include one role, invoked with the role function. No exceptions to this rule. If you need to include two roles in a node, that means that's another role including the two.
 * Puppet manifests
 * operations/puppet/manifests/site.pp

Charts Museum

 * Docs: https://wikitech.wikimedia.org/wiki/ChartMuseum
 * Repository URL: https://helm-charts.wikimedia.org/stable/
 * All stable charts: https://helm-charts.wikimedia.org/api/stable/charts

Blubber and PipeLine

 * About Blubber https://wikitech.wikimedia.org/wiki/Blubber/Tutorial
 * About PipeLine https://wikitech.wikimedia.org/wiki/PipelineLib/Tutorial
 * How to configure CI for your project: https://wikitech.wikimedia.org/wiki/PipelineLib/Guides/How_to_configure_CI_for_your_project

Additional links
Starting point: https://wikitech.wikimedia.org/wiki/Kubernetes
General Kubernetes deployment documentation: https://wikitech.wikimedia.org/wiki/Kubernetes/Deployments
To deploy a new service: https://phabricator.wikimedia.org/project/profile/1305/
 * Deployment pipeline:
 * Uses PipelineLib to quickly build images with Blubber, integrate those images with Helm, and deploy to Kubernetes with Helmfile
 * https://wikitech.wikimedia.org/wiki/Deployment_pipeline
 * https://wikitech.wikimedia.org/wiki/Deployment_pipeline#/media/File:Containerized_continuous_delivery_2017_concept.png
 * Tech talk https://www.youtube.com/watch?v=i0FTcG7PxzI
 * Migration tutorial: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial