Extension:WikiLambda/Development environment

Create new service chart
operations/deployment-charts/README.md

If you want to create a new chart, use the create_new_service.sh script, test it, and upload a change to Gerrit. Then wait for a review.

Running create_new_service.sh for function-orchestrator:

port > 6254
name > function-orchestrator
image label > wikimedia/mediawiki-services-function-orchestrator

Running create_new_service.sh for function-evaluator:

port > 6927
name > function-evaluator
image label > wikimedia/mediawiki-services-function-evaluator

Changes in the charts:

function-orchestrator:
 * main_app.version: ff7fb9f7ccdd9d9f9e635ccbc0269ae76cd828b9
 * main_app.readiness_probe.httpGet.path: /_info
 * tls.public_port: 4970

function-evaluator:
 * main_app.version: fffdeacd512acc72dc7f73b1feaf988dcfed198a
 * main_app.readiness_probe.httpGet.path: /_info
 * tls.public_port: 4971
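Written out as a chart values file, the function-orchestrator changes above would look roughly like this (the exact file path inside operations/deployment-charts is not shown here, so treat the location as an assumption):

```yaml
# Sketch of the function-orchestrator chart's values.yaml (location assumed),
# nesting the dotted keys listed above
main_app:
  version: ff7fb9f7ccdd9d9f9e635ccbc0269ae76cd828b9
  readiness_probe:
    httpGet:
      path: /_info
tls:
  public_port: 4970
```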

Now you are ready to run using releng/local-charts

Test charts using local-charts
Local-charts tutorial: https://wikitech.wikimedia.org/wiki/Local-charts/Tutorial

I followed this tutorial to test the deployment chart for function-orchestrator: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial

make deploy values=values.example.yaml

Or with values.yaml:

cp values.example.yaml values.yaml
make deploy
minikube ip
kubectl get svc

Let's name our release:

make deploy release=wikifunctions

Delete all. Without naming the release:

helm del default

Or with the name:

helm del wikifunctions

To make changes and update the deployment, for an unnamed release do:

make update

And for a named release:

make update release=wikifunctions

Test deployment:

minikube ip
> 192.168.58.2

kubectl get svc
> NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
> function-orchestrator-default  NodePort    10.111.200.0                 6254:30642/TCP   18s

curl 192.168.58.2:30642/_info
> {"name":"function-orchestrator","version":"0.0.1","description":"A Wikifunctions service to orchestrate WikiLambda function executors","home":"http://meta.wikimedia.org/wiki/Abstract%20Wikipedia"}

This configuration allows us to deploy the function-orchestrator and function-evaluator images pushed to the Wikimedia registry, identified by the versions above. But what if we want to test local changes without having to merge and push to the remote registry?

This way we could:
 * Use local images of function-orchestrator and function-evaluator, which we can alter, deploy and test inside the pods
 * Use deployment-charts to edit config parameters and add new environment variables to the services
 * Configure our locally running installation of mediawiki and alter the config variables so that we can point at the services running in Kubernetes instead of the ones running in docker containers
 * [QUESTION] Can we have the function-orchestrator service deployed inside of kubernetes make GET requests to the mediawiki installation running over docker on the host machine?

To use local images for the services instead of the ones in the registry, modify deployment-charts for each service.

For development purposes:

docker:
  # registry: docker-registry.wikimedia.org
  registry: localhost:5000
  pull_policy: IfNotPresent

and:

main_app:
  # image: wikimedia/mediawiki-services-function-orchestrator
  # version: ff7fb9f7ccdd9d9f9e635ccbc0269ae76cd828b9
  # Instead, we use:
  image: local-orchestrator
  version: latest

And the same changes for function-evaluator.

Continue:

Use local docker

https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube

As the README describes, you can reuse the Docker daemon from Minikube with eval $(minikube docker-env).

So to use an image without uploading it, you can follow these steps:

 * Set the environment variables with eval $(minikube docker-env)
 * Build the image with the Docker daemon of Minikube (e.g. docker build -t my-image .)
 * Set the image in the pod spec like the build tag (e.g. my-image)
 * Set the imagePullPolicy to Never, otherwise Kubernetes will try to download the image.

Important note: You have to run eval $(minikube docker-env) on each terminal you want to use, since it only sets the environment variables for the current shell session.

From the function-orchestrator directory:

cd function-orchestrator

Set environment variables so that minikube and the host share the same docker daemon:

eval $(minikube docker-env)

Build the image with the Docker daemon of minikube:

blubber .pipeline/blubber.yaml development | docker build -t local-evaluator -f - .

Or, if the Dockerfile has already been created, just do:

docker build -t local-evaluator .


Now, can we deploy local images using deployment-charts? NOPE, it requires that we specify the registry URL.

Okay, let's create a local registry:

docker run -d -p 5000:5000 --name registry registry:2

And tag our local images:

docker image tag local-orchestrator localhost:5000/local-orchestrator
docker image tag local-evaluator localhost:5000/local-evaluator

YES!! They are responding. Wohoooo.

 * Evaluator: curl 192.168.58.2:31318/_info
 * Orchestrator: curl 192.168.58.2:30741/_info

Now let's see if we can see the container logs. Find the names of the containers that we want to log:

docker ps -a | grep node | grep function-*

And log:

docker logs -f <container name>

So that we don't have to re-tag images every time we generate them, and change the image tag in deployment-charts, we are going to edit the deployment-charts template so that helm always creates a new container for function-orchestrator whenever we run an update.

For this, we have added in the function-orchestrator template:

spec:
  template:
    metadata:
      annotations:
        # FIXME: Remove 'rollme', development only: force roll every time we do helm update
        rollme:

And we can set this variable in the chart values:

function-orchestrator:
  config:
    development: true

Setting the development value to true will force helm update to always roll the function-orchestrator image; setting it to false will only roll if the chart or values have changed. This is useful for development, where we don't want to be changing the tags and all the related parameters in deployment-charts, but we do want to be able to make local changes to function-orchestrator, create the local image, tag it, and redeploy it inside minikube.
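The rollme value itself is elided above. A common Helm idiom for this (an assumption about the template, not necessarily what was committed here) is to render a random annotation value, gated on the development flag, so the pod template differs on every upgrade:

```yaml
spec:
  template:
    metadata:
      annotations:
        {{- if .Values.config.development }}
        # A fresh random value on each render makes the pod spec differ,
        # forcing Kubernetes to roll the deployment on every helm upgrade.
        rollme: {{ randAlphaNum 5 | quote }}
        {{- end }}
```

randAlphaNum and quote are standard Sprig functions available in Helm templates.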

Example of a CURL request for testing:

curl 192.168.58.2:30316/1/v1/evaluate -X POST \
  -d '{ "zobject": { "Z1K1": "Z7", "Z7K1": "Z885", "Z885K1": "Z502" }, "doValidate": true }' \
  -H 'Content-Type: application/json'

Can we connect from minikube to localhost? https://stackoverflow.com/questions/55164223/access-mysql-running-on-localhost-from-minikube

YES, minikube directly creates two host names, minikube and host.minikube.internal, which means that we can use these names from INSIDE the kubernetes cluster. So, in the function-orchestrator variables in deployment-charts:

config:
  public:
    FUNCTION_EVALUATOR_URL: http://minikube:31318/1/v1/evaluate/
    WIKI_URL: http://host.minikube.internal:8080/w/api.php

And WikiLambda will need to have the URL of the orchestrator this way:
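The exact WikiLambda setting is not shown here; as an illustrative sketch only (the variable name and value format are assumptions — verify against the extension's extension.json), a LocalSettings.php entry could look like:

```php
<?php
// Assumption: illustrative only — check the WikiLambda extension's
// extension.json for the real configuration variable name.
// Points MediaWiki (running in docker-compose on the minikube network)
// at the orchestrator's NodePort inside the cluster.
$wgWikiLambdaOrchestratorLocation = 'minikube:30741';
```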

Finally, we need to be able to make requests to that IP from the mediawiki docker-compose setup. If we have minikube running with docker (we should), we will have a network called minikube already created:

docker network ls

NETWORK ID    NAME           DRIVER    SCOPE
f9c9960881a6  bridge         bridge    local
b7e0ca8c4fcd  core_default   bridge    local
00911e6f166e  host           host      local
c9521850bca9  minikube       bridge    local
2c7f44adeb7d  none           null      local

We can inspect the network data with:

docker network inspect minikube

Where we can see what containers are attached to this network. We would need our mediawiki docker containers to connect to that network directly, for which we need to edit the docker-compose override file in our mediawiki/core directory and add the following:

version: '3.7'

# We can also comment out the previously used services here,
# because we are going to use the kubernetes ones from now on
#
# services:
#   function-orchestrator:
#     image: local-orchestrator:latest
#     ports:
#       - 6254:6254
#   function-evaluator:
#     image: local-evaluator:latest
#     ports:
#       - 6927:6927

# Make the containers connect to the minikube network by default
networks:
  default:
    name: minikube

Once the containers are run again, we can do docker network inspect minikube and we should see how our mediawiki containers are now part of the Containers map:

[
    {
        "Name": "minikube",
        "Id": "c9521850bca9ec76afc26dd27eaf4df1d8a7a24a91fa79aecf58721c9fb11250",
        "Created": "2022-01-26T13:30:06.035450274+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.58.0/24",
                    "Gateway": "192.168.58.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "1db44ceb36e8500fe47e907196f23392219e46c4d4b4246a7a6431607212ef33": {
                "Name": "core-mediawiki-1",
                "EndpointID": "fa429f8daf17ca903a88c51213f828e261588f024e6da2d6c3bc1cd0184d2ab4",
                "MacAddress": "02:42:c0:a8:3a:05",
                "IPv4Address": "192.168.58.5/24",
                "IPv6Address": ""
            },
            "71e83dc2f6f2ee887de0995f52a050fe9f5ce77bc622ed9bfb58aaa385d5776c": {
                "Name": "minikube",
                "EndpointID": "640126a25a270fb71807308cf1af8377af4e2f6c3f54246e17bb297bb297379a",
                "MacAddress": "02:42:c0:a8:3a:02",
                "IPv4Address": "192.168.58.2/24",
                "IPv6Address": ""
            },
            "cae02884e78ec7c4e6090dcc452cdde1c5e840fbb7c2929d42cd6bad7ec9d8c9": {
                "Name": "core-mediawiki-jobrunner-1",
                "EndpointID": "82792dd0267ff1d1b134dfcf4498f3d4080b456f9a94607bc620816eefb911c6",
                "MacAddress": "02:42:c0:a8:3a:04",
                "IPv4Address": "192.168.58.4/24",
                "IPv6Address": ""
            },
            "ff33fc0cc0297eb80b99771378132e6f927e9ced8a8782b80cf8a25c2bfdc205": {
                "Name": "core-mediawiki-web-1",
                "EndpointID": "fab671c0eadb0584cd0117d1e0282d04e38179f5fca969670cbfd138f3c9c887",
                "MacAddress": "02:42:c0:a8:3a:03",
                "IPv4Address": "192.168.58.3/24",
                "IPv6Address": ""
            }
        },
        ...
    }
]

From inside any of these containers, we should be able to successfully ping the IPs of the others.

How to test?

 1) Go to mediawiki/core and run the mediawiki web containers
 2) Go to releng/local-charts and run the services on kubernetes
 3) See the function-orchestrator logs
 4) Get the IP and port of function-orchestrator
 5) Go to mediawiki/core/LocalSettings.php and make sure that the orchestrator path is correct
 6) Go to the API sandbox and make a call to wikilambda_function_call

Expected outcomes:
 * You should see logs being printed with the API call
 * The API Sandbox should receive a successful response

Things to solve:

 * [x] Function-orchestrator to communicate with function-evaluator
 * [x] Function-evaluator to return stuff to function-orchestrator
 * [x] Function-orchestrator to know how to access mediawiki installation
 * [x] Mediawiki to be able to communicate with function-orchestrator
 * [ ] Service requests?
 * https://phabricator.wikimedia.org/project/profile/1305/
 * https://phabricator.wikimedia.org/T297314
 * [ ] Function-evaluator to be read-only
 * [ ] Function-evaluator to not have network access
 * [ ] Label function-evaluator and function-orchestrator (latest? stable?)
 * function-evaluator/orchestrator:./pipelines/config.yaml define tag latest
 * https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial#Publishing_Docker_Images
 * Need to add publish pipeline to integration/config:jjb/project-pipelines.yaml
 * Any other configuration needed for integration/config:zuul/layout.yaml ???
 * Currently ${setup.tag} https://wikitech.wikimedia.org/wiki/PipelineLib/Reference#Setup
 * [x] Change info output from function-evaluator

Production TODO

 * [x] Create production services helm charts
 * [ ] Ask SRE to set up a service proxy for it
 * Rationale: https://phabricator.wikimedia.org/T244843
 * Proxies setup: operations/puppet.git:/hieradata/common/profile/services_proxy/envoy.yaml
 * [ ] Set wikifunctions-evaluator and wikifunctions-orchestrator hostnames/ips
 * mediawiki-config/wmf-config/ProductionServices.php:109
 * these are the common services for all clusters
 * Kubernetes: Add a new service:
 * https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
 * [ ] Service ports
 * Ensure the service has its ports registered at Service ports:
 * https://wikitech.wikimedia.org/wiki/Kubernetes/Service_ports
 * [ ] Create deployment user/tokens in the puppet private and public repos
 * hieradata/common/profile/kubernetes/deployment_server.yaml edit

Overall steps for production deployment

 * [ ] Read https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
 * [ ] Register the public ports that your services will use in https://wikitech.wikimedia.org/wiki/Service_ports
 * [ ] Use operations/deployment-charts/create_new_service.sh script to generate the chart(s) for your new service(s)
 * Follow instructions from the section above Create new service chart
 * Test your charts using minikube, kubectl and helm
 * Can use releng/local-charts to test
 * Add function orchestrator and function evaluator to values.yaml
 * Change mariadb repository URL (bitnami is a possibility)
 * Do
 * Follow instructions from the section above Test charts using local-charts

Useful resources
TODO: Filter and order reference material
TODO: Take reference material to a more generalist help document

Wikimedia clusters:

 * https://wikitech.wikimedia.org/wiki/Clusters
 * Core services: eqiad and codfw
 * Edge caching: esams, ulsfo, eqsin, drmrs

Beta Cluster:
Wikifunctions URL: https://wikifunctions.beta.wmflabs.org/wiki/MediaWiki:Main_Pag


 * In Cloud VPS, Beta Cluster is deployment-prep
 * Project page: https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep
 * Deployment prep: https://wikitech.wikimedia.org/wiki/Nova_Resource:Deployment-prep/Overview
 * URL https://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page
 * URL pattern: *.beta.wmflabs.org
 * Logs: Various server logs are written to the remote syslog server deployment-mwlog01 in /srv/mw-log
 * Logs in production: https://wikitech.wikimedia.org/wiki/Logs

Cloud Services:

 * Glossary: https://wikitech.wikimedia.org/wiki/Help:Glossary
 * General landing page https://wikitech.wikimedia.org/wiki/Portal:Cloud_VPS
 * How to access our instances: https://wikitech.wikimedia.org/wiki/Help:Accessing_Cloud_VPS_instances (Beta Cluster is deployment-prep)
 * FAQ for the Web control system, Horizon: https://wikitech.wikimedia.org/wiki/Help:Horizon_FAQ

Accessing Cloud Services:

 * Created another ssh key with Key<> pass
 * Saved this key into .ssh/wikitech, Wikitech settings and Gerrit settings

Production services:

 * How to add a new service: https://wikitech.wikimedia.org/wiki/Kubernetes#Add_a_new_service
 * Example helm chart: https://gerrit.wikimedia.org/r/plugins/gitiles/operations/deployment-charts/+/refs/heads/master/charts/chromium-render/values.yaml

About Horizon

 * Official tool for managing OpenStack deploys
 * The node definitions for VPS instances are configured via the OpenStack Horizon user interface
 * https://wikitech.wikimedia.org/wiki/Help:Horizon_FAQ
 * https://horizon.wikimedia.org/project/
 * Access credentials (same as wikitech)
 * Genoveva Galarza
 * Tech<>
 * 2FA for wikitech

About Puppet

 * https://wikitech.wikimedia.org/wiki/Puppet
 * Puppet is our configuration management system.
 * Puppet is not being used as a deployment system at Wikimedia
 * Public puppet repo https://gerrit.wikimedia.org/r/p/operations/puppet
 * Puppet hiera: https://wikitech.wikimedia.org/wiki/Puppet_Hiera
 * Configuration variables for puppet to be stored outside of manifests
 * Hiera is a powerful tool to decouple data from code in puppet.
 * Rules:
 * The code should be organized in modules, profiles and roles, where
 * Modules should be basic units of functionality (e.g. "set up, configure and run HHVM")
 * Profiles are collections of resources from modules that represent a high-level functionality ("a webserver able to serve mediawiki"),
 * Roles represent a collective function of one class of servers (e.g. "A mediawiki appserver for the API cluster")
 * Any node declaration must only include one role, invoked with the role function. No exceptions to this rule. If you need to include two roles in a node, that means that's another role including the two.
 * Puppet manifests
 * operations/puppet/manifests/site.pp

Charts Museum

 * Docs: https://wikitech.wikimedia.org/wiki/ChartMuseum
 * Repository URL: https://helm-charts.wikimedia.org/stable/
 * All stable charts: https://helm-charts.wikimedia.org/api/stable/charts

Blubber and PipeLine

 * About Blubber https://wikitech.wikimedia.org/wiki/Blubber/Tutorial
 * About PipeLine https://wikitech.wikimedia.org/wiki/PipelineLib/Tutorial
 * How to configure CI for your project: https://wikitech.wikimedia.org/wiki/PipelineLib/Guides/How_to_configure_CI_for_your_project

Additional links
Starting point: https://wikitech.wikimedia.org/wiki/Kubernetes
General Kubernetes deployment documentation: https://wikitech.wikimedia.org/wiki/Kubernetes/Deployments
To deploy a new service, documentation: https://phabricator.wikimedia.org/project/profile/1305/
 * Deployment pipeline:
 * Uses PipelineLib to quickly build images with Blubber, integrate those images with Helm, and deploy to Kubernetes with Helmfile
 * https://wikitech.wikimedia.org/wiki/Deployment_pipeline
 * https://wikitech.wikimedia.org/wiki/Deployment_pipeline#/media/File:Containerized_continuous_delivery_2017_concept.png
 * Tech talk https://www.youtube.com/watch?v=i0FTcG7PxzI
 * Migration tutorial: https://wikitech.wikimedia.org/wiki/Deployment_pipeline/Migration/Tutorial