User:Kmenger/ToolLabsGuide

What is Tool Labs
Tool Labs is a reliable, scalable hosting environment for community developers working on tools and bots that help users maintain and use wikis. The cloud-based infrastructure was developed by the Wikimedia Foundation and is supported by a dedicated group of Wikimedia Foundation staff and volunteers.

Tool Labs is a part of the Labs project, which is designed to make it easier for developers and system administrators to try out improvements to Wikimedia infrastructure, including MediaWiki, and to do analytics and  bot work.

Rationale
Tool Labs was developed in response to the need to support external tools and their developers and maintainers. The system is designed to make it easy for maintainers to share responsibility for their tools and bots, which helps ensure that no useful tool gets ‘orphaned’ when one person needs a break. The system is designed to be reliable, scalable and simple to use, so that developers can hit the ground running and start coding.

Features
In addition to providing a well supported hosting environment, Tool Labs provides:
 * support for Web services, continuous bots, and scheduled tasks
 * access to replicated production databases
 * easily shared management of tool accounts, where tools and bots are stored
 * a grid engine for dispatching jobs
 * support for mosh, SSH, SFTP without complicated proxy setup
 * time-travel backups for short-term data recovery
 * version control via Gerrit and Git
 * support for Redis

Architecture and terminology
Tool Labs has essentially four components: the bastion hosts, the grid, the web cluster, and the databases. Users access the system via one of two Tool Labs projects: ‘tools’ or ‘toolsbeta’. To request an account on the ‘tools’ project, where most tool and bot development is hosted and maintained, please see Tools Access Request.

Bastion hosts, grid, web cluster, databases
The four main components of Tool Labs, in a nutshell:

Bastion hosts

The bastion host is where users log in to Tool Labs. Currently, Tool Labs has two bastion hosts:

 * tools-login.wmflabs.org
 * tools-dev.wmflabs.org

The two hosts are functionally identical, but we request that heavy processing (compiles, etc) be done only on tools-dev.wmflabs.org to keep interactive performance on tools-login.wmflabs.org snappy.

The grid

The Tool Labs grid, implemented with Open Grid Engine (the open-source fork of Sun Grid Engine), permits users to submit jobs from either a log-in account on the bastion host or from a Web service. Submitted jobs are added to a work queue, and the system finds a host to execute them. Jobs can be scheduled synchronously or asynchronously, continuously, or simply executed once. If a continuous job fails, the grid will automatically restart the job so that it keeps going. For more information about the grid, please see Submitting, managing and scheduling jobs on the grid.

The Web cluster

The Tool Labs Web cluster is fronted by a Web proxy, which supports SSL and is open to the Internet. Any of the servers in the cluster can serve any of the hosted Web tools, as Tool Labs uses a shared storage system; the proxy distributes requests among the Web servers. The cluster uses suPHP to run scripts and CGI, and will soon support WSGI. Note that individual tool accounts have both a ~/public_html/ and a ~/cgi-bin/ directory in the home directory for storing Web files. For more information, please see Web services.

The databases 

Tool Labs supports two sets of databases: the production replicas and user-created databases, which are used by individual tools. The production replicas follow the same setup as production, and the information that can be accessed from them is the same as that which normal registered users (i.e., without +sysop or other advanced permissions) can access on-wiki or via the API. Note that some data has been removed from the replicas for privacy reasons. User-created databases can be created by either a user or a tool on the replica servers or on a local ‘tools’ project database.

Projects: Tools and Toolsbeta
Like the rest of Labs, Tool Labs is organized into ‘projects’. Currently, Tool Labs consists of two projects: ‘tools’ and ‘toolsbeta’. The ‘tools’ project is where tools and bots are developed and maintained. The ‘toolsbeta’ project is used for experiments on the Tool Labs environment itself: things like new systems or experimental versions of system libraries that could affect other users. In general, every tool maintainer should work primarily on the ‘tools’ project, doing work on ‘toolsbeta’ only when changes to Tool Labs itself need to be tested to support their tool.
 * tools  project: https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools
 * toolsbeta  project: https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta

Instances
Developers working in Tool Labs do not have to create or set up virtual machines (i.e., Labs ‘instances’), as the Tool Labs project admins create and manage them. The term will come up in the Labs documentation; otherwise, don’t worry about it.

Tool Labs policies
All tools and bots developed and maintained on Tool Labs must adhere to the terms of use that will be available here when they are finalized:


 * Tool Labs > Rules

Specifically, tools must be:
 * Open source
 * Open data

Private information must be handled carefully, if at all. Note that private user information has been redacted from the replicated databases provided by the system.

As the Tool Labs environment is shared, we ask that you strive not to break things for others, and to be considerate when using system resources.

Individual wiki policies (these differ!)
When developing on Tool Labs, please adhere to the bot policies of the wikis your bot interacts with. Each wiki has its own guidelines and procedures for obtaining approval. The English Wikipedia, for example, requires that a bot be approved by the Bot Approvals Group before it is deployed, and that the bot account be marked with a ‘bot’ flag. See Wikipedia Bot policy for more information on the English Wikipedia.

For general information and guidelines, please see Bot policy.

Contact
We’d love to hear from you! You can find us here:


 * On IRC: #wikimedia-labs on Freenode. A great place to ask questions, get help, and meet other Tool Labs developers. See Help:IRC for more information.
 * Via mailing list: Labs-l@lists.wikimedia.org. A list for announcements and discussion related to the Wikimedia Labs project. You can find the archives here: http://lists.wikimedia.org/pipermail/labs-l/
 * Found a bug? Bugs can be posted to Bugzilla: https://bugzilla.wikimedia.org/enter_bug.cgi?product=Wikimedia%20Labs

Getting access to Tool Labs
Anyone can view the source code and the output of most tools and bots, and anyone can get an account of their own as well.

To access Tool Labs you need:


 * to create a Labs account, which provides shell access (you must upload an SSH key)
 * to request access to the 'tools' project

Steps for creating a Labs account, creating and uploading an SSH key, and requesting access to the 'tools' project are described in the next sections.

Creating a Labs account on Wikitech
Before you can access Tool Labs, you must create a Labs account on Wikitech, which is the general interface for everything Labs.

Sign up for a Labs account here: Request account (you will be asked to enter the new account's information)

The "Instance shell account name" you specify in the Create Account form will be your Unix username on all Labs projects. If you forget your username, you can always find it under Preferences > Instance shell account name.

Once you have created a Labs account you will be added to a list of users to be approved for shell access, which you can see here: Shell Access Requests.

Generating and uploading an SSH key
In order to access Labs servers using SSH, you must provide a public SSH key. Once you have created a Labs account, you can specify a public key on the 'OpenStack' tab of your Wikitech preferences.

Specify the SSH key here: OpenStack Preferences

Generating a key in Windows
To generate an SSH key in Windows:


1. Open PuTTYgen.
2. Select an SSH-2 RSA key.
3. Click the Generate button.
4. Move your mouse around until the progress bar is full.
5. Type in a passphrase (you will need to remember this) and confirm it.
6. Save the private key and public key onto your local machine.
7. Right-click in the text field 'Public key for pasting into OpenSSH authorized_keys file' and copy the key.
8. Paste the key into the 'OpenStack' tab of your Wikitech preferences.

Generating a key in Linux
Modern Unix systems include the OpenSSH client (if yours does not, install it). To generate a key, use:

ssh-keygen -t rsa

This will store your private key in $HOME/.ssh/id_rsa, and your public key in $HOME/.ssh/id_rsa.pub. You can use different filenames (with -f parameter), but these are the default filenames, so it's easiest to not change them.
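As a sketch of the whole step (the file name id_rsa_labs is just an example, and the empty passphrase is for illustration only; use a real passphrase in practice), generating a key and printing the public half for pasting into your Wikitech preferences might look like:

```shell
# Create the .ssh directory if it does not exist yet.
mkdir -p "$HOME/.ssh"

# Generate an RSA key pair; -f names the files, -N "" sets an empty
# passphrase (illustration only -- use a real passphrase).
ssh-keygen -t rsa -f "$HOME/.ssh/id_rsa_labs" -N "" -q

# Print the public key so it can be pasted into the 'OpenStack' tab.
cat "$HOME/.ssh/id_rsa_labs.pub"
```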

Requesting access to the 'tools' project
Once you have created a Labs account, you must request access to the ‘tools’ project by submitting a Tools Access Request.

Submit a request here: Tools Access Request

Requests for access are generally dealt with within the day (often faster), though response-time may be longer depending on admin availability. If you need immediate assistance, please contact us on IRC.

Receiving access to the 'tools' project
Once your 'tools' project access request has been processed, you will become a member of the 'tools' project, and will be able to access it using the "Instance shell account name" provided when creating your Labs account and the private key matching the public key you supplied for authentication. For more information about accessing the project, please see Accessing Tool Labs.

Notification
You will be notified on Wikitech that your user rights were changed, that your request was linked from 'Nova Resource:Tools', and that you have been added to the project Nova Resource:Tools. You will also receive email explaining that your user rights have been changed, and that you are now a member of the group 'shell'. In other words, your Tool Labs account is ready for you to use!

Storage and use
Although you access Tool Labs via your Labs account, we strongly recommend against saving data or tools in any space that is accessible only to you as an individual. Tools and bots should be maintained in Tool accounts, which have flexible memberships (i.e., multiple people can help maintain the code!). For more information about Tool accounts, please see Joining and creating a Tool account.

Accessing Tool Labs
Tool Labs can be accessed in a variety of ways--from its public IP to a GUI client. Please see Help:Access (https://wikitech.wikimedia.org/wiki/Help:Access) for general information about accessing Labs. Pointers to more information on specific means of access appear below.

Tools home page
The Tools home page: http://tools.wmflabs.org/

The Tools home page is publicly available and contains a list of all currently hosted Tool accounts along with the name(s) of the maintainers for each. Individual tool accounts that have an associated web page will appear as links. Users with access to the 'tools' project can create new tool accounts here, and add or remove maintainers to and from existing tool accounts.

SSH/SFTP/SCP
Users can SSH to the 'tools' project via its bastion host: tools-login.wmflabs.org, provided that a public SSH key has been uploaded to the Labs account.

ssh yourshellaccountname@tools-login.wmflabs.org

Note that if you plan to do heavy processing (compiling, etc), you should SSH to tools-dev.wmflabs.org.

Using 'take' to transfer ownership of uploaded files
Once you have logged in via SSH, you can transfer files via sftp and scp. Note that the transferred files will be owned by you. You will likely wish to transfer ownership to your tool account. To do this:

1. Become your tool account using 'become':

maintainer@tools-login:~$ become toolaccount
local-toolaccount@tools-login:~$

2. As your tool account, 'take' ownership of the files:

local-toolaccount@tools-login:~$ take FILE

The 'take' command will change the ownership of the file(s) and directories recursively to the calling user (in this case, the tool account).
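Putting the steps together, a typical session for uploading a script and handing it over to a tool account might look like this (the file name mybot.py and the account names are hypothetical placeholders):

```shell
# From your local machine: copy the script to your Labs home directory.
scp mybot.py yourshellaccountname@tools-login.wmflabs.org:~

# On tools-login: switch to the tool account, then take ownership
# of the uploaded file.
become toolaccount
take ~yourshellaccountname/mybot.py
```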

Using multiple ssh agents
If you use multiple ssh-agents (to connect to your personal or company system, for example), see Managing Multiple SSH Agents for more information about setting up a primary and a Labs agent.

PuTTY and WinSCP
Note that instructions for accessing Tool Labs with PuTTY and WinSCP differ from the instructions for using them with other Labs projects. Please see Help:Access to ToolLabs instances with PuTTY and WinSCP for information specific to Tool Labs.

Other graphical file managers (e.g., Gnome/KDE)
For information about using a graphical file manager (e.g., Gnome/KDE), please see Accessing Tool Labs > Accessing instances with a graphical file manager

What is a Tool account?
Tool accounts, which can be created by any ‘tools’ project member, are fundamental to the structure and organization of Tool Labs. Although each tool account has a user ID, tool accounts are not personal accounts (like a Labs account); rather, they are services consisting of a user and group ID (i.e., a Unix uid/gid pair) that are intended to run the actual tool or bot.


 * Unix user: local-toolname
 * Unix group: local-toolname

Members of the Unix group include:


 * the tool account creator
 * the tool account itself
 * (optionally, but encouraged!) additional tool maintainers

Maintainers may have more than one tool account, and tool accounts may have more than one maintainer. Every member of the group is authorized to sudo to the tool account. By default, only members of the group have access to the tool account's code and data.

A simple way for maintainers to switch to the tool account is with ‘become’:

maintainer@tools-login:~$ become toolname
local-toolname@tools-login:~$

In addition to the user/group pair, each tool account includes:


 * A home directory on shared storage: /data/project/toolname
 * A ~/public_html/ and ~/cgi-bin/ directory, which are visible at http://tools.wmflabs.org/toolname/  and http://tools.wmflabs.org/toolname/cgi-bin, respectively
 * Database access credentials: replica.my.cnf, which provide access to the production database replicas as well as to project-local databases.
 * Access to the continuous and task queues of the compute grid

Joining an existing Tool account
All tool accounts hosted in Tool Labs are listed on the Tools home page. If you would like to be added to an existing account, you must contact the maintainer(s) directly.

If you would like to add (or remove) maintainers to a tool account that you manage, you may do so with the 'add' link found beneath the tool name on the Tools home page.

Creating a new Tool account
Members of the ‘tools’ project can create tool accounts from the Tools home page:


 * 1) Navigate to the Tools home page: http://tools.wmflabs.org/
 * 2) Select the “create new tool” link (found beside “Hosted tools” near the top of the page)
 * 3) Enter a “Service group name”. The service group name will be used as the name of your tool account.

Do not prefix your service group name with local-. The management interface will do so automatically where appropriate, and there is a known issue that will cause the account to be created improperly if you do.

Note: If you have only recently been added to the ‘tools’ project, you may get an error about not having appropriate credentials. Simply log out and back in to Wikitech to fix this.

The tool account will be created and you will be granted access to it within a minute or two. If you were already logged in to your Labs account through SSH, you will have to log off then back in before you can access the tool account.

Deleting a Tool account
You can't delete a tool account yourself, though you can delete the content of your directories. If you really want a tool account to be deleted, please contact an admin.

Using Toolsbeta
Nearly all tool development is done on the 'tools' project, and 99.9% of the time, creating a tool account on this project will serve your needs. However, if your tool or bot requires an experimental library or a significant change to the 'tools' infrastructure--anything that could potentially negatively impact existing tools--you should experiment with the new infrastructure on toolsbeta. To request access to toolsbeta, please visit #wikimedia-labs on IRC. You can also request access via the labs-l mailing list or via Bugzilla.

Customizing a Tool account
Once you have created a tool account, there are a few things that you can customize to make the tool more easily understood and used by other users. These include:
 * adding a tool account description (the description will appear on the Tools home page beside the tool name)
 * creating a home page for your tool (if you create a home page for the tool, it will be linked from the Tools home page automatically)

Tool Labs will soon support mail to both Labs users and tool accounts (mail to a tool account will go to all maintainers by default). You can customize mail settings as well.

Creating a tool web page
To create a web page for your tool account, simply place an index.html file in the tool account's ~/public_html/ directory. The page can be a simple description of the tool or bot with basic information on how to set it up or shut it down, or it can contain an interface for the web service. To see examples of existing tool web pages, click any of the linked tool names on the Tools home page.
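For example, a minimal placeholder page could be created like this (the tool name and page text are of course placeholders):

```shell
# Create the public_html directory if needed and write a minimal page.
mkdir -p "$HOME/public_html"
cat > "$HOME/public_html/index.html" <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>mytool</title></head>
  <body>
    <h1>mytool</h1>
    <p>A short description of what this tool does and how to use it.</p>
  </body>
</html>
EOF
```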

Note that some files, such as PHP files, will give a 500 error unless the owner of the file is the tool account.

Creating a tool description
To create a tool description:

1.    Log into your Labs account and become your tool account:

maintainer@tools-login:~$ become toolname

2.    Create a ‘.description’ file in the tool account’s home directory. Note that this file must be HTML:

local-toolname@tools-login:~$ vim .description

3.    Add a brief description (no more than 25 words or so) and save the file.

4.    Navigate to the Tools home page. Your tool account description should now appear beside your tool account name.

Configuring mail -- mail forwarding
Mail from system daemons (grid, cron, etc.) is delivered to tool and user accounts. By default, tool accounts forward their mail to their maintainers' accounts, while user accounts store mail locally and users can read it (e.g., with mail).

To forward mail to your personal mail address from a Labs account:

1. Log in to your Labs account

2. In your home directory, create a file ‘.forward’

maintainer@tools-login:~$ vim .forward

3. Add the forwarding email address on a single line, e.g.

me@example.invalid

4. Ensure that .forward is writable only by you, the account user. If ‘.forward’ is writable by anyone other than you, mail is not delivered at all!

maintainer@tools-login:~$ chmod 600 ~/.forward

You can also use a .forward file in a tool account to redirect mail to a specified address (e.g., a mailing list) instead of sending all messages to the individual maintainers (which is the default).
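The steps above can be condensed into a short shell session (the address me@example.invalid is the placeholder used earlier):

```shell
# Write the forwarding address to ~/.forward (one address per line).
echo 'me@example.invalid' > "$HOME/.forward"

# Restrict the file so it is writable only by you; otherwise mail
# is not delivered at all.
chmod 600 "$HOME/.forward"
```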

Configuring bots and tools
Tools and bot code should be stored in your tools account, where it can be managed by multiple users and accessed by all execution hosts. Specific information about configuring web services and bots, along with information about licensing, package installation, and shared code storage, is available here.

Note that bots and tools should be run via the grid, which finds a suitable host with sufficient resources to run each. Simple, one-off jobs can be submitted to the grid easily with the jsub command. Continuous jobs, such as bots, can be submitted with jstart.

Setting up code review and version control
Although it's possible to just stick your code in the directory and mess with it manually every time you want to change something, your future self and your future collaborators will thank you if you instead use source control (a.k.a. version control) and a code review tool. Wikimedia Labs makes it pretty easy to use Git for source control and Gerrit for code review, but you also have other options.

Gerrit/Git
Access to Git is managed via Wikimedia Labs and integrated with Gerrit. In order to use them for code review and version control with your tool accounts, you must request access. For more information, please see Gerrit/New repositories: https://www.mediawiki.org/wiki/Gerrit/New_repositories

For more information about using Git and Gerrit in general, please see Gerrit.

Database access
Tool and Labs accounts are granted access to replicas of the production databases. Private user data has been redacted from these replicas (some rows are elided and/or some columns are made NULL, depending on the table), but otherwise the schema is, for all practical purposes, identical to the production databases, and the replicas are sharded into clusters in much the same way.

Database credentials (user name/password) are stored in the 'replica.my.cnf' file found in the tool account’s home directory. To use these credentials with command-line tools by default, copy 'replica.my.cnf' to '.my.cnf'.
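That copy is a single command, run from the account's home directory (a sketch; it assumes the replica.my.cnf file has already been provisioned for the account):

```shell
# Copy the provisioned credentials so command-line tools such as
# mysql pick them up by default.
cp "$HOME/replica.my.cnf" "$HOME/.my.cnf"
```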

Naming conventions
As a convenience, each MediaWiki project database (enwiki, bgwiki, etc.) has an alias to the server it is hosted on. The alias has the form:


 * project.labsdb

where 'project' is the name of a hosted MediaWiki project (enwiki, bgwiki, bgwiktionary, cswiki, enwikiquote, enwiktionary, eowiki, fiwiki, idwiki, itwiki, nlwiki, nowiki, plwiki, ptwiki, svwiki, thwiki, trwiki, zhwiki, commonswiki, dewiki, wikidatawiki, arwiki, eswiki... for a complete list, look at the /etc/hosts file on tools-login).

The database names themselves consist of the MediaWiki project name suffixed with _p (an underscore and a p), for example:


 * enwiki_p (for the English Wikipedia replica)

Connecting to the database replicas
You can connect to the database replicas by specifying access credentials and the host of the replicated database. For example:

To connect to the English Wikipedia replica: mysql --defaults-file="${HOME}"/replica.my.cnf -h enwiki.labsdb enwiki_p

To connect to Wikidata: mysql --defaults-file=~/replica.my.cnf -h wikidatawiki.labsdb

To connect to Commons: mysql --defaults-file=~/replica.my.cnf -h commonswiki.labsdb

There is also a shortcut for connecting to the replicas: sql <project>[_p]. The _p suffix is optional, but implicit (i.e., the sql tool will add it if absent).

To connect to the English Wikipedia database replica using the shortcut, simply type:

sql enwiki
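As a sketch of a complete one-off query against a replica (the query itself is a hypothetical example; page is a standard MediaWiki table):

```shell
# Count the articles in the main namespace of the English Wikipedia
# replica, using the provisioned credentials.
mysql --defaults-file="$HOME/replica.my.cnf" -h enwiki.labsdb enwiki_p \
      -e "SELECT COUNT(*) FROM page WHERE page_namespace = 0;"
```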

Creating new databases
User-created databases can be created on the servers hosting the database replicas or on a database local to the 'tools' project: tools-db. The latter tends to be a bit faster since that server sees less heavy activity, and tools-db is the recommended location for user-created databases when no interaction with the production replicas is needed. Users have all privileges on the databases they create, including grant options.

Database names must start with the name of the credential user, which can be found in your ~/replica.my.cnf file (the name looks something like 'p50252g21636'), followed by two underscores and then the name of the database: 'username__DBName'

Note that users are granted complete control over databases named with their username__ prefix, but nothing else.

Steps to create a user database on the replica servers
If you would like your database to interact with the replica databases (i.e., if you need to do actual SQL joins with the replicas, which can only be done on the same cluster) you can create a database on the replica servers.

To create a database on the replica servers:

1. Connect to the replica servers with the replica.my.cnf credentials. You must specify the host of the replica (e.g., enwiki.labsdb):

mysql --defaults-file="${HOME}"/replica.my.cnf -h xxwiki.labsdb

2. In the mysql console, create a new database (where USERNAME is your credentials user and DBNAME the name you want to give to your database):

MariaDB [(none)]> CREATE DATABASE USERNAME__DBNAME;

You can then connect to your database using: mysql --defaults-file="${HOME}"/replica.my.cnf -h xxwiki.labsdb USERNAME__DBNAME

Steps to create a user database on tools-db
To create a database on tools-db:

1. Connect to tools-db with the replica.my.cnf credentials:

mysql --defaults-file="${HOME}"/replica.my.cnf -h tools-db

2. In the mysql console, create a new database (where USERNAME is your credentials user and DBNAME the name you want to give to your database):

MariaDB [(none)]> CREATE DATABASE USERNAME__DBNAME;

You can then connect to your database using: mysql --defaults-file="${HOME}"/replica.my.cnf -h tools-db USERNAME__DBNAME

Joins between commons and wikidata and other project databases

Submitting, managing and scheduling jobs on the grid
Every non-trivial task performed in Tool Labs should be dispatched by the grid engine, which ensures that the job is run in a suitable place with sufficient resources. The basic principle of running jobs is fairly straightforward:

 * You submit a job to a work queue from a submission server (e.g., -login) or web server
 * The grid engine master finds a suitable execution host to run the job on, and starts it there once resources are available
 * As it runs, your job will send output and errors to files until the job completes or is aborted

Jobs can be scheduled synchronously or asynchronously, continuously, or simply executed once. If a continuous job fails, the grid will automatically restart the job so that it keeps going.

What is the grid engine?
The grid engine is a highly flexible system for assigning resources to jobs, including parallel processing. The Tool Labs grid engine is implemented with Open Grid Engine (the open-source fork of Sun Grid Engine). You can find more documentation on the Open Grid Engine website. Commonly used Grid Engine commands include:

 * qsub: submit jobs to the grid
 * qalter: modify job settings (while the job is waiting or running)
 * qstat: get information about a queued or running job
 * qdel: abort or cancel a job

You can find detailed information about these commands in the Grid Engine Manual. The Open Grid Engine commands are very flexible, but a little complex at first – you might prefer to use the helper scripts instead (jsub, jstart, jstop), described in more detail in the next sections.

Submitting simple one-off jobs using 'jsub'
Jobs with a finite duration can be submitted to the work queue with either Open Grid’s 'qsub' command or the 'jsub' helper script, which is simpler to use and described in this section. For information about qsub, please see the Open Grid Engine Manual.

To run a finite job on demand (at interval from cron, for instance, or from a web tool or the command line), simply use the 'jsub' command:

$ jsub [options…] program [args…]

By default, jsub will schedule the job to be run as soon as possible, and print the eventual output to files (‘jobname.out’ and ‘jobname.err’) in your home directory. Unless a job name is explicitly specified with jsub options, the job will have the same name as the program, minus extensions (e.g., if you had a program named foobot.pl which you started with jsub, the job's name would be foobot.)

Once your job has been submitted to the grid, you will receive output similar to the example below, which includes the job id and job name:

Your job 120 ("foobot") has been submitted

Example: The following example uses the jsub command to run mybot.sh. The 'qstat' command returns job status information. By default, job output is placed in the 'mybot.out' and 'mybot.err' files in the home directory.

local-shtest@tools-login:~$ jsub mybot.sh
Your job 105033 ("mybot") has been submitted
local-shtest@tools-login:~$ qstat
job-ID  prior    name   user          state  submit/start at      queue                           slots  ja-task-ID
-------------------------------------------------------------------------------------------------------------------
105033  0.25000  mybot  local-shtest  r      05/24/2013 08:52:00  task@tools-exec-02.pmtpa.wmfla  1
local-shtest@tools-login:~$ qstat
local-shtest@tools-login:~$ ls
access.log  cgi-bin  mybot.err  mybot.out  mybot.sh  public_html  replica.my.cnf
local-shtest@tools-login:~$ cat mybot.out
user_editcount 1016

jsub options
In addition to a number of customized options, jsub supports many, but not all, qsub options.

Naming jobs
The job name identifies the job and can also be used to control it (e.g., to suspend or stop it). By default, jobs are assigned the name of the program or script, minus its extension. For instance, if you started a program named 'foobot.pl' with jsub, the job's name would be 'foobot'.

It's important to note that you can have more than one job, running or queued, bearing the same name. Some of the utilities that accept a job name may not behave as expected in those cases.

Specify a different name for the job using jsub’s -N option:

jsub -N NewName program [args…]

Allocating additional memory
By default, jobs are allowed 256MB of memory; you can request more (or less) with jsub’s -mem option (or qsub's -l h_vmem=memory). Keep in mind that a job that requests more resources may be penalized in its priority and may have to wait longer before being run until sufficient resources are available.

$ jsub -mem 500m program [args…]

Synchronizing jobs
By default, jobs are processed asynchronously in the background. If you need to wait until the job has completed (for instance, to do further processing on its output), you can add the -sync y (for sync y[es]!) option to the jsub command:

$ jsub -sync y program [args...]

Running a job only once
If you need to make certain that the job isn't running multiple times (such as when you invoke it from a crontab), you can add the -once option. If the job was already running or queued, it will simply mark the failed attempt in the error file and return immediately.

$ jsub -once program [args...]
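A typical use is a crontab entry; this hypothetical line runs foobot.pl at the start of every hour, and -once ensures that overlapping runs are skipped rather than stacked:

```shell
# m h dom mon dow  command
0 * * * * jsub -once foobot.pl
```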

Submitting continuous jobs (such as bots) with 'jstart'
Continuous jobs, such as bots, have a dedicated queue ('continuous') which is set up slightly differently from the standard queue:

 * Jobs started on the continuous queue are automatically restarted if they, or the node they run on, crash
 * In case of outage or lack of resources, continuous jobs will be stopped and restarted automatically on a working node
 * Only tool accounts can start continuous jobs

For convenience, the jstart script (which accepts all the jsub options) facilitates the submission of continuous jobs:

$ jstart [options…] program [args…]

The jstart script will start the program in continuous mode (if it is not already running), and ensure that the program keeps running.

Note that the jstart script is equivalent to:

$ jsub -once -continuous program [args…]

jsub's '-once' option is important for ensuring that the job can be managed reliably with the job and jstop utilities. The '-continuous' option ensures that the job will be restarted automatically until it exits normally with an exit value of zero, indicating completion.
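Putting it together, a bot's lifecycle on the grid might look like the following sketch (mybot.py is a hypothetical script; run these from the tool account):

```shell
# Start the bot on the continuous queue; the grid restarts it
# automatically until it exits with status zero.
jstart mybot.py

# Check on it by name.
job mybot -v

# Stop it when needed.
jstop mybot
```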

Managing Jobs
Each job submitted to the grid has a unique job id as well as a job name (which will not be unique if you have more than one instance running). The name and id identify the job, and can also be used to retrieve information about its status.

If you don’t know the job id, you can find it with either the ‘job’ command or the ‘qstat’ command. Both of these commands can also be used to return additional status information, as described in the next sections.

Finding a job id and status with the ‘job’ command
If you know that your job has only one instance running (if you used the -once option when starting it, for example), you can use the ‘job’ command to get its job id:

local-xbot@tools-login:~$ job xbot
717898

Use the job command’s -v (‘verbose’) option to return additional status information:

local-xbot@tools-login:~$ job xbot -v
Job 'xbot' has been running since 2013-04-01T21:00:00 as id 717898

The verbose response is particularly useful from scripts or web services.

Once you know the job id, you can use the ‘qstat’ command to return additional information about it. See Returning the status of a particular job for more information.
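From a script, you might extract the job id by parsing that output; the following is a sketch, assuming the verbose format shown above (parse_job_id and running_job_id are hypothetical helpers, not Tool Labs utilities):

```python
import re
import subprocess

def parse_job_id(verbose_output):
    """Extract the numeric job id from 'job NAME -v' output.

    Expects a line like:
    Job 'xbot' has been running since 2013-04-01T21:00:00 as id 717898
    Returns the id as an int, or None if no id is present.
    """
    match = re.search(r"as id (\d+)", verbose_output)
    return int(match.group(1)) if match else None

def running_job_id(name):
    """Call the 'job' utility and return the running job's id (or None)."""
    output = subprocess.check_output(["job", name, "-v"])
    return parse_job_id(output.decode("utf-8", "replace"))

# Example with the verbose output shown above:
sample = "Job 'xbot' has been running since 2013-04-01T21:00:00 as id 717898"
print(parse_job_id(sample))
```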

Using ‘qstat’ to return status information
The ‘qstat’ command returns detailed information about the status of queued jobs. If you know the job id of a particular job, you can use qstat’s ‘-j’ option to return information about that job. If you use the ‘qstat’ command without options, it will return the status of all your currently running and pending jobs. More information about running qstat without options and with the -j option is included in the following sections. For more information about qstat in general, please see the Open Grid Manual.

Returning the status of all your queued jobs
To see the status of all of your running and pending jobs (including the job number), use the ‘qstat’ command without options. ‘qstat’ will then return the job id, priority, name, owner, state (e.g., r(unning) or s(uspended)), the date and time the job was submitted or started, and the name of the assigned job queue (e.g., continuous) for each job.

For example:

local-xbot@tools-login:~$ qstat
job-ID  prior    name  user        state  submit/start at      queue                           slots  ja-task-ID
----------------------------------------------------------------------------------------------------------------
120     0.50000  xbot  local-xbot  r      04/01/2013 21:00:00  continuous@tools-exec-01.pmtpa  1

Common job states include:
 * r (running)
 * qw (queued/waiting)
 * d (deleted)
 * E (error)
 * s (suspended)

See the Open Grid Manual for a complete list of states and abbreviations.

Returning the status of a particular job
If you know the job id of a job, you can find out more information about it using the qstat command's ‘-j’ option. For example, the following command returns detailed information about job id 990.

local-toolname@tools-login:~$ qstat -j 990
==============================================================
job_number:                 990
exec_file:                  job_scripts/990
submission_time:            Wed Apr 13 08:32:39 2013
owner:                      local-toolname
uid:                        40005
group:                      local-toolname
gid:                        40005
sge_o_home:                 /data/project/toolname/
sge_o_log_name:             local-toolname
sge_o_path:                 /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin
sge_o_shell:                /bin/bash
sge_o_workdir:              /data/project/toolname
sge_o_host:                 tools-login
account:                    sge
stderr_path_list:           NONE:NONE:/data/project/toolname//taskname.err
hard resource_list:         h_vmem=256m
mail_list:                  local-toolname@tools-login.pmtpa.wmflabs
notify:                     FALSE
job_name:                   epm
stdout_path_list:           NONE:NONE:/data/project/toolname//taskname.out
jobshare:                   0
hard_queue_list:            task
env_list:
script_file:                /data/project/toolname/taskname.py
usage    1:                 cpu=00:21:08, mem=158.09600 GBs, io=0.00373, vmem=127.719M, maxvmem=127.723M

Stopping jobs with ‘qdel’ and ‘jstop’
To stop a running job (or prevent it from being run if it has not already started), use the ‘qdel’ command with the job’s number:

qdel job_number

If you do not know the job number, you can find it using the ‘qstat’ command.

If you started a job with the 'jstart' command, or if you know there is only one job with that name, then you can use the 'jstop' utility command with the job name to stop it:

jstop job_name

Suspending and unsuspending jobs with ‘qmod’
Suspending a job allows it to be temporarily paused, and then resumed later. To suspend a job, use:

qmod -sj job_id

The job will be paused (SIGSTOP), and the qstat command will report a state of ‘s’ for suspended jobs. If you do not know the job id, you can find it using the ‘qstat’ command.

To unsuspend the job and let it continue running, use:

qmod -usj job_id

Unsuspended jobs should return to the 'r' state in qstat.

Scheduling jobs at regular intervals with cron
To schedule jobs to run on specific days or at specific times, you can use cron to submit the jobs to the grid.

Scheduling a command more often than every five minutes (for example, * * * * * command) is highly discouraged, even if the command is "only" jsub. In such cases, you very probably want to use 'jstart' instead: the grid engine ensures that jobs submitted with 'jstart' are restarted automatically if they exit.

Creating a crontab
Crontabs are set (as on any Unix system) using "crontab -e" or "crontab FILE".

Note that the PATH is set differently for interactive shells and cron jobs, so be sure to include the line:

PATH=/usr/local/bin:/usr/bin:/bin

at the top of your crontab so that you can use the grid scheduling commands (e.g., jsub, qsub).
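Putting this together, a minimal crontab might look like the following sketch (the job name and script path are hypothetical placeholders):

```
PATH=/usr/local/bin:/usr/bin:/bin
# Submit 'mytask' to the grid every day at 02:30 UTC;
# -once marks a failed attempt instead of starting a duplicate
# if the previous run is still going.
30 2 * * * jsub -once -N mytask python $HOME/mytask.py
```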

Specifying time zones
The ‘tools’ project, like other hosting environments, uses the UTC time zone. If you need to schedule a job relative to another time zone, you can specify so in the crontab. For example, to schedule a job for midnight in Germany, you can use the crontab line:

0 22,23 * * * [ "$(TZ=:Europe/Berlin date +%H)" = "00" ] && jsub ...

This line instructs the system to check at 22:00 UTC (23:00 CET and 0:00 CEST) and at 23:00 UTC (0:00 CET and 1:00 CEST) whether it is midnight in Berlin, and if so, to call jsub. Note that you can't just replace "Berlin" with "Hamburg"; the values for TZ are limited to those found under /usr/share/zoneinfo. If you're unsure what your time zone's offset from UTC is, you can run the check hourly by replacing 22,23 with *.

Licensing
All code in the ‘tools’ project must be open source. Please add a license at the beginning! Even if you have not yet deployed or finished your work, it is non-free software unless you explicitly license it.

You may use any free license of your choice, provided that it is OSI approved.

Heavy processing
If you will be doing heavy processing (e.g., compiles or tool test runs), please use the development environment (tools-dev.wmflabs.org) instead of the primary login host (tools-login.wmflabs.org) so as to help maintain the interactive performance of the primary login host.

The tools-dev host is functionally identical to tools-login.

Installing additional packages
If you need a package that is not currently installed, please submit a ticket in bugzilla and ask the admins to install the package project-wide. You're probably not the only one missing that package! If the admins have reasons not to install the package project-wide, you can always install the software locally, just for yourself.

Shared code: Use git submodules
Code that is shared between multiple tools can be stored in git submodules, which allow users to keep a git repository within another git repository.

Storing shared code in git provides maintainability advantages in addition to ease of sharing. For more information about git submodules, please see the git documentation.

Shared config files / other files
Shared config or other files may be placed in the '/shared' directory, which is readable by all.

We are currently developing a feature that would allow users to add tools to an existing tool group, which would also facilitate the sharing of files. For more information about the status of this work, please see https://bugzilla.wikimedia.org/show_bug.cgi?id=51990.

Running scripts
All scripts are run with the permissions of the account that owns the script, which should be the tool account in almost all cases. Scripts may be placed in one of two directories:


 * ~/public_html/  (which maps to http://tools.wmflabs.org/toolname)
 * ~/cgi-bin/ (which maps to http://tools.wmflabs.org/toolname/cgi-bin)

In the ~/cgi-bin/ directory:
 * all files are run as CGI scripts rather than displayed

Note that PHP and Python scripts placed in ~/cgi-bin/ are unconditionally run as CGI, which is probably not what you want and will only work under certain conditions. You will most likely want to place PHP and Python scripts in the ~/public_html/ directory instead.

In the ~/public_html/ directory:
 * files that end with the .php or .php5 extensions will be run as PHP CGI scripts
 * files that end with the .py extension will be run as Python scripts

Note: As the tool's ~/public_html/ folder is inside the tool's home directory, you must allow other users to access your home directory so that your web service can read the files in ~/public_html.

The web server allows .htaccess overrides of AuthConfig, DirectoryIndex, FileInfo, and Options=IncludesNOEXEC.

Avoiding common CGI errors
To avoid common errors when using CGI scripts, please make sure that:
 * the CGI file is owned by the tool account
 * the CGI file has its execute bits set (You can use the chmod command to set the script as executable.)
 * the CGI starts with a Unix 'shebang' invocation. A Unix "shebang" is the first line of a script that specifies the program meant to execute that script. It has the form #! /path/to/interpreter

For Perl scripts, the 'shebang' invocation would be: #!/usr/bin/perl

You can check the path to a language interpreter by using the 'which' command. The following example outputs the path to the python interpreter:

maintainer@tools-login:~$ which python
/usr/bin/python
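Putting these rules together, a minimal Python CGI script might look like the following sketch (hypothetical; remember to place it in ~/cgi-bin/, make the tool account its owner, and set its execute bits):

```python
#!/usr/bin/python
# A hypothetical minimal CGI script: the first line is the shebang,
# and the output must start with a Content-Type header followed by
# a blank line, then the response body.

def render():
    # Build the full CGI response for this request.
    header = "Content-Type: text/plain; charset=utf-8"
    body = "Hello from Tool Labs!"
    return header + "\n\n" + body

print(render())
```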

Using cookies
Since all tools in the 'tools' project reside under the same domain, you should prefix the name of any cookie you set with your tool's name. In addition, you should be aware that cookies you set may be read by every other web tool your user visits.

Accordingly, you should avoid storing privacy-related or security information in cookies. A simple workaround is to store session information in a database, and use the cookie as an opaque key to that information. Additionally, you can explicitly set a path in a cookie to limit its applicability to your tool; most clients should obey the Path directive properly.
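As a sketch of both recommendations, Python's standard http.cookies module can build a cookie whose name carries the tool's prefix and whose Path attribute limits it to the tool's URL space (the tool name 'xtool' and the helper are hypothetical):

```python
from http.cookies import SimpleCookie  # the 'Cookie' module on Python 2

def session_cookie(tool, session_id):
    """Build a Set-Cookie header for an opaque session key.

    The cookie name is prefixed with the tool's name, and the Path
    attribute limits the cookie to the tool's own URL space under
    tools.wmflabs.org.
    """
    cookie = SimpleCookie()
    name = "%s_session" % tool            # prefix with the tool's name
    cookie[name] = session_id             # opaque key; real data stays server-side
    cookie[name]["path"] = "/%s/" % tool  # limit applicability to this tool
    return cookie.output(header="Set-Cookie:")

print(session_cookie("xtool", "8f3a2b"))
```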

Web logs
Your tool's web logs are placed in the tool account's ~/access.log in common format. Please note that the web logs are anonymized in accordance with the Foundation’s privacy policy. Each user IP address will appear to be that of the local host, for example. In general, the privacy policy precludes the logging of personally identifiable information; special permission from Foundation legal counsel is required if such information is required.

Error logs, because of limitations of the Apache web server, are not made directly available to tool maintainers. Until a newer version of Apache is deployed, we recommend that you use your language's facilities to log errors to a file under the tool account's home. PHP allows per-user logging, for example, and PHP error logs are placed in the tool account’s ~/php_error.log.

PyWikipediaBot
The Python Wikipediabot Framework (pywikipedia or PyWikipediaBot) is a collection of tools that automate work on MediaWiki sites.

Snapshots (updated daily) of the Pywikipedia framework ‘compat’ (formerly ‘trunk’) and ‘core’ (formerly ‘rewrite’) versions are maintained at ‘/shared/pywikipedia/trunk’ and ‘/shared/pywikipedia/rewrite’, respectively. Note that these are just the source files; each bot operator will need to create their own configuration files, such as ‘user-config.py’, and set up PYTHONPATH and other environment variables. You may also choose to install the Pywikipedia framework into your tool directory, either directly or using virtualenv.

Steps for configuring and running a bot
In order to run a bot, you need to a) install it to a folder that is accessible on all instances, such as your /data/project/tool_account directory, and b) configure it. To do this, you should first 'become' your tool:

maintainer@tools-login:~$ become toolname

After you have become your tool, you can clone the git repository with the following commands:

git clone --depth 3 https://gerrit.wikimedia.org/r/pywikibot/core.git
cd core
git submodule update --init
cd externals
git clone https://gerrit.wikimedia.org/r/pywikibot/spelling.git

You can clone the "compat" (formerly "trunk") branch instead of the "core" (formerly "rewrite") branch if you prefer:

git clone https://gerrit.wikimedia.org/r/pywikibot/compat.git pywikipedia
cd pywikipedia
git submodule update --init
cd externals
git clone https://gerrit.wikimedia.org/r/pywikibot/spelling.git

If you are using the core branch, install your bot with the following command and answer the questions when prompted:

python setup.py

If you are using the compat branch, install your bot with the following command and answer the questions when prompted:

python login.py

After installing, you can run your bot directly via shell access, though this is highly discouraged. You should use the grid to run jobs instead. For more information about running jobs on the grid, please see Submitting, managing and scheduling jobs on the grid.

To submit a job, you can use the following command:

jsub -once -N YOURJOBNAME python /data/project/YOUR-TOOL/PATHOFYOURCODE

For example, to run 'welcome.py' using 'core':

jsub -once -N welcome python /data/project/YOUR-TOOL/core/scripts/welcome.py

If you're using 'compat':

jsub -once -N welcome python /data/project/YOUR-TOOL/pywikipedia/welcome.py

To see the status of your job, use:

qstat

To see the output of your job:

vim YOURJOBNAME.err
vim YOURJOBNAME.out

The former shows errors and the latter shows output. To delete a job, use:

qdel NUMBEROFJOB

You can find the job number in the qstat output. For more information about qstat, please see Using ‘qstat’ to return status information.

For further information about running bots, please see this help page.

Setting up PyWikipediaBot
For instructions on setting up PyWikipediaBot using the snapshot available in the 'tools' project, please see Using pywikibot on Labs

Using PyWikipediaBot with virtualenv
You may find it easier to use PyWikipediaBot with virtualenv, which creates an isolated Python environment. For instructions on setting this up, please see this simple guide.

Tips for working collaboratively

 * Use source control!

How to use  to write tools on labs
Do you have experience that might help another user? Please share it (or point to it) here!

Redis
Redis is a key-value store similar to memcache, but with more features. It can be easily used to do publish/subscribe between processes, and also maintain persistent queues. Stored values can be different data structures, such as hash tables, lists, queues, etc. Stored data persists across service restarts. For more information, please see the Wikipedia article on Redis.

A Redis instance that can be used by all tools is available on the standard port. It has been allocated a maximum of 1G of memory, which can be increased if there is significant usage. You can set limits on how long your data stays in Redis; otherwise, it will be evicted when memory limits are exceeded. See the Redis documentation for a list of available commands.

Libraries for interacting with Redis from PHP and Python have been installed on all the web servers and exec nodes. For an example of a bot using Redis, see SuchABot.

Security
Redis has no access control mechanism, so other users can accidentally or intentionally overwrite and access the keys you set. Even if you are not worried about security, it is highly probable that multiple tools will try to use the same key. To prevent this, it is highly recommended that you prefix all your keys with an application-specific, lengthy, randomly generated secret key.

You can very simply generate a good enough prefix by running the following command:

openssl rand -base64 64

PLEASE PREFIX YOUR KEYS! We have also disabled the Redis commands that let users 'list' keys.
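As a sketch, the same prefix scheme from Python (the key-naming helper k() is hypothetical; the Redis connection itself is left to the installed client library):

```python
import base64
import os

# Generate a lengthy random secret once, then store it in your tool's
# configuration; do NOT regenerate it on every run, or you will lose
# access to your own keys. This mirrors: openssl rand -base64 64
PREFIX = base64.b64encode(os.urandom(64)).decode("ascii")

def k(name):
    """Namespace a Redis key with the tool's secret prefix."""
    return "%s:%s" % (PREFIX, name)

# With a Redis client (e.g. the installed Python library) you would then
# call r.set(k("lastrun"), value) instead of r.set("lastrun", value).
print(len(PREFIX))
```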

A note about memcache
Support for memcache is in the process of being deprecated. If you currently use memcache and need help converting to Redis, please ask for assistance on the Labs IRC channel.

Public dataset dumps
The 'tools' project has a directory for storing the public Wikimedia datasets (i.e. the dumps generated by Wikimedia). The most recent five dumps can be found in:

/public/datasets/public

This directory is read-only, but you can copy files to your tool's home directory and manipulate them in whatever way you like.

If you need access to older dumps, you must manually download them from the Wikimedia Downloads server.

CatGraph (aka Graphserv/Graphcore)
CatGraph is a custom graph database that provides tool developers fast access to the Wikipedia category structure. For more information, please see the documentation.

Troubleshooting
If you run into problems, please feel free to come into the #wikimedia-labs IRC (chat) channel using http://webchat.freenode.net/?channels=#wikimedia-labs and look for Coren (Marc-Andre Pelletier) or petan (Petr Bena). The labs-l mailing list at https://lists.wikimedia.org/mailman/listinfo/labs-l is another good place to ask for help, especially if the people in chat are not responding. You can also search wikitech.wikimedia.org for help pages, or look more widely with the custom search at https://www.google.com/cse/home?cx=010768530259486146519:twowe4zclqy.

What gets backed up?
The basic rule is: there is a lot of redundancy, but no backups of Labs projects beyond the filesystem's time-travel feature for short-term disaster recovery. Labs users should make certain that they use source control to preserve their code, and make regular backups of irreplaceable data.

Time travel
Although Labs users are ultimately responsible for backing up files and important data, "time travel" provides snapshots of the file system at fixed intervals and provides a short-term disaster recovery option.

You can access hourly snapshots for the last three hours and daily snapshots for the last three days. The snapshots are kept beneath a hidden .snapshot directory. Its subdirectories are auto-mounted, so the directory may appear empty even though there are snapshots. To see the timestamps at which backups were made, look at the following files:


 * /home/.snaplist
 * /data/project/.snaplist

These files contain a list of the timestamps at which backups were made. To access a snapshot subdirectory directly, append the timestamp to the directory.

To automount a snapshot, cd to the timestamp directory:

cd /data/project/.snapshot/

The snapshot will unmount itself after a period of no access.

Moving a tool from Toolserver to Tool Labs
We know that you are putting your free time into the development of tools to improve Wikimedia projects, and that migrating your tools from Toolserver to Tool Labs requires additional work. Unfortunately, at some point in 2014, WMDE will discontinue the Toolserver, so staying is not an option.

The Tool Labs environment is designed to support the development and maintenance of tools and bots, but it does differ from the Toolserver environment in ways that will affect migrating users. We are aware that the transition will require work and--though ideally smooth--may not be entirely so. If you have questions or need assistance, please feel free to come into the #wikimedia-labs IRC (chat) channel using http://webchat.freenode.net/?channels=#wikimedia-labs and look for Coren (Marc-Andre Pelletier) or petan (Petr Bena). The labs-l mailing list at https://lists.wikimedia.org/mailman/listinfo/labs-l is another good place to ask for help, especially if the people in chat are not responding.

If you want to copy files from the Toolserver to Tool Labs, keep in mind that ssh/scp between the two currently works in one direction only. You can ssh from the Toolserver to Tool Labs but not the other way around.

Please see Migration of Toolserver tools for more information and FAQs specific to moving tools and bots from Toolserver to Tool Labs. Also see Magnus Manske's experience when migrating a tool. If you are planning migration or have already accomplished it, please consider documenting the experience to help other users through the process.

Thank you for all your contributions!

Do I explicitly have to specify the license of my tools?
Yes. If you think "this is just a draft, nothing ready" and do not put a license into your code, it is non-free software, contradicting the idea of Tool Labs. So please add a license at the beginning! You can use any OSI-approved license. Read more about licenses on the Open Source Initiative's website: http://opensource.org/licenses

What about file permissions? Who can see my code?
There are projects where users have root, so that all users in the project have full access to the whole project. This setup is not mandatory though: Tools can also use tool user IDs to control file permissions. On the tools project (which will be where toolserver tools migrate), you have full control over access permissions of your code and data. By default, only the tool maintainers have access (all the maintainers of a tool are in the tool's group).

Do stewards have a specific project on WMF Labs?
There is no plan to have distinct projects for different tool makers, but the tools are separated from each other. Nothing prevents you from sharing the maintenance of some tools between different stewards (in fact, it is recommended that you do so, to ensure that there is always someone able to keep them running when needed).

Can I delete a tool?
No, you can't do this yourself, because you or other members might accidentally delete precious data. You can delete the contents of your directories. If you really want a tool (a service group) to be deleted, please contact an admin.

If you are planning to try out Tool Labs but don't yet know whether you are going to keep your tests as a later project, don't hesitate to create a tool (a service group) now and create a new one later for the things you want to keep.

Can I rename a tool?
No, sorry, this is not possible. You'd have to create a new one and put your code in there.

Can I have a subdomain for my web service?
Sorry, not yet. This is still in discussion at WMF. Currently, your web services are available under tools.wmflabs.org/.

How do I access the database replicas?
 * In your home directory you will find your credentials for MariaDB (in the file replica.my.cnf). You need to specify this file and the server you want to connect to. Some examples:

mysql --defaults-file=~/replica.my.cnf -h enwiki.labsdb       # <- for English WP
mysql --defaults-file=~/replica.my.cnf -h dewiki.labsdb       # <- for German language WP
mysql --defaults-file=~/replica.my.cnf -h wikidatawiki.labsdb # <- for Wikidata
mysql --defaults-file=~/replica.my.cnf -h commonswiki.labsdb  # <- for Commons

 * Alternatively, you can rename the credentials file from replica.my.cnf to .my.cnf and just run:

mysql -h commonswiki.labsdb # <- for Commons
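From Python, you can parse the same credentials file with the standard library before handing the values to a MySQL driver; the following is a sketch (the driver call is only indicated in a comment, since drivers vary):

```python
import configparser  # the 'ConfigParser' module on Python 2
import os

def replica_credentials(path="~/replica.my.cnf"):
    """Read the grant user and password from the replica credentials file.

    The file is a standard MySQL options file with a [client] section;
    values may be quoted, so quotes are stripped.
    """
    parser = configparser.ConfigParser()
    parser.read(os.path.expanduser(path))
    user = parser["client"]["user"].strip("'\"")
    password = parser["client"]["password"].strip("'\"")
    return user, password

# user, password = replica_credentials()
# ...then connect with your MySQL driver of choice, e.g. (hypothetical):
# conn = MySQLdb.connect(host="enwiki.labsdb", user=user, passwd=password)
```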

Why can't I access user preferences in the replicas?
The db replication gives access to everything that is visible for logged in users without special privileges. Others' user preferences are considered private information in Wikimedia Labs and are thus redacted from the replicas.

Why am I getting errors about ?
Encountering this error while trying to run jobs on the grid engine means you need to give your job more memory; the (obscure) error message is caused by the system being unable to load your executable and all of its shared libraries. As a rule, most scripting languages require around 300-350M of virtual memory to load completely.

Is there a GUI tool for database work?
Not in Tool Labs, but you can run one locally on your computer (for example the MySQL Workbench http://dev.mysql.com/downloads/tools/workbench/). Here is how you connect to the database:
 * For the login: username@tools-login.wmflabs.org
 * For the database, it depends on the exact one you want to use, of course - for example: enwiki.labsdb

Why does public_html not work in my home directory?
Users do not and cannot have a public_html folder to themselves. The only web-accessible directories are in /data/project/ /public_html/*. To have a URL such as http://tools.wmflabs.org/ /, you must create a tool called, which will create a folder called /data/project/ /public_html/. Nobody--except you, the user--will *ever* be given access to your home or its files. Allowing public services to run from a home directory means that their management could not be shared or taken over if they end up abandoned, defeating the purpose.

I get a Permission denied error when running my script. Why's that?
Make sure that you are running your script from your tool account rather than your user account.

How can I detect if I'm running in Labs? And which project (tools or toolsbeta)?
There is a file that contains the project name on every Labs instance: /etc/wmflabs-project. Its presence tells you that you are on WMF Labs, and its contents tell you which project: "tools" for Tool Labs, or "toolsbeta" for the experimental Tool Labs. If you are using PHP, check $_SERVER['INSTANCENAME'] and $_SERVER['INSTANCEPROJECT'], which contain strings describing the current location, such as 'tools-login' and 'tools'.
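A sketch of that file-based check from Python (the helper name is hypothetical):

```python
import os

def wmflabs_project(path="/etc/wmflabs-project"):
    """Return the Labs project name ('tools', 'toolsbeta', ...),
    or None when not running on a Labs instance."""
    if not os.path.exists(path):
        return None  # not on WMF Labs
    with open(path) as f:
        return f.read().strip()

if wmflabs_project() == "tools":
    print("running in the Tool Labs 'tools' project")
```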

My connection seems slow. Any advice?
When connecting to Tool Labs from Europe, you might experience higher ping times. Try using mosh (mosh.mit.edu). Instead of ssh tools-login.wmflabs.org, connect with:

mosh -a tools-login.wmflabs.org

(The -a option forces predictive echo.)

I want to ssh to bot instances outside of Labs. Any advice?
If you want to ssh to specific bot instances other than tools-login.wmflabs.org, it is helpful to create a new SSH key:

$ ssh-keygen
$ cat ~/.ssh/id_rsa.pub
ssh-rsa .... user@host

Copy the 'ssh-rsa ... user@host' line to your authorized keys in the Labs console.

How can I check my filesystem usage?
If you would like information on your filesystem usage, please come ask Ryan on IRC.

My Tool requires a package that is not currently installed in Tool Labs. How can I add it?
You might not be the only one missing that package. Please submit a ticket in bugzilla and ask the admins to install it project-wide. If they have reasons not to do so, you can always install the software locally, just for yourself.

My tool needs a more specialized infrastructure than Tool Labs provides. What should I do?
Tool Labs is a simplified environment intended to be a direct Toolserver replacement for most small tools. If you need something more complicated, or need to manage specialized infrastructure, you can probably do what Tool Labs can't with your own Labs project (instead of working inside the 'tools' or 'toolsbeta' projects).

I keep reading about puppet. What is it?
As a tool maintainer you don't have to worry about puppet.

In a nutshell: puppet is a system with which you describe the configuration of a machine. When run, it applies the necessary changes to make the machine it is applied to match that configuration.

In practice, the sysadmin makes any intended configuration change for a machine in puppet (including what to install, which files to edit, etc.) so that it can be reapplied to a blank machine to configure it "just like it was", to make a clone, and so on. In a project where the tool maintainers do the system administration themselves, it might be desirable to actually configure and install the tool /itself/ through puppet so that it is easy to return to a known state.

In the case of Tool Labs, however, the actual tools are not normally configured through puppet (it's possible, but not worthwhile): they live on a shared filesystem rather than on the individual machines. What puppet /is/ used for is maintaining the components of the grid, making adding "one more compute node" or "an extra webserver" as simple as creating a new instance and configuring puppet accordingly. When we find that a tool has a dependency, the Tool Labs sysadmins add it to puppet so that every host that is part of the grid (current and future) has it configured accordingly, without manual intervention.

What is the labsconsole?
It's the old name of what is now Wikitech (wikitech.wikimedia.org).

I'm being prompted for a password when I try to 'become my-tool-account'. What's wrong?
If you are seeing a "password is required" message when you try to become your tool (i.e., sudo to your tool account), it is likely because you were already logged in to your Labs account when the tool account was created. Unix group membership is checked at login only, so an existing session will not have access to the new tool group. Log out and then log back in to your Labs account to fix this problem.

Are there any plans for adding monitoring or profiling tools?
Yes, in the very long term, though not guaranteed. Want to help? Find out more here: https://wikitech.wikimedia.org/wiki/User:Yuvipanda/Icinga_for_tools