Wikimedia Engineering/Report/2013/January

Engineering metrics in January:
 * 112 unique committers contributed patchsets of code to MediaWiki.
 * The total number of unresolved commits remained stable at around 650.
 * About 45 shell requests were processed.
 * Wikimedia Labs now hosts 155 projects and 931 users; to date 1473 instances have been created.

Major news in January includes:
 * the successful migration of our main services to our data center in Ashburn, Virginia;
 * new features available in our mobile beta;
 * progress on input methods and our upcoming translation interface;
 * the announcement of GeoData, a feature to attach geo-coordinates to Wikipedia and Wikivoyage articles;
 * a testing event to assess how VisualEditor handles non-Latin characters.

''Note: We're also proposing a shorter, simpler and translatable version of this report that does not assume specialized technical knowledge.''

Upcoming events
There are many opportunities for you to get involved and contribute to MediaWiki and technical activities to improve Wikimedia sites, both for coders and contributors with other talents.

For a more complete and up-to-date list, check out the Project:Calendar.

Work with us
Are you looking to work for Wikimedia? We have a lot of hiring coming up, and we really love talking to active community members about these roles.



Announcements

 * Yuvaraj Pandian re-joined the Mobile engineering team as a Software Developer (announcement). He joined the newly created Mobile App team with Brion Vibber and Shankar Narayan.
 * Munagala Ramanath (Ram) joined the MediaWiki core team of the Platform engineering group as Senior Software Engineer (announcement).
 * Runa Bhattacharjee joined the Language Engineering team as Outreach and QA coordinator (announcement).

Technical Operations
Production Site Switchover


 * The Wikimedia Foundation switched its primary data center from Tampa, Florida to Ashburn, Virginia on January 22. Given the scale and complexity of the migration, we scheduled three 8-hour windows to perform it, but we were able to complete it on the first attempt. Because the switchover involved, among other things, moving the master databases from Tampa to Ashburn, the site was set to 'read-only' mode for about 32 minutes; during that period, the site remained available, but no new content could be created, edited or uploaded. As expected, there was some minor fallout from the migration, mostly due to configuration changes, but the issues were quickly contained by the Engineering and Operations teams.
 * With this migration, the Tampa data center becomes our fail-over site, and we plan to perform site fail-over tests every few months. A few small non-core applications, such as RT, Etherpad and Bugzilla, still use Tampa as their primary site; they too will be migrated in the coming months.

Site infrastructure
 * One of the main concerns of the migration was serving traffic from the new data center with empty memcached servers: the resulting spike in load on the Apache and database servers could have been disastrous for the site. To address this, Tim Starling expanded the 'Parser Cache' persistent store in Tampa from a single instance to three sharded instances, and Asher Feldman built and replicated the databases across the two data centers.
 * Another improvement, done by Asher and Peter Youngmeister, was the implementation of MHA (Master High Availability) on our MySQL clusters. Its primary objective is to automate the promotion of a slave database in a master fail-over scenario, reducing downtime without suffering replication integrity problems, prolonging database latency, or changing existing deployments.
 * Faidon Liambotis and Mark Bergsma continued to work on the Ceph file object store. With Domas Mituzas' help, they identified a performance issue with the RAID card which caused severe read/write latency on the Ceph cluster. Faidon confirmed with the vendor that it is a known problem for which no fix is yet available. We have ordered and installed replacement RAID cards, and test results seem to indicate that the performance issue is solved.
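The parser cache sharding described above can be sketched roughly as follows. This is an illustrative assumption of how keys might be distributed across the three instances; the shard names and hashing scheme here are hypothetical, not MediaWiki's actual implementation.

```python
import hashlib

# Hypothetical names for the three parser cache shard servers (illustrative).
SHARDS = ["pc1", "pc2", "pc3"]

def shard_for_key(key, shards=SHARDS):
    """Map a cache key deterministically onto one shard server.

    A stable hash of the key is taken modulo the number of shards, so
    the same key always resolves to the same server and load spreads
    roughly evenly across the instances.
    """
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

# The same key always lands on the same shard, across processes and restarts.
print(shard_for_key("enwiki:pcache:idhash:12345"))
```

Because the mapping is deterministic, no shard lookup table is needed: any server can compute where a given page's parsed output lives.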

Fundraising
 * Fundraising bastion hosts were deployed in the Ashburn and Tampa data centers. We also tweaked and tuned central logging and monitoring, and converted the remaining fundraising MyISAM tables to InnoDB, which should fix dump-induced replication lag.
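The storage-engine conversion mentioned above boils down to one `ALTER TABLE` statement per remaining MyISAM table; InnoDB's row-level locking means long-running dump reads no longer block replication the way MyISAM table locks can. A minimal sketch of generating those statements (the table names are illustrative, not the actual fundraising schema):

```python
def innodb_conversion_statements(tables):
    """Generate one ALTER TABLE statement per remaining MyISAM table.

    Switching a table's engine to InnoDB rebuilds it in place and, with
    row-level locking, avoids the dump-induced replication lag that
    MyISAM's table-level locks can cause.
    """
    return ["ALTER TABLE {} ENGINE=InnoDB;".format(t) for t in tables]

# Hypothetical table names, for illustration only.
for stmt in innodb_conversion_statements(["contributions", "donor_log"]):
    print(stmt)
```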

Data Dumps
 * This month, we had a look at the process of using the XML dumps to create a local copy of a Wikimedia site: it turned out to be painful and cumbersome at best, and unfathomable to the end user at worst. As part of an effort to improve this situation, there is now a new experimental tool available for *nix platforms that generates MySQL tables from the XML stub and page content files. It is intended to read input files from various versions of MediaWiki and generate output for whichever version the user wants. Testing and feedback are encouraged.
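The core of such a conversion is walking the `<page>` entries of a stub dump and emitting SQL rows. The sketch below shows only the overall XML-to-SQL flow under simplifying assumptions: real dumps are namespaced, far larger (and would be streamed with `iterparse` rather than loaded whole), and the real `page` table has many more columns than shown here.

```python
import xml.etree.ElementTree as ET

# A tiny fragment in the general shape of a MediaWiki XML stub dump
# (illustrative; real dumps carry an XML namespace and many more fields).
STUB_XML = """<mediawiki>
  <page>
    <title>Example</title>
    <ns>0</ns>
    <id>42</id>
  </page>
</mediawiki>"""

def pages_to_sql(xml_text):
    """Turn <page> entries from a stub dump into INSERT statements.

    Titles are stored with underscores instead of spaces, matching the
    convention of the MediaWiki page table.
    """
    root = ET.fromstring(xml_text)
    statements = []
    for page in root.iter("page"):
        page_id = page.findtext("id")
        ns = page.findtext("ns")
        title = page.findtext("title").replace(" ", "_")
        statements.append(
            "INSERT INTO page (page_id, page_namespace, page_title) "
            "VALUES ({}, {}, '{}');".format(page_id, ns, title)
        )
    return statements

print(pages_to_sql(STUB_XML)[0])
```

A production tool would also need to escape titles properly and map fields per target MediaWiki version, which is exactly the versioning work the experimental tool aims to handle.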

Wikimedia Labs
 * In January, we made a number of performance and usability improvements. Three compute nodes were added to the pmtpa zone. Alex Monk added Echo notification support to labsconsole, passwordless sudo is now the default for projects, and shell requests are now created automatically on account creation. The sysadmin and netadmin roles have been combined into a single projectadmin role. GlusterFS was upgraded to address a memory leak, but the upgrade introduced a new bug that caused some instability in project storage; work is ongoing to improve the project storage situation.

Kiwix
The Kiwix project is funded and executed by Wikimedia CH.


 * We adapted the kiwix-plug script to the Tonidoplug2, a device cheaper than the Dreamplug. Kiwix was voted February's Project of the Month by Sourceforge users, and an interview with Emmanuel Engelhart was published. In January, for the first time, Kiwix reached 100,000 downloads in a month.


 * Besides Kiwix, the openZIM website was revamped and simplified for better readability. The openZIM bug tracker and source code management were migrated to the Wikimedia infrastructure (Bugzilla and Git).

Wikidata
The Wikidata project is funded and executed by Wikimedia Deutschland.


 * January has been an exciting month for Wikidata. The deployment on the first Wikipedia sites ([//blog.wikimedia.de/2013/01/14/first-steps-of-wikidata-in-the-hungarian-wikipedia/ Hungarian], [//blog.wikimedia.de/2013/01/30/wikidata-coming-to-the-next-two-wikipedias/ Hebrew and Italian]) was completed. At the same time, work has continued on the user interface and back-end for statements, the core part of Wikidata's second phase. This will enable users to enter information like the children of a given person or a link to their portrait on Wikimedia Commons. These features can already be tested on the demo system. We've also worked on making AbuseFilter work with Wikidata, and wrote a new mechanism to distribute changes to the clients (Wikipedia) so they can show Wikidata changes in their RecentChanges. We made progress on using Solr for search and rewrote the draft for the inclusion syntax to be much simpler. This is the syntax that editors will use to include data from Wikidata in Wikipedia. A manual for using Pywikipedia on Wikidata was written as well.


 * If you want to code on Wikibase, the software powering Wikidata, have a look at the outstanding bugs and tasks.

Future

 * The engineering management team continues to update the Deployments page weekly, providing up-to-date information on the upcoming deployments to Wikimedia sites, as well as the engineering roadmap, listing ongoing and future Wikimedia engineering efforts.