HHVM

HHVM (HipHop Virtual Machine) is a virtual machine for PHP with an associated just-in-time (JIT) compiler. Deploying HHVM on a MediaWiki wiki should lead to performance improvements across the board for most users.

This page is about Wikimedia-sponsored work on HipHop support in MediaWiki, and its deployment to Wikimedia production wikis.

Historically, the HipHop compiler was a Facebook project that compiled PHP code into C++ in order to speed up its execution. Facebook has since abandoned that approach, and now focuses its development efforts on HHVM itself.

Status

2014-08-monthly:

  • Since migrating test.wikipedia.org to HHVM exactly one week ago, we've had just one segfault (reported upstream: <https://github.com/facebook/hhvm/issues/3438>). That's very good.
  • Giuseppe shared some benchmarks in an e-mail to wikitech: <https://lists.wikimedia.org/pipermail/wikitech-l/2014-August/078034.html>. Also very good.
  • Re-imaging an app server was surprisingly painful: Giuseppe and I had to perform a number of manual actions to get the server up and running. The sequence of steps was poorly automated: update the server's salt key -> synchronize Trebuchet deployment targets -> fetch the scap scripts -> run sync-common to fetch MediaWiki -> rebuild the l10n cache. Doing this much manual work per app server isn't viable. Giuseppe and I fixed some of this, but there's more work to be done.
  • Mark submitted a series of patches (principally <https://gerrit.wikimedia.org/r/#/c/152903/>) to create a service IP and Varnish backend for an HHVM app server pool, with Giuseppe and Brandon providing feedback and amending the change. Brandon thinks it looks good and he may be able to deploy it some time next week.
  • The patch routes requests that are tagged with a specific cookie to the HHVM backends (see the first sketch after this list). Initially, we'll ask you (Wikimedia engineers and technically savvy / adventurous editors) to opt in to help with testing by setting the cookie explicitly. The next step will be to divert a fraction of general site traffic to those backends; exactly when that happens will depend on how many bugs the next round of testing uncovers.
  • We'll be adding servers to the HHVM pool as we reimage them.
  • Tim is looking at modifying the profiling feature of LuaSandbox to work with HHVM; we currently disable it due to <https://bugzilla.wikimedia.org/show_bug.cgi?id=68413>. Feedback from users on how important this feature is to them would be useful. (A sketch of the profiler calls follows this list.)
  • Giuseppe and Filippo noticed a memory leak on the HHVM job runner (mw1053). Aaron is trying to isolate it. Tracked in bug <https://bugzilla.wikimedia.org/show_bug.cgi?id=69428>.
  • Giuseppe is on vacation for the week of 8/18 - 8/22; Filippo is the point-person for HHVM in TechOps.
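
To make the cookie-based routing concrete: Varnish inspects each request for the opt-in cookie and, if present, sends it to the HHVM backend pool instead of the Zend pool. The actual cookie name is defined by the Gerrit change linked above; the name below is an assumption for illustration only. A minimal PHP sketch of opting a browser in:

    <?php
    // Hypothetical sketch: opt the current browser into the HHVM backend
    // pool by setting the routing cookie that the Varnish patch inspects.
    // The cookie name 'UseHHVM' is an assumption, not the real name, which
    // is set by the Gerrit change above. The sketch assumes Varnish only
    // checks for the cookie's presence, so any non-empty value works.
    setcookie(
        'UseHHVM',           // assumed cookie name
        'true',              // any non-empty value
        time() + 30 * 86400, // expire in 30 days
        '/',                 // send on every path
        '.wikipedia.org'     // send to all subdomains
    );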
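
For context on the LuaSandbox item, this is roughly what the profiling feature does under Zend today. A hedged sketch using the LuaSandbox extension's profiler calls (the sampling period and the Lua snippet are arbitrary choices for illustration):

    <?php
    // Sketch of the LuaSandbox profiling feature that we currently disable
    // under HHVM (bug 68413): sample the running Lua code every 2 ms and
    // report, per function, where the time went.
    $sandbox = new LuaSandbox();
    $sandbox->enableProfiler( 0.002 );
    $fn = $sandbox->loadString( 'for i = 1, 1e7 do end' );
    $fn->call();
    var_dump( $sandbox->getProfilerFunctionReport() );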

Roadmap

Here is the general plan for deploying HHVM to the production cluster:

  • Deployment to Beta Cluster (in parallel with other work) ✓ Done
  • Week of July 21: deployment to a few job runners in production ✓ Done
  • Deploy to the test.wikipedia.org application server ✓ Done
  • Deploy the Varnish module allowing partial deployment to a fraction of application servers ✓ Done
  • Limited deployment to a small number of application servers ✓ Done
  • Ramp up deployment to more application servers until most servers use HHVM
  • Deploy to remainder of services

Each step along the way will likely cause discovery of new bugs that need to be fixed before the next step can be completed, so dates are difficult to venture at this time.

Current work

Bugzilla: Open bugs, All bugs.

Rationale

It is a well-studied phenomenon that even small delays in response time (e.g. half of a second) can result in sharp declines in web user retention.[1][2] As a result, popular websites such as Google and Facebook invest heavily in site performance initiatives, and partially as a result, remain popular. Formerly popular sites (such as Friendster) suffered due to lack of attention to these issues.[3] Wikipedia and its sister projects must remain usable and responsive in order for the movement to sustain its mission.

Facebook, as a big user of PHP, has recognized this problem, and invested heavily[4] in a solution: HHVM, a virtual machine that compiles PHP bytecode to native instructions at runtime, the same strategy used by Java and C# to achieve their speed advantages. We're quite confident that this will result in big performance improvements on our sites as well.
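
As a concrete illustration of the two runtimes coexisting during the migration: HHVM defines the HHVM_VERSION constant, which PHP code can use to tell which engine is executing it. A minimal sketch:

    <?php
    // Detect which runtime is executing this script. HHVM defines the
    // HHVM_VERSION constant; the Zend engine does not.
    if ( defined( 'HHVM_VERSION' ) ) {
        echo "Running under HHVM " . HHVM_VERSION . "\n";
    } else {
        echo "Running under Zend PHP " . PHP_VERSION . "\n";
    }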

What will HipHop do for our end users?

MediaWiki is written in PHP, a language that is interpreted at run-time. Running this PHP code on every page view would be prohibitively expensive, so we put caching servers (running software such as Varnish) in front of MediaWiki; these cache the HTML that the PHP generates, so that the PHP does not have to run every time a page is viewed. These caches only serve users who are not logged in.[5] Actions which bypass the cache, and whose speed therefore depends on the run time of the PHP code, include:

  • Any page you view while logged in.
  • Saving pages that you've edited, whether you are logged in or not.

Any action we take to reduce the running time of MediaWiki's PHP code will therefore also decrease loading times for all of our logged-in users and for anyone who edits anonymously.
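
To make the caching behaviour concrete, here is a deliberately simplified model of the decision the cache layer makes. (The real logic lives in Varnish's VCL configuration, not in PHP, and the cookie name below is a placeholder.)

    <?php
    // Simplified model of the front-end cache decision: a request that
    // carries a login session cookie must reach PHP, because the page
    // has to include user-specific HTML; anything else can be answered
    // from the HTML cache without running PHP at all.
    // 'wiki_session' is a placeholder for the real session cookie name.
    function canServeFromCache( array $cookies ) {
        return !isset( $cookies['wiki_session'] );
    }

    if ( canServeFromCache( $_COOKIE ) ) {
        // Cache hit path: return previously rendered HTML; the speed of
        // the PHP runtime is irrelevant here.
    } else {
        // Logged-in (or cache-bypassing) request: MediaWiki's PHP runs,
        // so this is the path that HHVM speeds up.
    }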

HipHop was written to be a faster, more efficient PHP runtime than our current interpreter (Zend). It is our hope that by deploying HipHop as a replacement for Zend, our users will notice a tangible improvement in the performance of our sites.

How does our development work on HipHop affect MediaWiki developers?

In our initial sprint of work, due to finish at the end of March 2014, we hope to make it possible for anyone to elect to use HipHop on Beta Labs instead of Zend. This will be on a totally opt-in basis which can be disabled at any time. It will allow the MediaWiki Core team to gauge the performance of HipHop against that of Zend directly, using our current test infrastructure, instead of just estimating theoretical performance increases. It will also create a development environment that shows how much work is needed to make HipHop compatible with MediaWiki, and thus let us estimate how long it will take to get HipHop live in production as a full replacement for Zend.
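
For instance, a developer on Beta Labs could run the same toy script under both runtimes and compare timings. A micro-benchmark like this is only illustrative; the comparisons that matter will use the full request path and our existing test infrastructure.

    <?php
    // Toy micro-benchmark: time a tight loop and report which runtime
    // executed it. Numbers from a loop like this are only indicative;
    // whole-request benchmarks are what actually matter.
    $start = microtime( true );
    $sum = 0;
    for ( $i = 0; $i < 10000000; $i++ ) {
        $sum += $i;
    }
    $elapsed = microtime( true ) - $start;
    $runtime = defined( 'HHVM_VERSION' )
        ? 'HHVM ' . HHVM_VERSION
        : 'Zend PHP ' . PHP_VERSION;
    printf( "%s: %.3f seconds\n", $runtime, $elapsed );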

For other MediaWiki developers, the consequence of HipHop being deployed in this manner is that, if they use the Beta Cluster as a test environment, they will find it trivial to test how their patches perform under HipHop instead of Zend if they wish to. However, to minimise disruption, the opt-in nature of the infrastructure will allow developers to continue working entirely agnostic of the future HipHop migration if they prefer.

References and footnotes

  1. Brady Forrest, "Bing and Google Agree: Slow Pages Lose Users", O'Reilly Radar.
  2. Greg Linden's blog: "Marissa Mayer at Web 2.0". Mayer noted that increasing Google's response time from 0.4 seconds to 0.9 seconds caused a 20% drop in revenue and traffic.
  3. "Wallflower at the Web Party", The New York Times, October 15, 2006. Quote: "Kent Lindstrom, now president of Friendster, said the board failed to address technical issues that caused the company’s overwhelmed Web site to become slower."
  4. http://www.wired.com/wiredenterprise/2013/06/facebook-hhvm-saga/
  5. By definition, logged-in users cannot be served pages from a static cache, because the page served to them must include user-specific HTML, such as their username at the top right of the page. Unfortunately, this means that simply logging in causes a tangible decrease in how well our sites perform for you.