Commit access requests
This page serves as a central location for users to request commit access to SVN. Please note that just because you've listed your name here doesn't mean that you'll get access. Final approval for access still lies with Brion and Tim.

Request format
Copy and paste the following text to the bottom of the list. Of course you can add extra information you think is relevant, but try to keep it simple and to the point.

=== User name ===
 * Requested commit name: abcd
 * Expected area to work in: extensions, localization, etc.
 * Link to public key
 * Past work on MediaWiki: patches, extensions, bug reports, etc.
 * Past programming experience: other open source projects, etc.
~~~~ (signature with date)

You must have an email address registered in your mediawiki.org preferences before your request will be approved.

Completed requests
Generally when requests are completed, an announcement e-mail is sent to wikitech-l and the Developers page is updated.


 * /Archive 1

Current requests
Add your request here.

Gabriel Wicke

 * Requested commit name: gwicke
 * Expected area to work in: extensions, core. First concrete need is publishing updates to my catfeed extension and open-sourcing client work.
 * Link to public key: http://dl.wikidev.net/id_dsa.pub
 * Past work on MediaWiki: Long ago (2003/04): Wrote the MonoBook skin, added user styles, Squid caching infrastructure, Parser fixes, user interface work. Since then: miscellaneous client work
 * Past programming experience: Basic pocket calculators, Zope/Plone, PHP content management systems, Squid, libevent/libev network programming, misc C and Python tools, AVR assembler, Haskell dabbling. GWicke 09:25, 13 January 2010 (UTC)
 * Your previous commit name appears to be gabrielwicke. Max Semenik 09:47, 13 January 2010 (UTC)
 * Yep, but that was back in the CVS days, I did not request an SVN account after the switch. If you would prefer me to use the old name feel free to set that up instead. Thanks! GWicke 17:53, 14 January 2010 (UTC)

Nathanael Thompson

 * Requested commit name: than4213
 * Expected area to work in: parser. I'm interested in improving the architecture of the parser by iteratively separating it into a data layer and parser engine. For anyone who's interested, I have an svn diff available for an initial change.
 * Link to public key: http://sites.google.com/site/natesprograms/Home/id_rsa.pub
 * Past work on MediaWiki: None, I'll work on bugs while I'm waiting for access.
 * Past programming experience: Bachelor's in Computer Science and 5 years' experience as a feature dev at Microsoft. Used the following in either an academic or professional setting: C++, Java, C#, Scheme, Scala, SQL, Batch, Bash, Perl, JavaScript, MATLAB, LaTeX. As mentioned above, I made a diff for an initial change to the parser and was able to pick up PHP pretty quickly. Than4213 00:20, 15 January 2010 (UTC)
 * Please register on this wiki and enable email features in your preferences. Max Semenik 06:32, 15 January 2010 (UTC)
 * Do we have an auth group for "branch only"? Or is that supplied by the 'extension' auth?  Definitely support giving him some way to work on a new parser in a branch; such a thing would need hundreds of hours of testing before it gets into core anyway. And there are plenty of extension bugs to work on in the meantime. Happy ‑ melon 12:46, 15 January 2010 (UTC)
 * We have parser tests for a reason. If it can pass all of those, we can do the same as when Tim deployed the preprocessor: check common pages automatically, document any practical changes, etc.  However, Than4213, you should be aware that the parser is one of the most complicated and CPU-intensive parts of the code, and it has very stringent compatibility requirements.  To ever get accepted, your code would likely have to 1) noticeably change the output of only a small fraction of a percent of the millions of pages on Wikimedia wikis (which can be pathologically baroque), and 2) either improve performance significantly or provide some other concrete benefit to Wikimedia. —Simetrical (talk • contribs) 14:42, 15 January 2010 (UTC)
 * I registered on this wiki and enabled email. I'm perfectly fine working in a branch and subjecting my changes to extensive parser tests.  I do not plan on changing the functionality of the parser. I just want to improve the architecture of the parser so that future changes to it won't be as complicated and error-prone.  I'll work on changes to the functionality of the parser after fixing up its architecture :).  I also realize that a good overhaul of the parser's architecture would take at least hundreds of hours, and I'm willing to do so.  I think my change will benefit Wikimedia by making changes to the parser a much less daunting task.  --Than4213 17:21, 15 January 2010 (UTC)
 * It would be interesting having a link to your patch. Platonides 17:26, 15 January 2010 (UTC)
 * Sure thing. You can find it at http://sites.google.com/site/natesprograms/Home/WMParser1.txt.  --Than4213 18:34, 15 January 2010 (UTC)
 * I meant that it will take hundreds of hours, at least, to get any new parser to pass the tests. But if someone is genuinely prepared to take a crack at it, they have my full support. If he can write a parser that behaves the same as the existing one, has exactly the same performance characteristics as the existing one, but doesn't fill everyone apart from Tim with supernatural dread, that would definitely qualify as a concrete benefit to MediaWiki/Wikimedia in my book.  Have you seen the work at Markup spec/BNF?  That's the closest we've ever come to actually describing the parser's grammar. Happy ‑ melon 23:32, 15 January 2010 (UTC)
 * Thanks for the link, it should be helpful. The solution I'm thinking of will use something like BNF but will also use regular expressions for pattern matching.  I think this will make it possible to still represent some of the more dysfunctional syntax.  --Than4213 17:05, 16 January 2010 (UTC)
 * Wow, the WikiText syntax is even stranger than I thought. I finally made a replacement for preprocessToXml that uses a simpler parser with a separate engine and data layer.  The new parser passes all 545 tests that the old parser passed.  Also, I ran the old and new parsers through the profiler, and it looks like they take about the same amount of time (although I should say that the times I was getting fluctuated quite a bit for each).  You can see my proposed change at http://sites.google.com/site/natesprograms/Home/WMParser2.txt.  When will I know if I can check this in?  Should I check this into trunk or a branch?  My next step is to integrate preprocessToObj into the new parser, but I'd like to check this in first.  --Than4213 23:11, 27 January 2010 (UTC)

Gurch

 * Requested commit name: gurch
 * Expected area to work in: API (core and extensions)
 * key
 * Past work on MediaWiki: a half-dozen one-line patches, a larger one that, to quote Tim, "may crash the site", some bug/feature requests, some documentation and a fair bit of general familiarity with the technical side of things on Wikimedia projects.
 * Past programming experience: I wrote Huggle; whether that's a good or bad thing is open to interpretation.

Roan and Reedy both insisted I request this. Gurch 13:00, 24 January 2010 (UTC)