User:SSastry (WMF)

My name is Subramanya Sastry and in May 2012, I joined the Visual Editor team at the Wikimedia Foundation as a senior software engineer. I will primarily work with Gabriel Wicke on the Parsoid backend parser piece.

Wiki pages with wikitext use cases/tests

 * Quotes
 * UL/OL Lists
 * DL Lists

Other useful wiki pages to test against

 * Mediawiki Formatting Help page: Help:Formatting
 * Big page that can be a stressor: en:Wikipedia:Village_pump_(technical)

Parsoid/VE Notes
Notes I am making as I work through the code/algorithms/strategies for parsing wikitext in the context of the Visual Editor project. These notes may reflect a partial understanding or even misunderstanding of the issues involved, and are more notes to myself than anything else.
 * Handling whitespace

While the specific newline issues that led to the formulation of the note have mostly been addressed, the broad idea in the note applies more generally, not just to whitespace: use the original wikitext to serialize most of the unmodified text. This has the added benefit that, for minor edits, there is no need to serialize a humongous DOM. For example, if someone corrects a typo on the Barack Obama page, does it really make sense to re-serialize everything? Would it be simpler to issue a patch request to the PHP service that string-replaces specific sections of the original wikitext?
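A minimal sketch of the string-replacement idea. The patch format and function name here are hypothetical, not an existing PHP service API; the point is only that a minor edit reduces to a range-replace on the original source.

```javascript
// Hypothetical sketch: apply a minor edit as a string patch on the
// original wikitext instead of re-serializing the whole DOM.
// `patch` marks a character range in the source plus replacement text.
function applyWikitextPatch(wikitext, patch) {
  // patch = { start, end, replacement } -- offsets into the original string
  return wikitext.slice(0, patch.start) +
         patch.replacement +
         wikitext.slice(patch.end);
}

const src = "This is a tpyo in the article.";
const fixed = applyWikitextPatch(src, { start: 10, end: 14, replacement: "typo" });
// fixed === "This is a typo in the article."
```

For a single minor edit, character offsets into the stored revision are enough; overlapping or concurrent edits would need the usual diff3-style conflict handling on top of this.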

Selective Serialization
More generally, it may be useful to think of serialization as a diff patch where applicable. I am not sure how easy this will be, but it is something to consider for large pages, where over time most changes will be minor relative to the size of the page. Serialization has to be complete in and of itself to support all use cases and cannot rely on modification hints for correctness. But modification hints from the visual editor could help the serializer optimize performance by focusing on the modified bits and patching the source wikitext string rather than regenerating it from scratch.
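A rough illustration of what hint-driven selective serialization might look like. All names here are hypothetical sketches; `dsr` stands in for per-node source-range offsets of the kind Parsoid records in data-parsoid, and `serializeNode` stands in for a full wikitext serializer.

```javascript
// Hypothetical sketch of selective serialization: each node carries its
// original source range (`dsr`), and only nodes the editor flagged as
// modified are re-serialized; everything else reuses the source string.
function selectiveSerialize(nodes, originalWikitext, serializeNode) {
  return nodes.map(node =>
    node.modified
      ? serializeNode(node)                                // regenerate changed bits
      : originalWikitext.slice(node.dsr[0], node.dsr[1])   // reuse original wikitext
  ).join("");
}

const src = "== Heading ==\nSome text.";
const nodes = [
  { dsr: [0, 13], modified: false },
  { dsr: [13, 24], modified: true, text: "\nEdited text." },
];
const out = selectiveSerialize(nodes, src, n => n.text);
// out === "== Heading ==\nEdited text."
```

Note that the unmodified heading comes back byte-for-byte from the source, so any idiosyncratic spacing or formatting the author used survives the round trip untouched.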

Edit Transactions
Once a page has attained a certain size, a lot of edits on the page are likely going to be "minor" relative to the size of the page -- this is especially true for humongous pages. So, here is another idea for supporting high-performance edits.

Consider a page $$P$$ and let $$r$$ be its latest revision. Let $$P_r$$ denote this revision of page $$P$$. Now consider edits $$E_1, E_2 \cdots E_k$$ of the page, which produce page revisions $$P_{r+1} \cdots P_{r+k}$$. In regular operation, to fetch a revision $$P_{r+i}$$, you would fetch the wikitext for that revision, parse it, and send it to the VE; the new edit would then have to be serialized back to wikitext before being stored in the DB. But, given the earlier observation about minor edits, here is one way to improve on this and deliver better performance.

Let $$D(P_r)$$ be the DOM produced by Parsoid when it parses $$P_r$$. If this version is cached on disk, then to produce the DOM for page $$P_{r+i}$$, you would process the edit transactions $$E_1 \cdots E_i$$ and transform $$D(P_r)$$. If these edit transactions can be supported easily and efficiently, then there is no need to parse wikitext on every fetch. You might also decide to always cache the latest revision. So, the basic operation to support is: apply an edit $$E$$ to an existing DOM $$D$$. This requires a representation for $$E$$ and an efficient way to apply it. This is all a pie-in-the-sky idea at this time; I am documenting it here for future consideration.
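A toy sketch of what an edit-transaction representation and its application might look like, assuming edits address nodes by a path of child indices into a DOM-like tree. Every name and the transaction format here are hypothetical; a real design would also need to handle path invalidation across successive edits.

```javascript
// Pie-in-the-sky sketch: an edit transaction E addresses a node by a path
// of child indices and applies one operation to the cached tree D.
function applyEdit(dom, edit) {
  // edit = { path: [i, j, ...], op: "replace"|"remove"|"insert", node? }
  const path = edit.path.slice();
  const index = path.pop();          // last step selects the target slot
  let parent = dom;
  for (const i of path) parent = parent.children[i];
  if (edit.op === "replace") parent.children[index] = edit.node;
  else if (edit.op === "remove") parent.children.splice(index, 1);
  else if (edit.op === "insert") parent.children.splice(index, 0, edit.node);
  return dom;
}

const dom = { children: [{ text: "old" }, { text: "keep" }] };
applyEdit(dom, { path: [0], op: "replace", node: { text: "new" } });
// dom.children[0].text === "new"; dom.children[1] is untouched
```

With something like this, producing $$D(P_{r+i})$$ from a cached $$D(P_r)$$ is a fold of `applyEdit` over the transactions $$E_1 \cdots E_i$$, with no wikitext parse on the fetch path.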

Disclaimer
Although I work for the Wikimedia Foundation, contributions under this account do not necessarily represent the actions or views of the Foundation unless expressly stated otherwise. For example, edits to articles or uploads of other media are done in my individual, personal capacity unless otherwise stated.