Extension:LinkTitles

What can this extension do?
The LinkTitles extension automatically adds links to existing page titles that occur in a page's text. Whenever a page is edited and saved, the extension checks whether any existing page titles occur in the text and automatically links them to the corresponding pages. This cross-references your wiki for you.

Usage
There is not much to do. Simply install the extension, and then edit a page. When you save the page, any existing page titles that occur on the page will be converted to MediaWiki links. (See below for configuration options.)

When the checkbox "This is a minor edit" is checked, the extension will not parse the page, in order to save time when you make frequent small edits to a page.

Performance impact
The extension looks for the occurrence of all existing page titles in the page content whenever a page is saved. For large wikis, this may come at the cost of a noticeable delay when a page is saved. The developer uses the extension in a wiki with about 150 content pages (about 800 pages in total) without a noticeable impact on performance.

Limitations
Links are only updated when a page is saved. Therefore, titles of newly created pages will not automatically appear as links in existing pages until those pages are edited and saved again.

In the rare case that it messes up your page
If the extension messes up your page, you can always revert back to a previous version using MediaWiki's 'View history' function. Just make sure to click the "This is a minor edit" checkbox at the bottom of the edit form when you revert the changes. If this box is checked, the extension will leave your page content alone.

Download & Installation
To install this extension, download the archive and extract it to the extensions directory of your MediaWiki installation. Then, add the following to LocalSettings.php:
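The exact lines depend on your MediaWiki version and the version of the extension you downloaded; check the extension's own README for the authoritative snippet. A typical sketch looks like this:

```php
// Classic extension loading (older MediaWiki versions):
require_once( "$IP/extensions/LinkTitles/LinkTitles.php" );
// or, on MediaWiki 1.25+ with extension registration:
// wfLoadExtension( 'LinkTitles' );
```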

Configuration parameters
Parse page content whenever it is edited and saved, unless 'minor edit' box is checked. This is the default mode of operation. It has the disadvantage that newly created pages won't be linked to from existing pages until those existing pages are edited and saved.

Parse page content when it is viewed. Since MediaWiki caches pages, this may or may not apply to a given page view. Newly created pages will be linked to from existing ones, but due to the caching it is impossible to predict when parsing will be triggered. Therefore, this mode of operation is disabled by default.

Determines whether or not to add links to headings. By default, the extension will leave your (sub)headings untouched.

If $wgLinkTitlesPreferShortTitles is set to true, parsing will begin with shorter page titles. By default, the extension will attempt to link the longest page titles first, as these generally tend to be more specific.

Only link to page titles that have a certain minimum length. In my experience, very short titles can be ambiguous. For example, "mg" may stand for "milligrams" on a page, while the page title "Mg" redirects to the page "Magnesium". This setting prevents erroneous linking to very short titles by enforcing a minimum length. You can adjust this setting to your liking.
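Taken together, the options above would be set in LocalSettings.php. Note that only $wgLinkTitlesPreferShortTitles is named in the text above; the other variable names in this sketch are hypothetical placeholders following the same naming scheme, with values matching the defaults described in the prose:

```php
// Sketch of a LocalSettings.php configuration block.
// Only $wgLinkTitlesPreferShortTitles appears in the documentation above;
// the other variable names are hypothetical placeholders.
$wgLinkTitlesParseOnEdit        = true;  // hypothetical name: parse on save (default mode)
$wgLinkTitlesParseOnRender      = false; // hypothetical name: parse on view (off by default)
$wgLinkTitlesParseHeadings      = false; // hypothetical name: leave (sub)headings untouched
$wgLinkTitlesPreferShortTitles  = false; // link longest titles first (default)
$wgLinkTitlesMinimumTitleLength = 3;     // hypothetical name and value: skip very short titles
```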

Issues

 * The extension performs a case-insensitive regexp search. Therefore, brackets may be added to words whose capitalization differs from the actual page title, causing 'broken' wiki links to appear. You may want to create redirect pages for these variants (which also handles different user input).
 * When a page title contains special characters, they may not be properly escaped in the regex, causing unexpected behavior.
 * If a page title occurs in a URL by coincidence, the title words will be enclosed in [[...]] brackets, breaking the URL.
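The escaping issue in the second bullet can be illustrated with a short sketch (in Python rather than the extension's own PHP; `link_title` is a hypothetical helper, not part of the extension). Regex metacharacters in a page title must be neutralized before the search:

```python
import re

def link_title(content: str, title: str) -> str:
    """Case-insensitively wrap occurrences of `title` in [[...]] brackets.

    re.escape() neutralizes regex metacharacters such as '+' or '(' that
    may appear in page titles; without it, a title like 'C++' would raise
    a "multiple repeat" error or match the wrong text.
    """
    pattern = re.compile(re.escape(title), re.IGNORECASE)
    return pattern.sub(lambda m: "[[" + m.group(0) + "]]", content)

print(link_title("I like c++ a lot.", "C++"))
# → I like [[c++]] a lot.
```

Without the escaping step, `re.compile("C++")` is an invalid pattern, which is the kind of unexpected behavior the bullet above warns about.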

Algorithm
You can browse the source code at https://github.com/bovender/LinkTitles.

The extension uses the following algorithm to convert words to links:
 * The extension is called whenever an ArticleSave event occurs.
 * It requests the page titles one by one from the wiki database, starting with the longest title.
 * For each of the page titles:
   * If the page title from the database is different from the title of the current page:
     * Break the page content string into parts separated by wiki links "[[...]]" or by headings "==...==".
     * For each substring that does not represent a wiki link or heading:
       * Perform a case-insensitive regular expression search and replace to add "[[...]]" brackets to every occurrence of the title.
     * Put the substrings back together.
   * Repeat with the next page title.
 * Return the converted page content.
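The steps above can be sketched as follows. This is a Python illustration only, under the assumption stated in the list (split on links and headings, replace in the remaining substrings); the real extension is written in PHP and reads titles from the MediaWiki database, and `add_links` is a hypothetical function name:

```python
import re

# Split points: existing wiki links [[...]] and headings ==...==
SKIP = re.compile(r"(\[\[.*?\]\]|^=+.*?=+$)", re.MULTILINE)

def add_links(content: str, titles: list[str], current_title: str) -> str:
    # Longest titles first, as they tend to be more specific.
    for title in sorted(titles, key=len, reverse=True):
        if title == current_title:
            continue  # do not link a page to itself
        pattern = re.compile(re.escape(title), re.IGNORECASE)
        # Because SKIP has a capturing group, odd-indexed parts of the
        # split are the links/headings themselves; leave those alone.
        parts = SKIP.split(content)
        for i in range(0, len(parts), 2):
            parts[i] = pattern.sub(lambda m: "[[" + m.group(0) + "]]", parts[i])
        content = "".join(parts)
    return content

text = "== Magnesium ==\nMagnesium is vital. See [[Calcium]] too."
print(add_links(text, ["Magnesium", "Calcium"], "Minerals"))
# → == Magnesium ==
#   [[Magnesium]] is vital. See [[Calcium]] too.
```

Note how the heading and the existing [[Calcium]] link are left untouched, while the occurrence of "Magnesium" in the body text is linked.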

Thus, there is a nested loop, which may potentially be slow when there are a lot of pages in the wiki, or when the page is very long. I have not noticed a delay in page processing, though, when using the extension in a closed-community wiki with about 130 true content pages on hosted web space.

Credits
Credits to Eugene and inhan at StackOverflow for help with the regular expression.