Extension:LinkTitles

What can this extension do?
The LinkTitles extension automatically links words in a page to existing pages with matching titles. Whenever a page is edited and saved, the extension checks whether any existing page titles occur in the text and automatically links them to the corresponding pages. This cross-references your wiki for you.

Usage
There is not much to do. Simply install the extension, and then edit a page. When you save the page, any existing page titles that occur on the page will be converted to MediaWiki links. (See below for configuration options.)

When the checkbox "This is a minor edit" is checked, the extension will not parse the page, in order to save time when you make frequent small edits to a page.

Performance impact
The extension looks for occurrences of all existing page titles in the page content whenever a page is saved. For large wikis, this may cause a noticeable delay when a page is saved. The developer uses the extension in a wiki with about 150 content pages (about 800 pages in total) without a noticeable impact on performance.

Limitations
Links are only updated when a page is saved. Therefore, titles of newly created pages will not automatically appear as links in existing pages until they are edited and saved again.

In the rare case that it messes up your page
If the extension messes up your page, you can always revert back to a previous version using MediaWiki's 'View history' function. Just make sure to click the "This is a minor edit" checkbox at the bottom of the edit form when you revert the changes. If this box is checked, the extension will leave your page content alone.

Download & Installation
To install this extension, download the archive and extract it to the extensions directory of your MediaWiki installation. Then, add the following to LocalSettings.php:
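The exact statement depends on your MediaWiki version; a typical addition to LocalSettings.php might look like this (assuming the extension files live in extensions/LinkTitles):

```php
// MediaWiki 1.25 and later (extension registration):
wfLoadExtension( 'LinkTitles' );

// Older MediaWiki versions instead use the classic include:
// require_once "$IP/extensions/LinkTitles/LinkTitles.php";
```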

Alternatively, if you have Git installed on your server, you can change to the extensions directory, and then clone the LinkTitles GitHub repository to your server (this will copy the entire commit history to your server).
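Assuming your shell's current directory is the MediaWiki installation root, the clone might look like this (the repository URL is the one given in the Algorithm section below):

```shell
cd extensions
git clone https://github.com/bovender/LinkTitles.git
```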

To update a cloned git repository, change to the LinkTitles directory, then issue the command
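Assuming the repository was cloned into extensions/LinkTitles, the update might look like this:

```shell
cd extensions/LinkTitles
git pull
```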

Configuration parameters
Parse page content whenever it is edited and saved, unless 'minor edit' box is checked. This is the default mode of operation. It has the disadvantage that newly created pages won't be linked to from existing pages until those existing pages are edited and saved.

Parse page content when it is viewed. Since MediaWiki caches pages, this may or may not apply to a given page view. Newly created pages will be linked to from existing ones, but due to the caching it is impossible to predict when parsing will be triggered. Therefore this mode of operation is disabled by default.
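A sketch of how these two modes might be toggled in LocalSettings.php; the parameter names here are assumptions, so verify them against the extension's own documentation:

```php
// Assumed parameter names -- verify against the extension's documentation.
$wgLinkTitlesParseOnEdit   = true;  // link titles when a page is saved (default)
$wgLinkTitlesParseOnRender = false; // do not link titles when a page is viewed (default)
```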

Determines whether or not to add links to headings. By default, the extension will leave your (sub)headings untouched.

If $wgLinkTitlesPreferShortTitles is set to true, parsing will begin with shorter page titles. By default, the extension will attempt to link the longest page titles first, as these generally tend to be more specific.

Only link to page titles that have a certain minimum length. In my experience, very short titles can be ambiguous. For example, "mg" may mean "milligrams" on a page, but there may be a page title "Mg" which redirects to the page "Magnesium". This setting prevents erroneous linking to very short titles by enforcing a minimum length. You can adjust this setting to your liking.
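A minimal sketch of this setting in LocalSettings.php; the parameter name is an assumption here, so check the extension's own documentation for the exact spelling:

```php
// Assumed parameter name -- verify against the extension's documentation.
// Page titles shorter than this number of characters will never be linked.
$wgLinkTitlesMinimumTitleLength = 3;
```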

Exclude page titles in the array from automatic linking. You can populate this array with common words that happen to be page titles in your Wiki. For example, if for whatever reason you had a page "And" in your Wiki, every occurrence of the word "and" would be linked to this page.

To add page titles to the black list, append them to the array in your LocalSettings.php, using one statement for every page title that you want to put on the black list.
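For example, assuming the black list is held in an array parameter called $wgLinkTitlesBlackList (verify the exact name against the extension's documentation), the statements in LocalSettings.php might look like this:

```php
// Assumed parameter name -- verify against the extension's documentation.
// One statement per page title to exclude from automatic linking.
$wgLinkTitlesBlackList[] = 'And';
$wgLinkTitlesBlackList[] = 'Or';
```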

Keep in mind that a MediaWiki page title always starts with a capital letter. If you have lowercase first letters in the black list array, they will have no effect.

If set to true, do not parse the variable text of templates, i.e. leave the entire text between the curly brackets untouched. If set to false (the default setting), the text after the pipe symbol ("|") will be parsed.

If set to true, only link the first occurrence of a title on a given page.

Restrict linking to occurrences of the page titles at the start of a word. If you want to have only the exact page titles linked, you need to set both options $wgLinkTitlesWordStartOnly and $wgLinkTitlesWordEndOnly to true. On the other hand, if you want to have all occurrences of a page title linked, even if they are in the middle of a word, you need to set both options to false.
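For example, to link only exact page titles, you might add the following to LocalSettings.php:

```php
// Link only whole words that exactly match a page title.
$wgLinkTitlesWordStartOnly = true;
$wgLinkTitlesWordEndOnly   = true;
```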

See the above option $wgLinkTitlesWordStartOnly for an explanation.

Keep in mind that linking is case-sensitive.

Issues

 * When a page title contains special characters, they may not be properly escaped in the regex, causing unexpected behavior.

Algorithm
You can browse the source code at https://github.com/bovender/LinkTitles.

The extension uses the following algorithm to convert words to links:
 * The extension is called whenever an ArticleSave event occurs.
 * It requests the page titles one by one from the wiki database, starting with the longest title.
 * For each of the page titles:
 * If the working page title from the database is different from the current page title:
 * Break the page content string into parts separated by existing wiki links ("[[...]]") or by headings ("==...==").
 * For each substring that does not represent a wiki link or heading:
 * Perform a case-insensitive regular expression search and replace to add link markup ("[[...]]") around every occurrence of the title.
 * Put the substrings back together.
 * Repeat with the next page title.
 * Return the converted page content.
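The steps above can be sketched as follows. This is an illustrative Python sketch only; the actual extension is written in PHP and works against MediaWiki's database and hook system, and its exact regular expressions differ:

```python
import re


def link_titles(content: str, titles: list[str], current_title: str) -> str:
    """Illustrative sketch of the linking algorithm (not the real PHP code)."""
    # Process the longest titles first, as they tend to be more specific.
    for title in sorted(titles, key=len, reverse=True):
        if title == current_title:
            continue  # never link a page to itself
        # Split into segments, keeping existing wiki links and headings
        # as separate parts so they are never touched.
        parts = re.split(r'(\[\[.*?\]\]|^==.*?==$)', content, flags=re.MULTILINE)
        for i, part in enumerate(parts):
            # Even indices are plain text; odd indices are the kept delimiters.
            if i % 2 == 0:
                # \b boundaries correspond to having both WordStartOnly
                # and WordEndOnly enabled.
                parts[i] = re.sub(
                    r'\b' + re.escape(title) + r'\b',
                    lambda m: '[[' + title + '|' + m.group(0) + ']]',
                    part,
                    flags=re.IGNORECASE,
                )
        content = ''.join(parts)
    return content
```

Because the content is re-split for every title, links created in an earlier iteration are protected from being linked again in later iterations.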

Thus, there is a nested loop, which may potentially be slow when there are a lot of pages in the wiki, or when the page is very long. I have not noticed a delay in page processing, though, when using the extension in a closed-community wiki with about 130 true content pages on hosted web space.

Credits
Credits to Eugene and inhan at StackOverflow for help with the regular expression.