Apache configuration

[https://httpd.apache.org/ Apache] is the webserver most commonly used with MediaWiki.

PHP as an Apache module
MediaWiki is written to use PHP as an Apache module. Your PHP is probably already configured as a module if your wiki serves URLs like this: example.com/index.php/Main_Page
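To check which SAPI your PHP is actually using, one quick way is to create a small test page (a sketch only; the document root /var/www/html is an assumption, and you should delete the file afterwards):

cat > /var/www/html/sapi-test.php <<'EOF'
<?php echo php_sapi_name(); ?>
EOF

Visiting sapi-test.php in a browser should print apache2handler when PHP runs as an Apache module, and something like cgi-fcgi under CGI/FastCGI setups.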


Install the Apache PHP module:
 * sudo apt-get install libapache2-mod-php

Enable the Apache php5 module:
 * a2enmod php5

Restart Apache:
 * service apache2 restart
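To verify that the PHP module is actually loaded, you can list Apache's modules (apachectl is named apache2ctl on some Debian/Ubuntu systems, and the module name varies with your PHP version):

apachectl -M | grep -i php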

CGIWrap
Create a dedicated system user that the wiki will run as:
useradd -M -s /sbin/nologin wikiuser


 * Have a cgi-bin folder containing CGIWrap (for example in /home/myuser/cgi-bin). Once everything is configured, keep only cgiwrap and move the debug versions to another folder in case you ever need them. Your cgiwrap binary should be accessible only to Apache (chown and chmod accordingly; see the sketch below).
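A sketch of such permissions, assuming Apache runs under the www-data group (this is distribution-dependent) and that your cgiwrap binary keeps the setuid bit it needs to switch users:

chown root:www-data /home/myuser/cgi-bin/cgiwrap
chmod 4750 /home/myuser/cgi-bin/cgiwrap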

 * Inside the cgi-bin folder, create a symbolic link to the MediaWiki root:

ln -s /home/myuser/public_html/wiki /home/myuser/cgi-bin/wikilink

 * In your wiki's .htaccess file, add the following definitions:

AddHandler php-wrapper .php
Action php-wrapper /cgi-bin/cgiwrap/wikiuser/wikilink


 * Finally, chown and chmod all the .php files in your MediaWiki folder so that they are accessible solely by wikiuser.
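A sketch of what that can look like, assuming the wiki lives in /home/myuser/public_html/wiki (static files keep their default permissions so Apache can still serve them directly):

chown -R wikiuser:wikiuser /home/myuser/public_html/wiki
find /home/myuser/public_html/wiki -name '*.php' -exec chmod 600 {} +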

The files will be accessible as usual. You do not need to include cgi-bin anywhere in your paths, as this is transparently taken care of for you.

I strongly suggest you start out with /cgi-bin/cgiwrapd/... as your php-wrapper, as the debug version shows precisely what is and is not working. I also strongly suggest you do not delete your CGIWrap source folder until everything works perfectly, as this is a real trial-and-error process that can take a long time. It is worth your time, though: your MediaWiki will run in its own separate process, under its own uid, unable to interfere with any other uid. The inverse is also true, except for root, which can read anything anywhere.
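With the .htaccess definitions above, that debugging variant would look like this (swap cgiwrapd back to cgiwrap once everything works):

Action php-wrapper /cgi-bin/cgiwrapd/wikiuser/wikilink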

mod_alias / mod_rewrite
The recommended method of configuring short URLs involves [https://httpd.apache.org/docs/current/mod/mod_alias.html mod_alias]. Other methods use [https://httpd.apache.org/docs/current/mod/mod_rewrite.html mod_rewrite] instead.
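As an illustration only, assuming MediaWiki is installed in /var/www/html/w and you want article URLs under /wiki/, the mod_alias method is a single directive in the server configuration:

Alias /wiki "/var/www/html/w/index.php"

The equivalent mod_rewrite rules look roughly like this:

RewriteEngine On
RewriteRule ^/?wiki(/.*)?$ %{DOCUMENT_ROOT}/w/index.php [L]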

Spiders and bots
You really should use a robots.txt file to tell well-behaved spiders not to download dynamically generated pages (edit pages, for instance). This can reduce the load on your webserver, preserve your bandwidth, and prevent duplicate content issues with search engines. However, malicious bots could still tie up your webserver and waste your bandwidth by downloading a large volume of pages extremely quickly. Request throttling can help protect against this.
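A minimal robots.txt along those lines, assuming a short-URL setup where articles are served under /wiki/ and the script files live under /w/:

User-agent: *
Disallow: /w/

This keeps well-behaved crawlers on the rendered article pages while blocking dynamic URLs such as /w/index.php?action=edit.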