Manual:Backing up a wiki

It is important to make regular backups of your wiki (data and files). This page provides an overview of the backup process for a typical MediaWiki wiki; you will probably want to devise your own backup scripts or schedule to suit the size of your wiki and your individual needs.

Overview
MediaWiki stores important data in two places:
 * Database : Pages and their contents, users and their preferences, metadata, search index, etc.
 * File system : Software configuration files, custom skins, extensions, images (including deleted images), etc.

Consider making the wiki read-only before creating the backup - see $wgReadOnly. This makes sure all parts of your backup are consistent (although some of your installed extensions may write data nonetheless).

File transfer
You will have to choose a method for transferring the backup files off the server:

 * Non-private data you can simply [ https://github.com/WikiTeam/wikiteam/wiki/Tutorial#Publishing_the_dump publish on archive.org] and/or in a public directory of your webserver.
 * SCP (or WinSCP), SFTP/FTP or any other transfer protocol you choose.
 * The hosting company might provide a file manager interface via a web browser; check with your provider.

Database
Most of the critical data in the wiki is stored in the database. If your wiki is currently offline and uses a single-file backend such as SQLite, its database can be backed up by simply copying the database file.

When using the default MySQL or MariaDB backend, the database can be dumped into a script file which can be used later to recreate the database and all the data in it from scratch.

Automysqlbackup
See the package on Debian:
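
apt show automysqlbackup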

Install the package:
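
apt install automysqlbackup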

All your databases will be saved in /var/lib/automysqlbackup/:
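
# daily, weekly and monthly subdirectories hold the rotating dumps
ls /var/lib/automysqlbackup/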

Manual backup:
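
automysqlbackup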

Restore a database:
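
# Pick the actual dump file to restore (path and file name here are illustrative):
gunzip < /var/lib/automysqlbackup/daily/dbname/dbname_daily.sql.gz | mysql -u userid -p dbname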

For other distributions, see the project page on [ https://sourceforge.net/projects/automysqlbackup/ Sourceforge].

Mysqldump from the command line
The most convenient way to create a dump file of the database you want to back up is to use the standard MySQL dump tool mysqldump from the command line. Be sure to get the parameters right or you may have difficulty restoring the database. Depending on database size, mysqldump could take a considerable amount of time.

First insert the following line into LocalSettings.php (the message text is an example and can be adjusted):

$wgReadOnly = 'Dumping Database, Access will be restored shortly';

This can be removed as soon as the dump is completed.

Example of the command to run on the Linux/UNIX shell:

mysqldump -h hostname -u userid -p --default-character-set=whatever dbname > backup.sql

Substitute hostname, userid, whatever, and dbname as appropriate. All four may be found in your LocalSettings.php (LSP) file. hostname may be found under $wgDBserver; by default it is localhost. userid may be found under $wgDBuser, and dbname under $wgDBname. whatever is the character set, which may be found under $wgDBTableOptions, where it is listed after DEFAULT CHARSET=. If whatever is not specified, mysqldump will likely use the default of utf8, or, if using an older version of MySQL, latin1. After running this line from the command line, mysqldump will prompt for the server password (which may be found under $wgDBpassword in LSP).

See mysqldump for a full list of command line parameters.

The output from mysqldump can instead be piped to gzip, for a smaller output file, as follows

mysqldump -h hostname -u userid -p dbname | gzip > backup.sql.gz

Some newer versions of MySQL might show an error about tablespaces and PROCESS privilege. MediaWiki does not use tablespaces. The solution is to add the --no-tablespaces option to the command:

mysqldump --no-tablespaces -h hostname -u userid -p dbname | gzip > backup.sql.gz

A similar mysqldump command can be used to produce XML output instead, by including the --xml parameter.

mysqldump -h hostname -u userid -p --xml dbname > backup.xml

and to compress the file with a pipe to gzip

mysqldump -h hostname -u userid -p --xml dbname | gzip > backup.xml.gz

Remember to also backup the file system components of the wiki that might be required, e.g., images, logo, and extensions.

Running mysqldump with Cron
Cron is the time-based job scheduler in Unix-like computer operating systems. Cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates.

A sample command that you may run from a crontab may look like this:

nice -n 19 mysqldump -u $USER --password=$PASSWORD $DATABASE -c | nice -n 19 gzip -9 > ~/backup/wiki-$DATABASE-$(date '+%Y%m%d').sql.gz

The nice -n 19 prefix lowers the priority of the process.

Use valid values for $USER, $PASSWORD, and $DATABASE. This writes a backup file with the date in the filename, so you end up with a rolling set of backups. To back up the files and extensions as well, see the file system section below.

If you want to add this task to Cron through cPanel, you must escape the character "%":

/usr/bin/mysqldump -u $USER --password=$PASSWORD $DATABASE -c | /bin/gzip > ~/backup/wiki-$DATABASE-$(date '+\%Y\%m\%d').sql.gz

or you will get an error:

/bin/sh: -c: line 0: unexpected EOF while looking for matching `'' /bin/sh: -c: line 1: syntax error: unexpected end of file

Tables
Some of the tables in the dump hold data that is more or less temporary. While those tables need to be present in a proper dump, their data does not, so omitting it saves disk space (beyond just gzipping). However, under certain circumstances the disadvantage of having to rebuild all this data may outweigh the saving in disk space (for example, on a large wiki where restoration speed is paramount).
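
As a sketch (the table names here are the standard MediaWiki cache and search tables; adjust the names and any table prefix for your schema), you could dump everything except the rebuildable data, then append the bare structure of those tables:

mysqldump -h hostname -u userid -p dbname --ignore-table=dbname.objectcache --ignore-table=dbname.searchindex > backup.sql
mysqldump -h hostname -u userid -p --no-data dbname objectcache searchindex >> backup.sql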

See mailing list thread mysql5 binary schema about the topic.

Latin-1 to UTF-8 conversion
See the relevant section of the upgrading page for information about this process. Also see the talk page for more information about working with character sets in general.

PostgreSQL
You can use the pg_dump tool to back up a MediaWiki PostgreSQL database. For example:

pg_dump mywiki > mywikidump.sql

will dump the mywiki database to mywikidump.sql.

To restore the dump:

psql mywiki -f mywikidump.sql

You may also want to dump the global information, e.g. the database users:

pg_dumpall --globals > postgres_globals.sql
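
pg_dump can also produce a compressed, custom-format archive, which is restored with pg_restore instead of psql; a minimal sketch:

pg_dump -Fc mywiki > mywikidump.dump
pg_restore -d mywiki mywikidump.dump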

phpMyAdmin
Turn your wiki read-only by adding $wgReadOnly to LocalSettings.php, as shown in the mysqldump section above.

Find the wiki database name in LocalSettings.php, where it is given by $wgDBname; for example: $wgDBname = "my_wiki";

 * Open the browser to your phpMyAdmin link, log in, and choose the wiki database.
 * Select Export.
 * Make sure all items under Export are highlighted, and make sure Structure is highlighted (it's important to maintain the table structure).
 * Optionally check Add DROP TABLE to delete existing references when importing.
 * Make sure Data is checked.
 * Select zipped.
 * Click on GO and save the backup file.
 * Remove $wgReadOnly from LocalSettings.php.

Remember to also backup the file system components of the wiki that might be required, e.g. images, logo, and extensions.

HeidiSQL (alternative to phpMyAdmin)
[ http://www.heidisql.com/ HeidiSQL] is similar to phpMyAdmin, but without the restrictions of phpMyAdmin's free version. HeidiSQL requires a direct database connection, whereas some hosts may only offer web interfaces (phpMyAdmin) to firewalled databases.

File system
MediaWiki stores other components of the wiki in the file system.

The most important of these are:


 * configuration files such as LocalSettings.php.
 * uploaded files in the images/ directory (including deleted files, thumbnails, and rendered math and SVG images, if applicable).

The best method to back these up is to place them into an archive file, such as a .tar file, which can then be compressed if desired. On Windows, applications such as WinZip or 7-Zip can be used.

For Linux variants, the whole installation directory can be archived with tar.
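
A minimal sketch, assuming the wiki is installed at /var/www/html/wiki (an illustrative path; adjust to your installation):

tar zcvhf wikidata.tgz /var/www/html/wiki

The h option dereferences symbolic links, so an images directory that is a symlink to another volume is archived in full.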

It should be possible to backup the entire "wiki" folder in "htdocs" if using XAMPP.

Configuration files
LocalSettings.php is the most important of these, but a wiki might also have files like .htaccess or other web server configuration files that should be backed up.

Uploaded files
Files uploaded to the wiki are by default put into the images/ directory, separated into hashed subdirectories. There are also other directories such as images/archive/ and images/deleted/. These should all be backed up.

The images/thumb/ directory can be backed up along with everything else, but can optionally be excluded in order to save backup space. This directory stores the derived thumbnails of images and other files; generally multiple thumbnails per wiki file. After restoring from backup, these thumbnails will be recreated as required (although depending on your thumbnail configuration this may need to be a manual process).

Backup the content of the wiki (XML dump)
It is also a good idea to create an XML dump in addition to the database dump. XML dumps contain the content of the wiki (wiki pages with all their revisions), without the site-related data (they do not contain user accounts, image metadata, logs, etc).

XML dumps are less likely to cause problems with character encoding, are a practical means of transferring large amounts of content, and can easily be used by third-party tools, which makes them a good fallback should your main database dump become unusable.

To create an XML dump, use the command-line tool dumpBackup.php, located in the maintenance directory of your MediaWiki installation. See Manual:dumpBackup.php for more details.
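
For example, to dump all pages with their complete history, run the following from the installation directory:

php maintenance/dumpBackup.php --full > dump.xml

Use --current instead of --full to dump only the latest revision of each page.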

You can also create an XML dump for a specific set of pages online, using Special:Export, although attempting to dump large quantities of pages through this interface will usually time out.

To import an XML dump into a wiki, use the command-line tool importDump.php. For a small set of pages, you can also use the Special:Import page via your browser (by default, this is restricted to the sysop group).
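
For example, run from the installation directory:

php maintenance/importDump.php dump.xml

After a large import, running maintenance/rebuildrecentchanges.php updates the recent changes data.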

'' See Manual:Importing XML dumps for more information. ''

Without shell access to the server
If you have no shell access, use the WikiTeam Python script dumpgenerator.py from a DOS, Unix or Linux command line. It requires Python 2. Mediawiki Client Tools is developing Mediawiki Scraper, a Python 3.x port.
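
A typical invocation looks like this (the URL is a placeholder for your wiki's api.php):

python dumpgenerator.py --api=http://example.org/w/api.php --xml --images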

User account information won't be preserved. The XML dump can include the full page history or only the most recent revisions. The image dump will contain all file types with their associated descriptions. The siteinfo.json and SpecialVersion.html files will contain information about wiki features, such as the installed extensions and skins.

Full instructions are at the WikiTeam [ https://github.com/WikiTeam/wikiteam/wiki/Tutorial#I_have_no_shell_access_to_server tutorial] and [ https://github.com/mediawiki-client-tools/mediawiki-scraper Mediawiki Scraper] GitHub repositories.

See also Data dumps.

Scripts

 * Unofficial backup script by User:Duesentrieb.
 * Unofficial backup script by Flominator; creates a backup of all files and the database, with optional backup rotation.
 * User:Darizotas/MediaWiki Backup Script for Windows - a script for backing up a Windows MediaWiki install. Note: has no restore feature.
 * [ https://github.com/WikiTeam/wikiteam WikiTeam tools] - if you do not have server access (e.g. your wiki is in a free wikifarm), you can generate an XML dump and an image dump using dumpgenerator from WikiTeam tools (Python 2). See [ https://github.com/WikiTeam/wikiteam/wiki/Available-Backups some saved wikis].
 * Mediawiki Scraper - if you do not have server access, you can generate an XML dump and an image dump using dumpgenerator from Mediawiki Client Tools (Python 3).
 * Another [ https://github.com/samwilson/MediaWiki_Backup backup script] that dumps the database, files (just pictures by default, with an option to include all files in the installation) and XML; puts the site into read-only mode; timestamps backups; and reads the charset from LocalSettings.php. The script does not need to be modified for each site to be backed up, and also provides a restore script. It does not (yet) rotate old backups.
 * Another unofficial backup script by Lanthanis that exports the pages of specified namespaces as an XML file, dumps specified database tables, and adds further specified folders and files to a ZIP backup file. Can be used with the Windows task scheduler.
 * [ https://github.com/nischayn22/mw_backup mw_backup] - a script to make periodical backups; it creates daily, weekly and monthly backups of your database and images directory when run as a daily cron job.

Extensions

 * Extension:DumpsOnDemand – Allows users to generate and download database dumps
 * Extension:DataDump – Allows users to generate and download XML and file/image dumps