Extension:ArchiveLinks/Project/Design


 * On page save, all external links in the article are retrieved from the parser
 * if a link has already been archived, nothing is done
 * if a link has not yet been archived and is not blacklisted, it is added to a queue for a web bot to come by and archive it later (a sketch of this queueing step follows the list)
 * Sometime later a web bot comes by and attempts to retrieve each queued web page
 * if the archival is successful, the snapshot is saved and displayed on request
 * if the web site is down, the page is re-added to the queue to be checked later; if the page is still down after a certain number of attempts, the link is assumed to be dead and we stop trying
 * if the web site is up but the page can't be archived due to robots.txt rules or nocache/noarchive tags, the site is automatically blacklisted for a certain amount of time
 * if the web site is up but the page comes back as a 404 or a redirect, count it as a failed attempt, note it, and blacklist that link (the bot's decision logic is sketched below)
 * Add a hook to the parser to display a link to the cached version of the page for every external link on the wiki (possibly controlled by configuration options). Because this is done at parse time, the cache link may point to a page that has not yet been archived or where archival was unsuccessful (see the hook sketch below).
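
A minimal sketch of the save-time queueing step, in Python for illustration (the extension itself would be PHP). The table layout, column names, and the blacklist structure here are assumptions invented for the sketch, not the extension's actual schema:

<syntaxhighlight lang="python">
import sqlite3
import time
from urllib.parse import urlparse

# Illustrative storage: one table of finished archives, one queue table.
# Names and layout are assumptions for this sketch only.
conn = sqlite3.connect("archivelinks.db")
conn.execute("CREATE TABLE IF NOT EXISTS archive (url TEXT PRIMARY KEY, snapshot BLOB)")
conn.execute("CREATE TABLE IF NOT EXISTS queue "
             "(url TEXT PRIMARY KEY, queued_at REAL, attempts INTEGER DEFAULT 0)")

BLACKLISTED_HOSTS = {"spam.example.com"}  # hypothetical blacklist

def on_page_save(external_links):
    """Called with the external links the parser found in the saved article."""
    for url in external_links:
        if urlparse(url).hostname in BLACKLISTED_HOSTS:
            continue  # blacklisted: never queued
        if conn.execute("SELECT 1 FROM archive WHERE url = ?", (url,)).fetchone():
            continue  # already archived: nothing to do
        # INSERT OR IGNORE makes queueing idempotent: a link that is already
        # waiting in the queue is not added twice.
        conn.execute("INSERT OR IGNORE INTO queue (url, queued_at) VALUES (?, ?)",
                     (url, time.time()))
    conn.commit()
</syntaxhighlight>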
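The bot's decision logic, again as a hedged Python sketch distinguishing the four outcomes listed above (saved, retry/dead, site blacklist, link blacklist). The retry limit, the user-agent string, and the crude noarchive check are all assumptions; the real bot would have its own policy:

<syntaxhighlight lang="python">
import urllib.error
import urllib.request
import urllib.robotparser
from urllib.parse import urlparse

MAX_ATTEMPTS = 3  # assumed value for "a certain number of attempts"

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Redirects count as failed attempts, so refuse to follow them;
    # urllib then raises HTTPError for the 3xx response instead.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

_opener = urllib.request.build_opener(_NoRedirect)

def save_snapshot(url, body):
    """Placeholder for the storage step (e.g. the archive table above)."""

def bot_attempt(url, attempts):
    """Process one queued URL; return what the queue manager should do."""
    parts = urlparse(url)
    robots = urllib.robotparser.RobotFileParser(
        f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        robots.read()
        if not robots.can_fetch("ArchiveLinksBot", url):
            return "blacklist_site"  # robots.txt forbids archiving
    except OSError:
        pass  # unreachable robots.txt treated as permissive in this sketch

    try:
        req = urllib.request.Request(url, headers={"User-Agent": "ArchiveLinksBot"})
        resp = _opener.open(req, timeout=30)
    except urllib.error.HTTPError as e:
        if e.code == 404 or 300 <= e.code < 400:
            return "blacklist_link"  # dead or redirected: note and blacklist
        return "retry" if attempts + 1 < MAX_ATTEMPTS else "dead"
    except OSError:
        # Site down: re-queue, or give up once the attempt budget is spent.
        return "retry" if attempts + 1 < MAX_ATTEMPTS else "dead"

    body = resp.read()
    # Crude stand-in for parsing <meta name="robots" content="noarchive">.
    if b"noarchive" in body.lower() or b"nocache" in body.lower():
        return "blacklist_site"
    save_snapshot(url, body)
    return "saved"
</syntaxhighlight>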
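Finally, the display side. In the extension proper this would be a PHP parser/linker hook; the Python sketch below only shows the intended rewrite of each external link, with a made-up Special page URL standing in for wherever the cache viewer actually lives:

<syntaxhighlight lang="python">
from urllib.parse import quote

# Hypothetical viewer location; the real target is whatever page serves snapshots.
CACHE_VIEWER = "/wiki/Special:ArchiveLinks?url="

def render_external_link(url, text):
    """Emit the normal external link plus a trailing link to its cached copy.

    This runs at parse time, so the "[cache]" link is produced even when the
    URL has not been archived yet or archiving failed; the viewer page has to
    handle those cases itself.
    """
    return (f'<a rel="nofollow" class="external" href="{url}">{text}</a> '
            f'[<a href="{CACHE_VIEWER + quote(url, safe="")}">cache</a>]')

# Every external link gets a trailing "[cache]" link pointing at the viewer.
print(render_external_link("http://example.com/report", "report"))
</syntaxhighlight>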