Redirects
Topic on Project:Support desk

Star Warden (talkcontribs)

Hi. I have a problem with redirects on my wiki: http://dragon-mania-legends-wiki.mobga.me/Special:SpecialPages. The Broken redirects special page doesn't get updated, even though the problem was fixed a while ago and I ran php updateSpecialPages.php. The same goes for double redirects. The issue that appears on the double redirects page (the special page shows the same double redirect 44 times) also occurs in the list of redirects, where each redirect and the page it redirects to are listed multiple times. I ran fixDoubleRedirects.php, but it didn't solve anything. What else could I try?

Ciencia Al Poder (talkcontribs)

The problem is that the pagelinks table is not being updated, so as far as MediaWiki is concerned the redirects aren't fixed yet. That's caused by a malfunction in the Manual:Job queue. Looking at your wiki's statistics, there are currently 1660523 (!) pending jobs.
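For reference, you can check the pending job count yourself through the API's site statistics (this assumes api.php is served from the wiki root rather than a /w/ subdirectory):

    # the "jobs" field in the returned statistics block is the pending job count
    curl 'http://dragon-mania-legends-wiki.mobga.me/api.php?action=query&meta=siteinfo&siprop=statistics&format=json'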

You should first clear that job queue by executing runJobs.php (probably with a --maxjobs or --maxtime parameter to limit the number of jobs executed per run, since it may take a while to complete).
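As a rough sketch of a batched invocation, assuming the maintenance directory is /srv/dml-wiki/maintenance (the path that appears later in this thread):

    cd /srv/dml-wiki/maintenance
    # run at most 1000 jobs, and stop after 30 minutes even if more remain
    php runJobs.php --maxjobs 1000 --maxtime 1800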

Once done, set Manual:$wgRunJobsAsync to false and check whether the number of pending jobs still grows without limit when edits are made...
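A quick way to confirm the change afterwards from the shell, again assuming LocalSettings.php sits in the wiki root at /srv/dml-wiki:

    # expected output once the setting is in place: $wgRunJobsAsync = false;
    grep wgRunJobsAsync /srv/dml-wiki/LocalSettings.php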

Just another reason to fix task T142751 once and for all.

Star Warden (talkcontribs)

I see that that setting is now false by default. Shouldn't I just pull the recent changes through git?

Ciencia Al Poder (talkcontribs)

Only if you want to run unstable software; otherwise just change the setting in LocalSettings.php as I suggested and wait for the next stable or security release.

Star Warden (talkcontribs)

I am still getting the same issue, even though I ran runJobs.php and set that variable to false...

Ciencia Al Poder (talkcontribs)

Did you run runJobs.php? Did it run without errors? If you ran it with --maxjobs or --maxtime, you should execute it as many times as needed to bring the number of jobs to 0. The number of jobs has decreased slightly, but it's still several orders of magnitude above what should be a sane value.

Star Warden (talkcontribs)

I ran it without any parameters and got an error about max memory being reached or something like that. I forgot to take a screenshot. I remember running this same script three or four weeks ago without any issues. I am not sure whether it's because of setting that variable to false, but edits aren't reflected in real time anymore: it takes a few minutes for new edits to show up or new images to be displayed. Isn't there a faster way to run all the jobs? Bringing a number over 1.5 million down to 0 would take an enormous amount of time.

Ciencia Al Poder (talkcontribs)

Most of them are repeats from the same pages, so they'll be automatically discarded when picked up. The "fastest" way to run those jobs is through that script, either all of them at once or in small batches. With --memory-limit you can increase the memory limit so it doesn't fail with max memory errors (see the documentation linked from that script's page).

You can also run rebuildall.php to "apply" those pending edits, but that won't clear the job queue.
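A sketch combining both suggestions, again assuming the /srv/dml-wiki/maintenance path; the 512M value and the batch size are only examples:

    cd /srv/dml-wiki/maintenance
    # raise the PHP memory limit for this run ("max" removes the limit entirely)
    php runJobs.php --memory-limit 512M --maxjobs 5000
    # rebuild the link tables directly; note this does not empty the job queue
    php rebuildall.php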

Star Warden (talkcontribs)

I will try to run all the jobs, but it's going to take a while. Is it possible to have multiple people run the same script from different computers, to speed up the process?

Ciencia Al Poder (talkcontribs)

I don't think the script is designed to be run in parallel from several machines. A single invocation can run several jobs in parallel with --procs, though. Note also that the database is (usually) a single instance. If it hasn't been working for weeks, it shouldn't be critical to wait a while longer for the script to complete.
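For instance, a single invocation with three parallel job runners (the process count here is just an illustration):

    cd /srv/dml-wiki/maintenance
    php runJobs.php --procs 3 --maxtime 1800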

Star Warden (talkcontribs)

Is there a way to have PuTTY rerun the script automatically every time it stops? I keep getting memory-exhausted errors after it has run about 8 thousand jobs and I have to restart it (this usually happens every quarter of an hour). I tried using max with --memory-limit, but it said it wasn't able to allocate the needed memory or something along those lines (I can't check right now).

Ciencia Al Poder (talkcontribs)

You can create a cron job that executes the script every 15 minutes or so. However, be careful not to start the script while a previous instance is still running; otherwise you may get more memory errors and possibly execute the same job multiple times. So I'd run it with a --maxtime limit to ensure each instance finishes before the next one starts.
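Something along these lines, as a sketch: the 15-minute interval and the --maxtime value are assumptions chosen so each run ends before the next one starts, and the log path is only an example:

    # crontab entry: every 15 minutes, run jobs for at most 14 minutes
    */15 * * * * cd /srv/dml-wiki/maintenance && php runJobs.php --maxtime 840 --memory-limit 512M >> /var/tmp/runJobs.log 2>&1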

Star Warden (talkcontribs)

I am not sure it worked. When I was presented with the file to edit, I added this line:

    */10 * * * * /srv/dml-wiki/maintenance/php runJobs.php --maxtime 900

then I saved with Ctrl+X. But I don't see the number of jobs decreasing. I opened the crontab with my user account instead of root, using sudo crontab -e, and chose /bin/nano as the editor. Does it take some time for it to start, or did I miss a step?

Star Warden (talkcontribs)

Now that I am looking at the API, it seems that more jobs are being added. Was this happening before the cron, or only once I "set up" the cron?

Ciencia Al Poder (talkcontribs)

Some jobs just add more jobs to the table, because they are "containers". Also, new edits to the wiki may add new jobs.

You should look at the cron logs to see if they're executed.
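On a Debian/Ubuntu-style server (an assumption about your setup), cron activity usually ends up in syslog, so something like this shows the recent runs:

    grep CRON /var/log/syslog | tail -n 20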

Star Warden (talkcontribs)

So, I looked at the log and from this (http://prntscr.com/dm47dw) it seems it's being executed. Yet the API shows more jobs than before: it was under 1.63 million earlier, and now it's over 1.63 million. Have I added the correct command for the cron job to run?

Star Warden (talkcontribs)

I still haven't figured out where I was going wrong, so I just disabled the cron job and started running the script manually.

Star Warden (talkcontribs)

Never mind, I found the issue: I forgot to add cd before the command. Everything seems to be working fine for now. By my calculations, it should take about 12 full days for the jobs to hit 0. Thanks a lot for your help!

Star Warden (talkcontribs)

I have to reopen the topic because the redirects still haven't been fixed. The thing is, the number of jobs won't go below 6, no matter what I do. Is there any workaround for this job issue, or preferably for the redirects?
