Project:Support desk

About this board

Welcome to MediaWiki.org's Support desk, where you can ask MediaWiki questions!

There are also other places to ask: IRC (Communication#Chat), mailing lists, Wikimedia Developer Support, Q&A, mwusers (an unofficial forum), etc.

Before you post

Post a new question

  1. To help us answer your questions, please always indicate which versions you are using (reported by your wiki's Special:Version page):
    • MediaWiki
    • PHP
    • Database
  2. Please include the URL of your wiki unless you absolutely can't. It's often a lot easier for us to identify the source of the problem if we can look for ourselves.
  3. To start a new thread, click "Start a new topic".

Where is the data from which the existence of "Sloane's gap" was inferred?

1
Michael Hardy (talkcontribs)

On the page below we find that someone has a list of the frequencies of appearance in the OEIS of the first ten thousand positive integers. Given that this has been a topic of several preprints on the arXiv, a section in Wikipedia's article about the OEIS, and the page whose URL appears below, it seems surprising that that list of frequencies cannot be instantly found on the web. But I've looked around and can't find it. Where is it?

(And is there no contact information for those responsible for maintaining this site?)

oeis.org/wiki/Frequency_of_appearance_in_the_OEIS_database

Reply to "Where is the data from which the existence of "Sloane's gap" was inferred?"

Issues connecting MySQL container and MediaWiki Container on Docker

4
Squeak24 (talkcontribs)

Hi All

I am trying to install MediaWiki using Docker. I am using Windows, but I want to use Docker so I have more scope for what I can do. Long term I want to create my own image so I can have Parsoid and GraphViz running on it.

The issue I am getting is that when I go onto the database page of the installation I get:

Cannot access the database: php_network_getaddresses: getaddrinfo failed: Name or service not known (mysqlhost).

Check the host, username and password and try again.


I have tried to create my own hostname for the MySQL container by running:

docker run --name=mysql1 --network=mysqlhost -d mysql/mysql-server

But it comes up with the same error.

To get the container for MediaWiki working I have used:

docker run --name mediawiki -p 80:80 mediawiki

It looks like they are both working, just not talking to each other.

I have tried to link the MySQL up with MediaWiki as detailed on the MediaWiki Docker page using:

docker run --name mediawiki --link wiki:mysql -d mediawiki


But with that I get the error:


C:\Users\User>docker run --name mediawiki --link wiki:mysql -d mediawiki
docker: Error response from daemon: Conflict. The container name "/mediawiki" is already in use by container "eb03498d223748379186507fe6d58e1cb7f59f4f2de3e6e1ba863d5b8210bf3c". You have to remove (or rename) that container to be able to reuse that name. See 'docker run --help'.

So I tried:

docker run --name wiki --link wiki:mysql -d mediawiki


But I get the error:

C:\Users\User>docker run --name wiki --link wiki:mysql -d mediawiki
docker: Error response from daemon: could not get container for wiki: No such container: wiki. See 'docker run --help'.

Not sure what I am doing wrong; any help is appreciated.


Ciencia Al Poder (talkcontribs)

You use "docker run" to create a new container. Once you do that, you can't use the same name, unless you destroy that container. Try "docker rm mediawiki" to remove it and recreate it again.

Once you have the container created, and stop it, you can start it again with "docker start mediawiki"

Squeak24 (talkcontribs)

I get that; I just can't seem to get the MediaWiki container to talk to the MySQL container. I found this, which indicates the MySQL host is mysql in my case.

199.58.99.202 (talkcontribs)

This is an issue for me as well. Is there any resolution that has been found?
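For anyone hitting the same wall: the getaddrinfo error usually means the MediaWiki container simply cannot resolve the hostname entered in the installer. A minimal sketch (the network name, container names, database name and passwords below are placeholders, and it assumes the official mysql/mysql-server and mediawiki images) is to put both containers on the same user-defined Docker network so each can reach the other by its container name:

docker network create wikinet

docker run --name mysql1 --network wikinet \
  -e MYSQL_ROOT_PASSWORD=change-me \
  -e MYSQL_DATABASE=wikidb \
  -e MYSQL_USER=wikiuser \
  -e MYSQL_PASSWORD=also-change-me \
  -d mysql/mysql-server

docker run --name mediawiki --network wikinet -p 80:80 -d mediawiki

In the MediaWiki installer the database host would then be mysql1 (the container name), not mysqlhost or localhost. If containers with those names already exist, remove them first with docker rm, as mentioned above.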

Reply to "Issues connecting MySQL container and MediaWiki Container on Docker"

Thousands of accounts created in last few days

3
170.199.249.136 (talkcontribs)

The MySQL server crashed on Monday, and while investigating a second crash today, I found that an open wiki that I host for an open group has tens of thousands of users, each with only one or two edits. The entire content of the project was edited away.

The last legitimate edit was about one month ago.

How can all changes within the past xxx hours be undone? It will take days to delete the users.

170.199.249.136 (talkcontribs)

If this is a massive task, it isn't unthinkable to revert the Main Page and cut and paste the content. There were only 10-15 pages, but if there is a scriptable solution that I can implement in the future, it would be nice to know about.

As the server is a solar/battery-powered Raspberry Pi, CPU time and RAM are pretty tight. I noticed an issue when spam mail stopped coming through the mail server. Their page loads caused Apache to consume all RAM + swap, and other processes started thrashing. A beefier system would have let this go unchecked for weeks.

170.199.249.136 (talkcontribs)

Does MediaWiki have any mechanism to remove such entries? Over 10,000 new users have been created. Is this a new type of attack that the developers aren't aware of yet?
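There is no single built-in "undo everything after date X" switch (short of restoring a backup), but a few core maintenance scripts can help with cleanup. A rough sketch, assuming shell access and that these scripts exist in your MediaWiki version (check the maintenance/ directory and back up the database before running anything):

# remove registered accounts that have never made an edit (--delete actually deletes them)
php maintenance/removeUnusedAccounts.php --delete

# roll back all edits made by a single user or IP
php maintenance/rollbackEdits.php --user "SpamAccountName"

# delete a list of spam pages, one title per line in spam-pages.txt
php maintenance/deleteBatch.php spam-pages.txt

# and in LocalSettings.php, stop anonymous visitors from creating accounts:
# $wgGroupPermissions['*']['createaccount'] = false;

Extension:Nuke also provides a special page for mass-deleting pages recently created by a given user.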


Reply to "Thousands of accounts created in last few days"

Persistent 504 gateway errors on editing pages - never on reading

10
Antek Baranski (talkcontribs)

Hi there,

I am very new to MediaWiki, never touched the thing until this week. :D


I have 8 wikis running on https://www.esportspedia.com and I am having timeout issues on 2 of them (https://www.esportspedia.com/lol/ and https://pt.esportspedia.com/lol/) when submitting a change to a page. The other 6 wikis save pages just fine, albeit a little slower than I'd like, but that's not the main problem right now.


The nginx/fpm-php frontend runs on its own 8CPU 16GB RAM system and the databases & `runJobs` scripts for the 8 wikis run on a 16CPU 32GB RAM system.


I've looked at increasing the php-fpm timeout to 300 and also tried using `set_time_limit(120);` in my LocalSettings.php, but neither option appears to have any impact whatsoever.

Examknow (talkcontribs)

I looked on your site and found nothing wrong with it. Maybe it is your computer.

Antek Baranski (talkcontribs)

Is that a serious answer or are you just a troll?

Examknow (talkcontribs)

@Antek Baranski No I am definitely not a troll. It is my mission to keep wikis free of trolls. If you check your site maybe on another computer, then it might work. However as far as I can see there is nothing wrong. Also I noticed that you are hosting on a hosting provider. If you have further issues, you should let them know.

Antek Baranski (talkcontribs)

The behaviour only occurs when you EDIT pages, as I wrote in the first post, and it's been confirmed by at least 12 people. I don't understand how you were able to verify this, because unless you are already registered as a contributor you won't be able to edit anything.


As for hosting, the wikis are running on an EC2 instance behind CloudFlare, so I am not entirely sure what you mean by a `hosting provider`.

Examknow (talkcontribs)

Okay never mind. I am going to remove myself before a dispute arises. Please do not reply to me any further.

Ciencia Al Poder (talkcontribs)

How long does it take from when you submit the edit until you get the 504 error? Does it match the configured timeout? The timeout could be on different layers: Cloudflare has one, the webserver (nginx, apache...) another, php-fpm another... You'll hit the lowest value of all of them.

About the slowness: I'm not sure whether setting up a debug log for a couple of requests would include timestamps in the log, but if it does, they should give some indication of where most of the time is being spent. Manual:Profiling would definitely help in diagnosing the problem.

Antek Baranski (talkcontribs)

The timeout occurs on 2 fronts, CloudFlare & php-fpm.


CloudFlare reports that the 'nginx' server replied with a 504 after 60 seconds, which is the default nginx proxy timeout and, if memory serves me right, is what is used when passing a request on to php-fpm. I've bumped up the nginx timeouts like this:

        client_header_timeout 300;
        client_body_timeout 300;
        fastcgi_read_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;

And I also modified the php & php-fpm timeouts to be 300 seconds.
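For reference, the PHP-side settings involved are the standard ones; this is a sketch rather than my exact config, and the file locations depend on the distribution:

        ; php.ini
        max_execution_time = 300

        ; php-fpm pool configuration (e.g. www.conf)
        request_terminate_timeout = 300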


In the php-fpm log I am seeing the following 5 recurring errors:

[19-Feb-2019 16:17:38 UTC] PHP Fatal error:  Maximum execution time of 300 seconds exceeded in /home/docs/master_wiki/includes/exception/MWExceptionHandler.php on line 521

[19-Feb-2019 17:58:58 UTC] PHP Fatal error:  Maximum execution time of 300 seconds exceeded in /home/docs/master_wiki/includes/exception/MWExceptionHandler.php on line 388

[19-Feb-2019 16:27:01 UTC] PHP Fatal error:  Maximum execution time of 300 seconds exceeded in /home/docs/master_wiki/includes/exception/MWExceptionHandler.php on line 154

[19-Feb-2019 18:03:35 UTC] PHP Fatal error:  Maximum execution time of 300 seconds exceeded in /home/docs/master_wiki/includes/json/FormatJson.php on line 144

[19-Feb-2019 18:53:01 UTC] PHP Fatal error:  Maximum execution time of 300 seconds exceeded in /home/docs/master_wiki/includes/exception/MWExceptionHandler.php on line 154


None of those changes seems to have had any meaningful impact.

Ciencia Al Poder (talkcontribs)

Try the debug log thing. Set up a debug log, perform an action that takes that long time, and disable it. Then inspect its contents and try to identify any possible problem from the log.
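A minimal sketch of such a temporary setup in LocalSettings.php (the path is a placeholder; remove the lines again afterwards, since the log grows quickly and can contain private data):

$wgDebugLogFile = '/tmp/mw-debug.log'; // write MediaWiki's debug log to this file
$wgDebugDumpSql = true;                // also record every SQL query in the debug log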

Antek Baranski (talkcontribs)

@Ciencia Al Poder thanks for that tip, the SQL debug log showed the reason for the timeouts.


Apparently every page edit on the wiki triggers a SELECT statement against the database for every single image on the wiki; this happens after the actual page edit is saved, as the changes are persisted.


On the https://www.esportspedia.com/lol wiki, that is a massive 30K SELECT statements being fired by MW against the DB, one after the other. Even with the DB server configured to have all data cached in memory, running 30K queries one by one is going to take a while, which in turn causes the timeouts.


Now the obvious question is, why would an individual page edit cause these SELECT statements to begin with?

Below is a sample of the SELECT statements being fired after an edit:

[DBQuery] lolpt_wiki SELECT /* LinkCache::fetchPageRow */ page_id,page_len,page_is_redirect,page_latest,page_content_model,page_touched FROM `page` WHERE page_namespace = '6' AND page_title = 'ASE_2014_logo_small.png' LIMIT 1

[objectcache] Rejected set() for lolpt_wiki:page:6:7bec0c289f58cb1f91c8b8cafba04553b9c713c3 due to pending writes.

[DBQuery] lolpt_wiki SELECT /* Wikimedia\Rdbms\Database::query */ MIN(rev_timestamp) AS creation_timestamp, COUNT(rev_timestamp) AS revision_count FROM `revision` WHERE rev_page = 2863

[DBQuery] lolpt_wiki SELECT /* LinkCache::fetchPageRow */ page_id,page_len,page_is_redirect,page_latest,page_content_model,page_touched FROM `page` WHERE page_namespace = '6' AND page_title = 'Veigar_Splash_2.jpg' LIMIT 1

[objectcache] Rejected set() for lolpt_wiki:page:6:ad0841858ea15065f25c9db347fefc2cfe0d8734 due to pending writes.

[DBQuery] lolpt_wiki SELECT /* Wikimedia\Rdbms\Database::query */ MIN(rev_timestamp) AS creation_timestamp, COUNT(rev_timestamp) AS revision_count FROM `revision` WHERE rev_page = 2864

[DBQuery] lolpt_wiki SELECT /* LinkCache::fetchPageRow */ page_id,page_len,page_is_redirect,page_latest,page_content_model,page_touched FROM `page` WHERE page_namespace = '6' AND page_title = 'Adaptive_Helm.png' LIMIT 1

[objectcache] Rejected set() for lolpt_wiki:page:6:4b8983d7295c057cf42451727898ce7f998300a1 due to pending writes.

[DBQuery] lolpt_wiki SELECT /* Wikimedia\Rdbms\Database::query */ MIN(rev_timestamp) AS creation_timestamp, COUNT(rev_timestamp) AS revision_count FROM `revision` WHERE rev_page = 2865

[DBQuery] lolpt_wiki SELECT /* LinkCache::fetchPageRow */ page_id,page_len,page_is_redirect,page_latest,page_content_model,page_touched FROM `page` WHERE page_namespace = '6' AND page_title = 'Pr0llyCOL2014.png' LIMIT 1

[objectcache] Rejected set() for lolpt_wiki:page:6:3badd71acb2eb57e4edda4eaf35240cf92ca8558 due to pending writes.

[DBQuery] lolpt_wiki SELECT /* Wikimedia\Rdbms\Database::query */ MIN(rev_timestamp) AS creation_timestamp, COUNT(rev_timestamp) AS revision_count FROM `revision` WHERE rev_page = 2866

Reply to "Persistent 504 gateway errors on editing pages - never on reading"

Error in setup

Zissouu (talkcontribs)

So I have been running into an issue with a MediaWiki installation. I tried rolling back to 1.31.1 from 1.32 since I was running into issues, but upon install I receive this error and I can't seem to resolve it. I appreciate any help and thank you in advance.

  • Setting up database... done
  • Creating tables... [b2e8f96f6b18d66046f58c42] /wiki/mw-config/index.php?page=Install Wikimedia\Rdbms\DBQueryError from line 1457 of C:\xampp\htdocs\wiki\includes\libs\rdbms\database\Database.php: A database query error has occurred. Did you forget to run your application's database schema updater after upgrading?
    Query: CREATE TABLE `user` ( user_id int unsigned NOT NULL PRIMARY KEY AUTO_INCREMENT, user_name varchar(255) binary NOT NULL default '', user_real_name varchar(255) binary NOT NULL default '', user_password tinyblob NOT NULL, user_newpassword tinyblob NOT NULL, user_newpass_time binary(14), user_email tinytext NOT NULL, user_touched binary(14) NOT NULL default '', user_token binary(32) NOT NULL default '', user_email_authenticated binary(14), user_email_token binary(32), user_email_token_expires binary(14), user_registration binary(14), user_editcount int, user_password_expires varbinary(14) DEFAULT NULL ) ENGINE=InnoDB, DEFAULT CHARSET=binary
    Function: Wikimedia\Rdbms\Database::sourceFile( C:\xampp\htdocs\wiki/maintenance/tables.sql )
    Error: 1050 Table 'user' already exists (localhost)
    Backtrace:
    #0 C:\xampp\htdocs\wiki\includes\libs\rdbms\database\Database.php(1427): Wikimedia\Rdbms\Database->makeQueryException(string, integer, string, string)
    #1 C:\xampp\htdocs\wiki\includes\libs\rdbms\database\Database.php(1200): Wikimedia\Rdbms\Database->reportQueryError(string, integer, string, string, boolean)
    #2 C:\xampp\htdocs\wiki\includes\libs\rdbms\database\Database.php(4194): Wikimedia\Rdbms\Database->query(string, string)
    #3 C:\xampp\htdocs\wiki\includes\libs\rdbms\database\Database.php(4129): Wikimedia\Rdbms\Database->sourceStream(resource (closed), NULL, NULL, string, NULL)
    #4 C:\xampp\htdocs\wiki\includes\installer\DatabaseInstaller.php(225): Wikimedia\Rdbms\Database->sourceFile(string)
    #5 C:\xampp\htdocs\wiki\includes\installer\DatabaseInstaller.php(248): DatabaseInstaller->stepApplySourceFile(string, string, boolean)
    #6 C:\xampp\htdocs\wiki\includes\installer\Installer.php(1575): DatabaseInstaller->createTables(MysqlInstaller)
    #7 C:\xampp\htdocs\wiki\includes\installer\WebInstallerInstall.php(44): Installer->performInstallation(array, array)
    #8 C:\xampp\htdocs\wiki\includes\installer\WebInstaller.php(281): WebInstallerInstall->execute()
    #9 C:\xampp\htdocs\wiki\mw-config\index.php(79): WebInstaller->execute(array)
    #10 C:\xampp\htdocs\wiki\mw-config\index.php(38): wfInstallerMain()
    #11 {main}
    Notice: Uncommitted DB writes (transaction from DatabaseInstaller::stepApplySourceFile). in C:\xampp\htdocs\wiki\includes\libs\rdbms\database\Database.php on line 4543
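The 1050 error says the installer cannot create the `user` table because one is already there, most likely left over from the earlier 1.32 attempt. One way out, assuming the database is called wikidb (a placeholder) and nothing in it needs to be kept, is to drop and recreate it before re-running the installer:

    DROP DATABASE wikidb;
    CREATE DATABASE wikidb;

Pointing the installer at a different, empty database name works just as well.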
Reply to "Error in setup"

search autocomplete case insensitive

4
72.228.158.3 (talkcontribs)

We have several topics that have upper case characters in the title. When searching, the autocomplete does not complete the search when using all lower case characters (searching itself, however, will). It looks as if the default configuration is only case-insensitive for the first character. I would like the autocomplete to be case-insensitive. I know this can be enabled, as Wikipedia does not have this problem.


My effort researching a solution has been unsuccessful.


MediaWiki 1.20.2

Brion VIBBER (talkcontribs)
Pedro.guima (talkcontribs)

Probably not the right way to fix this, but a possible workaround is:

vim includes/PrefixSearch.php +234

Replace

'page_title using utf8 ' . $dbr->buildLike( $prefix, $dbr->anyString() )

with

'CONVERT (page_title using utf8 ) ' . $dbr->buildLike( $prefix, $dbr->anyString() )

Reply to "search autocomplete case insensitive"

Error when creating wiki

1
Summary by Zissouu

was an error in the SQL server name

Zissouu (talkcontribs)

Hello, I am currently having an issue creating a MediaWiki page. Below is my error and I cannot figure out where it's coming from. I am using MySQL and it's working fine; XAMPP has no issues. Thanks for any help. I'm really confused as to why it won't launch. I do get the page telling me to please complete the setup, and when I click on that link I am met with this:

[43d01d6e2ed5233f29d9580a] /wiki/mw-config/index.php Error from line 244 of C:\xampp\htdocs\wiki\includes\installer\MysqlInstaller.php: Call to a member function query() on null

Backtrace:

#0 C:\xampp\htdocs\wiki\includes\installer\MysqlInstaller.php(365): MysqlInstaller->getEngines()

#1 C:\xampp\htdocs\wiki\includes\installer\WebInstallerDBSettings.php(42): MysqlInstaller->getSettingsForm()

#2 C:\xampp\htdocs\wiki\includes\installer\WebInstaller.php(272): WebInstallerDBSettings->execute()

#3 C:\xampp\htdocs\wiki\mw-config\index.php(79): WebInstaller->execute(array)

#4 C:\xampp\htdocs\wiki\mw-config\index.php(38): wfInstallerMain()

#5 {main}


Thanks for any help!

Moving MediaWiki from Linux to Windows and updating

2
80.147.223.128 (talkcontribs)

Hi! Currently I host my MediaWiki on an Ubuntu server. I would like to upgrade the wiki to version 1.32 and move it to my Windows server.

What I tried was creating a new wiki on the Windows server, creating a MySQL dump on the Linux server and importing it manually into the new DB (since the phpMyAdmin import did not want to work). I then changed the settings so that they point to the new database, but in the end it did not work.

Is there a proper way to migrate from Linux to Windows and upgrade? Did anyone already do this?

Osnard (talkcontribs)

Actually your approach seems to be correct. I have moved a lot of wikis between different servers (even Linux to Windows and vice versa) with that procedure:

  1. Dump database using mysqldump CLI tool on source server
  2. Copy over complete codebase + all configuration files (e.g. LocalSettings.php) from source to destination server
  3. Import database using mysql CLI tool on destination server
  4. Maybe adapt the configuration regarding the database ($wgDB*) and the server ($wgServer); a command-line sketch of these steps follows below
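A rough sketch of those steps on the command line (the database name, user and file names are placeholders, and the exact invocations depend on the environment):

  # 1. on the source (Linux) server: dump the database
  mysqldump -u wikiuser -p wikidb > wikidb.sql

  # 2./3. copy wikidb.sql, the MediaWiki directory and LocalSettings.php over, then import on the destination (Windows) server
  mysql -u wikiuser -p wikidb < wikidb.sql

  # since the move here is combined with an upgrade to 1.32, run the schema updater afterwards
  php maintenance/update.php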
Reply to "Moving MediaWiki from Linux to Windows and updating"

Using {{PAGENAME}} in a URL

Spiros71 (talkcontribs)

I use PAGENAME in a template as part of a URL. However, when the page name is more than one word (i.e. contains spaces), the URL breaks. Is there a way to use it in a URL-safe way?

Notmadewelcome (talkcontribs)

Use PAGENAMEE, a variant that encodes spaces to make a valid URL.

Spiros71 (talkcontribs)

Thank you, I found it here: Manual:PAGENAMEE encoding#PAGENAMEE, as it was parsed on this page. However, there is one issue: for a phrase (words separated by spaces), the wiki automatically adds underscores in the URL, so in the search it is encoded with an underscore rather than a space.
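If the receiving site expects %20 or + instead of underscores, a possible alternative (not mentioned above; worth checking against Help:Magic words for your MediaWiki version) is the urlencode parser function, which takes an encoding style:

  {{PAGENAMEE}}                     turns "Some page" into Some_page
  {{urlencode:{{PAGENAME}}|PATH}}   turns "Some page" into Some%20page
  {{urlencode:{{PAGENAME}}|QUERY}}  turns "Some page" into Some+page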

Ciencia Al Poder (talkcontribs)
Spiros71 (talkcontribs)
Notmadewelcome (talkcontribs)

When I wanted to couple a wiki with my own website, I used an .htaccess RewriteRule to allow incoming underscores to be treated as spaces, so I could offer both. This was easier than getting MediaWiki to change its approach.

Spiros71 (talkcontribs)

Well, in my case it is outgoing URLs to third-party sites.

Reply to "Using {{PAGENAME}} in a URL"

Insert local media - Visual-Editor

Jtuli (talkcontribs)

Hi,

I had a problem with VisualEditor: when I was trying to use Insert < Media (with InstantCommons set to false) I got "No result found".

After many searches with no solution, I found that my Parsoid version was 0.10.0 while VisualEditor was on REL1_30 and my MediaWiki version was 1.31.

I downgraded Parsoid to 0.8.0, but now my wiki URL sends me a blank page...

Now I've got 2 problems with no solutions; maybe my installation is broken...

I installed parsoid 0.8.0 package with dpkg.
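For reference, wiring VisualEditor to a separately installed Parsoid service is done in LocalSettings.php along these lines (a sketch; the URL, port and domain are placeholders, and Extension:VisualEditor documents the exact settings for each version):

$wgVirtualRestConfig['modules']['parsoid'] = [
    'url' => 'http://localhost:8000', // where the Parsoid service is listening (placeholder)
    'domain' => 'localhost',          // must match the domain in Parsoid's own config (placeholder)
];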

MarkAHershberger (talkcontribs)

Please follow the instructions here and report back any errors.

Jtuli (talkcontribs)

Thanks, I resolved the Parsoid and VE version problem.

I would like to insert my local media with VisualEditor via Insert < Media, but I get "no result found". $wgUseInstantCommons is set to false.
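One thing worth double-checking (a guess, not a confirmed diagnosis) is that local uploads are enabled at all, since the media dialog can only find files that actually exist on the wiki. In LocalSettings.php that is:

$wgEnableUploads = true;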

Reply to "Insert local media - Visual-Editor"