API:FAQ/es

NOTE: this page is a work in progress.

Also read the quick start guide. It answers some questions not answered here and points to other useful pages.

How do I get help?

 * 1) Read this FAQ
 * 2) Try to find the answer to your question in the API documentation here or on the self-documenting api.php start page
 * 3) If you can't find the answer on the web, you can ask your question on the mediawiki-api mailing list
 * 4) Ask on IRC in the #mediawiki-api channel on the Freenode network.

How do I report a bug or request a feature?
If you have found a bug in the API or have a feature request, you can report it on Phabricator. Be sure to search for existing bugs first (please don't file duplicate bugs) and choose "MediaWiki" for product and "API" for component when reporting a new bug against the API. If the functionality you're requesting or reporting a bug against is offered by an extension (e.g. AbuseFilter, FlaggedRevs), file it in that extension's component in the "MediaWiki extensions" product.

How do I call the API?
Send HTTP requests to api.php. For example, on the English Wikipedia, the URL is http://en.wikipedia.org/w/api.php. Most wikis expose api.php at a similar URL: just use api.php in place of index.php in page actions. From 1.17 onwards, MediaWiki supports Really Simple Discovery; the HTML source of every page has an RSD link pointing to an RSD descriptor which indicates where to find the API. If you can't figure out the URL of api.php on a third-party (non-Wikimedia-operated) wiki, contact its owner. Note that the wiki may have the MediaWiki API disabled.
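As a concrete sketch, a request URL like the one above can be assembled with Python's standard library; the endpoint and parameters here are only examples:

```python
from urllib.parse import urlencode

def build_api_url(base, **params):
    """Assemble an api.php request URL from keyword parameters."""
    return base + "?" + urlencode(params)

# Example: query the Main Page on the English Wikipedia.
url = build_api_url(
    "https://en.wikipedia.org/w/api.php",
    action="query",
    titles="Main Page",
    format="json",
)
print(url)  # → https://en.wikipedia.org/w/api.php?action=query&titles=Main+Page&format=json
```

urlencode takes care of percent-encoding, so titles with spaces or special characters can be passed as-is.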

To play with the API:

 * use Special:ApiSandbox
 * enable your browser's developer console and watch net requests to api.php as you interact with the wiki

How do I control the output format?
Pass the format parameter (for example, format=json) in the query string. See the list of output formats for more information. Note that effort is underway to remove all output formats except JSON, so use JSON whenever possible.

How do I detect errors?
An error response from the API will set the MediaWiki-API-Error HTTP header and return an error structure in the response body. For an example error response, see http://en.wikipedia.org/w/api.php?action=blah. See also the documentation about errors and warnings.
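In JSON output, the error structure is a top-level "error" object with "code" and "info" fields. A minimal sketch of checking for it (the sample payload below is illustrative, shaped like the response for an unrecognized action):

```python
import json

# Illustrative error payload, shaped like api.php's format=json output
# for an unrecognized action parameter.
sample = json.loads("""
{
  "error": {
    "code": "unknown_action",
    "info": "Unrecognized value for parameter 'action': blah"
  }
}
""")

def api_error(response):
    """Return (code, info) if the decoded response carries an API error, else None."""
    err = response.get("error")
    if err is None:
        return None
    return err["code"], err["info"]

print(api_error(sample))
```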

How do I get the content of a page (wikitext)?
If you just want the raw wikitext without any other information whatsoever, it's best to use index.php's action=raw mode instead of the API: http://en.wikipedia.org/w/index.php?action=raw&title=Main_Page. Note that this will output plain wikitext without any formatting. See also the action=raw documentation.

To get more information about the page and its latest version, use the API: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Main_Page. See also the documentation for the prop=revisions module.

You can retrieve up to 50 pages per API request: http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&titles=Main_Page|Articles. This also works with generators.
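The multi-page form above simply joins page titles with "|". A small sketch that builds such a query string and enforces the 50-title limit (the limit check is client-side convenience, not part of the API itself):

```python
from urllib.parse import urlencode

def revisions_query(titles, limit=50):
    """Build the query string for a prop=revisions request over several
    titles, joined with '|'.  The API caps most clients at 50 titles."""
    if len(titles) > limit:
        raise ValueError("too many titles for one request")
    return urlencode({
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "titles": "|".join(titles),
        "format": "json",
    })

qs = revisions_query(["Main Page", "Articles"])
# The '|' separator is percent-encoded as %7C in the query string.
```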

How do I get the content of a page (HTML)?
If you just want the HTML, it's best to use index.php's action=render mode instead of the API: http://en.wikipedia.org/w/index.php?action=render&title=Main_Page. See also the action=render documentation.

To get more information distilled from the wikitext at parse time (links, categories, sections, etc.), use the API parse module: http://en.wikipedia.org/w/api.php?action=parse&page=Main_Page. See also the documentation for the action=parse module.

Why do I get HTTP 403 errors?
This could mean you're not passing a User-Agent HTTP header, or that your User-Agent is empty or blacklisted under the User-Agent policy. See the quick start guide for more information. It could also mean that you're passing lgpassword in the query string of a GET request: Wikimedia blocks all such requests; use POST for them instead.

Why do I get readapidenied errors?
The wiki you're querying contains private content and requires users to log in in order to be able to read all pages. This means that a client needs to be logged in to query any information at all through the API. See the quick start guide for more information. It's not currently possible to query the contents of whitelisted pages without logging in, even though they're available in the regular user interface.

Why do I get badtoken errors?
This is usually because you're either not passing a token at all (read about tokens in the documentation of the module you're using) or because you're having trouble staying logged in. It's also possible you're reusing a type of token that can't be reused (see module documentation for details) or that you're using a token that's associated with an expired session. In general, when using cached tokens, refetch the token (generally using action=tokens) and try again before giving up.
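One common pattern is to cache the token and refetch it exactly once when the API reports badtoken. A sketch, assuming a hypothetical call_api(**params) helper that sends the request and returns the decoded JSON:

```python
class TokenCache:
    """Caches an edit token and refetches it once when the API reports
    'badtoken'.  `call_api` is a hypothetical helper that performs the
    HTTP request and returns the decoded JSON response."""

    def __init__(self, call_api):
        self.call_api = call_api
        self.token = None

    def get(self):
        if self.token is None:
            # action=tokens is the token-fetching module mentioned above.
            data = self.call_api(action="tokens", type="edit")
            self.token = data["tokens"]["edittoken"]
        return self.token

    def edit(self, **params):
        data = self.call_api(token=self.get(), **params)
        if data.get("error", {}).get("code") == "badtoken":
            self.token = None                                 # drop the stale token...
            data = self.call_api(token=self.get(), **params)  # ...and retry once
        return data
```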

Why do I get warnings instead of tokens (Action 'edit' is not allowed for the current user)?
You either don't have the right to execute the action you requested, or you're having trouble staying logged in.

Why isn't X available through the API?
Not all features available in the user interface are available through the API. Such features weren't implemented either because no one has gotten around to it yet or because no one has requested them. For information about filing feature requests, see above.

Why does my API call on Wikimedia wikis just return an HTML error?
If you make API calls with POST requests, make sure those requests do not use Content-Type: multipart/form-data. This happens, for example, if you use cURL to access the API and pass your POST parameters as an array. The Squid proxy servers used as frontends in the Wikimedia wiki farm don't handle that correctly, so an error is returned.

Instead, use the "key1=value1&key2=value2..." notation to pass the parameters as a single string, just as with GET requests.
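With Python's standard library, for example, encoding the parameters yourself guarantees an application/x-www-form-urlencoded body rather than multipart/form-data (endpoint and parameters here are illustrative):

```python
from urllib.parse import urlencode
from urllib.request import Request

params = {"action": "query", "meta": "siteinfo", "format": "json"}

# Encode the body as key1=value1&key2=value2, not multipart/form-data.
body = urlencode(params).encode("ascii")
req = Request(
    "https://en.wikipedia.org/w/api.php",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# Supplying `data` makes urllib send this as a POST request.
```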

On other wikis that are accessed directly, this makes no difference.

Additionally, some programs (such as cURL) send an Expect HTTP header for larger POST requests (>1024 bytes). The Squid servers in front of the Wikimedia wikis cannot cope with this. If you are still getting HTML errors with POST requests and the above does not apply, try setting an empty Expect header (for example, with cURL on the command line, use the -H "Expect:" option).

Why don't very long API URLs work?
There is a maximum URL length for GET requests to the API. The limit varies from site to site; Wikimedia's is roughly 8100 characters. To get around it, use POST requests instead (you may also need to set an empty Expect header, as described above).
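A defensive client can check the assembled URL length and fall back to POST automatically. The 8000-character threshold below is a conservative assumption based on the roughly 8100-character limit mentioned above:

```python
from urllib.parse import urlencode

URL_LIMIT = 8000  # conservative assumption; Wikimedia rejects URLs around ~8100 chars

def choose_method(base, params):
    """Use GET while the full URL fits under the limit, POST otherwise."""
    url = base + "?" + urlencode(params)
    if len(url) <= URL_LIMIT:
        return "GET", url
    return "POST", base  # parameters would go in the request body instead

# A query over thousands of titles easily exceeds the limit.
method, _ = choose_method(
    "https://en.wikipedia.org/w/api.php",
    {"action": "query", "titles": "|".join("Page%d" % i for i in range(2000))},
)
```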