
Creating Zotero translators
The citoid service relies on the Zotero community for much of its "magic". Citoid uses Zotero translators to convert a page link into detailed citation information, and these translators need to be written for each site. Support is currently best for English-language sources, and we need your help to improve coverage of other sites. All translators share a similar structure and are short pieces of code, so they are easy to create. Translators often work both in the browser and in translation-server, and they can be written to support various browsers, namely Firefox, Chrome, Safari, and Internet Explorer. For citoid's use, any new translator is required to work in translation-server.

Zotero translators
Zotero translators are scripts written in JavaScript that parse web pages into citations. They can be written for journal articles, manuscripts, blog posts, newspaper articles, etc. As a feature of Zotero, an open-source reference management application, a translator can be created for any site and then contributed to the Zotero repository of translators, where you can also see a list of all existing translators.

Setting up the development environment
Translator development can be done on the translation-server side or through Scaffold. Development through Scaffold is easier since it is interactive, but if you prefer to work in the console and keep things simpler, you can work on the server. We will set up an environment and then proceed to writing translators.

For translation-server side development

Install Sublime Text

Go to the download page of Sublime Text.

Choose the link according to your operating system.

Download the binary files or follow the steps as provided.

Install translation-server

Follow the steps to download translation-server as provided here.

Install Zotero 4.0
With Zotero 5.0, there is a single standalone application for Zotero that works with all supported browsers, but it does not support Scaffold (see "Run on Scaffold" below), so we install Zotero 4.0. Follow the steps given below for the installation:
 * 1) Go to the download page to get Zotero.
 * 2) Click on the "Download" button for the suitable platform, that is, Windows, Linux or Mac.
 * 3) Extract the compressed folder and double-click "Zotero" to launch the application.
 * 4) (ADD steps on installation of extension for word processor)

Install Scaffold
Scaffold is an integrated development environment for creating Zotero translators. It makes it easy to write and debug a translator, and you can also add test cases for a translator very conveniently. Scaffold is a Firefox add-on; if you don't have Firefox on your system, get it from the Mozilla site. (Explain the types of translators above and add which of them are supported by Scaffold here.) Follow the steps given below to install Scaffold:
 * 1) Open this link in your Firefox browser to get Scaffold.
 * 2) Double-click the XPI file (extension archive file) to download the add-on.
 * 3) If Firefox prevents the installation, choose the "Allow" option.
 * 4) After the add-on is downloaded and verified, click "Install".
 * 5) Restart Firefox to apply the changes. You can now access Scaffold from the Tools menu.

Required Concepts
There are a few concepts you should know that will help you in creating translators; they are discussed briefly below.

HyperText Markup Language
Knowing the basics of HTML is crucial, as it makes it easy to understand the source code of the web page you want to write a translator for. The good news is that HTML is very easy to read and understand. HTML is a language for creating web pages and applications, and together with CSS and JavaScript it forms the foundation of web pages all over the internet. An HTML document consists of tags that group content. Tags form elements, which have an opening tag (such as <p>) and a matching closing tag (</p>), or empty elements, which have only an opening tag (such as <br>). Tags can also have attributes, which help in identifying elements, styling them, etc. Web browsers process HTML documents and present them to the user.



Document Object Model
DOM is a language-independent interface that structures a web page into a tree-like pattern. It recognizes parts of a document as nodes and organizes them into a hierarchical structure. For example, an HTML document with a body containing a heading and a paragraph becomes a tree with the html element at the root, the body as its child, and the heading and paragraph as children of the body.
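The tree structure described above can be sketched with a toy model (this is a simplified illustration, not the real DOM API; the tags and text are made-up examples):

```javascript
// A minimal sketch (assumption: simplified node objects, not the real DOM API)
// modelling <html><body><h1>Citoid</h1><p>Some text</p></body></html>
// as a tree, with a depth-first walk that lists each node indented by depth.
var tree = {
  tag: 'html',
  children: [
    {
      tag: 'body',
      children: [
        { tag: 'h1', text: 'Citoid' },
        { tag: 'p', text: 'Some text' }
      ]
    }
  ]
};

// Depth-first traversal: visit a node, then recurse into its children.
function walk(node, depth, out) {
  depth = depth || 0;
  out = out || [];
  out.push('  '.repeat(depth) + node.tag + (node.text ? ': ' + node.text : ''));
  (node.children || []).forEach(function (child) {
    walk(child, depth + 1, out);
  });
  return out;
}

console.log(walk(tree).join('\n'));
```

The indentation in the printed output mirrors the hierarchy the DOM imposes on the document.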

CSS Selectors
CSS selectors are used to target specific elements in an HTML document for styling. They can identify HTML nodes through their class, id, attributes, DOM position and relationships, etc. Once a node is identified, we can scrape information from it through DOM methods like querySelector and querySelectorAll. These methods can be invoked on a document or an element, with CSS selectors passed as the parameter. On a document, querySelector returns the first element in the document that matches the selectors; on an element, it returns the first matching descendant of that element. The selectors that will be used frequently are mentioned below:
 * 1) .classname - selects all HTML elements that have class="classname".
 * 2) #idname - selects all HTML elements that have id="idname".
 * 3) elementType - selects all HTML elements of type "elementType".
 * 4) elementType1 elementType2 - selects all elements of type "elementType2" present inside any element of type "elementType1".
 * 5) elementType1, elementType2 - selects all elements that are either of type "elementType1" or of type "elementType2".
 * 6) [attributename="value"] - selects all HTML elements that have an attribute named "attributename" with the specified value.
To understand how to get the CSS selectors of a node in an HTML document, consider the following example:
 * 1) Open the Citoid documentation in a new window of Firefox.
 * 2) Open the Toolbox by pressing Ctrl+Shift+I.
 * 3) Inspect the title of the document with the node picker. On the selected element, right-click -> Copy -> CSS path.
 * 4) The CSS path returned will be quite long, but we can shorten it by ignoring most of the selectors. The title is in a tag that has class="firstHeading" and id="firstHeading", so we can use either the selector .firstHeading or the selector #firstHeading to uniquely identify the title node. In this way, we will shorten CSS paths and use them in translators.
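The shortened selector can then be used from JavaScript. A small sketch (in a translator this function would receive the live page document; here it just wraps the call so the selector is written down once):

```javascript
// Extract the page title using the shortened selector #firstHeading.
// Returns null when no such node exists, so callers can handle both cases.
function getTitle(doc) {
  var node = doc.querySelector('#firstHeading');
  return node ? node.textContent.trim() : null;
}
```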

JavaScript
JavaScript is a programming language used in web browsers, servers, game development, databases, etc. Zotero translators are primarily JS files that put the above-mentioned concepts into action. You need a clear idea of the following JS concepts before starting to write a translator.
 * 1) Variables
 * 2) Statements
 * 3) Loops
 * 4) Methods
 * 5) Functions

Common code blocks in translators
Before we jump into writing a translator, below are the functions that come in handy when preparing one. If you wish to quickly write your web translator, you can open Scaffold, copy and paste the following blocks into the Code tab, and make changes as required. For filling in metadata and testing, refer to the working example.

attr
This function returns the value of the given attribute for the node (or set of nodes) matching the CSS selectors we pass to it. Since this function is not available in Zotero 4.0, we need to pass the document explicitly as one of the parameters. Next, we pass the selector(s) identifying the node(s) we want information from. The attribute name for the node, such as class, id or name, is passed as the attr parameter. For index zero, querySelector runs and returns the first element; otherwise querySelectorAll runs.

text
This function returns the text content of the specified node and its descendants (or of a set of nodes) matching the CSS selectors. We pass the document, the selector(s) and the index, just as for the attr function above. It will also be used as a polyfill until it gets included in the Zotero code. Note: you can use the minimized code for attr and text.
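A sketch of both helpers, consistent with the behaviour described above (the upstream polyfill is functionally equivalent; check the current Zotero translator sources for the exact code):

```javascript
// attr: return the named attribute of the node matching the selector(s).
// With a falsy index, querySelector returns the first match; otherwise
// querySelectorAll runs and the element at that index is used.
function attr(docOrElem, selector, attr, index) {
  var elem = index
    ? docOrElem.querySelectorAll(selector).item(index)
    : docOrElem.querySelector(selector);
  return elem ? elem.getAttribute(attr) : null;
}

// text: return the text content of the matching node and its descendants,
// selected the same way as in attr above.
function text(docOrElem, selector, index) {
  var elem = index
    ? docOrElem.querySelectorAll(selector).item(index)
    : docOrElem.querySelector(selector);
  return elem ? elem.textContent : null;
}
```

Both return null when nothing matches, so callers can guard against missing nodes.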

detectWeb
The detectWeb function is used to classify the type of data on a web page. It should return one of the item types defined by Zotero. Once a web page falls into a category, its retrieval can be carried out. There is a wide list of available item types, and each item type has relevant fields that can hold data; a book item type, for example, has fields like title, publisher, ISBN, author, edition, number of pages, etc.

For example, for an article on Wikipedia, we can use "encyclopediaArticle". A complete list of the types is available here.
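For the MediaWiki example developed later in this guide, a detectWeb sketch might look like this (the URL patterns are the ones discussed in the "Write the code" section; getSearchResults is defined in its own section below):

```javascript
// A sketch of detectWeb for mediawiki.org.
function detectWeb(doc, url) {
  // Search pages carry "?search=" in the URL; getSearchResults(doc, true)
  // confirms there are actual results, so look-alike URLs are not misclassified.
  if (url.includes('?search=') && getSearchResults(doc, true)) {
    return 'multiple';
  }
  // Ordinary wiki pages are treated as encyclopedia articles.
  if (url.includes('mediawiki.org/wiki')) {
    return 'encyclopediaArticle';
  }
  return false;
}
```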

doWeb
doWeb is the function that initiates the retrieval of data. It is generally written such that if a page has multiple items, it calls getSearchResults (explained below) and shows the user a pop-up window to select which items to save; if the page has a single entry, it calls scrape (explained below) to save the item's information.
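A sketch of that pattern (it assumes detectWeb, getSearchResults and scrape from the neighbouring sections, and uses the Zotero utilities selectItems and processDocuments):

```javascript
// doWeb: for "multiple", let the user pick items, then fetch and scrape each;
// for a single entry, scrape the current page directly.
function doWeb(doc, url) {
  if (detectWeb(doc, url) === 'multiple') {
    Zotero.selectItems(getSearchResults(doc, false), function (items) {
      if (!items) return; // user cancelled the selection window
      var articles = Object.keys(items); // the hrefs are the keys
      // processDocuments fetches each URL and passes its DOM to scrape.
      ZU.processDocuments(articles, scrape);
    });
  } else {
    scrape(doc, url);
  }
}
```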

getSearchResults
This function contains the logic to collect multiple items. Each item is stored as a key-value pair: generally the href (hypertext reference) of the item is chosen as the key and the title of the item as the value. Once the set of all items is ready, it is returned to the doWeb function.
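A sketch, using the shortened selector derived in the "Write the code" section ('.mw-search-result-heading a'); the checkOnly flag lets detectWeb use the same function just to ask whether anything matched:

```javascript
// Collect search results as { href: title } pairs.
function getSearchResults(doc, checkOnly) {
  var items = {};
  var found = false;
  var rows = doc.querySelectorAll('.mw-search-result-heading a');
  for (var i = 0; i < rows.length; i++) {
    var href = rows[i].href;
    var title = rows[i].textContent.trim();
    if (!href || !title) continue;
    if (checkOnly) return true; // detectWeb only needs a yes/no answer
    found = true;
    items[href] = title; // href is the key, title the value
  }
  return found ? items : false;
}
```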

scrape
The scrape function is called to save a single item. It is the most interesting function to code in a translator. We first create a new item of the type returned by detectWeb and then store the metadata in the relevant fields of that item. Along with the metadata, attachments can be saved for an item; these attachments remain available even when one is offline. In this function, we make use of another translator called Embedded Metadata: we load that translator and it scrapes information from the meta tags of the web page, filling fields and reducing our work. We can always insert and update field information on top of what Embedded Metadata provides.
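A sketch of scrape that loads Embedded Metadata first and then overrides the fields it filled. The UUID used here is Embedded Metadata's translator ID as published in the Zotero repository; verify it against the current upstream file before relying on it:

```javascript
// Load the Embedded Metadata translator, then adjust the resulting item.
function scrape(doc, url) {
  var translator = Zotero.loadTranslator('web');
  // Embedded Metadata translator ID (assumption: unchanged upstream).
  translator.setTranslator('951c027d-74ac-47d4-a107-9c3069ab7b48');
  translator.setDocument(doc);
  translator.setHandler('itemDone', function (obj, item) {
    item.itemType = 'encyclopediaArticle';
    // Override the title with the heading node identified earlier.
    var heading = doc.querySelector('#firstHeading');
    if (heading) item.title = heading.textContent.trim();
    item.complete();
  });
  translator.translate();
}
```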

License block
This block should be added at the beginning of a translator if you wish to submit it to the Zotero upstream.

Write the code
We will prepare a translator for mediawiki.org that scrapes information using the above-mentioned functions. Open an editor, create a new JavaScript file, and name it Mediawiki.js. You can use Scaffold to develop translators or test them on translation-server; below you will find explanations of both ways. Refer to the code snippets provided in the previous section, as they belong to the same translator that we are now preparing.
 * 1)  Include the attr and text functions at the top of the file.
 * 2) We will first write the detectWeb function. For a page with multiple entries (example search page) you can notice that the URL has the substring "?search=". So we'll write an "if" clause that checks whether the URL of the target page contains the keyword "search". To exclude pages that are not search pages and still have a similar substring in the URL (example), we also check whether getSearchResults returns true. When both conditions are satisfied, the function should return "multiple". For other pages that are wiki pages, we check whether their URL contains the substring "mediawiki.org/wiki"; if so, the function returns "encyclopediaArticle". To run this function we will need to write getSearchResults first. After that, we can run and test it.
 * 3) For getSearchResults, we will generate a CSS path that matches all the items on the page, and then for each item we will take its href as the key and the title of that particular search result as its value. Open this search page in a new tab within the same window and inspect the first search result with the node picker (Ctrl+Shift+I). Copy its CSS path. The path generated by the inspector is quite long, but it can be shortened to .mw-search-result-heading a, which uniquely identifies the nodes in the entire document: we are looking for an <a> tag nested in an element that has the class name "mw-search-result-heading". We will pass these selectors to querySelectorAll, which returns a list of all nodes matching them, hence scraping all results of the search.
 * 4) Let's move to the doWeb function. This function has the same template for almost all translators. We check for multiple entries and provide the user with a selection window containing all the items returned by getSearchResults. The URLs of the items that the user selects are stored in an array (here the variable named articles), and the Zotero utility processDocuments sends a GET request to each of these URLs and then passes the DOM of each page to the scrape function, which is the callback for processDocuments. In case the page contains a single item, doWeb directly calls the scrape function on it; this is done through the else clause.
 * 5) The scrape function gets all the information from the DOM and saves it. Embedded Metadata.js is a translator that you can include in any web translator; it collects information from the well-defined meta tags of a page. Refer to the code snippet of scrape in the section above to see how it is loaded. Before storing any information, we need to create an object of the correct item type. For this, we take the result of detectWeb and create an object based on it. In this example we categorize pages as encyclopediaArticle, so we simply create an object of that type. If there are several possibilities, for instance if your resource holds articles from books, newspapers, journals, etc., you can use a conditional to check the item type and then create the appropriate object. Next, we need to know which information to scrape for the object we have created; a list of all valid item types and their fields is available here. For the title of the article, inspect it with the node inspector. Since this node has the id firstHeading, we can extract the title using that selector. trimInternal is a Zotero utility that removes superfluous white space, if any. Articles on MediaWiki do not name an author or contributor, so these fields are skipped here, but for almost all items this is important information; a Zotero utility that often comes in handy is cleanAuthor, which splits its input into first and last name, making it easy to store names in the creators field. Next, we can store the rights under which each article is available. This is mentioned in the footer of each page; examining it returns a long CSS path, which we can again shorten to a unique selector. We can also hard-code this information if it is not subject to change. Likewise, we can hard-code the archive as Mediawiki and read the language of the article from the page.
We can store the article tags that appear at the bottom of the page under Categories. Inspect the element using the inspector and generate the CSS path. You'll notice that the division holding the list of tags has a class name, which we can use to get all elements that match the specified group of selectors; we can then use the text function to get the content of each element. We can also save links/files/PDFs along with the metadata as attachments. For this translator, we save the active web page through its URL. The mime type can be set to "text/html" for links and "application/pdf" for PDFs. Finally, the item is saved by calling item.complete().
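The tag and attachment steps above can be sketched as follows. The selector '#mw-normal-catlinks ul li a' is an assumption based on MediaWiki's default category-links markup; verify it with the inspector on the actual page:

```javascript
// Collect Categories links as tags, attach a snapshot of the page,
// and save the item.
function addTagsAndAttachment(doc, url, item) {
  // Assumed selector for the Categories list at the bottom of the page.
  var links = doc.querySelectorAll('#mw-normal-catlinks ul li a');
  for (var i = 0; i < links.length; i++) {
    item.tags.push(links[i].textContent.trim());
  }
  // Save the active web page as a snapshot attachment.
  item.attachments.push({ title: 'Snapshot', url: url, mimeType: 'text/html' });
  item.complete();
}
```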

Run on translation-server
Build the Docker image and run it. From another terminal, open a shell inside the container with /bin/bash. Then update the package list and install curl. Use curl to send a test request to the server. (It is not happening with the first way; ask?)
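The notes above correspond roughly to the following commands. The image name, container name, port and request body here are assumptions based on the translation-server of this era; check the project's README for the exact invocation:

```shell
# Build and run the translation-server image (names and port are assumptions).
docker build -t translation-server .
docker run -d -p 1969:1969 --name ts translation-server

# From another terminal, open a shell inside the container,
# then update the package list and install curl:
docker exec -it ts /bin/bash
apt-get update && apt-get install -y curl

# Send a test request to the server's /web endpoint:
curl -d '{"url": "https://www.mediawiki.org/wiki/Citoid", "sessionid": "abc123"}' \
     --header "Content-Type: application/json" \
     http://localhost:1969/web
```

If the translator works, the response is a JSON array of items scraped from the page.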

Run on Scaffold
Scaffold is an integrated development environment provided by Zotero for writing web and import translators. The latest release is Zotero 5.0, but Scaffold doesn't work with the Zotero standalone application, so we need to work with Zotero 4.0 for Firefox. It is available on the Zotero download page.

Fill in metadata

 * 1) Open mediawiki.org in a new window of the Firefox browser. You can open the web page you want to translate in a new tab, but then Scaffold can only detect the source if you keep switching between tabs, so it is more convenient to use a separate window and keep the web page as its active tab.
 * 2) From the menu bar of Firefox, through the Tools drop-down, open Scaffold.
 * 3) It has six buttons on top, to load an existing translator, to save the current translator, to run detectWeb, doWeb, detectImport and doImport, respectively.
 * 4) In the Metadata tab, you will see an automatically generated translator id, which is unique to each translator.
 * 5) In the label field, enter a name for the translator that makes it easy to recognize the source it works for. For example, for mediawiki.org, enter the label as Mediawiki.
 * 6) (Include Target Regex)
 * 7) Let the other fields keep their default values. At the bottom, for the translator type, check the "Web" option, since we are building a web translator.
 * 8) For the Browser support, it is convenient to check all the modes, but in case you want to choose a limited list, you can also do that. For citoid's use, it is compulsory for a translator to run in translation-server mode. So do check the last option that says "Server".
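When the translator is saved to disk, the metadata entered in this tab becomes a JSON block at the top of the .js file. A sketch of its shape (all values here are placeholders or assumptions; Scaffold fills translatorID and lastUpdated automatically, translatorType 4 means "web", and the browserSupport letters stand for Gecko, Chrome, Safari, IE, bookmarklet, and server):

```json
{
  "translatorID": "00000000-0000-0000-0000-000000000000",
  "label": "Mediawiki",
  "creator": "Your Name",
  "target": "^https?://www\\.mediawiki\\.org/",
  "minVersion": "3.0",
  "maxVersion": "",
  "priority": 100,
  "inRepository": true,
  "translatorType": 4,
  "browserSupport": "gcsibv",
  "lastUpdated": "2018-01-01 00:00:00"
}
```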

Fill in code and test

 * 1) In the Code tab, enter the JavaScript code we saved in Mediawiki.js. Alternatively, you can write the functions directly in the space provided, testing them as you go along. (Screenshot: DetectwebScaffold.png)
 * 2) Once you have entered the code for detectWeb and getSearchResults, you can test the output of detectWeb for an individual article and for a search page by clicking on the eye-like button, and make changes if necessary. (Screenshot: DoWebScaffold.png)
 * 3) doWeb for a single article will call the scrape function, fill the information into the fields of the item, and present it. While trying and testing, you can use the Zotero.debug() command to print output to the test frame. (Screenshot: DoWebSearchScaffold.png)
 * 4) For a page with multiple entries, Scaffold will show a selection window from which you can choose to save one or several items to the Zotero library.

Generate test cases
Once the code of a translator is prepared, it is recommended to create test cases. These test cases are run daily and help the community figure out whether a translator fails in the future and needs an update or a complete rewrite. We will generate test cases for the MediaWiki translator through Scaffold.
 * 1) Open mediawiki in a new tab. Launch Scaffold and open the translator we have created.
 * 2) Open the "Testing" tab of Scaffold. We need to give a web page as input. For example, open citoid's page. Keeping this web page as the active tab, simply click on the "New Web" button. It will load the web page into the Input pane as a new, unsaved test.
 * 3) Select the input entry and click the save button to save the output of the test as JSON data.
 * 4) Similarly, let's create a test case for a search page. Open this link in a new tab as the active one and then click on "New Web". Once it is loaded, save it. You can see the saved test cases in the "Testing" tab of Scaffold. For this search page, the saved test is a JSON object recording the input URL and the expected items.
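A sketch of the shape of such a test case for a "multiple" page (the URL is the search page used above; for single-item tests, the "items" field holds the full expected item data instead of the string "multiple"):

```json
{
  "type": "web",
  "url": "https://www.mediawiki.org/w/index.php?search=citoid",
  "items": "multiple"
}
```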

Locate the translator file
The translator file we create through Scaffold is saved locally.
 * 1) To access it, open Zotero application and choose "Edit" from the menu bar.
 * 2) In the dropdown list, you will find "Preferences". Click on it to open Zotero Preferences.
 * 3) Under the Advanced options, you will find a "Files and Folders" tab.
 * 4) From there click on the "Show Data Directory" button.
 * 5) It will open a Zotero directory, where the folder named translators will contain the file we created, named Mediawiki.js (in coherence with the label we gave in metadata).

Submit the translator
Once your translator is ready, submit it to Zotero's repository of translators on GitHub by creating a pull request from your fork.

Verify the deployment of translator in Citoid
Changes in the Zotero translator upstream are frequently pulled into Wikimedia's mirror of the translators and deployed on Citoid. To check whether a translator works in Citoid, have a look at the running tests: for translators under the "server" link, check whether the required translator has the entry "Yes" in the "Supported" column. Only translators with a "Yes" entry work with Citoid. If a translator is supported yet not passing the tests, it is probably outdated. Wikimedia's updated repository of translators is https://github.com/wikimedia/mediawiki-services-zotero-translators. Not all translators present in the server tests are deployed yet, so some may not be in Wikimedia's mirror. Though the mirror is updated regularly, you can still ping the community by creating a Phabricator ticket for the translator in question. An example of such a ticket is here.

Useful links

 * 1) Zotero's documentation for writing translator - https://www.zotero.org/support/dev/translators/coding
 * 2) W3S reference sheet on CSS selectors - https://www.w3schools.com/cssref/css_selectors.asp
 * 3) Commit message guidelines for Mediawiki - Gerrit/Commit message guidelines
 * 4) Firefox DevTools - https://developer.mozilla.org/en-US/docs/Tools

ToDo :

 *  For the working example, include images of output in Scaffold for each code block . For installation section, add screenshots.
 * CSS selector section( instead/along XPath)
 * Section on Zotero (Maybe?)
 * Link of Devtools.