Phlogiston/Running

In normal operation, Phlogiston should run automatically every day on both the production and development servers. A cron job (run as the  user) runs shortly after the time at which the Phabricator dump is usually made available. The normal sequence of operations is:
 * 1) Download the new dump.
 * 2) Load the dump into the database.
 * 3) For each specified scope:
 * 4) Reconstruct the data. Normally, this runs incrementally from the last date processed, so it processes only one day of data.
 * 5) Regenerate the report completely.
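The daily sequence can be sketched in shell as follows. This is a stubbed illustration, not Phlogiston's actual scripts; the function names and the scope list are assumptions for the example.

```shell
#!/usr/bin/env bash
# Stubbed sketch of the daily Phlogiston sequence (not the real scripts).
set -euo pipefail

download_dump()     { echo "download dump"; }                   # step 1
load_dump()         { echo "load dump into database"; }         # step 2
reconstruct()       { echo "reconstruct $1 (incremental)"; }    # step 4
regenerate_report() { echo "regenerate report for $1"; }        # step 5

scopes="and ve"   # example scope prefixes mentioned on this page

download_dump
load_dump
for scope in $scopes; do        # step 3: per-scope processing
  reconstruct "$scope"
  regenerate_report "$scope"
done
```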

Access
Configuration files for a given your_scope_prefix include  and , in the GitHub repository https://github.com/wikimedia/phlogiston.
 * You must already have a wikitech labs (not Tool Labs) shell account. See Getting Started.
 * Your account must be set up to access Phlogiston.
 * Admin note: the user must be in the group project-phlogiston; it is not clear whether this is set at the server level or the Labs level.
 * Your phlogiston shell account on phlogiston-2 must be in the  group.

Manual Control
The current practice is for only the Phlogiston developer to work on production, and for the Phlogiston developer to be the primary user of the development server. One use case for shared development is supported: users of reports may reconfigure their reports and then re-run the reports on the development server to see results immediately instead of the next day.

Phlogiston has no run control or locking; running multiple Phlogiston reports at the same time will produce bad results. We therefore follow a manual convention on the development server: all Phlogiston runs should happen in a shared tmux session, by convention called mission_control, to prevent two conflicting runs from happening at once. This convention is not followed on production, which should not have multiple users running Phlogiston.
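Joining (or first creating) the shared session typically looks like the following; the session name mission_control comes from the convention above, while the exact tmux invocation is an assumption:

```shell
# Join the shared console, or create it first if it does not exist yet:
tmux attach-session -t mission_control 2>/dev/null \
  || tmux new-session -s mission_control
```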

To re-run a report on the development server

 * 1) Change the configuration files to make the desired changes, and commit them to GitHub.
 * 2) Log in to phlogiston-2.
 * 3) Change to the phlogiston user:.
 * 4) Join the shared console:.
 * 5) If this fails with a message about "no sessions", then there is not already a mission_control session; create it with.
 * 6) Re-run Phlogiston, replacing your_scope_prefix with the code for your scope (for example, "and" for Android or "ve" for VisualEditor). This code is determined when the files for the scope report are originally created.
 * 7) This will automatically update files from git and then re-run the report. It will not reprocess any of the data.
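What a report-only re-run does can be sketched with stubs. The function name and messages below are illustrative assumptions; the real entry point is Phlogiston's own script:

```shell
# Stubbed sketch of a report-only re-run (not the real Phlogiston script).
rerun_report() {   # $1 = your_scope_prefix, e.g. "ve"
  echo "git pull: update configuration from the repository"
  echo "regenerate report for $1 (existing data, no reprocessing)"
}

rerun_report ve
```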

To create a new scope

 * 1) Create the new configuration files and add them to GitHub.
 * 2) Log in to phlogiston-2.
 * 3) Change to the phlogiston user:.
 * 4) Join the shared console:.
 * 5) If this fails with a message about "no sessions", then there is not already a mission_control session; create it with.
 * 6) In the   directory, get the new files from GitHub:
 * 7) Build the new scope data reconstruction and report. Rerecon will generate or regenerate the scope reconstruction completely and generate the report, but it will not download and load fresh dump data.
 * 8) After the report is complete, verify it in a browser. If it looks good, add the scope to the phlogiston crontab command on both development and production. It may also be helpful to add a link to the report in the file   and to deploy that file to development and production.
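The difference between a full run and a rerecon can be sketched with stub logic; the mode names here are assumptions based on the description above, not Phlogiston's actual flags:

```shell
# Stubbed sketch: "rerecon" rebuilds the reconstruction and report for one
# scope but skips downloading and loading a fresh dump (illustrative only).
run_phlogiston() {   # $1 = mode (full|rerecon), $2 = scope prefix
  if [ "$1" = "full" ]; then
    echo "download and load dump"
  fi
  echo "reconstruct $2 (complete)"
  echo "report $2"
}

run_phlogiston rerecon ve
```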

Automation
Phlogiston is run automatically every day on both servers by a crontab entry.
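The entry might look something like the following; the schedule, path, and script name here are hypothetical assumptions, not the actual crontab:

```
# phlogiston user's crontab (hypothetical example)
# m h dom mon dow  command
0 4 * * * /home/phlogiston/phlogiston/run_phlogiston.sh >> /var/log/phlogiston.log 2>&1
```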

Data integrity and idempotency
The data dump includes all historical data from Phabricator, so only the most current dump is required for operation. Each data dump load will provide Phlogiston with complete information, and the data dump does not need to be reloaded until a new dump is available. Loading the dump is independent of any specific scopes.

Reconstruction and reporting are partitioned by scope. Changes to one scope will not affect any other.

An incremental reconstruction will operate from the most recent date available in the already-processed data, so if it is run a second time on the same day, it will not corrupt data. A complete reconstruction will begin by wiping all data for that scope.
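The idempotency property can be illustrated with a stub that picks the reconstruction starting point (illustrative only; Phlogiston's real logic lives in its own scripts and SQL):

```shell
# Stub: choose where reconstruction starts (not Phlogiston's actual code).
recon_start() {   # $1 = mode (incremental|complete), $2 = last processed date
  if [ "$1" = "incremental" ] && [ -n "$2" ]; then
    echo "$2"           # resume from the last processed date; re-running on
                        # the same day just redoes that day, so no corruption
  else
    echo "WIPE_SCOPE"   # complete: wipe all data for the scope and start over
  fi
}

recon_start incremental 2016-05-01
```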

A report will wipe the existing report on the website prior to generating a new report, so it is possible to end up with a broken report if the new report fails.
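The failure mode described above can be seen in a stubbed sketch of wipe-then-generate publishing (illustrative; the real report scripts differ, and the demo path is hypothetical):

```shell
# Stub: the report directory is wiped before the new report is written, so a
# failure in generate_report leaves the site with no report for that scope.
generate_report() { echo "<html>report for $1</html>"; }   # stub

publish_report() {   # $1 = scope prefix
  dir="/tmp/phlog_demo/$1"
  rm -rf "$dir" && mkdir -p "$dir"          # wipe the existing report first
  generate_report "$1" > "$dir/index.html"  # if this fails, nothing is left
}

publish_report ve
```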