Analytics/Server Admin Log

2017-03-12

 * 22:24 elukey: restarted webrequest-load-text 12 Mar 2017 16:00:00 and 17:00:00
 * 22:24 elukey: stopped yarn nodemanager on an1028
 * 22:17 elukey: restarted webrequest-load-maps 12 Mar 2017 14:00:00
 * 07:11 elukey: re-applied SET GLOBAL max_connections=300 on bohrium's mysql (the setting got lost after the restart)
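For context on why the setting was lost: SET GLOBAL only changes the running server and is discarded on restart; surviving a restart requires the value in the server config too. A minimal sketch (the config file path is an assumption, not taken from this log):

```shell
# Apply at runtime (discarded on the next mysqld restart):
mysql -e "SET GLOBAL max_connections = 300;"

# Persist it across restarts (assumed config path):
cat <<'EOF' | sudo tee /etc/mysql/conf.d/max_connections.cnf
[mysqld]
max_connections = 300
EOF

# Verify the running value:
mysql -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"
```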

2017-03-10

 * 15:39 elukey: applied innodb_buffer_pool_size = 512M and restarted mysql on bohrium
 * 10:54 elukey: executed set global innodb_flush_log_at_trx_commit=2; on bohrium as a test
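For context on the two bohrium changes above: innodb_flush_log_at_trx_commit=2 trades strict per-commit durability for roughly one log flush per second, while the buffer pool resize is not a dynamic variable on the MySQL/MariaDB versions of that era and needs a config change plus restart. A sketch (config path assumed):

```shell
# Runtime-only experiment (reverts to the configured value on restart):
mysql -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"

# Buffer pool size goes in the config, followed by a restart:
cat <<'EOF' | sudo tee /etc/mysql/conf.d/innodb.cnf
[mysqld]
innodb_buffer_pool_size = 512M
EOF
sudo service mysql restart
```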

2017-03-09

 * 14:17 joal: restart failed webrequest load [upload maps misc] 2017-03-09T09:00Z
 * 11:04 elukey: an1041 yarn nodemanager back running
 * 10:31 elukey: analytics1041 yarn nodemanager stopped, chowning all the /var/lib/hadoop/data/X/yarn dirs to yarn:yarn
 * 10:09 elukey: restarted yarn-nodemanager on analytics1040
 * 09:52 elukey: restarted Mar 2017 02:00:00 webrequest-load-text (second time)
 * 08:57 elukey: re-running webrequest-load-text failed jobs too via Hue
 * 08:43 elukey: re-run via Hue the failed upload-load job
 * 08:39 elukey: re-run all the failed misc webrequest-load oozie jobs (total of four)
 * 08:28 elukey: re-run 186-09 Mar 2017 00:00:00 (webrequest-load-maps) on Hue
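The re-runs above went through Hue; the same operation is available from the oozie CLI. A sketch, reusing a workflow id that appears later in this log — the oozie server URL is an assumption:

```shell
# Re-run only the failed actions of a workflow, keeping the succeeded ones:
oozie job -oozie http://analytics1003.eqiad.wmnet:11000/oozie \
  -rerun 0010151-170228165458841-oozie-oozi-W \
  -Doozie.wf.rerun.failnodes=true
```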

2017-03-07

 * 15:20 joal: deploying aqs in prod
 * 14:44 joal: Deploy AQS on beta
 * 12:52 elukey: analytics1040 back in service
 * 12:50 elukey: restarted webrequest-load-wf-text-2017-3-7-9 from Hue (oozie id: 0010151-170228165458841-oozie-oozi-W mapred that failed: job_1488294419903_24496)

2017-03-03

 * 11:29 joal: Restart 3 oozie spark jobs
 * 11:02 joal: Deploying refinery after having broken stat1002 :(
 * 10:32 joal: deploying refinery
 * 09:43 joal: Deploying refinery-source v0.0.42 using jenkins

2017-03-02

 * 18:22 ottomata: deleting and recreating oozie share lib
 * 18:15 joal: Restarting webrequest load for text 2017-03-02T15:00Z
 * 14:27 joal: restart mediacounts job starting 2017-03-01T11:00Z

2017-03-01

 * 14:41 joal: Deploying refinery onto hdfs (before restarting jobs)
 * 14:38 joal: Restart all hdfs oozie jobs with 2048M launcher memory (using script)
 * 10:16 joal: Kill and restart webrequest-load-maps coordinator checking for new oozie_loader_memory parameter (starting from 2017-02-28T18:00 - 2g launcher memory)
 * 09:39 joal: Kill and restart webrequest-load-maps coordinator checking for new oozie_loader_memory parameter (starting from 2017-02-28T18:00)
 * 07:17 elukey: restarted manually the browser-general-coord failed jobs
 * 07:13 elukey: restarted manually the pageview-hourly-coord failed jobs
 * 07:09 elukey: restarted manually the pageview-druid-monthly-coord (february job failed)
 * 07:06 elukey: restarted manually via Hue UI the webrequest-load-coord-misc failed jobs
 * 06:59 elukey: restarted manually via Hue UI the webrequest-load-coord-maps failed jobs

2017-02-28

 * 18:03 joal: restart pageview oozie job for 2017-02-28T12:00
 * 17:53 elukey: restarted via Hue Feb 2017 14:00:00 webrequest-load-coord-misc/maps
 * 14:02 joal: Suspend mediawiki-load jobs as well (forgot about those)
 * 13:31 joal: Suspend webrequest-load bundle for CDH upgrade
 * 13:30 elukey: stopping camus as prep step for the CDH upgrade

2017-02-23

 * 12:18 joal: Restart cassandra-coord-pageview-per-project-hourly 2017-02-23T07, 08, 09 to recover from cassandra issue - Worked !
 * 11:19 joal: Restart cassandra-coord-pageview-per-project-hourly 2017-02-23T07 and 08 to recover from cassandra issue

2017-02-22

 * 08:06 elukey: restart Hue on an1027 for openssl upgrades

2017-02-16

 * 13:22 elukey: updated firewall rules for Analytics VLAN

2017-02-15

 * 13:55 elukey: disabled apache mod_deflate on bohrium (piwik test)
 * 09:01 elukey: restarted Piwik with bulk_requests_use_transaction=0 to try to fix the SQL deadlock issue (https://github.com/piwik/piwik/issues/6398#issuecomment-91093146)

2017-02-13

 * 21:38 elukey: Restarted webrequest-load-coord-upload 19:00 - failed and Hue returning 500s

2017-02-11

 * 00:13 joal: Restarted webrequest-load-wf-text-2017-2-10-20

2017-02-10

 * 09:53 elukey: re-enabled oozie bundles after maintenance
 * 09:51 elukey: restarted Hive-* and oozie on analytics1003
 * 09:40 elukey: suspending oozie bundles to allow oozie/hive maintenance

2017-02-09

 * 13:02 mforns: Restarted webrequest-load-bundle and pageview-hourly-coord
 * 12:46 mforns: Deployed refinery using scap, then deployed onto hdfs
 * 12:00 elukey: added Marcel as superuser in Hue
 * 11:56 elukey: stopped webrequest-load-bundle from hue
 * 11:06 mforns: Deployed refinery-source using jenkins
 * 10:48 elukey: restarting druid daemons for Java upgrades
 * 10:05 elukey: re-enabled oozie bundles after maintenance
 * 10:04 elukey: performed master failover from an1001 to an1002 (and vice-versa) for java upgrades
 * 10:04 elukey: restarted oozie, hive-server and metastore for java upgrades
 * 09:49 elukey: suspended oozie bundles temporarily to allow graceful restarts

2017-02-08

 * 18:05 ottomata: restarting pivot
 * 17:52 ottomata: restarting pivot
 * 15:35 elukey: restarted all the failed oozie cassandra load jobs

2017-02-07

 * 20:24 joal: Resubmit cassandra-coord-pageview-per-project-hourly for 2017-02-07T18:00
 * 14:36 elukey: restarted webrequest-load-wf-text-2017-2-7-13

2017-02-04

 * 13:18 joal: Restarted mediacounts-archive job for day 2017-02-03 (had failed)

2017-02-02

 * 12:07 joal: Restarted daily and monthly pageview druid loading jobs
 * 12:03 joal: Deployed refinery to correct bug introduced in https://gerrit.wikimedia.org/r/#/c/335067/
 * 10:13 joal: Killed-Restarted last access uniques monthly jobs to pick up new config - 0097552-161121120201437-oozie-oozi-C

2017-02-01

 * 19:01 joal: Killed-Restarted Mobile apps Uniques monthly jobs to pick up new config - 0096638-161121120201437-oozie-oozi-C
 * 18:47 joal: Deploy refinery for uniques monthly patches
 * 17:27 joal: Restarting 2 webrequest-load text jobs that failed during NM restart (2017-02-01T11:00 and T13:00)
 * 13:12 elukey: restarted pageview-druid-monthly-coord and pageview-druid-daily-coord oozie coordinators after deployment
 * 12:17 elukey: deployed Refinery via scap and then executed the hdfs copies on stat1002

2017-01-31

 * 16:11 elukey: started Cassandra nodetool cleanup for aqs1007-a
 * 16:04 elukey: started Cassandra nodetool cleanup for aqs1004-b
 * 08:31 elukey: started Cassandra nodetool cleanup for aqs1004-a
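nodetool cleanup removes data a node no longer owns after the ring changes (e.g. after new instances join), which is why it runs per instance as in the aqs1004-a/aqs1007-a entries above. A sketch — the per-instance JMX ports are assumptions about the multi-instance setup, not values from this log:

```shell
# Single-instance node:
nodetool cleanup

# Multi-instance hosts (aqs100x-a / aqs100x-b) address each Cassandra
# instance via its own JMX port (7189/7190 are assumed example ports):
nodetool -h localhost -p 7189 cleanup   # instance a
nodetool -h localhost -p 7190 cleanup   # instance b
```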

2017-01-26

 * 19:20 joal: Restart webrequest-load-coord-text 2017-01-26T15:00 after cluster shake
 * 19:18 elukey: restored an1001 as RM and HDFS master

2017-01-24

 * 21:30 ottomata: restarted hadoop-mapreduce-historyserver on analytics1001. it died due to OOM

2017-01-22

 * 13:27 joal: Rerun pageview-druid-daily-wf-2017-1-20 trying to see if it fixes automagically

2017-01-19

 * 15:51 joal: Launched 0080172-161121120201437-oozie-oozi-B to recover from missing webrequest-load 2017-01-18 19:00 with a correct setup this time
 * 15:39 joal: Launched 0080149-161121120201437-oozie-oozi-B to recover from missing webrequest-load 2017-01-18 19:00

2017-01-17

 * 11:16 joal: Remove mediawiki-history-beta datasource from druid
 * 09:51 elukey: restarted mediacounts-archive-wf-2017-01-16

2017-01-11

 * 19:23 joal: Start mediawiki history reconstruction job on newly sqooped data
 * 18:25 joal: Replace /wmf/data/raw/mediawiki/tables/ with newly sqooped data

2017-01-10

 * 15:30 joal: Restart 0024519-160420145651441-oozie-oozi-C for day 2017-01-09 to see if it fails again

2017-01-06

 * 20:35 joal: Launched 0063574-161121120201437-oozie-oozi-C to cover for upload-2017-01-06-[16-17]
 * 19:04 elukey: started 0063446-161121120201437-oozie-oozi-C to re-run upload-2017-1-6-17

2016-12-22

 * 15:28 elukey: changed firewall rules to allow only $ANALYTICS_NETWORKS (rather than the broader $INTERNAL) for the Yarn UI http service (an1001) and the hive metastore (an1003)

2016-12-19

 * 21:27 nuria: deployed analytics refinery, restarted webrequest load and pageview_hourly jobs
 * 20:11 nuria: deployed analytics/refinery to cluster (2nd try)

2016-12-13

 * 11:12 elukey: deleted /srv/stat1001 on stat1004

2016-12-09

 * 14:32 joal: restarted eventlogging mysql consumer after DB restart
 * 13:57 joal: Stopped EventLogging Mysql consumer for database restart

2016-12-08

 * 18:37 ottomata: preferred-replica-election on analytics kafka cluster to bring 1012 back as leader for its partitions
 * 18:15 ottomata: restarting broker on kafka1012 to repro T152674

2016-12-07

 * 21:59 ottomata: restarting eventlogging again to pick up puppet changes to use kafka-confluent writer
 * 19:39 ottomata: restarting analytics eventlogging to test out confluent kafka producer for processors

2016-12-05

 * 11:02 joal: Killing wikidata-articleplaceholder_metrics job and restarting it starting Nov. 1st for code update
 * 10:43 joal: Deploy refinery onto hdfs
 * 10:35 joal: deploying refinery

2016-12-02

 * 09:43 joal: Restarted yesterday's failed oozie webrequest-load jobs (upload, text, misc; hours 21, 22, 23)

2016-12-01

 * 20:27 ottomata: bouncing kafka broker on kafka1018 to test config changes to eventlogging analytics kafka clients
 * 20:25 ottomata: restarting eventlogging analytics processes again to pick up api_version change for consumers too
 * 19:45 ottomata: restarting eventlogging analytics processes to pick up api_version kafka arg
 * 08:02 elukey: added fi.wikivoyage to the pageview whitelist manually

2016-11-30

 * 21:32 milimetric: restarted webrequest/load oozie bundle
 * 21:17 milimetric: Deployed refinery using scap, then deployed onto hdfs
 * 20:52 milimetric: Deployed refinery-source using jenkins

2016-11-25

 * 09:16 elukey: resumed oozie bundles and camus crontab after maintenance
 * 08:49 elukey: stopping oozie and camus as prep-step for Yarn/HDFS master failover (remaining hosts with old openjdk)

2016-11-12

 * 19:23 joal: Launch 0028421-161020124223818-oozie-oozi-B to cover for webrequest-load hours 19-20 missing on 2016-11-10

2016-11-10

 * 19:59 nuria: deployed v0.0.37 of refinery to hdfs
 * 18:22 nuria: deployed v0.0.37 of refinery-source https://gerrit.wikimedia.org/r/#/c/320797/

2016-11-08

 * 12:33 joal: Deploying refinery for patching pageview whitelist

2016-11-07

 * 09:45 elukey: started 0022558-161020124223818-oozie-oozi-C to rerun wf-text-2016-11-7-07
 * 08:00 elukey: started 0022441-161020124223818-oozie-oozi-C to rerun wf-text-2016-11-7-04 -> 06
 * 04:53 joal: started 0022249-161020124223818-oozie-oozi-C to rerun wf-text-2016-11-7-00 -> 03

2016-11-06

 * 19:50 joal: started 0021806-161020124223818-oozie-oozi-C to rerun wf-text-2016-11-6-16
 * 17:39 elukey: started 0021694-161020124223818-oozie-oozi-C to rerun wf-text-2016-11-6-15
 * 09:27 joal: started 0021136-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-6-01 -> 07

2016-11-05

 * 18:05 joal: started 0020254-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-5-10
 * 08:47 joal: started 0019693-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-5-00 -> wf-upload-2016-11-5-07
 * 08:45 joal: started 0019686-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-4-19 -> wf-upload-2016-11-4-20

2016-11-04

 * 08:45 elukey: started 0018557-161020124223818-oozie-oozi-C to re-run wf-upload-2016-11-4-6
 * 08:45 elukey: started 0018549-161020124223818-oozie-oozi-C to re-run wf-upload-2016-11-4-2 -> wf-upload-2016-11-4-4

2016-11-02

 * 19:43 ottomata: manually stopped an old wikistats_git pageviews cron in spetrea's crontab on stat1003. no output from it since 2013, and spetrea doesn't really have an account

2016-11-01

 * 17:52 joal: Deploying refinery
 * 14:45 joal: Restart webrequest load job to apply
 * 14:33 joal: deploying refinery onto the cluster
 * 14:00 ottomata: restarting pivot

2016-10-31

 * 17:09 ottomata: bouncing eventlogging
 * 17:00 ottomata: kafka preferred replica election on main-eqiad kafka cluster to promote kafka1003 as leader for its preferred partitions
 * 14:49 ottomata: adding kafka1003 in as replicas for active main-eqiad topics
 * 14:12 ottomata: adding kafka1003 as kafka broker in main-eqiad cluster
 * 14:00 joal: deploy refinery

2016-10-28

 * 13:04 elukey: oozie firewall rules changed - now only the analytics network is allowed
 * 00:19 bd808: Testing logging to mw.o SAL via stashbot

2016-09-23

 * 09:06 elukey: reboot eventlog2001.codfw.wmnet for kernel upgrades
 * 08:45 elukey: upgrading varnishkafka to 1.0.12-1 in cache:misc
 * 08:32 elukey: upgrading varnishkafka to 1.0.12-1 in cache:maps

2016-09-22

 * 15:30 elukey: analytics1001 is back Yarn/HDFS master
 * 13:16 elukey: previous comment was meant to be read as "set a permanent read only = false"
 * 13:16 elukey: set read_only = false (on startup) for the analytics1003's mariadb instance
 * 13:12 elukey: restarted oozie jobs for 2016-9-22-6
 * 12:50 elukey: varnishkafka 1.0.12 installed in cache:upload ulsfo and eqiad
 * 11:04 elukey: re-enabling oozie and camus after cluster reboots
 * 10:57 elukey: rebooted analytics1001
 * 10:55 elukey: Failover from analytics1001 to analytics1002 as prep step for 1001's reboot
 * 10:28 elukey: setting global read_only = 0 to analytics1003 mariadb instance
 * 10:04 elukey: rebooted analytics1003 (oozie, hive-metastore and hive-server2 daemons affected)
 * 09:51 elukey: executed aptitude remove apache2 on analytics1027 (we use nginx in front of hue; apache steals port 8888 from hue, which then does not start)
 * 09:49 elukey: suspended all oozie bundles as prep step to reboot analytics1003
 * 09:39 elukey: rebooted analytics1027
 * 09:14 elukey: varnishkafka 1.0.12 installed in cache:upload codfw
 * 08:52 elukey: varnishkafka 1.0.12 installed in cache:upload esams
 * 06:45 elukey: stopped camus on analytics1027 and suspended webrequest-load-bundle via Hue (prep step for reboots)

2016-09-21

 * 17:43 elukey: installed varnishkafka 1.0.12-1 on cp3034.esams
 * 06:25 elukey: removed aqs100[123] from live traffic

2016-09-20

 * 17:03 elukey: aqs100[56] added to LVS and serving live traffic
 * 16:22 elukey: restarting cassandra on aqs1005
 * 07:41 elukey: restart cassandra on aqs100[456] for T130861 - only aqs1004 is taking live traffic

2016-09-16

 * 09:24 elukey: added aqs100[456] to conftool-data (not pooled but the load balancer is doing health checks)

2016-09-14

 * 16:07 elukey: cassandra on aqs100[123] restarted for T130861

2016-09-12

 * 18:54 ottomata: reenabled camus with new version of camus checker jar
 * 18:41 ottomata: disabled camus crons on analytics1027
 * 09:48 elukey: restarted pivot in a tmux session on stat1002 since it died

2016-09-09

 * 08:32 elukey: executed apt-get clean on analytics1032 to free space

2016-09-08

 * 15:37 ottomata: deploying refinery with v0.0.35 of refinery source
 * 09:54 elukey: removed duplicates from the hdfs crontab on analytics1027

2016-09-05

 * 13:23 elukey: removed the unused analytics-root group from puppet

2016-08-31

 * 09:18 elukey: deleted /var/www/limn-public-data/caching on stat1001 to free space
 * 09:10 elukey: Moved stat1003:/srv/reportupdater/output/caching to /home/elukey/caching as temporary measure to free space on stat1001
 * 07:54 elukey: removed /home/home dir from stat1001 to free space
 * 07:52 elukey: removed /home/home/home dir from stat1001 to free space

2016-08-30

 * 17:45 joal: Drop pageviews test datasource in druid

2016-08-26

 * 13:52 elukey: re-enabling camus and oozie
 * 13:48 elukey: restarted hadoop-hdfs-namenode on analytics1002 (1001 back to active)
 * 13:45 elukey: restarted yarn-resourcemanager on analytics1002 (1001 back to active)
 * 13:33 elukey: restarted hadoop-hdfs-namenode on analytics1001
 * 13:30 elukey: restarted yarn-resourcemanager on analytics1001
 * 13:09 elukey: oozie, hive-server and hive-metastore restarted for security upgrades
 * 11:32 elukey: stopped camus on analytics1027
 * 11:31 elukey: suspended all the oozie bundles via Hue

2016-08-12

 * 14:40 elukey: created the 'aqsloader' user on aqs100[456] cassandra instances following https://wikitech.wikimedia.org/wiki/User:Elukey/Analytics/AQS_Tasks
 * 14:09 joal: Deploy refinery on hadoop
 * 13:51 joal: Deploy refinery from tin

2016-08-10

 * 15:41 joal: Loading 2016-07 in new aqs

2016-08-09

 * 17:48 ottomata: restarting eventlogging with kafka-python 1.3.1 (and bugfix), will be testing kafka broker restarts again today
 * 13:12 elukey: deploying the aqs cassandra user to aqs100[123] (not using it in aqs-restbase yet)
 * 13:10 elukey: deploying the aqs cassandra user to aqs100[456] (not using it in aqs-restbase yet)

2016-08-08

 * 18:54 ottomata: restarting eventlogging with processors retries=6&retry_backoff_ms=200. if this works better, will puppetize.
 * 18:30 ottomata: restarting kafka broker on kafka1013 to test eventlogging leader rebalances
 * 15:13 ottomata: deploying eventlogging/analytics - kafka-python 1.3.0 for both consumers and producers
 * 14:13 joal: Loading 2016-06 in clean new aqs
 * 14:10 joal: Adding test data onto newly wiped aqs cluster
 * 14:06 joal: Updating cassandra compaction to deflate on newly wiped cluster

2016-08-05

 * 15:39 joal: Restart oozie jobs for druid loading from production refinery instead of joal
 * 14:31 joal: Retrying deploying refinery from scap
 * 13:51 joal: Stopping pagecounts-[raw|all-sites] oozie jobs (load and archive)
 * 13:07 joal: Deploying refinery using scap
 * 12:59 joal: Rolled back refinery interactive deploy
 * 12:54 joal: Deploy refinery using brand new scap deploy !
 * 07:42 elukey: ran apt-get clean on analytics1027 to free space

2016-08-04

 * 19:50 ottomata: now running kafka-python 1.2.5 for eventlogging-service-eventbus in codfw, removed downtime for kafka200[12]
 * 17:36 elukey: added the analytics-deploy key to the Keyholder for the Analytics Refinery scap3 migration (also updated https://wikitech.wikimedia.org/wiki/Keyholder)
 * 17:29 elukey: deploying the refinery with scap3 for the first time on all nodes

2016-07-29

 * 01:55 milimetric: limn1 disk full, no idea how to clean it because /public refuses to list its files or listen to me when I try to delete it

2016-07-28

 * 17:37 ottomata: powercycling analytics1032

2016-07-26

 * 10:13 joal: Re-deploying refinery after bug fix
 * 09:26 joal: Deploying refinery
 * 08:41 joal: Deploying refinery-source using Jenkins

2016-07-25

 * 18:31 ottomata: upgrading kafka to 0.9 in main-codfw, first kafka2001 then 2002

2016-07-20

 * 19:40 joal: Relaunch 2016-07-19 cassandra per-article-daily oozie job
 * 15:45 elukey: executed https://phabricator.wikimedia.org/P3520 on aqs100[456] for both a/b cassandra instances
 * 15:33 elukey: raising compaction throughput to 256 on aqs100[456]

2016-07-18

 * 17:16 joal: Change compression from lz4 to deflate on aqs100[456]
 * 17:16 joal: Change compression from lz4 to deflate
 * 08:59 joal: deploy restabase on aqs100[23]
 * 08:36 elukey: re-executed cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2016-7-16 (failed oozie job)

2016-06-08

 * 08:45 elukey: removed temporary retention override for kafka webrequest_text topic (T136690)
 * 08:17 elukey: lowering the webrequest_text kafka topic retention time from 7 days to 4 days to free disk space

2016-06-07

 * 17:51 ottomata: restarting broker on kafka1020
 * 10:10 elukey: hue restarted on analytics1027 for security upgrades

2016-06-06

 * 19:16 ottomata: restarting kafka broker on kafka1020 to test python consumption client

2016-06-04

 * 09:47 elukey: removed temporary Analytics Kafka upload retention override (T136690)
 * 09:38 elukey: Temporarily lowering the Analytics kafka upload retention time to 24h to free space (T136690)

2016-06-03

 * 08:38 elukey: event logging restarted on eventlog1001
 * 08:34 elukey: rebooting kafka1012 for kernel upgrades.

2016-06-02

 * 19:53 ottomata: stopping kafka broker and restarting kafka1014

2016-06-01

 * 18:16 ottomata: stopping kafka broker on kafka1018 and rebooting node
 * 11:55 elukey: restarted EL on eventlog1001
 * 11:51 elukey: rebooting kafka1022 for kernel upgrades
 * 08:26 elukey: deleted very old kafka.log files in /var/log/kafka to free root space
 * 07:54 elukey: EL restarted on eventlog1001
 * 07:47 elukey: stopping kafka on kafka1020.eqiad and rebooting the host for Linux 4.4 upgrades

2016-05-27

 * 11:28 elukey: restarted jmxtrans on kafka10* hosts
 * 11:26 elukey: restarted jmxtrans on kafka1013
 * 11:21 elukey: executed kafka preferred-replica-election on kafka1013
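A preferred-replica election asks Kafka to hand partition leadership back to each partition's first-listed replica, rebalancing leaders after a broker bounce (it recurs throughout this log after kafka10xx restarts). With the 0.8/0.9-era tooling of this period it is driven through ZooKeeper — the ZooKeeper address below is a placeholder:

```shell
kafka-preferred-replica-election.sh --zookeeper zk1001:2181/kafka
```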

2016-05-25

 * 14:24 joal: deploying aqs from tin
 * 14:16 joal: Deploying aqs into aqs_deploy

2016-05-24

 * 19:25 nuria_: deploying latest master to dashiki 08cc9a2545bcc0a183a3c00c18e81f21326a41b
 * 12:56 elukey: EL restarted after kafka1013 node stop (kernel upgrades)
 * 12:50 elukey: stopping kafka on kafka1013 and rebooting the host for kernel upgrade

2016-05-23

 * 17:28 elukey: re-run from Hue webrequest-load-wf-(text|upload)-2016-5-23-13. The failures were likely caused by my global Yarn restart on the cluster.
 * 17:20 elukey: oozie bundles re-enabled
 * 14:58 elukey: suspended all the oozie bundles as prep step for https://gerrit.wikimedia.org/r/#/c/290252 (yes I know super paranoid mode on)
 * 06:42 elukey: Removed Kafka temp. override for webrequest_upload retention.ms after freeing some disk space.
 * 06:37 elukey: Set kafka retention.ms=172800000 for the topic webrequest_upload to free some disk space on kafka1022
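The override/removal pattern in the two entries above maps to the topic-level retention.ms config. With the Kafka tooling of that era, topic configs were altered via kafka-topics.sh through ZooKeeper (the ZooKeeper address is a placeholder, and the exact flag spelling varies between 0.8 and 0.9):

```shell
# Temporarily cap webrequest_upload retention at 2 days (172800000 ms):
kafka-topics.sh --zookeeper zk1001:2181/kafka --alter \
  --topic webrequest_upload --config retention.ms=172800000

# Once disk pressure is gone, drop the override so the topic falls back
# to the broker-wide default (7 days):
kafka-topics.sh --zookeeper zk1001:2181/kafka --alter \
  --topic webrequest_upload --delete-config retention.ms
```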

2016-05-20

 * 12:50 elukey: aqs100[123] restarted for openjdk upgrades
 * 08:53 elukey: cassandra upgraded to 2.1.13 on aqs1003
 * 08:30 elukey: aqs1002 migrated to cassandra 2.1.13

2016-05-02

 * 18:30 joal: manually touched the _SUCCESS file in hdfs://analytics-hadoop/wmf/data/raw/webrequest/webrequest_text/hourly/2016/05/02/14/ to launch the refine process despite the load job failure
 * 17:38 elukey: removed out of service banner from dashiki dashboards
 * 17:33 elukey: reverted Varnish config to return 503s for datasets and stats
 * 12:14 elukey: deployed Varnish change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage.
 * 12:05 elukey: enabled maintenance banner to dashiki based dashboards via https://meta.wikimedia.org/wiki/Dashiki:OutOfService
 * 11:21 elukey: deployed last version of Event Logging. Service also restarted.
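The manual _SUCCESS touch above works because downstream oozie datasets wait on that flag file rather than on the load job itself, so creating an empty flag releases the waiting coordinators. A sketch using the path from the entry:

```shell
# Create the empty flag file that the refine coordinator polls for:
hdfs dfs -touchz /wmf/data/raw/webrequest/webrequest_text/hourly/2016/05/02/14/_SUCCESS

# Confirm it exists:
hdfs dfs -ls /wmf/data/raw/webrequest/webrequest_text/hourly/2016/05/02/14/
```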

2016-04-30

 * 13:42 elukey: disabled puppet on analytics1047 and scheduled downtime for the host, IO errors in the dmesg for /dev/sdd. Stopped also Hadoop daemons to remove it from the cluster temporarily (not sure how to do it properly, will write docs).

2016-04-28

 * 10:44 joal: deployed aqs on all three nodes (Thanks elukey !!!!)
 * 09:03 joal: Deploying aqs on aqs1001
 * 08:14 elukey: restarting kafka on kafka{1012,1014,1022,1020,2001,2002} for Java upgrades. EL will be restarted as well (sigh)

2016-04-27

 * 15:47 elukey: restarted event logging on eventlogging1001
 * 14:01 elukey: restarted Event Logging on eventlogging1001
 * 13:53 elukey: restarted kafka on kafka1018.eqiad.wmnet for Java upgrades

2016-04-25

 * 19:55 nuria_: deployed new vitalsigns code to https://vital-signs.wmflabs.org
 * 17:43 nuria_: deployed new vitalsigns code to https://vital-signs.wmflabs.org

2016-04-22

 * 09:23 moritzm: installing irqbalance bugfix updates (preventing massive logspam on some systems)

2016-04-20

 * 16:06 elukey: camus re-enabled on analytics1027
 * 13:54 elukey: puppet stopped on analytics1027 together with Camus (via crontab -e)
 * 10:41 elukey: started rsync of /srv from stat1001 to stat1004 (/srv/stat1001)

2016-04-19

 * 08:33 joal: deployed new refinery on hadoop
 * 08:21 joal: deploying refinery from tin

2016-04-18

 * 10:11 elukey: execute sudo eventloggingctl restart on eventlogging1001

2016-04-13

 * 16:35 ottomata: rebuilding raid1 array on aqs1001 after hot swapping sdh
 * 15:00 joal: restarting failed jobs
 * 14:38 ottomata: restarting hadoop-yarn-nodemanager on all hadoop worker nodes one by one to apply increase in heap size

2016-04-11

 * 11:52 joal: Restart refine job after deploy
 * 10:30 joal: Deploying refinery on HDFS
 * 10:21 joal: deploying refinery from tin
 * 09:13 joal: Releasing refinery-source v0.0.30 to archiva

2016-04-08

 * 10:09 joal: deploying aqs from tin on aqs1003
 * 10:08 joal: deploying aqs from tin on aqs1002
 * 10:03 joal: deploying aqs from tin on aqs1001

2016-04-07

 * 22:58 nuria_: deployed browser-reports master branch to labs
 * 19:34 ottomata: restarting eventlogging so it runs out of the scap deploy in eventlogging/analytics
 * 10:21 elukey: nodejs-legacy upgraded too on all aqs nodes
 * 09:43 elukey: aqs1002.eqiad.wmnet re-pooled, aqs1003.eqiad.wmnet de-pooled/re-pooled too (nodejs upgrade)
 * 09:30 elukey: aqs1002.eqiad.wmnet de-pooled via confctl. Nodejs upgrade will follow.
 * 09:18 elukey: re-added aqs1001.eqiad.wmnet to LVS pool via confctl
 * 08:59 elukey: removed aqs1001.eqiad.wmnet from LVS pool via confd for nodejs upgrade

2016-04-06

 * 14:04 elukey: ran nodetool repair system_auth on aqs1002.eqiad/aqs1003.eqiad
 * 13:59 elukey: ran nodetool repair system_auth on aqs1001.eqiad
 * 11:45 elukey: started nodetool repair on aqs1002 after running "ALTER KEYSPACE system_auth WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': 3 };"
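The system_auth change above is the standard Cassandra recipe: raise the keyspace's replication factor to cover the cluster, then repair so every node actually receives the credential data. A sketch using cqlsh and nodetool (authentication and host flags omitted):

```shell
# Replicate auth data to all three nodes:
cqlsh -e "ALTER KEYSPACE system_auth WITH replication = \
  { 'class': 'SimpleStrategy', 'replication_factor': 3 };"

# Stream the keyspace so every node holds a copy:
nodetool repair system_auth
```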

2016-04-04

 * 15:45 elukey: aqs1001 re-added to the aqs pool (nodejs NOT upgraded)
 * 14:46 elukey: de-pooled aqs1001.eqiad from the confd pool for nodejs upgrade
 * 10:42 elukey: re-pooled aqs1001.eqiad (no node upgrade, need more info about restbase)
 * 09:53 elukey: de-pooled aqs1001.eqiad.wmnet as pre-step for nodejs upgrade

2016-04-01

 * 13:38 joal: Deploying aqs in aqs1003 from tin
 * 13:35 joal: Deploying aqs in aqs1002 from tin
 * 13:23 joal: Deploying aqs in aqs1001 from tin

2016-03-31

 * 20:01 ottomata: stopping eventlogging, uninstalling globally installed eventlogging python code, running puppet, restarting eventlogging from /srv/deployment/eventlogging/eventlogging
 * 19:45 ottomata: merging puppet change to run eventlogging code out of deploy repo

2016-03-30

 * 18:06 ottomata: repooling aqs1001
 * 18:00 ottomata: depooling aqs1001

2016-03-29

 * 13:27 joal: Update CirrusSearchRequestSet schema in hive

2016-03-24

 * 18:29 elukey: camus and puppet re-enabled on analytics1027
 * 18:27 ottomata: resuming suspended webrequest load and refine jobs
 * 17:57 elukey: enabled Hadoop Master Node automatic failover on analytics1001/1002 (this time without fireworks).
 * 17:09 ottomata: temporarily suspending oozie webrequest refine jobs
 * 16:18 ottomata: suspending webrequest load job temporarily
 * 16:15 elukey: disabled camus and puppet on analytics1027
 * 13:16 elukey: camus and puppet re-enabled on analytics1027
 * 09:56 elukey: Camus stopped on analytics1027 (puppet disabled too)
 * 09:52 elukey: puppet disabled on analytics1001/1002 as prep-step to enable HDFS HA failover.

2016-01-21

 * 16:35 ottomata: stopped eventlogging mysql consumers for long downtime: https://phabricator.wikimedia.org/T120187
 * 16:20 ottomata: started eventlogging mysql consumers
 * 15:59 ottomata: stopping eventlogging mysql consumers for https://phabricator.wikimedia.org/T123546

2016-01-20

 * 18:30 mforns: deployed EL in production with removal of queue
 * 17:37 mforns: restarted EventLogging because of Kafka consumption lag

2016-01-19

 * 20:08 mforns: deployed eventlogging to deployment-eventlogging03 with removal of mysql consumer batch

2016-01-18

 * 14:49 ottomata: restarting eventlogging to un-blacklist MobileWebSectionUsage
 * 01:07 ottomata: restarted eventlogging again. A single raw client side processor consumer seemed stuck (according to burrow).  seeing offset commit errors in logs.

2016-01-17

 * 08:26 ottomata: restarting eventlogging to see if it'll help burrow reported kafka consumer lag

2016-01-14

 * 22:29 YuviPanda: wikimetrics
 * 19:55 ottomata: restarted eventlogging_sync script to insert batches of 1000

2016-01-13

 * 20:01 ottomata: dropped MobileWebSectionUsage_14321266 and MobileWebSectionUsage_15038458 from analytics-store eventlogging slave db
 * 19:24 ottomata: restarting eventlogging to apply blacklist of MobileWebSectionUsage schemas

2015-12-30

 * 15:23 ottomata: killing oozie legacy_tsv job 0102159-150605005438095-oozie-oozi-B to restart it without mobile, 5xx-mobile and zero outputs

2015-11-10

 * 03:14 ottomata: restarted eventlogging

2015-11-09

 * 14:40 ottomata: restarting eventlogging to see if it is ok after enabling firewall rules on kafka1014

2015-11-06

 * 15:51 joal: Change replication factor to 2 in cassandra per_article_flat keyspace
 * 15:47 ottomata: deploying aqs

2015-11-05

 * 18:24 ottomata: deploying aqs

2015-10-29

 * 10:35 joal: Gzipped already archived pageview files
 * 10:34 joal: restarted pageview job to archive gzipped files
 * 10:34 joal: refinery deployed

2015-10-28

 * 19:16 joal: Downsizing cassandra replication from 3 to 2 on per_article_flat keyspace
 * 19:07 joal: Restart load job (based on IMPORTED flag)
 * 15:48 joal: Deploying refinery
 * 15:40 joal: deploying refinery-source v0.0.22

2015-10-27

 * 19:06 ottomata: deploying aqs
 * 18:24 joal: deploying refinery
 * 16:46 joal: Releasing refinery-source v0.0.21
 * 10:34 joal: manual aggregator launch after small bug correction

2015-10-26

 * 18:42 joal: refine bundle, pageview_hourly and projectview_hourly coord restarted
 * 18:41 joal: refinery deployed on HDFS
 * 14:33 joal: truncating "local_group_default_T_pageviews_per_article".data on aqs
 * 09:58 joal: Restart cassandra on aqs1001

2015-10-22

 * 20:24 ottomata: deploying aqs
 * 09:51 joal: restart cassandra on aqs1003

2015-10-21

 * 22:53 milimetric: deployed EventLogging and tried to backfill data lost on 2015.10.14 but failed
 * 18:24 joal: Stopped per article loading in cassandra
 * 13:39 ottomata: deploying aqs

2015-10-20

 * 10:12 joal: restart cassandra on aqs1002

2015-10-19

 * 18:35 ottomata: restarting eventlogging with change to parse schema names out of errored events

2015-10-16

 * 20:38 joal: restarted cassandra on aqs100[1,2,3]

2015-10-15

 * 12:17 joal: Refinery deploy needed before restart --> Deploying
 * 12:12 joal: Restarting daily and monthly mobile unique coordinators with new patch
 * 12:12 joal: Rerunning daily mobile unique jobs for days 2015-08-[03,04,11,12,12,14,17], 2015-09-16
 * 12:10 joal: Stopped daily and monthly mobile unique coordinators

2015-10-14

 * 15:22 ottomata: restarting lagging eventlogging mysql consumer

2015-10-09

 * 19:26 ottomata: releasing refinery 0.20
 * 15:19 ottomata: moved camus property files out of refinery repository and into puppet. Camus properties now live on an27 at /etc/camus.d, and camus log files are in /var/log/camus
 * 14:54 joal: Cassandra restarted on aqs1003
 * 09:15 joal: Restart cassandra on aqs1002

2015-10-08

 * 17:38 joal: Backfilling load from hadoop to cassandra from beginning of october

2015-10-07

 * 16:32 joal: Started cassandra load jobs from 2015-10-01

2015-10-01

 * 16:27 valhallasw`cloud: testing again
 * 16:13 valhallasw`cloud: test

2015-09-29

 * 10:51 joal: cluster back to normal state. Some errors are still not explained; need to be careful.

2015-09-28

 * 14:56 joal: backfilling various load jobs having failed at earlier stages than check_sequence_statistics
 * 13:03 joal: Errors on cluster, some refine jobs have failed, investigating.

2015-08-19

 * 18:20 ottomata: does this log work?

March 25

 * 22:09 qchris: starting HDFS balance for unhealthy node analytics1016.eqiad.wmnet with healthy nodes analytics1037.eqiad.wmnet,analytics1040.eqiad.wmnet
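The stock way to move blocks off an overloaded datanode is the HDFS balancer, which shuffles blocks until each node's utilization falls within a threshold of the cluster average. The entry above targets specific source/destination nodes, which the stock balancer of that era could not express, so treat this as the generic equivalent:

```shell
# Move blocks until every datanode is within 5 percentage points of the
# cluster-average utilization:
hdfs balancer -threshold 5
```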

February 25

 * 16:07 ottomata: hello?

February 7

 * 02:10 qchris: Ran kafka leader re-election as analytics1021 dropped out of its partition leader role.
 * 01:32 qchris: name nodes died with "Java heap space" errors and did not come back up. Bumping the heap allowed them to be resurrected (See ).

February 4

 * 23:22 qchris: Manual failover of Hadoop namenode from analytics1001 to analytics1002, as analytics1001 had Heap space errors
 * 07:49 qchris: Manual failover of Hadoop namenode from analytics1002 to analytics1001, as analytics1002 had Heap space errors

January 30

 * 20:21 ottomata: deployed refinery 0.0.4
 * 19:37 ottomata: released refinery 0.0.4

January 25

 * 21:53 qchris: Marked raw text webrequest partition for 2015-01-24T00/1H ok (See )

January 23

 * 22:46 qchris: Marked raw upload webrequest partition for 2015-01-16T12/1H ok (The partition only needed deduping)
 * 22:23 qchris: Marked raw upload webrequest partition for 2015-01-16T01/1H ok (The partition only needed deduping)
 * 22:11 qchris: Marked raw upload webrequest partition for 2015-01-15T17/1H ok (The partition only needed deduping)
 * 22:04 qchris: Marked raw text webrequest partition for 2015-01-15T15/1H ok (The partition only needed deduping)
 * 22:01 qchris: Marked raw mobile webrequest partition for 2015-01-16T01/1H ok (The partition only needed deduping)

January 15

 * 08:25 qchris: Ran kafka leader re-election to bring analytics1021 back into the set of leaders

January 10

 * 16:55 qchris: Dropped wmf.webstats tables, as announced on https://lists.wikimedia.org/pipermail/analytics/2015-January/003019.html

January 6

 * 12:15 qchris: Marked raw mobile+text webrequest partitions for 2015-01-05T17/1H ok (See )

January 4

 * 12:06 qchris: Marked raw mobile and upload webrequest partition for 2015-01-03T10/1H ok (See )

January 2

 * 21:21 qchris: Ran kafka leader re-election to bring analytics1021 back into the set of leaders
 * 21:07 qchris: Marked raw bits, text, and upload webrequest partition for 2014-12-11T14/1H ok (See )
 * 19:05 qchris: Marked raw text+upload webrequest partitions for 2014-12-26T06/1H ok (See )
 * 15:51 qchris: Marked raw text webrequest partition for 2014-12-11T20/1H ok (See )
 * 12:39 qchris: Marked raw mobile webrequest partition for 2014-12-29T17/1H ok (See )
 * 11:21 qchris: Marked raw text webrequest partition for 2014-12-30T20/1H ok (See )

January 1

 * 20:26 qchris: Marked raw webrequest partitions for 2014-12-10T14/2H ok (See )