Analytics/Server Admin Log

2016-11-06

 * 09:27 joal: started 0021136-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-6-01 -> 07
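
These re-run entries submit a fresh coordinator over the failed hours. A minimal sketch of the submission pattern, assuming the Oozie server on analytics1003; the properties file name and property names are illustrative, not taken from the actual refinery configs:

    oozie job -oozie http://analytics1003.eqiad.wmnet:11000/oozie \
        -config webrequest_load.properties \
        -D start_time=2016-11-06T01:00Z \
        -D stop_time=2016-11-06T07:00Z \
        -submit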

2016-11-05

 * 18:05 joal: started 0020254-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-5-10
 * 08:47 joal: started 0019693-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-5-00 -> wf-upload-2016-11-5-07
 * 08:45 joal: started 0019686-161020124223818-oozie-oozi-C to re-run wf-text-2016-11-4-19 -> wf-upload-2016-11-4-20

2016-11-04

 * 08:45 elukey: started 0018557-161020124223818-oozie-oozi-C to re-run wf-upload-2016-11-4-6
 * 08:45 elukey: started 0018549-161020124223818-oozie-oozi-C to re-run wf-upload-2016-11-4-2 -> wf-upload-2016-11-4-4

2016-11-02

 * 19:43 ottomata: manually stopped an old wikistats_git pageviews cron in spetrea's crontab on stat1003. no output from it since 2013, and spetrea doesn't really have an account
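
For reference, inspecting and cleaning up another user's crontab takes root:

    sudo crontab -u spetrea -l    # list the user's cron entries
    sudo crontab -u spetrea -e    # edit them, deleting the stale pageviews job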

2016-11-01

 * 17:52 joal: Deploying refinery
 * 14:45 joal: Restart webrequest load job to apply the refinery deploy
 * 14:33 joal: deploying refinery onto the cluster
 * 14:00 ottomata: restarting pivot

2016-10-31

 * 17:09 ottomata: bouncing eventlogging
 * 17:00 ottomata: kafka preferred replica election on main-eqiad kafka cluster to promote kafka1003 as leader for its preferred partitions
 * 14:49 ottomata: adding kafka1003 in as replicas for active main-eqiad topics
 * 14:12 ottomata: adding kafka1003 as kafka broker in main-eqiad cluster
 * 14:00 joal: deploy refinery

2016-10-28

 * 13:04 elukey: oozie firewall rules changed - now only the analytics network is allowed
 * 00:19 bd808: Testing logging to mw.o SAL via stashbot

2016-09-23

 * 09:06 elukey: reboot eventlog2001.codfw.wmnet for kernel upgrades
 * 08:45 elukey: upgrading varnishkafka to 1.0.12-1 in cache:misc
 * 08:32 elukey: upgrading varnishkafka to 1.0.12-1 in cache:maps

2016-09-22

 * 15:30 elukey: analytics1001 is back Yarn/HDFS master
 * 13:16 elukey: previous comment was meant to be read as "set a permanent read only = false"
 * 13:16 elukey: set read_only = false (on startup) for the analytics1003's mariadb instance
 * 13:12 elukey: restarted oozie jobs for 2016-9-22-6
 * 12:50 elukey: varnishkafka 1.0.12 installed in cache:upload ulsfo and eqiad
 * 11:04 elukey: re-enabling oozie and camus after cluster reboots
 * 10:57 elukey: rebooted analytics1001
 * 10:55 elukey: Failover from analytics1001 to analytics1002 as prep step for 1001's reboot
 * 10:28 elukey: setting global read_only = 0 on the analytics1003 mariadb instance (see the sketch after this list)
 * 10:04 elukey: rebooted analytics1003 (oozie, hive-metastore and hive-server2 daemons affected)
 * 09:51 elukey: executed aptitude remove apache2 on analytics1027 (we use nginx in front of Hue; apache steals port 8888 from Hue, so Hue does not start)
 * 09:49 elukey: suspended all oozie bundles as prep step to reboot analytics1003
 * 09:39 elukey: rebooted analytics1027
 * 09:14 elukey: varnishkafka 1.0.12 installed in cache:upload codfw
 * 08:52 elukey: varnishkafka 1.0.12 installed in cache:upload esams
 * 06:45 elukey: stopped camus on analytics1027 and suspended webrequest-load-bundle via Hue (prep step for reboots)
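
Several entries above toggle MariaDB's read_only flag; a minimal sketch of both the runtime and the persistent form (the config path is an assumption, not from the log):

    # runtime toggle, lost on restart:
    mysql -e "SET GLOBAL read_only = 0;"
    # persistent equivalent, e.g. in /etc/mysql/my.cnf:
    #   [mysqld]
    #   read_only = 0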

2016-09-21

 * 17:43 elukey: installed varnishkafka 1.0.12-1 on cp3034.esams
 * 06:25 elukey: removed aqs100[123] from live traffic

2016-09-20

 * 17:03 elukey: aqs100[56] added to LVS and serving live traffic
 * 16:22 elukey: restarting cassandra on aqs1005
 * 07:41 elukey: restart cassandra on aqs100[456] for T130861 - only aqs1004 is taking live traffic

2016-09-16

 * 09:24 elukey: added aqs100[456] to conftool-data (not pooled but the load balancer is doing health checks)

2016-09-14

 * 16:07 elukey: cassandra on aqs100[123] restarted for T130861

2016-09-12

 * 18:54 ottomata: reenabled camus with new version of camus checker jar
 * 18:41 ottomata: disabled camus crons on analytics1027
 * 09:48 elukey: restarted pivot on a tmux session on stat1002 since it died

2016-09-09

 * 08:32 elukey: executed apt-get clean on analytics1032 to free space

2016-09-08

 * 15:37 ottomata: deploying refinery with v0.0.35 of refinery source
 * 09:54 elukey: removed duplicates from the hdfs crontab on analytics1027

2016-09-05

 * 13:23 elukey: removed the unused analytics-root group from puppet

2016-08-31

 * 09:18 elukey: deleted /var/www/limn-public-data/caching on stat1001 to free space
 * 09:10 elukey: Moved stat1003:/srv/reportupdater/output/caching to /home/elukey/caching as a temporary measure to free space on stat1001
 * 07:54 elukey: removed /home/home dir from stat1001 to free space
 * 07:52 elukey: removed /home/home/home dir from stat1001 to free space

2016-08-30

 * 17:45 joal: Drop pageviews test datasource in druid

2016-08-26

 * 13:52 elukey: re-enabling camus and oozie
 * 13:48 elukey: restarted hadoop-hdfs-namenode on analytics1002 (1001 back to active)
 * 13:45 elukey: restarted yarn-resourcemanager on analytics1002 (1001 back to active)
 * 13:33 elukey: restarted hadoop-hdfs-namenode on analytics1001
 * 13:30 elukey: restarted yarn-resourcemanager on analytics1001
 * 13:09 elukey: oozie, hive-server and hive-metastore restarted for security upgrades
 * 11:32 elukey: stopped camus on analytics1027
 * 11:31 elukey: suspended all the oozie bundles via Hue
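
The suspends here were done through Hue's dashboard; the Oozie CLI equivalent is roughly the following (server URL and bundle id are illustrative):

    oozie job -oozie http://analytics1003.eqiad.wmnet:11000/oozie -suspend <bundle-id>
    # ... perform the upgrades/restarts ...
    oozie job -oozie http://analytics1003.eqiad.wmnet:11000/oozie -resume <bundle-id>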

2016-08-12

 * 14:40 elukey: created the 'aqsloader' user on aqs100[456] cassandra instances following https://wikitech.wikimedia.org/wiki/User:Elukey/Analytics/AQS_Tasks (see the sketch after this list)
 * 14:09 joal: Deploy refinery on hadoop
 * 13:51 joal: Deploy refinery from tin
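
The 14:40 user creation boils down to CQL along these lines; the password and the grant target are placeholders, and the linked wiki page is the authoritative procedure:

    # run via cqlsh as a superuser (credentials elided):
    cqlsh -e "CREATE USER aqsloader WITH PASSWORD 'REDACTED' NOSUPERUSER;"
    cqlsh -e "GRANT ALL PERMISSIONS ON KEYSPACE \"local_group_default_T_pageviews_per_article_flat\" TO aqsloader;"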

2016-08-10

 * 15:41 joal: Loading 2016-07 in new aqs

2016-08-09

 * 17:48 ottomata: restarting eventlogging with kafka-python 1.3.1 (and bugfix), will be testing kafka broker restarts again today
 * 13:12 elukey: deploying the aqs cassandra user to aqs100[123] (not using it in aqs-restbase yet)
 * 13:10 elukey: deploying the aqs cassandra user to aqs100[456] (not using it in aqs-restbase yet)

2016-08-08

 * 18:54 ottomata: restarting eventlogging with processors retries=6&retry_backoff_ms=200. if this works better, will puppetize.
 * 18:30 ottomata: restarting kafka broker on kafka1013 to test eventlogging leader rebalances
 * 15:13 ottomata: deploying eventlogging/analytics - kafka-python 1.3.0 for both consumers and producers
 * 14:13 joal: Loading 2016-06 in clean new aqs
 * 14:10 joal: Adding test data onto newly wiped aqs cluster
 * 14:06 joal: Updating cassandra compression to deflate on newly wiped cluster

2016-08-05

 * 15:39 joal: Restart oozie jobs for druid loading from production refinery instead of joal
 * 14:31 joal: Retrying deploying refinery from scap
 * 13:51 joal: Stopping pagecounts-[raw|all-sites] oozie jobs (load and archive)
 * 13:07 joal: Deploying refinery using scap
 * 12:59 joal: Rolled back refinery interactive deploy
 * 12:54 joal: Deploy refinery using brand new scap deploy !
 * 07:42 elukey: ran apt-get clean on analytics1027 to free space

2016-08-04

 * 19:50 ottomata: now running kafka-python 1.2.5 for eventlogging-service-eventbus in codfw, removed downtime for kafka200[12]
 * 17:36 elukey: added the analytics-deploy key to the Keyholder for the Analytics Refinery scap3 migration (also updated https://wikitech.wikimedia.org/wiki/Keyholder)
 * 17:29 elukey: deploying the refinery with scap3 for the first time on all nodes

2016-07-29

 * 01:55 milimetric: limn1 disk full, no idea how to clean it because /public refuses to list its files or listen to me when I try to delete it

2016-07-28

 * 17:37 ottomata: powercycling analytics1032

2016-07-26

 * 10:13 joal: Re-deploying refinery after bug fix
 * 09:26 joal: Deploying refinery
 * 08:41 joal: Deploying refinery-source using Jenkins

2016-07-25

 * 18:31 ottomata: upgrading kafka to 0.9 in main-codfw, first kafka2001 then 2002

2016-07-20

 * 19:40 joal: Relaunch 2016-07-19 cassandra per-article-daily oozie job
 * 15:45 elukey: executed https://phabricator.wikimedia.org/P3520 on aqs100[456] for both a/b cassandra instances
 * 15:33 elukey: raising compaction throughput to 256 on aqs100[456]
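
nodetool applies the compaction throughput cap live; the value is in MB/s and reverts to the cassandra.yaml default on restart:

    nodetool setcompactionthroughput 256
    nodetool getcompactionthroughput    # verify the new cap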

2016-07-18

 * 17:16 joal: Change compression from lz4 to deflate on aqs100[456]
 * 08:59 joal: deploy restabase on aqs100[23]
 * 08:36 elukey: re-executed cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2016-7-16 (failed oozie job)

2016-06-08

 * 08:45 elukey: removed temporary retention override for kafka webrequest_text topic (T136690)
 * 08:17 elukey: lowering the webrequest_text kafka topic retention time from 7 days to 4 days to free disk space
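
These temporary overrides follow a set-then-delete pattern. A sketch with the ZooKeeper connection string as a placeholder (4 days = 345600000 ms):

    kafka-topics.sh --zookeeper "$ZK_CONNECT" --alter \
        --topic webrequest_text --config retention.ms=345600000
    # once disk pressure is gone, drop the override so the 7-day default applies again:
    kafka-topics.sh --zookeeper "$ZK_CONNECT" --alter \
        --topic webrequest_text --delete-config retention.ms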

2016-06-07

 * 17:51 ottomata: restarting broker on kafka1020
 * 10:10 elukey: hue restarted on analytics1027 for security upgrades

2016-06-06

 * 19:16 ottomata: restarting kafka broker on kafka1020 to test python consumption client

2016-06-04

 * 09:47 elukey: removed temporary Analytics Kafka upload retention override (T136690)
 * 09:38 elukey: Temporarily lowering the Analytics Kafka upload retention time to 24h to free space (T136690)

2016-06-03

 * 08:38 elukey: event logging restarted on eventlog1001
 * 08:34 elukey: rebooting kafka1012 for kernel upgrades.

2016-06-02

 * 19:53 ottomata: stopping kafka broker and restarting kafka1014

2016-06-01

 * 18:16 ottomata: stopping kafka broker on kafka1018 and rebooting node
 * 11:55 elukey: restarted EL on eventlog1001
 * 11:51 elukey: rebooting kafka1022 for kernel upgrades
 * 08:26 elukey: deleted very old kafka.log files in /var/log/kafka to free root space
 * 07:54 elukey: EL restarted on eventlog1001
 * 07:47 elukey: stopping kafka on kafka1020.eqiad and rebooting the host for Linux 4.4 upgrades

2016-05-27

 * 11:28 elukey: restarted jmxtrans on kafka10* hosts
 * 11:26 elukey: restarted jmxtrans on kafka1013
 * 11:21 elukey: executed kafka preferred-replica-election on kafka1013
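
The kafka preferred-replica-election command here is WMF's wrapper around Kafka's stock tool, which in plain form is roughly:

    # ZK_CONNECT stands in for the cluster's ZooKeeper connection string;
    # with no --path-to-json-file it elects preferred leaders for all partitions:
    kafka-preferred-replica-election.sh --zookeeper "$ZK_CONNECT"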

2016-05-25

 * 14:24 joal: deploying aqs from tin
 * 14:16 joal: Deploying aqs into aqs_deploy

2016-05-24

 * 19:25 nuria_: deploying latest master to dashiki 08cc9a2545bcc0a183a3c00c18e81f21326a41b
 * 12:56 elukey: EL restarted after kafka1013 node stop (kernel upgrades)
 * 12:50 elukey: stopping kafka on kafka1013 and rebooting the host for kernel upgrade

2016-05-23

 * 17:28 elukey: re-run from Hue webrequest-load-wf-(text|upload)-2016-5-23-13. The failures were likely caused by my global Yarn restart on the cluster.
 * 17:20 elukey: oozie bundles re-enabled
 * 14:58 elukey: suspended all the oozie bundles as prep step for https://gerrit.wikimedia.org/r/#/c/290252 (yes I know super paranoid mode on)
 * 06:42 elukey: Removed Kafka temp. override for webrequest_upload retention.ms after freeing some disk space.
 * 06:37 elukey: Set kafka retention.ms=172800000 for the topic webrequest_upload to free some disk space on kafka1022

2016-05-20

 * 12:50 elukey: aqs100[123] restarted for openjdk upgrades
 * 08:53 elukey: cassandra upgraded to 2.1.13 on aqs1003
 * 08:30 elukey: aqs1002 migrated to cassandra 2.1.13

2016-05-02

 * 18:30 joal: manually touch _SUCCESS file in hdfs://analytics-hadoop/wmf/data/raw/webrequest/webrequest_text/hourly/2016/05/02/14/ to launch refine process despite load job failure (see the sketch after this list)
 * 17:38 elukey: removed out of service banner from dashiki dashboards
 * 17:33 elukey: reverted Varnish config to return 503s for datasets and stats
 * 12:14 elukey: deployed Varnish change to force HTTP 503 for datasets.wikimedia.org, stats.wikimedia.org, metrics.wikimedia.org as prep-step for OS reimage.
 * 12:05 elukey: enabled maintenance banner to dashiki based dashboards via https://meta.wikimedia.org/wiki/Dashiki:OutOfService
 * 11:21 elukey: deployed last version of Event Logging. Service also restarted.
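
The 18:30 entry amounts to creating a zero-length flag file that the refine coordinator polls for:

    hdfs dfs -touchz hdfs://analytics-hadoop/wmf/data/raw/webrequest/webrequest_text/hourly/2016/05/02/14/_SUCCESS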

2016-04-30

 * 13:42 elukey: disabled puppet on analytics1047 and scheduled downtime for the host, IO errors in the dmesg for /dev/sdd. Also stopped the Hadoop daemons to remove it from the cluster temporarily (not sure how to do it properly, will write docs).

2016-04-28

 * 10:44 joal: deployed aqs on all three nodes (Thanks elukey !!!!)
 * 09:03 joal: Deploying aqs on aqs1001
 * 08:14 elukey: restarting kafka on kafka{1012,1014,1022,1020,2001,2002} for Java upgrades. EL will be restarted as well (sigh)

2016-04-27

 * 15:47 elukey: restarted event logging on eventlogging1001
 * 14:01 elukey: restarted Event Logging on eventlogging1001
 * 13:53 elukey: restarted kafka on kafka1018.eqiad.wmnet for Java upgrades

2016-04-25

 * 19:55 nuria_: deployed new vitalsigns code to https://vital-signs.wmflabs.org
 * 17:43 nuria_: deployed new vitalsigns code to https://vital-signs.wmflabs.org

2016-04-22

 * 09:23 moritzm: installing irqbalance bugfix updates (preventing massive logspam on some systems)

2016-04-20

 * 16:06 elukey: camus re-enabled on analytics1027
 * 13:54 elukey: puppet stopped on analytics1027 together with Camus (via crontab -e)
 * 10:41 elukey: started rsync of /srv from stat1001 to stat1004 (/srv/stat1001)
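
A plausible form of that copy, run on stat1004 (the flags and FQDN are assumptions, not recorded in the log):

    rsync -a stat1001.eqiad.wmnet:/srv/ /srv/stat1001/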

2016-04-19

 * 08:33 joal: deployed new refinery on hadoop
 * 08:21 joal: deploying refinery from tin

2016-04-18

 * 10:11 elukey: execute sudo eventloggingctl restart on eventlogging1001

2016-04-13

 * 16:35 ottomata: rebuilding raid1 array on aqs1001 after hot swapping sdh
 * 15:00 joal: restarting failed jobs
 * 14:38 ottomata: restarting hadoop-yarn-nodemanager on all hadoop worker nodes one by one to apply increase in heap size

2016-04-11

 * 11:52 joal: Restart refine job after deploy
 * 10:30 joal: Deploying refinery on HDFS
 * 10:21 joal: deploying refinery from tin
 * 09:13 joal: Releasing refinery-source v0.0.30 to archiva

2016-04-08

 * 10:09 joal: deploying aqs from tin on aqs1003
 * 10:08 joal: deploying aqs from tin on aqs1002
 * 10:03 joal: deploying aqs from tin on aqs1001

2016-04-07

 * 22:58 nuria_: deployed browser-reports master branch to labs
 * 19:34 ottomata: restarting eventlogging so it runs out of the scap deploy in eventlogging/analytics
 * 10:21 elukey: nodejs-legacy upgraded too on all aqs nodes
 * 09:43 elukey: aqs1002.eqiad.wmnet re-pooled, aqs1003.eqiad.wmnet de-pooled/re-pooled too (nodejs upgrade)
 * 09:30 elukey: aqs1002.eqiad.wmnet de-pooled via confctl. Nodejs upgrade will follow.
 * 09:18 elukey: re-added aqs1001.eqiad.wmnet to LVS pool via confctl
 * 08:59 elukey: removed aqs1001.eqiad.wmnet from LVS pool via confctl for nodejs upgrade

2016-04-06

 * 14:04 elukey: ran nodetool repair system_auth on aqs1002.eqiad/aqs1003.eqiad
 * 13:59 elukey: ran nodetool repair system_auth on aqs1001.eqiad
 * 11:45 elukey: started nodetool repair on aqs1002 after running "ALTER KEYSPACE system_auth WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': 3 };"
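
This is the standard sequence for raising system_auth replication: alter the keyspace once, then run an anti-entropy repair of that keyspace on every node so the new replicas receive the auth data:

    cqlsh -e "ALTER KEYSPACE system_auth WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': 3 };"
    nodetool repair system_auth    # repeat on each remaining node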

2016-04-04

 * 15:45 elukey: aqs1001 re-added to the aqs pool (nodejs NOT upgraded)
 * 14:46 elukey: de-pooled aqs1001.eqiad from the confd pool for nodejs upgrade
 * 10:42 elukey: re-pooled aqs1001.eqiad (no node upgrade, need more info about restbase)
 * 09:53 elukey: de-pooled aqs1001.eqiad.wmnet as pre-step for nodejs upgrade

2016-04-01

 * 13:38 joal: Deploying aqs in aqs1003 from tin
 * 13:35 joal: Deploying aqs in aqs1002 from tin
 * 13:23 joal: Deploying aqs in aqs1001 from tin

2016-03-31

 * 20:01 ottomata: stopping eventlogging, uninstalling globally installed eventlogging python code, running puppet, restarting eventlogging from /srv/deployment/eventlogging/eventlogging
 * 19:45 ottomata: merging puppet change to run eventlogging code out of deploy repo

2016-03-30

 * 18:06 ottomata: repooling aqs1001
 * 18:00 ottomata: depooling aqs1001

2016-03-29

 * 13:27 joal: Update CirrusSearchRequestSet schema in hive

2016-03-24

 * 18:29 elukey: camus and puppet re-enabled on analytics1027
 * 18:27 ottomata: resuming suspended webrequest load and refine jobs
 * 17:57 elukey: enabled Hadoop Master Node automatic failover on analytics1001/1002 (this time without fireworks).
 * 17:09 ottomata: temporarily suspending oozie webrequest refine jobs
 * 16:18 ottomata: suspending webrequest load job temporarily
 * 16:15 elukey: disabled camus and puppet on analytics1027
 * 13:16 elukey: camus and puppet re-enabled on analytics1027
 * 09:56 elukey: Camus stopped on analytics1027 (puppet disabled too)
 * 09:52 elukey: puppet disabled on analytics1001/1002 as prep step to enable HDFS HA failover.

2016-01-21

 * 16:35 ottomata: stopped eventlogging mysql consumers for long downtime: https://phabricator.wikimedia.org/T120187
 * 16:20 ottomata: started eventlogging mysql consumers
 * 15:59 ottomata: stopping eventlogging mysql consumers for https://phabricator.wikimedia.org/T123546

2016-01-20

 * 18:30 mforns: deployed EL in production with removal of queue
 * 17:37 mforns: restarted EventLogging because of Kafka consumption lag

2016-01-19

 * 20:08 mforns: deployed eventlogging to deployment-eventlogging03 with removal of mysql consumer batch

2016-01-18

 * 14:49 ottomata: restarting eventlogging to un-blacklist MobileWebSectionUsage
 * 01:07 ottomata: restarted eventlogging again. A single raw client side processor consumer seemed stuck (according to burrow).  seeing offset commit errors in logs.

2016-01-17

 * 08:26 ottomata: restarting eventlogging to see if it'll help burrow reported kafka consumer lag

2016-01-14

 * 22:29 YuviPanda: wikimetrics
 * 19:55 ottomata: restarted eventlogging_sync script to insert batches of 1000

2016-01-13

 * 20:01 ottomata: dropped MobileWebSectionUsage_14321266 and MobileWebSectionUsage_15038458 from analytics-store eventlogging slave db
 * 19:24 ottomata: restarting eventlogging to apply blacklist of MobileWebSectionUsage schemas

2015-12-30

 * 15:23 ottomata: killing oozie legacy_tsv job 0102159-150605005438095-oozie-oozi-B to restart it without mobile, 5xx-mobile and zero outputs

2015-11-10

 * 03:14 ottomata: restarted eventlogging

2015-11-09

 * 14:40 ottomata: restarting eventlogging to see if it is ok after enabling firewall rules on kafka1014

2015-11-06

 * 15:51 joal: Change replication factor to 2 in cassandra per_article_flat keyspace
 * 15:47 ottomata: deploying aqs

2015-11-05

 * 18:24 ottomata: deploying aqs

2015-10-29

 * 10:35 joal: Gzipped already archived pageview files
 * 10:34 joal: restarted pageview job to archive gzipped files
 * 10:34 joal: refinery deployed

2015-10-28

 * 19:16 joal: Downsizing cassandra replication from 3 to 2 on per_article_flat keyspace
 * 19:07 joal: Restart load job (based on IMPORTED flag)
 * 15:48 joal: Deploying refinery
 * 15:40 joal: deploying refinery-source v0.0.22

2015-10-27

 * 19:06 ottomata: deploying aqs
 * 18:24 joal: deploying refinery
 * 16:46 joal: Releasing refinery-source v0.0.21
 * 10:34 joal: manual aggregator launch after small bug correction

2015-10-26

 * 18:42 joal: refine bundle, pageview_hourly and projectview_hourly coord restarted
 * 18:41 joal: refinery deployed on HDFS
 * 14:33 joal: truncating "local_group_default_T_pageviews_per_article".data on aqs
 * 09:58 joal: Restart cassandra on aqs1001

2015-10-22

 * 20:24 ottomata: deploying aqs
 * 09:51 joal: restart cassandra on aqs1003

2015-10-21

 * 22:53 milimetric: deployed EventLogging and tried to backfill data lost on 2015.10.14 but failed
 * 18:24 joal: Stopped per article loading in cassandra
 * 13:39 ottomata: deploying aqs

2015-10-20

 * 10:12 joal: restart cassandra on aqs1002

2015-10-19

 * 18:35 ottomata: restarting eventlogging with change to parse schema names out of errored events

2015-10-16

 * 20:38 joal: restarted cassandra on aqs100[1,2,3]

2015-10-15

 * 12:17 joal: Refinery deploy needed before restart --> Deploying
 * 12:12 joal: Restarting daily and monthly mobile unique coordinators with new patch
 * 12:12 joal: Rerunning daily mobile unique jobs for days 2015-08-[03,04,11,12,12,14,17], 2015-09-16
 * 12:10 joal: Stopped daily and monthly mobile unique coordinators

2015-10-14

 * 15:22 ottomata: restarting lagging eventlogging mysql consumer

2015-10-09

 * 19:26 ottomata: releasing refinery 0.20
 * 15:19 ottomata: moved camus property files out of refinery repository and into puppet. Camus properties now live on an27 at /etc/camus.d, and camus log files are in /var/log/camus
 * 14:54 joal: Cassandra restarted on aqs1003
 * 09:15 joal: Restart cassandra on aqs1002

2015-10-08

 * 17:38 joal: Backfilling load from hadoop to cassandra from beginning of october

2015-10-07

 * 16:32 joal: Started cassandra load jobs from 2015-10-01

2015-10-01

 * 16:27 valhallasw`cloud: testing again
 * 16:13 valhallasw`cloud: test

2015-09-29

 * 10:51 joal: cluster back to normal state. Some errors are still unexplained; need to be careful.

2015-09-28

 * 14:56 joal: backfilling various load jobs having failed at earlier stages than check_sequence_statistics
 * 13:03 joal: Errors on cluster, some refine jobs have failed, investigating.

2015-08-19

 * 18:20 ottomata: does this log work?

2015-03-25

 * 22:09 qchris: starting HDFS balance for unhealthy node analytics1016.eqiad.wmnet with healthy nodes analytics1037.eqiad.wmnet,analytics1040.eqiad.wmnet

2015-02-25

 * 16:07 ottomata: hello?

2015-02-07

 * 02:10 qchris: Ran kafka leader re-election as analytics1021 dropped out of its partition leader role.
 * 01:32 qchris: Name nodes died with "Java heap space" errors and did not come back up. Bumping the heap allowed us to resurrect them (See ).

2015-02-04

 * 23:22 qchris: Manual failover of Hadoop namenode from analytics1001 to analytics1002, as analytics1001 had Heap space errors
 * 07:49 qchris: Manual failover of Hadoop namenode from analytics1002 to analytics1001, as analytics1002 had Heap space errors

2015-01-30

 * 20:21 ottomata: deployed refinery 0.0.4
 * 19:37 ottomata: released refinery 0.0.4

2015-01-25

 * 21:53 qchris: Marked raw text webrequest partition for 2015-01-24T00/1H ok (See )

2015-01-23

 * 22:46 qchris: Marked raw upload webrequest partition for 2015-01-16T12/1H ok (The partition only needed deduping)
 * 22:23 qchris: Marked raw upload webrequest partition for 2015-01-16T01/1H ok (The partition only needed deduping)
 * 22:11 qchris: Marked raw upload webrequest partition for 2015-01-15T17/1H ok (The partition only needed deduping)
 * 22:04 qchris: Marked raw text webrequest partition for 2015-01-15T15/1H ok (The partition only needed deduping)
 * 22:01 qchris: Marked raw mobile webrequest partition for 2015-01-16T01/1H ok (The partition only needed deduping)

2015-01-15

 * 08:25 qchris: Ran kafka leader re-election to bring analytics1021 back into the set of leaders

2015-01-10

 * 16:55 qchris: Dropped wmf.webstats tables, as announced on https://lists.wikimedia.org/pipermail/analytics/2015-January/003019.html
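
Illustrative only, since this entry doesn't record the exact table list:

    hive -e 'DROP TABLE IF EXISTS wmf.webstats;'    # repeated for each webstats table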

2015-01-06

 * 12:15 qchris: Marked raw mobile+text webrequest partitions for 2015-01-05T17/1H ok (See )

2015-01-04

 * 12:06 qchris: Marked raw mobile and upload webrequest partition for 2015-01-03T10/1H ok (See )

2015-01-02

 * 21:21 qchris: Ran kafka leader re-election to bring analytics1021 back into the set of leaders
 * 21:07 qchris: Marked raw bits, text, and upload webrequest partition for 2014-12-11T14/1H ok (See )
 * 19:05 qchris: Marked raw text+upload webrequest partitions for 2014-12-26T06/1H ok (See )
 * 15:51 qchris: Marked raw text webrequest partition for 2014-12-11T20/1H ok (See )
 * 12:39 qchris: Marked raw mobile webrequest partition for 2014-12-29T17/1H ok (See )
 * 11:21 qchris: Marked raw text webrequest partition for 2014-12-30T20/1H ok (See )

2015-01-01

 * 20:26 qchris: Marked raw webrequest partitions for 2014-12-10T14/2H ok (See )