Reading/Multimedia/Scrum notes/gi11es

Last update on: 2014-05-13

2014-05-13
 
 * Yesterday
 * Reviewed everything in the queue
 * Added my own cron job running at 7am UTC to generate the data for the graphs, as Mark's still seems to be failing (#596)
 * Looked at the API graphs again and wrote a final analysis; I couldn't find anything alarming in them, and the impact of our last launches doesn't seem measurable. (#523)
 * The Commons village pump survey discussion brought up an interesting issue about visual artifacts currently introduced by our ImageMagick settings, which I did some research on.
 * Today
 * 1-on-1 with Rob

2014-05-12
 
 * Yesterday
 * Fixed the graph update issue, but the terrible performance on the actions data still needs to be resolved (#596)
 * Got Ops to add badly needed indexes to the actions tables (#596)
 * Interpreted initial results of the image scaler survey. Not many respondents yet, but so far people seem to prefer the chained images over the current ones, even though they should be of slightly lower quality... I suspect this is due to the extra sharpening. (#584)
 * Responded to comments on Commons village pump about the survey and addressed concerns (#584)
 * Reviewed everything in the queue except one large GWToolset change by Dan, which I skipped due to lack of time
 * Today
 * Review Dan's GWToolset change (better duplicates handling)
 * Look into sample images provided by JHeald in the commons discussion
 * UW metrics, if Gergo hasn't picked it up yet
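
For context on the chained-thumbnail technique discussed in the survey (#584): instead of scaling every thumbnail directly from the full-size original, the scaler derives it from a chain of progressively smaller intermediates. A rough sketch of how such a chain of sizes could be computed; the halving factor and the stop condition are my assumptions for illustration, not the actual scaler logic:

```python
def chain_sizes(original_width, target_width, factor=2):
    """Hypothetical chain of intermediate widths: halve repeatedly
    until the next step would drop below the target width, then
    finish with the exact target."""
    sizes = []
    width = original_width
    while width // factor > target_width:
        width //= factor
        sizes.append(width)
    sizes.append(target_width)
    return sizes

# e.g. a 4096px original thumbnailed to 300px:
# chain_sizes(4096, 300) → [2048, 1024, 512, 300]
```

Each resize step then works from the previous intermediate, which is cheaper than decoding the original every time; the repeated resampling is also where the extra sharpening effect likely comes from.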

2014-05-09
 
 * Yesterday
 * Reviewed everything in the queue
 * Put together a new survey for image scaling alternative techniques, sent it to wikitech-l and Commons village pump (#584)
 * Responded to Gergo's comments on DurationLogger improvements (#571)
 * Today
 * Pick up more UploadWizard/Tech debt tickets from the current cycle if Gergo has started work on UW metrics, otherwise work on UW metrics

2014-05-08
 
 * Yesterday
 * Weekly team meeting
 * Reviewed everything in the queue, including out-of-focus changes. Notably there was a series of massive changesets for the upload pipeline written by Aaron Schulz that will take a while to get to the point of being ready to merge. At this point I mostly asked for unit tests, as none of the new code introduced has any.
 * Started discussions on OOJS and OOUI on the mailing list and bugzilla, to prepare for our upcoming decision on whether to use them in UploadWizard
 * Went through the recent UploadWizard bug reports on bugzilla, since Wiki Loves Earth introduced a spike in usage and had a few bugs reported
 * Today
 * Make a new survey with one intermediary thumbnail instead of a chain (#584) and send it to wikitech-l

2014-05-07
 
 * Yesterday
 * Addressed further concerns on DurationLogger improvements (#571)
 * Reviewed everything in the queue
 * Implemented a fix for the Safari defullscreen bug (#569)
 * Investigated the effect of pilot launches on image scalers (#523)
 * Implemented the unified image graph (#529)
 * Today
 * Weekly team meeting

2014-05-06
 
 * Yesterday
 * Addressed concerns on threshold of what is considered a browser cache hit for mmv.performance (#563)
 * Addressed concerns on DurationLogger improvements (#571)
 * Someone broke mediawiki.ui in core, which we rely on. I wrote a fix. (#572)
 * Reviewed everything in the queue
 * Today
 * 1-on-1 with Rob
 * Fix Safari bug (#569)
 * Investigate the effect of pilot launches on image scalers (#523)
 * Implement unified image graph (#529)
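
The browser cache hit heuristic mentioned above (#563) boils down to classifying a request as a cache hit when its measured duration falls under a threshold. A minimal sketch of the idea, with a made-up threshold value; the real value lives in mmv.performance:

```python
# Hypothetical threshold, for illustration only; the actual
# value used by mmv.performance differs.
CACHE_HIT_THRESHOLD_MS = 50

def is_browser_cache_hit(duration_ms):
    """Treat very fast responses as having been served from the
    browser cache rather than fetched over the network."""
    return duration_ms < CACHE_HIT_THRESHOLD_MS

def cache_hit_ratio(durations_ms):
    """Fraction of sampled requests classified as cache hits."""
    hits = sum(1 for d in durations_ms if is_browser_cache_hit(d))
    return hits / len(durations_ms)
```

Raising the threshold reclassifies borderline-fast network fetches as cache hits, which is why tuning it changes the reported ratios without any change in user behavior.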

2014-05-05
 
 * Yesterday
 * Implemented a fix for an issue that might have been preventing OS X users from having API calls cached by their browser (#566)
 * Implemented a fix for the issue where the "versus" graph wouldn't pick up recent data (#564)
 * Implemented a fix for the daily stats cutoff not happening exactly at midnight, which made the graphs look weird (#565)
 * Increased the threshold of what is considered a browser cache hit for mmv.performance (#563)
 * Improved some sinon.js code style according to Gergo's advice
 * Studied the results of the mipmapping survey; it looks like the most "extreme" solution is a no-go because the visual quality loss is too noticeable
 * Reviewed everything in the queue
 * Tried following Dan's repro steps on a GWToolset bug, got pretty far but ran into an error I couldn't troubleshoot. Responded to the bug report.
 * Today
 * Fix Safari bug (#569)
 * Investigate the effect of pilot launches on image scalers (#523)
 * Implement unified image graph (#529)
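
The midnight cutoff bug (#565) is the kind of thing that happens when daily buckets are derived from local time or raw timestamps; bucketing by UTC calendar day avoids it. A sketch of the idea, not the actual graph code:

```python
from collections import defaultdict
from datetime import datetime, timezone

def bucket_by_utc_day(timestamps):
    """Group epoch timestamps into buckets that cut over exactly at
    00:00 UTC, so one day's stats never leak into a neighboring day."""
    buckets = defaultdict(list)
    for ts in timestamps:
        day = datetime.fromtimestamp(ts, tz=timezone.utc).date().isoformat()
        buckets[day].append(ts)
    return dict(buckets)
```

Two samples one second apart across the UTC midnight boundary land in different buckets, which is exactly the behavior the graphs expect.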

2014-05-02
 
 * Yesterday
 * Ran more thumbnail chaining tests, results shared on the mailing list
 * Reviewed everything in the queue, including some old UploadWizard changesets rebased by Mark
 * Looked for any spikes in the API requests (#523); nothing looked worrying, so I asked Ops for confirmation
 * Wrote the weekly team update
 * Implemented survey https://surveymonkey.com/s/FY89BTX to further the discussion on image scaling optimizations
 * Today
 * Investigate the effect of pilot launches on image scalers (#523)
 * Implement new global image graph (#529)

2014-05-01
<section begin="2014-05-01" />
 * Yesterday
 * Weekly team meeting
 * Reviewed everything in the queue
 * Spent a bunch of time researching the bucketing issue/thumbnail intermediates and writing detailed responses on the mailing list
 * Set up limn production deployment thanks to Dan
 * Implemented the change to add the 95th percentile (#555)
 * Today
 * Review Dan's GWToolset changesets
 * Write the weekly status report
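
The 95th percentile change (#555) in essence: for a set of timing samples, report the value that 95% of samples fall at or below, which is far more robust to outliers than the mean. A nearest-rank sketch, assuming that method; the graph code may well use a different interpolation:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at
    least p% of all samples are less than or equal to it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]
```

For 100 evenly spread timing samples, the 95th percentile is simply the 95th-smallest value, so a handful of pathological outliers at the top can't distort the reported number.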

<section end="2014-05-01" />

Archived
2014-04

2014-03

2014-02