Talk:Requests for comment/Simplify thumbnail cache

Tracking Bug?
Does anyone have a good candidate for a tracking bug on this issue? My bugzilla fu is weak enough that I couldn't really find an obvious one.

--BDavis (WMF) (talk) 21:09, 7 October 2013 (UTC)

TTL Based Strategies
Would this force the URLs for the thumbnails to include the time at which the source was uploaded or the version of the source? Do they do that already? NEverett (WMF) (talk) 12:40, 8 October 2013 (UTC)


 * Thumbnail URLs do not currently contain any version information. At this time there is no provision in this plan to change the URL structure. --BDavis (WMF) (talk) 23:06, 9 October 2013 (UTC)

Similar to the sliding window, could we bump the TTL on a percentage of Varnish hits? That'd mostly keep the TTL on popular items high. We could also combine the two ideas in either order. The random check should be quick; I'm less sure about the cost of the TTL-bump process or the sliding-window check process. NEverett (WMF) (talk) 12:40, 8 October 2013 (UTC)


 * This is a great question. I don't know if it would be possible or reasonable to have Varnish talk to the backend to announce that a resource had been served from cache, but it should be possible to add something that watched the Varnish log stream and queued "touch" jobs as a result. --BDavis (WMF) (talk) 23:06, 9 October 2013 (UTC)
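The log-watcher idea above could look roughly like the following sketch. This is purely illustrative, not part of any real Varnish or MediaWiki API: the names (`on_cache_hit`, `touch_queue`, `BUMP_PROB`) are invented, and a real worker would consume the queue and reset each thumbnail's TTL in the backing store.

```python
import random
from collections import deque

# Hypothetical sketch: a process tails the Varnish log stream and, for a
# small fraction of cache hits, queues a "touch" job that would later reset
# the thumbnail's TTL. Popular items are hit often, so they get bumped often.
BUMP_PROB = 0.01  # bump the TTL on roughly 1% of hits

touch_queue = deque()

def on_cache_hit(url, rng=random.random):
    """Called for each 'hit' line seen in the Varnish log stream."""
    if rng() < BUMP_PROB:
        touch_queue.append(url)  # a worker would reset this URL's TTL

# Example: a frequently served thumbnail will be queued sooner or later.
for _ in range(1000):
    on_cache_hit("/thumb/a/ab/Example.jpg/120px-Example.jpg")
```

Because the check is a single random draw per hit, the watcher stays cheap regardless of how the TTL bump itself is implemented.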

Cold varnish layer
Four out of five of the drawbacks you list for the "CDN only" solution could be addressed by having an extra layer of varnish servers as parents to the current caches, taking the place of Swift's thumbnail store. It would be like Swift's thumbnail store in terms of hardware, but it would have LRU eviction and wouldn't rely on special support in MediaWiki for purging -- it would share the HTCP stream with the frontend cache.

Reducing cache size increases image scaler CPU, increases latency due to reduced hit rate, and increases the rate at which originals are read, which I am told is likely to require increased read capacity in Swift. So I can understand that it is not necessarily a good idea to reduce cache size. My question is whether an ad-hoc combination of Swift and MediaWiki is really the best software to use for HTTP caching, and whether some purpose-built HTTP cache would be better at the job.

IIRC, Mark or Faidon or both had some objection to this idea -- it would be nice if they could remind me what it was. -- Tim Starling (talk) 05:05, 9 October 2013 (UTC)
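The eviction semantics of the proposed parent layer can be modeled with a toy LRU cache. This is not production code and the class name is invented; it only illustrates the behavior described above: fixed capacity, hits refresh recency, the least-recently-used thumbnail is evicted when full, and purges (standing in for the shared HTCP stream) remove entries explicitly.

```python
from collections import OrderedDict

class LRUThumbCache:
    """Toy model of an LRU-evicting cache layer for thumbnails."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, url):
        if url not in self._store:
            return None  # miss: a real layer would fetch from the scalers
        self._store.move_to_end(url)  # a hit refreshes recency
        return self._store[url]

    def put(self, url, body):
        self._store[url] = body
        self._store.move_to_end(url)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

    def purge(self, url):
        # Analogous to handling an HTCP purge shared with the frontend.
        self._store.pop(url, None)
```

The key property is that no MediaWiki-side listing of thumbnail URLs is needed: cold entries simply age out under capacity pressure, and purges handle invalidation.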


 * I added an interpretation of this as an alternate strategy on the RFC. It seems to me that using Varnish backed by spinning disks would be a simpler implementation path than the TTL+LRU in Swift options. I don't think it addresses the failure tolerance or vcl_hash issues. I'm of the opinion that the unknowns of (ab)using vcl_hash will be unavoidable in any implementation that gets rid of a listing of all possible thumb URLs. Fault tolerance may be a larger concern. --BDavis (WMF) (talk) 16:47, 17 October 2013 (UTC)

Updates from 2013-12-04 RFC review
I posted a rather large change from the previous version based on feedback received during the RFC review 2013-12-04.

The variation previously known as "option 5" is now the primary recommendation of the proposal with an optional variation that would increase the current "backend" Varnish storage capacity rather than adding a new Varnish cluster.