Thread:Project:Support desk/Dumping SVN content

Note: I also posted this to [w:en:Wikipedia:Village pump (technical)] before realizing this was probably the more appropriate forum:

I'm doing some research into collaborative software repositories. I have already processed the entire log of the MediaWiki SVN (~91,000 revisions), which indicates there are about 500,000 unique file versions. I would like to obtain all of these versions for analysis, and I have the script needed to do so. However, I doubt the network bandwidth required to run 'svn cat -r [#] [url] > [file]' half a million times is (a) something I want to wait on, or (b) something the developers want taxing their server. I was curious whether someone might be able to run it on the server directly for me and provide a pointer to the very large file that results? Thanks, West.andrew.g 13:35, 10 July 2011 (UTC)
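For readers following along, the per-version dump loop described above might be sketched as below. This is a hypothetical illustration, not the poster's actual script: the repository URL, output naming, and the (revision, path) list are all placeholders. Note that `svn cat` (not `svn co`, which checks out a whole working copy) is the subcommand that prints a single file's content at a given revision to stdout.

```python
import subprocess

REPO_URL = "https://svn.example.org/mediawiki"  # placeholder, not the real repo URL


def dump_command(rev, path):
    """Build the 'svn cat' invocation for one file version.

    The peg revision ('@rev') pins the path lookup to that revision,
    which matters when files were later moved or deleted.
    """
    return ["svn", "cat", "-r", str(rev), f"{REPO_URL}/{path}@{rev}"]


def dump_version(rev, path, out_path):
    """Write one file version to out_path, as 'svn cat ... > file' would."""
    with open(out_path, "wb") as out:
        subprocess.run(dump_command(rev, path), stdout=out, check=True)


# Usage sketch: iterate over the ~500,000 (revision, path) pairs mined
# from the log and dump each one, e.g.:
#   for rev, path in versions:
#       dump_version(rev, path, f"out/r{rev}_{path.replace('/', '_')}")
```

Run server-side against a local repository (a `file://` URL), this avoids the network round-trips entirely, which is exactly the saving the post is asking about.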