User:GWicke/Notes/Storage/Cassandra testing

Hosts:
 * cerium 10.64.16.147
 * praseodymium 10.64.16.149
 * xenon 10.64.0.200

Cassandra docs (we are testing 2.0.1):
 * Cassandra 2.0
 * CQL 3.1

Cassandra node setup
apt-get install cassandra openjdk-7-jdk libjna-java libjemalloc1

On older Ubuntu versions, and until this is fixed, upgrade jna manually:

cd /tmp
https_proxy=brewster.wikimedia.org:8080 wget https://raw.github.com/twall/jna/master/dist/jna.jar
cp jna.jar /usr/share/java/jna.jar
ln -s /usr/share/java/jna.jar /usr/share/cassandra/lib/

'jna.jar' should be listed in the jvm parameters when you start cassandra.
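A quick way to verify this once the service is up (just a sketch; any grep over the Java command line does the job):

ps -ef | grep '[C]assandraDaemon' | grep -o jna.jar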

On Debian/Ubuntu, open /etc/cassandra/cassandra-env.sh and uncomment/edit this line (localhost is key here):

JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=localhost"

Set up /etc/cassandra/cassandra.yaml according to the docs. Main things to change (example snippet after this list):
 * listen_address : set to external IP of this node
 * seed_provider / seeds : set to list of other cluster node IPs: "10.64.16.147,10.64.16.149,10.64.0.200"
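For example, on xenon (10.64.0.200) the relevant part of cassandra.yaml would look roughly like this; the seed_provider block keeps the stock SimpleSeedProvider, only listen_address and the seeds string change:

listen_address: 10.64.0.200
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.64.16.147,10.64.16.149,10.64.0.200"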

(Re)start cassandra. Right after install it does not seem to be running by default, so a simple start should be enough. If it is already running, the restart might involve using kill, as the init scripts use the same RMI connection to control cassandra (a sketch follows the example output below). After this fix, the command

nodetool status

should return information and show your node (and the other nodes) as being up. Example output:

root@xenon:~# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns   Host ID                               Rack
UN  10.64.16.149   91.4 KB    256     33.4%  c72025f6-8ad8-4ab6-b989-1ce2f4b8f665  rack1
UN  10.64.0.200    30.94 KB   256     32.8%  48821b0f-f378-41a7-90b1-b5cfb358addb  rack1
UN  10.64.16.147   58.75 KB   256     33.8%  a9b2ac1c-c09b-4f46-95f9-4cb639bb9eca  rack1
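If the init script cannot stop an already running instance (the RMI issue mentioned above), killing the JVM by hand works; a rough sketch:

pgrep -f CassandraDaemon    # find the Cassandra JVM's pid
kill <pid>                  # plain SIGTERM first; only escalate if it hangs
service cassandra start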

Rashomon setup
We need node 0.10. We are running an old Ubuntu version, so we need to do some extra work to get this:

apt-get install python-software-properties python g++ make
add-apt-repository ppa:chris-lea/node.js
apt-get update
apt-get install nodejs # this ubuntu package also includes npm and nodejs-dev

On Debian unstable we'd just install the distribution's nodejs package and get the latest node including security fixes, rather than the old Ubuntu PPA package.
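To sanity-check what the PPA installed:

node --version    # should report a 0.10.x release
npm --version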

Now onwards to the actual rashomon setup:

npm config set https-proxy http://brewster.wikimedia.org:8080
npm config set proxy http://brewster.wikimedia.org:8080
cd /var/lib
https_proxy=brewster.wikimedia.org:8080 git clone https://github.com/gwicke/rashomon.git
cd rashomon
npm install
cp contrib/upstart/rashomon.conf /etc/init/rashomon.conf
adduser --system --no-create-home rashomon
service rashomon start
 * 1) temporary proxy setup for testing
 * 2) will package node_modules later

Create the revision tables (on one node only):

cqlsh < cassandra-revisions.cql

Note re nodejs version: The PPA listed above is not quite up to date with security fixes etc. Maybe we should try to build the Debian unstable source package on Ubuntu Precise and use that if successful.

Cassandra issues

 * With the default settings and without working jna (see install instructions above), cassandra would run out of heap space during a large compaction. The resulting state was inconsistent enough that it would not restart cleanly.
 * Increased the heap from a quarter of the RAM (4G in this case) to 7G and installed an up-to-date jna (see the cassandra-env.sh sketch after this list)
 * This might actually be related to missing jna and subprocesses. Should check using the default heap size with JNA enabled.
 * Stopping and restarting the cassandra service via the init script did not work. Faidon tracked this down to a missing '$' in the init script.
 * Compaction was fairly slow for a write benchmark, so the compaction throughput limit in cassandra.yaml was raised. Compaction is also niced and single-threaded, so during high load it will use less disk bandwidth than this upper limit.
 * Not relevant for our current use case, but good to double-check if we wanted to start using CAS: bugs in 2.0.0 Paxos implementation. The relevant bugs seem to be fixed in 2.0.1 which we are using.
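The heap bump above goes in /etc/cassandra/cassandra-env.sh, roughly as follows (a sketch; the new-generation size is an assumption, not copied from the test hosts):

MAX_HEAP_SIZE="7G"
HEAP_NEWSIZE="800M"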

Tests

 * Import several enwiki dumps with history in parallel
 * Read back random revisions from random wikis

du -sh .
11G     .
ls
enwiki-20131001-pages-meta-history25.xml-p026204561p026624999.7z
enwiki-20131001-pages-meta-history26.xml-p026625002p027446124.7z
enwiki-20131001-pages-meta-history26.xml-p027446125p028014757.7z
enwiki-20131001-pages-meta-history26.xml-p028014758p028973952.7z
enwiki-20131001-pages-meta-history26.xml-p028973953p029625000.7z
enwiki-20131001-pages-meta-history27.xml-p029625001p030587586.7z
enwiki-20131001-pages-meta-history27.xml-p030587587p031240058.7z
enwiki-20131001-pages-meta-history27.xml-p031240059p031839850.7z
enwiki-20131001-pages-meta-history27.xml-p031839851p032101301.7z
enwiki-20131001-pages-meta-history27.xml-p032101302p033177808.7z
enwiki-20131001-pages-meta-history27.xml-p033177810p034316341.7z
enwiki-20131001-pages-meta-history27.xml-p034316342p035749414.7z
enwiki-20131001-pages-meta-history27.xml-p035749415p037161963.7z
enwiki-20131001-pages-meta-history27.xml-p037161964p038849072.7z

Dump import, 600 writers
Six writer processes, each working on one of these dumps with 100 concurrent requests (600 concurrent requests max). Write consistency level quorum (two out of three nodes need to ack).

6537159 revisions in 42130s (155/s); total size 85081864773
6375223 revisions in 42040s (151/s); total size 84317436542
6679729 revisions in 39042s (171/s); total size 87759806169
5666555 revisions in 32704s (173/s); total size 79429599007
5407901 revisions in 32832s (164/s); total size 72518858048
6375236 revisions in 37758s (168/s); total size 84318152281

==================================================
37041803 revisions total, 493425716820 total bytes (459.5G)
879/s, 11.1MB/s
du -sS on revisions table, before compaction: 162 / 153 / 120 G (31.5% of raw text avg)
 * clients, rashomon and cassandra on the same machine
 * clients and cassandra CPU-bound, rashomon using little CPU
 * basically no IO wait time despite data on spinning disks. Compaction too throttled for heavy writes, but probably just right for actual production loads (<100 revisions/s).
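(For reference, the aggregate rate is the combined total over the longest writer's wall-clock time: 37041803 revisions / 42130 s ≈ 879/s; the byte total over the same window gives the ~11 MB/s figure.)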

Configurations

 * Commit log on ssd, data files on rotating metal (see the cassandra.yaml sketch below)
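In cassandra.yaml terms this corresponds to the two directory settings below (the paths shown are the stock Debian defaults; the SSD/disk split itself is done at the filesystem level, which is an assumption here):

commitlog_directory: /var/lib/cassandra/commitlog    # on the SSD
data_file_directories:
    - /var/lib/cassandra/data                         # on the rotating disks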