Fundraising tech/Message queues

This page gives an overview of the message queues used to decouple fundraising subsystems. For a description of the message format, see "Normalized donation messages". See also the article on WMF-specific configuration.

Message Queue
Queues are used to decouple the payments frontend from the CiviCRM server. This is important for several reasons: it allows us to continue accepting donations even if the backend servers are down, it keeps our private database more secure, and it enforces write-only communication from the payments cluster.

The main data flow is over the donations queue. Completed payment transactions are encoded as JSON and sent over the wire, to be consumed by the queue2civicrm Drupal module and recorded in the CiviCRM database.
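
For illustration only, a donation message might look something like the following; the field names here are hypothetical, and the authoritative format is described on the "Normalized donation messages" page:

 {
   "gateway": "globalcollect",
   "gateway_txn_id": "1234567890",
   "currency": "USD",
   "gross": 10.00,
   "first_name": "Jane",
   "last_name": "Doe",
   "email": "jane.doe@example.com",
   "date": 1400000000
 }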

Another important queue is the limbo queue, which is used both as a key-value store and as a FIFO queue. Before we pass control to any hosted page or iframe, we write the donor's personal information collected so far to the limbo queue, indexed by gateway and transaction ID. We store the information in this temporary fashion so that a) it does not leave the payments cluster, and b) we aren't storing any data about people who turn out not to be donors, as mandated by our privacy policies. If and when control returns to the payments server, the PHP session is used to rebuild the key and look up the corresponding limbo message. We delete the message and merge its contents into the completed donation message sent to the donations queue.
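
As a rough sketch of the limbo key-value pattern, assuming a plain Redis instance; the key format, field names, and 24-hour expiry are illustrative, not the actual payments code:

 import json
 import redis

 r = redis.StrictRedis(host='localhost', port=6379)

 def limbo_key(gateway, order_id):
     # Key is built from the gateway name and the transaction/order ID.
     return 'limbo:%s-%s' % (gateway, order_id)

 def store_limbo(gateway, order_id, donor_info):
     # Expire automatically so abandoned records don't live forever.
     r.setex(limbo_key(gateway, order_id), 24 * 3600, json.dumps(donor_info))

 def pop_limbo(gateway, order_id):
     # Fetch and delete, returning None if nothing was stored for this key.
     key = limbo_key(gateway, order_id)
     raw = r.get(key)
     if raw is None:
         return None
     r.delete(key)
     return json.loads(raw)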

However, if control is never returned, the limbo message sits unclaimed. After about 20 minutes it becomes eligible for "orphan slaying", which is currently only performed for GlobalCollect credit card transactions. We attempt to complete settlement on these orders; if successful, the completed message is sent to the donations queue, and if not, the personal information should be purged.
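
A minimal sketch of what an orphan-slaying pass could look like; attempt_settlement(), send_to_donations_queue(), and purge() are stand-ins for the real gateway and queue calls, and the message fields are assumed:

 import time

 ORPHAN_AGE = 20 * 60  # seconds before a limbo message counts as orphaned

 def attempt_settlement(msg):
     return False  # stand-in for the real GlobalCollect settlement attempt

 def send_to_donations_queue(msg):
     pass  # stand-in for producing the completed message to the donations queue

 def purge(msg):
     pass  # stand-in for discarding the stored personal information

 def slay_orphans(limbo_messages):
     now = time.time()
     for msg in limbo_messages:
         if now - msg['timestamp'] < ORPHAN_AGE:
             continue  # not old enough to be an orphan yet
         if msg['gateway'] != 'globalcollect':
             continue  # orphan slaying currently only covers GlobalCollect cards
         if attempt_settlement(msg):
             send_to_donations_queue(msg)
         else:
             purge(msg)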

At Wikimedia, we are currently using the ActiveMQ (http://activemq.apache.org/) message broker as the queue backend for everything but the limbo queue. Messages go over the wire using the aging STOMP protocol. The limbo queue, on the other hand, is stored in Redis on the payments cluster.
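
For example, producing a donation message over STOMP from Python might look roughly like this, using the third-party stomp.py client; the exact API varies by client version, and the host, port, credentials, and queue name are assumptions:

 import json
 import stomp

 conn = stomp.Connection([('localhost', 61613)])
 conn.connect('guest', 'guest', wait=True)

 donation = {'gateway': 'globalcollect', 'gross': 10.00, 'currency': 'USD'}

 # Send the JSON-encoded donation to the donations queue.
 conn.send(destination='/queue/donations', body=json.dumps(donation))
 conn.disconnect()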

Replace ActiveMQ
Motivation: ActiveMQ is a single point of failure: when it's unavailable we have to take campaigns down, disable the frontend, and stop all jobs. The communication protocol is flawed with no remedy in sight, and queue disk storage is prone to bloat and rot.

Implementation: We're most interested in having this layer buffer the low-latency frontend from our CRM and other sensitive backend pieces. Acting as a FIFO queue as well as a buffer is just a nice bonus. Therefore, a slimmer buffer abstraction like Kafka may be preferable.

I'm planning to have a single partition for each topic in this first iteration. More partitions only help with distributing load and with parallel consumers, which we don't have yet.
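
A sketch of what that could look like with the kafka-python client, creating each topic with a single partition; the broker address, topic names, and replication factor are placeholders:

 from kafka.admin import KafkaAdminClient, NewTopic

 admin = KafkaAdminClient(bootstrap_servers='localhost:9092')

 # One partition per topic: ordering is preserved and a single consumer per
 # topic is enough; partitions can be added later if we ever need parallel
 # consumers or load distribution.
 admin.create_topics([
     NewTopic(name='donations', num_partitions=1, replication_factor=1),
     NewTopic(name='pending', num_partitions=1, replication_factor=1),
 ])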

Buffer contribution tracking
Motivation: Contribution tracking is another single point of failure. It has to come down for occasional database maintenance, though there are no other stability concerns. The real risk is that we are bottlenecked on a single table, which the donation frontend has to access in real time.

Contribution tracking has not been thought of as a queue, but that's what we'll be fixing: a proper event log will be easier to extend.

Implementation: Rewrite direct writes as Kafka production. A consumer will keep the contribution tracking table up to date.
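
As a rough sketch of the consumer half, assuming kafka-python, a hypothetical topic name, and a hypothetical insert_tracking_row() helper standing in for the actual database write:

 import json
 from kafka import KafkaConsumer

 def insert_tracking_row(event):
     pass  # stand-in for the INSERT/UPDATE against the contribution tracking table

 consumer = KafkaConsumer('contribution-tracking',
                          bootstrap_servers='localhost:9092',
                          group_id='contribution-tracking-consumer',
                          value_deserializer=lambda v: json.loads(v.decode('utf-8')))

 # The frontend produces tracking events to Kafka instead of writing to the
 # database directly; this consumer replays them into the table.
 for message in consumer:
     insert_tracking_row(message.value)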

Consolidate pending message handling
Motivation: There are eight variations on this topic, spanning all four storage backends, and some implementations are buggy.

Implementation: Use a smaller number of topics, and a single abstraction.

Rewrite banner impressions loader
Motivation: The legacy impressions loader is fragile and bloated. We require a one-of-a-kind kafkatee shim to simulate udp2log.

Implementation: Consume the Kafka impression stream directly and aggregate into summary tables.

Save to the existing schema as a first step. We can extend with a second stream processor after designing the next schema.
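
A sketch of the kind of consumer this implies, assuming kafka-python, a hypothetical impression topic and message fields, and a flush_to_summary_table() helper standing in for the write into the existing schema:

 import json
 from collections import Counter
 from kafka import KafkaConsumer

 def flush_to_summary_table(counts):
     pass  # stand-in for upserting rows into the existing summary tables

 consumer = KafkaConsumer('banner-impressions',
                          bootstrap_servers='localhost:9092',
                          group_id='impressions-loader',
                          value_deserializer=lambda v: json.loads(v.decode('utf-8')))

 counts = Counter()
 for message in consumer:
     imp = message.value
     # Bucket impressions by campaign, banner, and minute.
     minute = imp['timestamp'] - (imp['timestamp'] % 60)
     counts[(imp['campaign'], imp['banner'], minute)] += 1
     if len(counts) >= 10000:
         flush_to_summary_table(counts)
         counts.clear()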