Platform Engineering Team/Event Platform Value Stream/Stream Processing Framework Evaluation

Stream Processing Framework Evaluations
The analysis for Flink and Kafka Streams has been supplemented by the evaluation conducted by the Search team. Knative Eventing evaluation details can be found in Phabricator.

See also T306797 and T322320.

General Remarks
It would be nice if we chose the same technology for Data Connectors, Stream Processors, Batch Processors, and event driven feature service development, but there is no requirement to do so. We are focusing on platform level choices, so we are likely to favor technology that allows us to implement a more comprehensive data platform rather than ease of use for event driven feature services.

Multi DC
There is no built in support for 'multi DC' (AKA multi region) in any of these frameworks, and as such, multi-DC operation is an application level concern. Generally, the more state the stream processor is responsible for, the more difficult multi-DC operation is to architect. For simple event transformation/enrichment processors, the only state we will need is the Kafka consumer offsets. Both Flink and Spark Streaming handle management of offsets themselves, and do not rely on Kafka's built in consumer offset API.

See also: Multi DC for Streaming Apps

Multi DC streaming would be much easier to accomplish by default if Apache Kafka had support for multi region clusters. There is support for this in the Confluent Enterprise edition. We will be experimenting with this idea as part of the Kafka stretch cluster.

Automatic multi DC failover might be possible with Kafka Streams (or any system that uses the Kafka offset API) in a Kafka stretch cluster. This would look like a single streaming app deployed across both DCs. However, because Flink and Spark Streaming manage Kafka offsets themselves, this is not possible for them. A Kafka stretch cluster may still help in an active/passive deployment, as the offsets stored in their state checkpoints will be valid in both DCs.
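The active/passive idea above can be sketched in plain Python. This is an illustrative model only (the `Checkpoint` and `Processor` names are hypothetical, not any framework's API): because a stretch cluster gives both DCs the same partitions and offsets, a standby processor can resume from the offsets the active one last checkpointed.

```python
# Illustrative sketch only: why framework-checkpointed offsets survive an
# active/passive DC failover on a single (stretch) Kafka cluster.
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    # Flink and Spark Streaming store Kafka offsets in their own
    # checkpoints rather than in Kafka's consumer offset API.
    offsets: dict = field(default_factory=dict)  # partition -> next offset

class Processor:
    def __init__(self, name, checkpoint):
        self.name = name
        self.checkpoint = checkpoint  # replicated checkpoint storage

    def consume(self, partition, log):
        """Process records from the last checkpointed offset onward."""
        start = self.checkpoint.offsets.get(partition, 0)
        processed = log[start:]
        self.checkpoint.offsets[partition] = len(log)
        return processed

# A stretch cluster means both DCs see the same partitions and offsets.
log = ["e1", "e2", "e3", "e4"]
shared_checkpoint = Checkpoint()

active = Processor("dc1", shared_checkpoint)
active.consume("p0", log)             # dc1 processes e1..e4, checkpoints offset 4

log += ["e5", "e6"]                   # more events arrive, then dc1 fails

standby = Processor("dc2", shared_checkpoint)
resumed = standby.consume("p0", log)  # dc2 resumes exactly where dc1 left off
print(resumed)                        # ['e5', 'e6'] -- no reprocessing, no loss
```

With two independent Kafka clusters (rather than a stretch cluster), the checkpointed offsets would be meaningless in the other DC, which is the crux of the problem described above.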

Flink
A general purpose stream and batch processing framework and scheduler, supporting any input and output data systems (not just Kafka).
 * The Java API is somewhat limited because of type erasure (doc). Because of this, Scala seems like a better choice.
 * The testing API enables both stateless and stateful testing, as well as testing of timely UDFs (user defined functions) (doc).
 * There is a script to launch a Scala Flink REPL, which seems useful.
 * There are a few different levels of API, ranging from SQL analytics down to low level stateful stream processing (1.10 Documentation: Dataflow Programming Model).
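The two ends of that API spectrum can be illustrated in miniature. This is a plain-Python sketch with no Flink dependency; the functions are analogies, not Flink API, and the event fields are invented for illustration:

```python
# Illustrative sketch only: stateless vs. stateful stream operations,
# roughly analogous to Flink's low-level DataStream operators.

def stateless_transform(events):
    # Analogous to DataStream.map(): no state, each event handled in isolation.
    return [{**e, "domain": e["uri"].split("/")[2]} for e in events]

def keyed_count(events, key):
    # Analogous to a keyed stateful operator: Flink would manage this
    # per-key state for us in a checkpointed state backend.
    counts = {}
    for e in events:
        counts[e[key]] = counts.get(e[key], 0) + 1
    return counts

events = [
    {"uri": "https://en.wikipedia.org/wiki/A", "action": "edit"},
    {"uri": "https://de.wikipedia.org/wiki/B", "action": "edit"},
    {"uri": "https://en.wikipedia.org/wiki/C", "action": "view"},
]

enriched = stateless_transform(events)
print(enriched[0]["domain"])          # en.wikipedia.org
print(keyed_count(events, "action"))  # {'edit': 2, 'view': 1}
```

The stateless case is all we need for simple enrichment processors; the stateful case is where Flink's checkpointing and state backends (and the multi DC concerns above) come into play.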

Spark Structured Streaming
Another general purpose stream and batch processing framework and scheduler. Deployment models are similar to Flink.

 * Very few downstream data store connectors; only Kafka and 'file'. Easy to write to HDFS and S3, but not e.g. Cassandra. Would need to implement custom sinks.
 * Possible that sinks are streaming specific; it may not be possible to use a custom streaming sink with a batch DataFrame.
 * Easier to learn and use than Flink.
 * Poorer streaming performance (latency) than Flink. Spark is much better for ad hoc and analytics workloads, when working with a big data lake (HDFS, S3, Hive, etc.).
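To give a sense of what implementing a custom sink would entail: Spark Structured Streaming's foreach writer contract is an open/process/close lifecycle per partition and epoch. Below is an illustrative plain-Python stand-in with an in-memory "store" (not a real connector, and not using pyspark), just to show the shape of the code we would have to write for something like Cassandra:

```python
# Illustrative sketch only: the open/process/close shape of a Spark
# Structured Streaming foreach writer, with an in-memory list as the "store".

class InMemorySinkWriter:
    """Stand-in for a custom per-partition writer; a real one would open a
    connection to the target store in open() and flush it in close()."""

    def __init__(self, store):
        self.store = store
        self.batch = []

    def open(self, partition_id, epoch_id):
        # Returning True accepts this partition/epoch for writing.
        self.batch = []
        return True

    def process(self, row):
        self.batch.append(row)

    def close(self, error):
        if error is None:
            self.store.extend(self.batch)  # "commit" only on success

store = []
writer = InMemorySinkWriter(store)
if writer.open(partition_id=0, epoch_id=0):
    for row in [{"id": 1}, {"id": 2}]:
        writer.process(row)
    writer.close(None)
print(len(store))  # 2
```

Error handling, retries, and exactly-once semantics are all on the implementer, which is part of why the missing connectors are a real cost rather than a minor gap.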

Kafka Streams
A library for developing stream applications using Kafka.
 * It focuses more heavily on a SQL-like approach (KSQL) when it comes to data manipulation.
 * It looks cool for simple operations on Kafka topics, but the philosophy is to augment existing applications (the Kafka Streams API is a library) with a dash of data processing, rather than to create standalone processing applications. They say as much in the first introductory video (1. Intro to Streams | Apache Kafka® Streams API).
 * It’s difficult to find code examples in their documentation - Apache Flink’s is much better in that regard.

Knative Eventing
'Event routing' k8s primitives to trigger service API calls based on events.
 * NOT a comprehensive stream processing framework.
 * More focused on abstraction of event streams, so application developers only have to develop HTTP request based services.
 * Has Kafka integration, but fully abstracts it away within the Eventing system. The Eventing KafkaBroker uses a single topic, and filters all events to forward requests to subscribing services.
 * Uses CloudEvents as the event data envelope.
 * Looks very nice if you are building a self contained event driven application. Not so great for any kind of CEP.
 * Requires newer versions of Kubernetes than we currently have at WMF (as of 2022-05).
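The CloudEvents envelope and the broker's filter-and-forward behavior can be sketched briefly. The event below uses the CloudEvents spec's required context attributes (`specversion`, `id`, `source`, `type`); the event values and the `matches` function are hypothetical illustrations, not Knative code:

```python
# Illustrative sketch only: a minimal CloudEvents-style envelope and the kind
# of exact-match attribute filtering a Knative Trigger performs before
# forwarding an event to a subscriber.
import json

event = {
    # Required CloudEvents context attributes
    "specversion": "1.0",
    "id": "abc-123",
    "source": "/mediawiki/revision-create",
    "type": "mediawiki.revision-create",
    # Optional attributes and payload
    "datacontenttype": "application/json",
    "data": {"page_title": "Example", "rev_id": 42},
}

def matches(event, attribute_filter):
    """Exact-match filtering on context attributes."""
    return all(event.get(k) == v for k, v in attribute_filter.items())

trigger_filter = {"type": "mediawiki.revision-create"}
if matches(event, trigger_filter):
    # A broker would POST the event to the subscriber's HTTP endpoint here.
    print(json.dumps(event["data"]))
```

Note how little this resembles a stream processor: there is no state, windowing, or joining, only routing, which is why Knative Eventing fits simple event driven services but not CEP.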

Use Case Considerations
These technologies are on a spectrum from more complex and featureful to simpler and less featureful, with Flink being the most complex and Knative Eventing the simplest.

Given the use cases we are considering, at times we will need complex stream processors (e.g. WDQS updater, diff calculation, database connectors), and at others, a simpler and language agnostic event driven application framework (event enrichment, change prop, job queue). We'd like to make a 'platform' that makes it easy for engineers to build stream based applications. Sometimes those applications will be about complex data state sourcing and transformation problems, and other times they will be for triggering actions based on events. Attempting to support those different types of uses with the same technology may not be the right decision.

We should keep this in mind, and try to place incoming use cases into one of two categories: simple event driven applications, and complex stream processing.

Decision Record
Kafka Streams is easier than Flink for developing event driven applications, but less flexible than Knative Eventing, and less powerful and featureful than Flink. We eliminate it from further consideration based on this.

Spark Streaming is a little easier to use than Flink if you assume you will always be operating around a data lake. However, its lack of built in sink connectors is a huge downside, and from a brief attempt, implementing a streaming enrichment took effort equivalent to Flink's, including all of the usual Java dependency conflicts.

For the initial capabilities development and experiment, we choose Flink. This will allow us to get our hands dirty and investigate how we can use it to build platform capabilities to support our initial use cases, while considering future ones.

In the future, we may want to also support something like Knative Eventing for simple event driven feature products.

Supplementary Research
During analysis of the various solutions, specifically how each of them works in a multi-DC environment, Kafka stretch was discovered as a potential way to allow a single Kafka cluster to span multiple DCs.

Details of this additional evaluation can be found here