http://kafka.apache.org/documentation/#ecosystem
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
Here is a list of tools we have been told about that integrate with Kafka outside the main distribution. We haven't tried them all, so they may not work!
Clients, of course, are listed separately.
Kafka Connect
Kafka includes a framework called Kafka Connect for writing sources and sinks that either continuously ingest data into Kafka or continuously push data from Kafka into external systems. The connectors themselves for different applications and data systems are federated and maintained separately from the main code base; lists of available connectors are published externally.
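As a minimal sketch of the source side, the standalone Connect worker that ships with Kafka can be pointed at the bundled FileStreamSource connector. The property names below follow that connector; the file path and topic name are placeholders:

```properties
# connect-file-source.properties -- a sketch of a standalone source config.
# Connector class and keys follow the FileStreamSource connector bundled
# with the Kafka distribution; file and topic values are made up.
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/var/log/app/events.log
topic=connect-test
```

A config like this is typically run with the standalone worker, e.g. `bin/connect-standalone.sh config/connect-standalone.properties connect-file-source.properties`; sink connectors are configured the same way, pointing the other direction.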
Distributions & Packaging
- Confluent Platform - . Downloads - .
- Cloudera Kafka source (0.11.0) and release
- Hortonworks Kafka source and release
- Stratio Kafka source for ubuntu and for RHEL
- IBM Event Streams - - Apache Kafka on premise and the public cloud
- Strimzi - - Apache Kafka Operator for Kubernetes and Openshift. Downloads and Helm Chart -
- TIBCO Messaging - Apache Kafka Distribution - Downloads -
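To illustrate the operator approach taken by distributions such as Strimzi, a Kafka cluster is declared as a Kubernetes custom resource and the operator reconciles it. A sketch, assuming Strimzi's `v1beta2` API; the cluster name and sizes are placeholders:

```yaml
# Hypothetical Strimzi Kafka custom resource; field names follow the
# Strimzi v1beta2 API, but name, replica counts and storage are made up.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
```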
Stream Processing
- - the built-in stream processing library of the Apache Kafka project
- Kafka Streams Ecosystem:
- Complex Event Processing (CEP): .
- - A stream-processing framework.
- - A YARN-based stream processing framework.
- - Consume messages from Kafka and emit as Storm tuples
- - Kafka 0.8, Storm 0.9, Avro integration
- - Kafka receiver supports Kafka 0.8 and above
- - Apache Flink has an integration with Kafka
- - A stream processing framework with Kafka source and sink to consume and produce Kafka messages
- - a framework for building event-driven microservices, - a cloud-native orchestration service for Spring Cloud Stream applications
- - Stream processing framework with connectors for Kafka as source and sink.
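Whichever framework is chosen, a Kafka Streams application itself is configured with ordinary client properties. A minimal sketch, using real Streams config keys but placeholder values:

```properties
# Minimal Kafka Streams configuration sketch (values are placeholders).
# application.id doubles as the consumer group id and internal topic prefix.
application.id=my-streams-app
# Initial broker(s) to contact.
bootstrap.servers=localhost:9092
# Default serdes for keys and values.
default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
```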
Hadoop Integration
- - A sink connector for the Kafka Connect framework for writing data from Kafka to Hadoop HDFS
- - LinkedIn's Kafka=>HDFS pipeline. This one is used for all data at LinkedIn, and works great.
- A different take on Hadoop loading functionality from what is included in the main distribution.
- - Contains Kafka source (consumer) and sink (producer)
- - A high-performance HDFS data loader
Database Integration
- - A source connector for the Kafka Connect framework for writing data from RDBMS (e.g. MySQL) to Kafka
- - Source connector that collects CDC operations via Golden Gate and writes them to Kafka
Search and Query
- - This project, Kafka Standalone Consumer, reads messages from Kafka, processes them, and indexes them in Elasticsearch. There are also several alternatives.
- - The Presto Kafka connector allows you to query Kafka in SQL using Presto.
- - Hive SerDe that allows querying Kafka (Avro only for now) using Hive SQL
Management Consoles
- - A tool for managing Apache Kafka.
- - Simplified command-line administration for Kafka brokers.
- - Displays information about your Kafka cluster including which nodes are up and what topics they host data for.
- - Displays the state of all consumers and how far behind the head of the stream they are.
- – Displays the state and deltas of Kafka-based topologies. Supports Kafka >= 0.8. It also provides an API for fetching this information for monitoring purposes.
- - Service for cluster auto healing and workload balancing.
- - Fully automate the dynamic workload rebalance and self-healing of a Kafka cluster.
- - Monitoring companion that provides consumer lag checking as a service without the need for specifying thresholds.
- - An audit system that monitors the completeness and latency of data streams.
AWS Integration
- from Pinterest.
- Alternative tool
Logging
- syslog (1M)
- : A producer that supports both raw data and protobuf with meta data for deep analytics usage.
- syslog-ng is one of the most widely used open source log collection tools, capable of filtering, classifying, and parsing log data and forwarding it to a wide variety of destinations. Kafka is a first-class destination in syslog-ng; details on the integration can be found in the syslog-ng documentation.
- - A python syslog publisher
- - A java syslog publisher
- - A simple log tailing utility
- - Integration with
- - Integration with and
- written in Go
- - A simple proxy service for Kafka.
- : A file system logging agent based on Kafka
- : Another syslog integration, this one in C and uses librdkafka library
- - Collect logs and send lines to Apache Kafka
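Most of the syslog bridges above start by decoding the RFC 3164 priority value (the leading `<PRI>`) before producing to Kafka; the arithmetic is simply facility = PRI / 8 and severity = PRI % 8. A small illustrative parser — the function name is our own, not taken from any of the tools listed:

```python
def parse_syslog_priority(line: str) -> tuple[int, int, str]:
    """Split an RFC 3164 line '<PRI>message' into (facility, severity, message).

    Illustrative only; real syslog-to-Kafka bridges (and RFC 5424) handle
    far more framing, timestamps, and structured data than this.
    """
    if not line.startswith("<"):
        raise ValueError("missing <PRI> header")
    end = line.index(">")
    pri = int(line[1:end])
    # RFC 3164: PRI = facility * 8 + severity
    return pri // 8, pri % 8, line[end + 1:]

# <13> decodes to facility 1 (user-level), severity 5 (notice)
print(parse_syslog_priority("<13>Feb  5 17:32:18 host app: hello"))
```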
Flume - Kafka plugins
- - Integration with
- - Integration with
Metrics
- - A Kafka and Protocol Buffers based metrics and logging system
- - Register built-in Kafka client and stream metrics to Dropwizard Metrics
Packaging and Deployment
- Puppet Integration
Kafka Camel Integration
Misc.
- - A proxy that interoperates with websockets for delivering Kafka data to browsers.
- - A native, command line producer and consumer.
- - An alternative to the built-in mirroring tool
- – A curses-based tool which displays the state of Kafka consumers (Kafka 0.7 only).
- - Provides the ability to replicate across Kafka clusters in other data centers
- - A tool for distributed, high-volume replication between Apache Kafka clusters based on Kafka Connect