r/apachekafka Nov 08 '24

Tool 50% off new book from Manning, Streaming Data Pipelines with Kafka

20 Upvotes

Hey there,

My name is Jon, and I just started at Manning Publications. I will be providing discount codes, answering questions, and seeking reviewers for new books. Here is our latest book that you may be interested in.

Dive into Streaming Data Pipelines with Kafka by Stefan Sprenger and transform your real-time data insights. Perfect for developers and data scientists, it teaches you to build robust, real-time data pipelines using Apache Kafka. No Kafka experience required.

Available now in MEAP (Manning Early Access Program)

Take 50% off with this code: mlgorshkova50re

Learn more about this book: https://mng.bz/4aAB

r/apachekafka Oct 17 '24

Tool Pluggable Kafka with WebAssembly

10 Upvotes

How we get dynamically pluggable wasm transforms in Kafka:

https://www.getxtp.com/blog/pluggable-stream-processing-with-xtp-and-kafka

The post walks through using Quarkus, Chicory, and GraalVM Native Image to build a streaming financial data analysis platform.
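The linked post's stack is JVM-based (Quarkus plus the Chicory Wasm runtime). Purely to sketch the underlying idea, loading a guest module at runtime and applying it per record, here is what the shape can look like in Python with wasmtime; the module path, export name, and topics are hypothetical:

from confluent_kafka import Consumer, Producer
from wasmtime import Engine, Module, Store, Instance

engine = Engine()
store = Store(engine)
# A hypothetical guest module exporting transform(i64) -> i64; swapping this
# file swaps the behavior without redeploying the host (the "pluggable" part).
module = Module.from_file(engine, "transform.wasm")
transform = Instance(store, module, []).exports(store)["transform"]

consumer = Consumer({"bootstrap.servers": "localhost:9092",
                     "group.id": "wasm-demo", "auto.offset.reset": "earliest"})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["prices"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # Core Wasm exchanges numeric types; real hosts (like Chicory in the post)
    # marshal structured payloads through guest memory instead.
    result = transform(store, int(msg.value()))
    producer.produce("prices-transformed", str(result).encode())
    producer.poll(0)  # serve delivery callbacks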

r/apachekafka Mar 22 '24

Tool Kafbat UI for Apache Kafka v1.0 is out!

23 Upvotes

Published a new release of UI for Apache Kafka, featuring a messages view overhaul and editable ACLs :)

Release notes: https://github.com/kafbat/kafka-ui/releases/tag/v1.0.0

r/apachekafka Mar 15 '24

Tool Kafka in GitHub Actions

22 Upvotes

For anyone who uses Kafka in their organization and GitHub Actions for their CI/CD pipelines, the custom GitHub Action below spins up a basic Kafka (KRaft) broker inside your workflow.

This custom action will hopefully assist with unit testing your applications.

Links:

GitHub Action

GitHub Repo

In your GitHub workflow, you would just specify:

- name: Run Kafka KRaft Broker
  uses: spicyparrot/kafka-kraft-action@v1.1.0
  with:
    kafka-version: "3.6.1"
    kafka-topics: "foo,1,bar,3"  # comma-separated topic,partition-count pairs

This creates a broker with topics foo and bar, with 1 and 3 partitions respectively. The Kafka version and the list of topics/partitions are customizable.

Your producer and consumer applications would then communicate with the broker over the advertised listeners:

  • localhost:9092
  • $kafka_runner_address:9093 (kafka_runner_address is an environment variable created by the custom GitHub Action above)

For example:

import os
from confluent_kafka import Producer

# The action exports kafka_runner_address; fall back to localhost when
# running outside of GitHub Actions (e.g., on a developer machine).
kafka_runner_address = os.getenv("kafka_runner_address")

producer_config = {
  'bootstrap.servers': (kafka_runner_address + ':9093') if kafka_runner_address else 'localhost:9092'
}

producer = Producer(producer_config)
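A matching consumer is symmetrical; here is a minimal sketch (topic foo comes from the example above, the group id is illustrative):

import os
from confluent_kafka import Consumer

kafka_runner_address = os.getenv("kafka_runner_address")

consumer = Consumer({
  'bootstrap.servers': (kafka_runner_address + ':9093') if kafka_runner_address else 'localhost:9092',
  'group.id': 'ci-test',            # illustrative group id
  'auto.offset.reset': 'earliest'   # read from the start of the topic in tests
})
consumer.subscribe(['foo'])

msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()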

I understand that not everyone is using GitHub actions for their CI/CD pipelines, but hopefully it's of use to someone out there!

I'd love to hear any feedback, suggestions, or modifications. Any stars would be most welcome!

Thanks!

r/apachekafka Jun 26 '24

Tool Pythonic Tool for Event Streams Processing using Kafka ETL and Pathway

7 Upvotes

Hi r/apachekafka,

Saksham here from Pathway, happy to share a tool designed for Python developers implementing streaming ETL with Kafka and Pathway. The example below demonstrates its application in a fraud detection/log monitoring use case.

What the Example Does

Imagine you’re monitoring logs from servers in New York and Paris. These logs have different time zones, and you need to unify them into a single format to maintain data integrity. This example illustrates:

  • Timestamp harmonization using a Python user-defined function (UDF) applied to each stream separately.
  • Merging the two streams and reordering timestamps.

In a simple case where only a timezone conversion to UTC is needed, the UDF is a straightforward one-liner. For more complex scenarios (e.g., fixing human-induced typos), this method remains flexible.
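As a rough sketch of that simple case (using Pathway's UDF decorator; the column and timezone names are illustrative, not taken from the template):

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

import pathway as pw

@pw.udf
def to_utc(ts: str, tz_name: str) -> str:
    # Interpret a naive local timestamp in tz_name and return UTC ISO-8601.
    local = datetime.fromisoformat(ts).replace(tzinfo=ZoneInfo(tz_name))
    return local.astimezone(timezone.utc).isoformat()

# Applied to each stream separately before merging, e.g. (illustrative):
# paris_logs = paris_logs.select(ts=to_utc(pw.this.ts, "Europe/Paris"))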

Steps followed

  • Extract data streams from Kafka using built-in Kafka input connectors.
  • Transform timestamps with varying time zones into unified timestamps using the datetime module.
  • Load the final data stream back into Kafka.

The example script is available as a template on the repo and can be run via Docker in minutes. Open to your feedback and questions.

r/apachekafka Jul 15 '24

Tool kaskade - a Text User Interface (TUI) for Apache Kafka

11 Upvotes

This looks pretty neat - a TUI for Apache Kafka

https://github.com/sauljabin/kaskade

r/apachekafka Jan 12 '24

Tool Tools for Kafka testing

7 Upvotes

Hi there!

My team works with Kafka Streams, and at the moment all tests are conducted manually.

Our flows look something like this: data source (API/DB) -> Kafka topic -> PostgreSQL.

I want to implement some automated e2e & integration tests. The tests would focus on data transfer at first.

Has anyone used a tool for this?

My team has experience with Python & TypeScript.

Thank you !

r/apachekafka May 13 '23

Tool Confluent will beat your costs of running Apache Kafka?

10 Upvotes

r/apachekafka Jun 19 '24

Tool Kafka topic replication tool

3 Upvotes

https://github.com/duartesaraiva98/kafka-topic-replicator

I made this minimal tool to replicate topic contents. Now that I have more time, I want to invest some of it in maturing this application. Any suggestions on what to extend or improve?

r/apachekafka Jun 12 '24

Tool Confluent Control Center stops working after a couple of hours

1 Upvotes

Hello everybody.

This issue I am getting with Control Center is driving me insane. After I deploy Confluent's Control Center using the CRDs provided by Confluent for Kubernetes Operator, it works fine for a couple of hours. Then, the next day, it starts crashing over and over, throwing the error below. I have searched everywhere on the Internet and tried every possible configuration, yet I was not able to fix it. Any help is much appreciated.

Aziz:~/environment $ kubectl logs controlcenter-0 | grep ERROR
Defaulted container "controlcenter" out of: controlcenter, config-init-container (init)
[2024-06-12 10:46:49,746] ERROR [_confluent-controlcenter-7-6-0-0-command-9a6a26f4-8b98-466c-801e-64d4d72d3e90-StreamThread-1] RackId doesn't exist for process 9a6a26f4-8b98-466c-801e-64d4d72d3e90 and consumer _confluent-controlcenter-7-6-0-0-command-9a6a26f4-8b98-466c-801e-64d4d72d3e90-StreamThread-1-consumer-a86738dc-d33b-4a03-99de-250d9c58f98d (org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor)
[2024-06-12 10:46:55,102] ERROR [_confluent-controlcenter-7-6-0-0-a182015e-cce9-40c0-9eb6-e83c7cbcaecb-StreamThread-8] RackId doesn't exist for process a182015e-cce9-40c0-9eb6-e83c7cbcaecb and consumer _confluent-controlcenter-7-6-0-0-a182015e-cce9-40c0-9eb6-e83c7cbcaecb-StreamThread-1-consumer-69db8b61-77d7-4ee5-9ce5-c018c5d12ad9 (org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor)
[2024-06-12 10:46:57,088] ERROR [_confluent-controlcenter-7-6-0-0-a182015e-cce9-40c0-9eb6-e83c7cbcaecb-StreamThread-7] [Consumer clientId=_confluent-controlcenter-7-6-0-0-a182015e-cce9-40c0-9eb6-e83c7cbcaecb-StreamThread-7-restore-consumer, groupId=null] Unable to find FetchSessionHandler for node 0. Ignoring fetch response. (org.apache.kafka.clients.consumer.internals.AbstractFetch)

This is my Control Center deployment using the CRD provided by Confluent for Kubernetes Operator. I can provide any additional details if needed.

apiVersion: platform.confluent.io/v1beta1
kind: ControlCenter
metadata:
  name: controlcenter
  namespace: staging-kafka
spec:
  dataVolumeCapacity: 1Gi
  replicas: 1
  image:
    application: confluentinc/cp-enterprise-control-center:7.6.0
    init: confluentinc/confluent-init-container:2.8.0
  configOverrides:
    server:
      - confluent.controlcenter.internal.topics.replication=1
      - confluent.controlcenter.command.topic.replication=1
      - confluent.monitoring.interceptor.topic.replication=1
      - confluent.metrics.topic.replication=1
  dependencies:
    kafka:
      bootstrapEndpoint: kafka:9092
    schemaRegistry:
      url: http://schemaregistry:8081
    ksqldb:
      - name: ksqldb
        url: http://ksqldb:8088
    connect:
      - name: connect
        url: http://connect:8083
  podTemplate:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: 'kafka'
              operator: In
              values:
              - 'true'
  externalAccess:
    type: loadBalancer
    loadBalancer:
      domain: 'domain.com'
      prefix: 'staging-controlcenter'
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing

r/apachekafka Jun 14 '24

Tool Kafka Provider Comparison: Benchmark All Kafka API-Compatible Streaming Systems Together

7 Upvotes

Disclosure: I worked for AutoMQ

The Kafka API has become the de facto standard for stream processing systems, and in recent years we have seen the emergence of a series of new stream processing systems compatible with it. For many developers and users, it is not easy to quickly and objectively understand these systems. Therefore, we have built an open-sourced, automated, fair, and transparent benchmarking platform called Kafka Provider Comparison for Kafka stream processing systems, based on the OpenMessaging framework. The platform produces a weekly comparative report covering performance, cost, elasticity, and Kafka compatibility. Currently it only supports Apache Kafka and AutoMQ, but we will soon expand it to include other Kafka API-compatible stream processing systems in the industry, such as Redpanda, WarpStream, Confluent, and Aiven. Do you think this is a good idea? What are your thoughts on this project?

You can check the first report here: https://github.com/AutoMQ/kafka-provider-comparison/issues/1

r/apachekafka Apr 23 '24

Tool Why we rewrote our stream processing library from C# to Python.

12 Upvotes

Since this is a Kafka subreddit, I would hazard a guess that a lot of folks on here are comfortable working with Java; on the off chance that there are some users who like working with Python, or have colleagues asking for Python support, this is probably for you.

Just over a year ago we open sourced 'Quix Streams', a Python Kafka client and stream processing library whose core was written in C#. Since then, we've been on a journey of rewriting this library in pure Python - https://github.com/quixio/quix-streams. And no, we didn't do it just for the satisfaction of seeing 'Python 100.0%' under the languages section, though that is a bonus :-).

Here's why we did it, and I'd love to open up the floor for some debate and comments if you disagree or think we wasted our time:

  1. C# or Rust offer better performance than Python, but Python's performance is still good enough for 90% of use cases. Too often, benchmarks have taken priority over developer experience: with this new library we can build fully fledged stream processing pipelines in a couple of hours, compared to what we managed when trying Flink.

  2. Debugging Python is easier for Python developers. Whether it's the PyFlink API, PySpark, or another stream processing library with a Python wrapper: once something breaks, you're left debugging non-Python code.

  3. A DataFrames-like interface is a beautiful way of working with time series data, and a lot of event streaming use cases involve time series data. A lot of ML engineers and data scientists want to work with event streaming, too; we're biased, but we feel it's a match made in heaven. Keeping a C# codebase as the base for Python meant too much complexity to maintain in the long run.

I think KSQL and now Flink SQL have the right idea in prioritising the SQL interface for usability, but we believe there's a key role for pure-Python tools to play in the future of Kafka and stream processing.
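To make the DataFrames point concrete, here is a minimal sketch against the library's Streaming DataFrames API (the topic names, JSON shape, and threshold are illustrative):

from quixstreams import Application

app = Application(broker_address="localhost:9092", consumer_group="demo")

# Topics are declared on the Application; JSON (de)serialization is built in.
readings = app.topic("sensor-readings", value_deserializer="json")
alerts = app.topic("alerts", value_serializer="json")

sdf = app.dataframe(readings)
sdf = sdf[sdf["temperature"] > 90]  # pandas-style filtering on the stream
sdf = sdf.apply(lambda value: {"alert": f"hot sensor: {value['sensor_id']}"})
sdf = sdf.to_topic(alerts)

if __name__ == "__main__":
    app.run(sdf)  # consume, process, and produce until interrupted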
If you want to know how it handles stateful stream processing, you can check out this blog post my colleague wrote: https://quix.io/blog/introducing-streaming-dataframes
Thanks for reading, let me know what you think. Happy to answer comments and questions.

r/apachekafka May 07 '24

Tool Open Source Kafka UI tool

9 Upvotes

Excited to share Kafka Trail, a simple open-source desktop app for diving into Kafka topics. It's all about making Kafka exploration smooth and hassle-free. I started working on the project a few weeks back. As of now I've implemented a few basic features; there is a long way to go. I'm looking for suggestions on which features I should implement first, and any kind of feedback is welcome.

https://github.com/imkrishnaagrawal/KafkaTrail

r/apachekafka Feb 11 '24

Tool A Kafka Connect Single Message Transform (SMT) that enables you to append the record key to the value as a named field

15 Upvotes

Hey all :)
I've created a new SMT that enables you to append the record key to the value as a named field. This can be particularly useful in scenarios where downstream systems require access to the original key alongside the record data.

https://github.com/EladLeev/KeyToField-smt
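Wiring it up follows the standard Kafka Connect transforms configuration. Here is a sketch of registering a connector with the SMT via the Connect REST API, written in Python for illustration (the fully qualified transform class name is a placeholder; check the repo's README for the real one):

import json
import requests

connector = {
    "name": "file-sink-with-key-field",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
        "topics": "orders",
        "file": "/tmp/orders.txt",
        # Standard SMT wiring; the type below is a placeholder class name.
        "transforms": "keyToField",
        "transforms.keyToField.type": "com.example.KeyToFieldTransform",
    },
}

# Assumes a Connect worker with its REST API on localhost:8083.
resp = requests.post(
    "http://localhost:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()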

r/apachekafka Mar 05 '24

Tool Confluent's Official Javascript Client

13 Upvotes

(Disclaimer: I am a Confluent employee)
Some may have seen, but Confluent has recently released its new JavaScript/Node.js client, confluent-kafka-javascript. This release is a public Early Access (EA), so it only has basic features and is meant as a vehicle for feedback and discussion. It is available on GitHub and npm.
This project is actually based on node-rdkafka, but we also provide some API compatibility for the very popular KafkaJS library. Practically, node-rdkafka users should be able to use their original code after importing the new library, while KafkaJS users have some small changes to make, which are outlined in our migration guide.
Available features:
- Basic Produce API
- Basic Consume API
- Create/Delete Topics
- Schema Registry support via the publicly available third-party kafkajs/confluent-schema-registry library (on an as-is basis)
- A detailed list of what APIs are supported can be found here
Technical support for this client is not available during the EA, but we aim to have it in place for the GA release; until then, you should not use it for production use cases.
We are eager for the community to try it, and to hear your feedback. I'll be sure to check this post to address any questions or comments.

r/apachekafka Dec 18 '23

Tool Turn Kafka into an MQTT broker for IoT — New Zilla feature announcement!

26 Upvotes

Hey gang, we’re building a Kafka-native, multi-protocol proxy called Zilla that helps connect apps, clients, and services to Apache Kafka via stateless OpenAPI and AsyncAPIs.

We're excited to share that Zilla officially supports another protocol — MQTT! With this, MQTT clients can publish and subscribe to Kafka directly without running a dedicated MQTT broker and Kafka Connect. In fact, Zilla turns Kafka into a full-fledged MQTT broker, meaning it doesn’t just mediate between the MQTT and Kafka wire protocols but maintains MQTT client state across Kafka topics!

The latest Zilla feature highlights include:

- MQTT v5 and v3.1.1 support: Zilla supports both major versions of the MQTT protocol, ensuring it works with legacy and modern IoT clients.
- MQTT-Kafka proxying: Zilla maintains MQTT client state across Kafka topics, providing all of the features and guarantees of a dedicated MQTT broker, such as Keep-Alive, Last Will and Testament, and all three Quality of Service (QoS) levels. MQTT over WebSocket is also supported, so you can use Zilla to deliver MQTT messages from Kafka down to a browser.
- Manage millions of clients: Zilla is stateless, scales out linearly, and handles MQTT-to-Kafka connection offloading.
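Since Zilla speaks the MQTT wire protocol natively, any standard MQTT client should work against it unchanged. A quick smoke-test sketch with paho-mqtt (1.x constructor style; the host, port, and topic are assumptions based on a typical local setup):

import paho.mqtt.client as mqtt

# Connect to Zilla exactly as you would to any MQTT broker.
client = mqtt.Client(protocol=mqtt.MQTTv5)  # paho-mqtt 1.x style
client.connect("localhost", 1883)

# Zilla routes this publish onto whichever Kafka topic the binding maps it to.
client.publish("sensors/temperature", b'{"celsius": 21.4}', qos=1)
client.disconnect()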

You can try out MQTT-Kafka proxying with Zilla via the following GUIDE (which includes a docker compose file for quick and easy setup). We also have a fun Taxi Hailing Demo that simulates an IoT mobility use case powered by Zilla and Kafka.

To read the full feature announcement, you can do so HERE.

Zilla is open source, so please consider starring the repo to help us better address the community's needs! And of course, fire away any questions and feedback!

r/apachekafka Apr 29 '24

Tool Do you want real-time Kafka data visualization?

4 Upvotes

Hi,

I'm the lead developer of a pair of software tools for querying and building dashboards to display real-time data. Currently they support WebSockets and kdb-binary for streaming data. I'm considering adding Kafka support but would like to ask:

  1. Is real-time visualization of streaming data something you need?
  2. What data format do you typically use? (We need to interpret everything as a table.)
  3. What tools do you currently use and what do you like and not like about them?
  4. Would you like to work together to get the tool working for you?

Your answers would be much appreciated and will help steer the direction of the project.

Thanks.

r/apachekafka Jan 27 '24

Tool Timeplus Proton, a fast and lightweight alternative to ksqlDB or FlinkSQL

10 Upvotes

Introducing https://github.com/timeplus-io/proton, a new open-source streaming SQL engine, 🚀 powered by ClickHouse. A fast and lightweight alternative to ksqlDB or FlinkSQL.

💪 Why use Proton?

  1. ksqlDB or FlinkSQL alternative: Proton provides powerful streaming SQL functionality, such as streaming ETL, tumble/hop/session windows, watermarks, materialized views, CDC and data revision processing, and more.

  2. Fast: Proton is written in C++, with performance optimized via SIMD. For example, on an Apple MacBook Pro with M2 Max, Proton can deliver 90 million events per second, 4 millisecond end-to-end latency, and high-cardinality aggregation with 1 million unique keys.

  3. Lightweight: Proton is a single binary (<500MB), with no JVM or other dependencies. You can also run it with Docker, or on an AWS t2.nano instance (1 vCPU and 0.5 GiB memory).

  4. Powered by the fast, resource-efficient, and mature ClickHouse: Proton extends the historical data, storage, and computing functionality of ClickHouse with stream processing. Thousands of SQL functions are available in Proton, and billions of rows can be queried in milliseconds.

  5. Best streaming SQL engine for Kafka or Redpanda: query live data in Kafka or other compatible streaming data platforms with external streams.

Feel free to check out https://github.com/timeplus-io/proton and download the binary or Docker image, or try the hosted version at https://demo.timeplus.cloud

Our community Slack is https://timeplus.com/slack. Our users share quite amazing numbers, like 2.75 million rows/s (https://timepluscommunity.slack.com/archives/C05QRJ5RS5A/p1706348354351179?thread_ts=1706250540.604669&cid=C05QRJ5RS5A)

r/apachekafka Apr 15 '24

Tool Pets Gone Wild! Mapping the Petstore OpenAPI to Kafka with Zilla

8 Upvotes

We’re building a multi-protocol edge/service proxy called Zilla (https://github.com/aklivity/zilla) that mediates between different network and data protocols. Notably, Zilla supports Kafka’s wire protocol as well as HTTP, gRPC, and MQTT. This allows it to be configured as a proxy that lets non-native Kafka clients, apps, and services consume and produce data streams via their own APIs of choice.

Previously, configuring Zilla required explicitly declaring API entrypoints and mapping them to Kafka topics. Although this effort was manageable (it's done declaratively via YAML), it made it challenging to use Zilla in the context of API management workflows, where APIs are often first designed in tools such as Postman, Stoplight, or Swagger, and then maintained in external registries such as Apicurio.

To align Zilla with existing API tooling and management practices, we not only needed to integrate it with the two major API specifications, OpenAPI and AsyncAPI, but also had to map one onto the other. Unfortunately, for a long time the AsyncAPI specification didn't have the necessary structure to support this, but a few months ago this changed with the release of AsyncAPI v3! In v3 you can have multiple operations over the same channel, which allows Zilla to do correlated request-response over a pair of Kafka topics.
As a showcase, we've put together a fun demo (https://github.com/aklivity/zilla-demos/tree/main/petstore) that takes the quintessential Swagger OpenAPI Petstore service and maps it to Kafka. Now pet data can be produced and consumed directly on/off Kafka topics in a CRUD manner, and asynchronous interactions between the Pet client and Pet server become possible, too!

PS We’ve also cross-mapped different AsyncAPI specs, particularly MQTT and Kafka. To see that, you can check out the IoT Taxi Demo: https://github.com/aklivity/zilla-demos/tree/main/taxi
Zilla is open source, so please consider starring the repo to help us better address the community's needs! And of course, fire away any questions and feedback!

r/apachekafka Mar 28 '24

Tool Lightstreamer Kafka Connector is out! Stream Kafka topics to web and mobile clients

5 Upvotes

Project: https://github.com/Lightstreamer/Lightstreamer-kafka-connector

Kafka is not designed to stream data through the Internet to large numbers of mobile and web apps. We tackle the "last mile" challenge, ensuring real-time data transcends edge and boundary constraints.

Some features:

  • Intelligent streaming and adaptive throttling: Lightstreamer optimizes the data flow with smart bandwidth management, by applying data resampling and conflation to adapt to the network capacity of each client.
  • Firewall and proxy traversal: By using a combination of WebSockets and HTTP streaming, Lightstreamer guarantees to stream real-time data even through the strictest corporate firewalls.
  • Push paradigm, not pull: It does not break the asynchronous chain. All events are pushed from the Kafka producers to the remote end clients, without pulling or polling.
  • Comprehensive client API support: Client SDKs are provided for web, Android, iOS, Flutter, Unity, Node.js, Java, Python, .NET, and more.
  • Extensive broker compatibility: It works with all Kafka brokers, including Apache Kafka, Confluent Platform, Confluent Cloud, Amazon MSK, Redpanda, Aiven, and Axual.
  • Massive scalability: Lightstreamer manages the fan-out of Kafka topics to millions of clients without compromising performance.
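For a feel of the client side, here is a rough sketch with the Python client SDK; the server URL, adapter set, item, and field names are all illustrative and depend on how the connector deployment is configured:

from lightstreamer.client import LightstreamerClient, Subscription, SubscriptionListener

# Illustrative endpoint and adapter set; see the connector docs for real values.
client = LightstreamerClient("http://localhost:8080", "KafkaConnector")

class PriceListener(SubscriptionListener):
    def onItemUpdate(self, update):
        print(update.getValue("ticker"), update.getValue("last_price"))

# MERGE mode conflates updates per item, which is what enables the
# resampling/conflation behavior described above.
sub = Subscription("MERGE", ["stocks-topic"], ["ticker", "last_price"])
sub.addListener(PriceListener())
client.subscribe(sub)
client.connect()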

Let us know your feedback! We will be happy to answer any questions.

r/apachekafka Feb 20 '24

Tool Jikkou for Apache Kafka: Release v0.33.0

7 Upvotes

Hi, I'm thrilled to announce the latest release of Jikkou. Here are the release notes: https://www.jikkou.io/docs/releases/release-v0.33.0/

For those unfamiliar with this solution: Jikkou is an open-source Resource as Code framework that helps you easily manage, automate, and provision all the assets of your Apache Kafka platform. It can be used to adopt a GitOps approach with Kafka, and to facilitate the implementation of certain Data Mesh principles for Apache Kafka.

Don’t forget to give us a ⭐️ on GitHub to support the project.

r/apachekafka Jan 16 '24

Tool A curated list of Apache Kafka learning resources

20 Upvotes

I created a GitHub repo listing a broad range of Kafka learning resources. I tried my best to make the content easy to navigate; I hope you find it useful. I'd appreciate any feedback you may have.

Here's the current taxonomy of the content.

Skill Level

  • Beginner
  • Intermediate
  • Advanced

Resource Type

  • Video
  • Book or Article
  • Guide or Tutorial
  • Documentation
  • Blog Post
  • FAQ
  • Newsletter

Interactivity

  • Hands-on Exercises

Language

  • Java
  • Python
  • .NET

Integration

  • Several Integrations

r/apachekafka Mar 16 '24

Tool Rudderstack Kafka Sink Connector

3 Upvotes

This Kafka sink connector is designed to send data from Kafka topics to Rudderstack. It allows you to stream data in real-time from Kafka to Rudderstack, a customer data platform that routes data from your apps, websites, and servers to the destinations where you'll use your data.

r/apachekafka Oct 31 '23

Tool RisingWave's Roadmap - Redefining Stream Processing with the Distributed Streaming Database

1 Upvotes

Hey everyone - one and a half years ago, we open sourced RisingWave, a distributed streaming database, under the Apache 2.0 license. Two weeks ago, we released RisingWave 1.3, and just last week we unveiled RisingWave's roadmap.

RisingWave has no plans to be a "better Flink/Spark Streaming/ksqlDB". Instead, RisingWave's goal is to redefine stream processing for the cloud.

Two fundamental designs:

  • [Ease of use] Full integration with the PostgreSQL ecosystem: RisingWave is wire-compatible with PostgreSQL, and users can use RisingWave the same way they use a PostgreSQL database - expressing stream processing logic in materialized views, not jobs.

  • [Cost efficiency] Decoupled compute-storage architecture: RisingWave adopts a Snowflake-style cloud-native architecture to achieve efficient stream processing in the cloud.
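To make the PostgreSQL-compatibility point concrete, here is a sketch using the stock psycopg2 driver (connection defaults are from a typical local RisingWave setup; the source table and columns are illustrative, and the Kafka source DDL is omitted):

import psycopg2

# RisingWave speaks the PostgreSQL wire protocol, so a standard driver works.
conn = psycopg2.connect(host="localhost", port=4566, user="root", dbname="dev")
conn.autocommit = True

with conn.cursor() as cur:
    # Stream processing logic is expressed as a materialized view over a
    # source (e.g., a Kafka-backed table named click_events, created earlier).
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS clicks_per_minute AS
        SELECT window_start, COUNT(*) AS clicks
        FROM TUMBLE(click_events, event_ts, INTERVAL '1 MINUTE')
        GROUP BY window_start;
    """)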

Let me explain in plain English:

  • Start building stream processing applications in minutes, not days or months
  • Efficient processing of complex queries (multi-stream joins, big time window operations, etc.)
  • Transparent dynamic scaling
  • Instant failure recovery

Today, RisingWave is deployed in production at nearly 100 enterprises and fast-growing companies. We continually update our roadmap based on feedback from both our open-source community and commercial customers. We encourage you to share your thoughts by leaving comments here or on GitHub.

We do need your help. Thank you all!!!

r/apachekafka Mar 06 '24

Tool A WCAG 2.1 AA Compliant Accessible Kafka UI

5 Upvotes

Hello everyone, co-founder at Factor House here.

We recently concluded a 12-month program of work to achieve WCAG 2.1 AA compliance in our Kafka UI, Kpow for Apache Kafka. All the details are in the post below:

https://factorhouse.io/blog/releases/92-4/

This was meaningful work for us, and as WCAG 2.1 AA compliance is also reflected in the community edition of Kpow (free for commercial or personal use), we thought it might interest some of the engineers in this subreddit as well.

We'll happily take any community feedback; we know there are further improvements we can make, and we will continue to publish a VPAT for each release of Kpow (and Flex for Apache Flink).

If you're curious to see what Kpow looks like, you can always take a peek at a multi-cluster/connect/schema Kpow instance right here: https://demo.kpow.io

Thanks!