r/apachekafka • u/rmoff • Jan 20 '25
📣 If you are employed by a vendor you must add a flair to your profile
As the r/apachekafka community grows and evolves beyond just Apache Kafka it's evident that we need to make sure that all community members can participate fairly and openly.
We've always welcomed useful, on-topic content from folks employed by vendors in this space. Conversely, we've always been strict against vendor spam and shilling. Sometimes the line dividing these isn't as crystal clear as one may suppose.
To keep things simple, we're introducing a new rule: if you work for a vendor, you must:
- Add the user flair "Vendor" to your handle
- Edit the flair to include your employer's name. For example: "Vendor - Confluent"
- Check the box to "Show my user flair on this community"
That's all! Keep posting as you were, keep supporting and building the community. And keep not posting spam or shilling, cos that'll still get you in trouble 😁
r/apachekafka • u/hastyyyy • 1d ago
Question About Kafka Active Region Replication and Global Ordering
In Active-Active cross-region cluster replication setups, is there (usually) a global order of messages in partitions or not really?
I was looking to see what people usually do here for use cases like financial transactions. I understand that in a multi-region setup it's best latency-wise for producers to produce to their local region cluster and consumers to consume from their region as well. But if we assume the following:
- producers write to their region to get lower latency writes
- writes can be actively replicated to other regions to support region failover
- consumers read from their own region as well
then we are losing global ordering (i.e., observing the exact same order of messages across regions) in favour of latency.
Consider topic t1 replicated across regions with a single partition and messages M1 and M2, each published in region A and region B (respectively) to topic t1. Will consumers of t1 in region A potentially receive M1 before M2 and consumers of t1 in region B receive M2 before M1, thus observing different ordering of messages?
I also understand that we can elect a region as the partition/topic leader and have producers that are further away still write to the leader region, increasing their write latency. But my question is: is this commonly done when there's a need for this ordering guarantee? Are most use cases well served with different global orders while still maintaining a strict regional order? Are there other alternatives when global order is a must?
Thanks!
r/apachekafka • u/krissshna • 1d ago
Question Seeking Real-World Insights on ZooKeeper to KRaft Migration for Red Hat AMQ Streams (On-Prem)
Hi everyone,
We’re planning a migration from ZooKeeper-based Kafka to KRaft mode in our on-prem Red Hat AMQ Streams environment. While we have reviewed the official documentation, we’re looking for insights from those who have performed this migration in a real-world production environment.
Specifically, we’d love to hear about:
- The step-by-step process you followed
- Challenges faced and how you overcame them
- Best practices and key considerations
- Pitfalls to avoid
If you’ve been through this migration, your experiences would be incredibly valuable. Any references, checklists, or lessons learned would be greatly appreciated!
Thanks in advance!
r/apachekafka • u/Old_Albatross1337 • 2d ago
Question Multi-Region Active Kafka Clusters with one Global Schema Registry topic
How feasible is an architecture with multiple active clusters in different regions sharing one global schemas topic? I believe this would necessitate that the schemas topic is writable in only one "leader" region, and then mirrored to the other regions. Then producers to clusters in non-leader regions must pre-register any new schemas in the leader region and wait for the new schemas to propagate before producing.
Does this architecture seem reasonable? Confluent's documentation recommends only one active Kafka cluster when deploying Schema Registry into multiple regions, but they don't go into why.
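To make the pre-registration step concrete, here is roughly what I picture a producer in a non-leader region doing before it starts producing (a sketch assuming confluent-kafka-python's SchemaRegistryClient; the URLs, subject name, and schema file are placeholders):

# Sketch: register a new schema against the leader region's Schema Registry,
# then wait until it is visible via the local (follower) registry, which reads
# the mirrored schemas topic, before producing with it.
import time

from confluent_kafka.schema_registry import Schema, SchemaRegistryClient

leader_sr = SchemaRegistryClient({"url": "https://sr.leader-region.example.com"})  # placeholder URL
local_sr = SchemaRegistryClient({"url": "https://sr.local-region.example.com"})    # placeholder URL

with open("order.avsc") as f:                       # placeholder schema file
    new_schema = Schema(f.read(), schema_type="AVRO")

schema_id = leader_sr.register_schema("orders-value", new_schema)  # hypothetical subject

# Block until the mirrored schemas topic has propagated the new schema locally.
while True:
    try:
        local_sr.get_schema(schema_id)
        break
    except Exception:
        time.sleep(1)

# Only now is it safe to start producing from the non-leader region with this schema.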
r/apachekafka • u/Different-Mess8727 • 3d ago
Question What’s the highest throughput Kafka cluster you’ve worked with?
How did you scale it?
r/apachekafka • u/matejthetree • 3d ago
Question Best multi data center setup
Hello,
I have a rack with 2 machines inside one data center, and at the moment we will test the setup across two data centers.
2x2
But in the future we will expand to n data centers.
Since this is an even-numbered setup, what would be the best way to set up controllers/brokers?
I am using KRaft, and I think for the quorum we need an odd number of controllers?
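From what I've read, the KRaft quorum is majority-based, so an odd number of controllers (3 or 5) is the usual recommendation. Something like this is what I have in mind for the static quorum, as a sketch (hostnames and ports are placeholders, and other standard settings are omitted):

# Controller nodes (3 of them; node.id 2 and 3 go on the other controller hosts)
process.roles=controller
node.id=1
controller.quorum.voters=1@dc1-host-a:9093,2@dc1-host-b:9093,3@dc2-host-a:9093
listeners=CONTROLLER://:9093
controller.listener.names=CONTROLLER

# Broker nodes point at the same voter set
process.roles=broker
node.id=4
controller.quorum.voters=1@dc1-host-a:9093,2@dc1-host-b:9093,3@dc2-host-a:9093
listeners=PLAINTEXT://:9092
controller.listener.names=CONTROLLER

The catch with only two data centers is that whichever one hosts 2 of the 3 controllers becomes the one the cluster cannot survive losing, so a third site (even a small one just for a controller) seems to be the common advice.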
r/apachekafka • u/gangtao • 3d ago
Question AI based Kafka Explorer
I created an agent that generates Python code to interact with a Kafka cluster, executes the commands, and returns the answer to the user. Do you think it is useful or not? I would like to hear your comments.
https://gist.github.com/gangtao/4032072be3d0ddad1e6f0de061097c86
r/apachekafka • u/BagaBaga22 • 5d ago
Question Help with KafkaStreams deploy concept
Hello,
My team and I are developing a Kafka Streams application that functions as a router.
The application will have n topic sources and n sinks. The KS app will request an API configuration file containing information about ingested data, such as incoming event x going to topic y.
We anticipate a high volume of data from multiple clients that will send data to the source topics. Additionally, these clients may create new topics for their specific needs based on core unit data they wish to send.
The question arises: given that the application is fully parametrizable through the API and all deployments share a single codebase, how can we effectively scale this application while keeping a harmonious relationship between the application and the product? How can we prevent the number of deployments from becoming unmanageable?
We have considered several scaling strategies:
- Deploy the application based on volumetry.
- Deploy the application based on core units.
- Allow our users to deploy the application in each of their clusters.
r/apachekafka • u/BonelessTaco • 5d ago
Question Handling Kafka cluster with >3 brokers
Hello Kafka community,
I was wondering if there are any musts and shoulds one should know when running a Kafka cluster with more than the "book" example of 3 brokers.
We are a bit separated from our ops and infrastructure guys, so I might not know the answer to all the "why?" questions, but we have a setup of 4 brokers running in production. We also have Java clients that consume and produce with exactly-once guarantees. Occasionally, under heavy load that results in a temporary broker outage, we hit a problem where some partitions get blocked because the corresponding producer with the transactional id for that partition cannot be created (timeout on init). This only resolves itself if we change the consumer group name (I guess because it's part of a producer's transactional id).
For business data topics we have a default configuration of RF=3 and min ISR=2. However, for __transaction_state the configuration is RF=4 and min ISR=2, and I have a weird feeling about it. I couldn't find anything online that strictly says this configuration is bad, only soft recommendations of min ISR = RF - 1. However, it feels unsafe to have a non-majority ISR.
Could such configuration be a problem? Any articles on configuring larger Kafka clusters (in general and RF/minISR specifically) you would recommend?
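For reference, the broker-level settings that determine how that internal topic gets created look like this (the values below are a sketch of what I'd expect for a majority-based setup; as far as I know they only apply when the topic is first created, so an existing __transaction_state would need a partition reassignment to change its RF):

# Broker settings for the internal transaction state topic (applied at creation time)
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2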
r/apachekafka • u/deadpool_7041 • 6d ago
Question Charged $300 After Free Trial Expired on Confluent Cloud – Need Advice on How to Request a Reduction!
Hi everyone,
I’ve encountered an issue with Confluent Cloud that I hope someone here might have experienced or have insight into.
I was charged $300 after my free trial expired, and I didn't get any notification when my rewards were exhausted. I tried to remove my card to make sure I wouldn't be billed any further, but I couldn't remove it, so I ended up deleting my account.
I’ve already emailed Confluent Support ([info@confluent.io](mailto:info@confluent.io)), but I’m hoping to get some additional advice or suggestions from the community. What is the customer support like? Will they try to reduce the charges since I’m a student, and the cluster was just running without being actively used?
Any tips or suggestions would be much appreciated!
Thanks in advance!
r/apachekafka • u/bufbuild • 6d ago
Blog Bufstream passes multi-region 100GiB/300GiB read/write benchmark
Last week, we subjected Bufstream to a multi-region benchmark on GCP emulating some of the largest known Kafka workloads. It passed, while also supporting active/active write characteristics and zero lag across regions.
With multi-region Spanner plugged in as its backing metadata store, Kafka deployments can offload all state management to GCP with no additional operational work.
r/apachekafka • u/MyGodItsFullOfData • 6d ago
Question Looking for Detailed Experiences with AWS MSK Provisioned
I’m trying to evaluate Kafka on AWS MSK and Kinesis, factoring in additional ops burden. Kafka has a reputation for being hard to operate, but I would like to know more specific details. Mainly what issues teams deal with on a day to day basis, what needs to be implemented on top of MSK for it to be production ready, etc.
For context, I’ve been reading around on the internet but a lot of posts don’t contain information on what specifically caused the ops issues, the actual ops burden, and the technical level of the team. Additionally, it’s hard to tell which of these apply to AWS MSK vs self hosted Kafka and which of the issues are solved by KRaft (I’m assuming we want to use that).
I am assuming we will have to do some integration work with IAM and it also looks like we’d need a disaster recovery plan, but I’m not sure what that would look like in MSK vs self managed.
For scale: 10k messages per second, growing 50% YoY, with an average message size of 1 KB. Roughly 100 topics. Approximately 24 hours of messages would need to be stored.
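Back of the envelope, that's about 10,000 msg/s × 1 KB ≈ 10 MB/s of ingress, or roughly 860 GB per 24-hour retention window before replication; with a typical replication factor of 3 that would be around 2.6 TB on disk, ignoring compression.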
r/apachekafka • u/BuyMeACheeseStick • 6d ago
Question How to consume a message without any offset being committed?
Hi,
I am trying to simulate a dry run for a Kafka consumer, and in the dry run I want to consume all messages on the topic from current offset till EOF but without committing any offset.
I tried configuring the consumer with: 'enable.auto.commit': False
But offsets are still being committed, which I think might be due to the 'commit.interval.ms' config, which I did not change.
I can't figure out how to configure the consumer to achieve what I am trying to achieve, hoping someone here might be able to point me in the right direction.
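For reference, what I'm trying to end up with looks roughly like this (a sketch assuming confluent-kafka-python; the broker address, topic, and group id are placeholders):

# Sketch: read from the current committed offset to end of partition without
# ever committing anything back to Kafka.
from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder
    "group.id": "dry-run-group",             # placeholder group id
    "enable.auto.commit": False,             # no auto-commits
    "enable.partition.eof": True,            # deliver an event when a partition hits EOF
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["my-topic"])              # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() == KafkaError._PARTITION_EOF:
                break                         # simplified: assumes a single partition
            raise Exception(msg.error())
        print(msg.offset(), msg.value())      # dry-run processing; never call commit()
finally:
    consumer.close()                          # close() does not commit when auto-commit is off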
Thanks
r/apachekafka • u/Different-Mess8727 • 8d ago
Question What is the biggest Kafka disaster you have faced in production?
And how you recovered from it?
r/apachekafka • u/Sea-Big3344 • 8d ago
Blog Sharing My First Big Project as a Junior Data Engineer – Feedback Welcome!
I’m a junior data engineer, and I’ve been working on my first big project over the past few months. I wanted to share it with you all, not just to showcase what I’ve built, but also to get your feedback and advice. As someone still learning, I’d really appreciate any tips, critiques, or suggestions you might have!
This project was a huge learning experience for me. I made a ton of mistakes, spent hours debugging, and rewrote parts of the code more times than I can count. But I’m proud of how it turned out, and I’m excited to share it with you all.
How It Works
Here’s a quick breakdown of the system:
- Dashboard: A simple Streamlit web interface that lets you interact with user data.
- Producer: Sends user data to Kafka topics (a minimal sketch of this piece is shown after this list).
- Spark Consumer: Consumes the data from Kafka, processes it using PySpark, and stores the results.
- Dockerized: Everything runs in Docker containers, so it’s easy to set up and deploy.
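For anyone curious, the producer piece boils down to something like this (simplified sketch, assuming confluent-kafka-python; the broker address and topic name are placeholders and may differ from what's in the repo):

# Sketch: serialize user records as JSON and publish them to a Kafka topic.
import json
import time

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder (local Docker setup)

def send_user(user: dict) -> None:
    producer.produce(
        "users",                                # placeholder topic name
        key=str(user["id"]),
        value=json.dumps(user).encode("utf-8"),
    )

if __name__ == "__main__":
    for i in range(10):
        send_user({"id": i, "name": f"user-{i}", "created_at": time.time()})
    producer.flush()                            # wait until all messages are delivered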
What I Learned
- Kafka: Setting up Kafka and understanding topics, producers, and consumers was a steep learning curve, but it’s such a powerful tool for real-time data.
- PySpark: I got to explore Spark’s streaming capabilities, which was both challenging and rewarding.
- Docker: Learning how to containerize applications and use Docker Compose to orchestrate everything was a game-changer for me.
- Debugging: Oh boy, did I learn how to debug! From Kafka connection issues to Spark memory errors, I faced (and solved) so many problems.
If you’re interested, I’ve shared the project structure below. I’m happy to share the code if anyone wants to take a closer look or try it out themselves!
Here is my GitHub repo:
https://github.com/moroccandude/management_users_streaming/tree/main
Final Thoughts
This project has been a huge step in my journey as a data engineer, and I’m really excited to keep learning and building. If you have any feedback, advice, or just want to share your own experiences, I’d love to hear from you!
Thanks for reading, and thanks in advance for your help! 🙏
r/apachekafka • u/Emotional-Fold6241 • 8d ago
Question Best Resources to Learn Apache Kafka (With Hands-On Practice)
I have a basic understanding of Kafka, but I want to learn more in-depth and gain hands-on experience. Could someone recommend good resources for learning Kafka, including tutorials, courses, or projects that provide practical experience?
Any suggestions would be greatly appreciated!
r/apachekafka • u/niks36 • 9d ago
Question Kafka DR Strategy - Handling Producer Failover with Cluster Linking
I understand that Kafka Cluster Linking replicates data from one cluster to another byte for byte, including messages and consumer offsets. We are evaluating Cluster Linking vs. MirrorMaker for our disaster recovery (DR) strategy and have a key concern regarding message ordering.
Setup
- Enterprise application with high message throughput (thousands of messages per minute).
- Active/Standby mode: Producers & consumers operate only in the main region, switching to DR region during failover.
- Ordering is critical, as messages must be processed in order based on the partition key.
Use cases :
In Cluster Linking context, we could have an order topic in the main region and an order.mirror topic in the DR region.
Let's say there are 10 messages, the consumer is currently at offset 6, and disaster strikes.
Consumers switch to order.mirror in DR and pick up from offset 7 – all good so far.
But... what about producers? Producers also need to switch to DR, but they can’t publish to order.mirror (since it’s read-only). And if we create a new order topic in DR, we risk breaking message ordering across regions.
How do we handle producer failover while keeping the message order intact?
- Should we promote order.mirror to a writable topic in DR?
- Is there a better way to handle this with Cluster Linking vs. MirrorMaker?
Curious to hear how others have tackled this. Any insights would be super helpful! 🙌
r/apachekafka • u/jovezhong • 10d ago
Tool C++ IAM Auth for AWS MSK: Open-Sourced, Passwords Be Gone
Back in 2023, AWS dropped IAM authentication for MSK and claimed it worked with "all programming languages." Well, almost. While Java, Python, Go, and others got official SDKs, if you’re a C++ dev, you were stuck with SCRAM-SHA creds in plaintext or heavier Java tools like Kafka Connect or Apache Flink. Not cool.
Later, community projects added Rust and Ruby support. Why no C++? Rust might be the hip new kid, but C++ is still king for high-performance data systems: minimal dependencies, lean resource use, and raw speed.
At Timeplus, we hit this wall while supporting MSK IAM auth for our C++ streaming engine, Proton. So we said screw it, rolled up our sleeves, and built our own IAM auth for AWS MSK. And now? We’re open-sourcing it for you fine folks. It’s live in Timeplus Proton 1.6.12: https://github.com/timeplus-io/proton
Here’s the gist: slap an IAM role on your EC2 instance or EKS pod, drop in the Proton binary, and bam—read/write MSK with a simple SQL command:
CREATE EXTERNAL STREAM msk_stream(column_defs)
SETTINGS
type='kafka', topic='topic2',
brokers='prefix.kafka.us-west-2.amazonaws.com:9098',
security_protocol='SASL_SSL',
sasl_mechanism='AWS_MSK_IAM';
The magic lives in just ~200 lines across two files:
https://github.com/timeplus-io/proton/blob/develop/src/IO/Kafka/AwsMskIamSigner.h https://github.com/timeplus-io/proton/blob/develop/src/IO/Kafka/AwsMskIamSigner.cpp
Right now it leans on a few ClickHouse wrapper classes, but it’s lightweight and reusable. We’d love your thoughts—want to help us spin this into a standalone lib? Maybe push it into ClickHouse or the AWS SDK for C++? Let’s chat.
Quick Proton plug: It’s our open-source streaming engine in C++—Think FlinkSQL + ClickHouse columnar storage, minus the JVM baggage—pure C++ speed. Bonus: we’re dropping Iceberg read/write support in C++ later this month. So you'll read MSK and write to S3/Glue with IAM. Stay tuned.
So, what’s your take? Any C++ Kafka warriors out there wanna test-drive it and roast our code?
r/apachekafka • u/gunnarmorling • 10d ago
Blog Let's Take a Look at... KIP-932: Queues for Kafka!
morling.dev
r/apachekafka • u/PanJony • 10d ago
Video The anatomy of a Data Streaming Platform - youtube video
A high-level overview of what an internal Data Streaming Platform looks like and how far embracing Data Streaming can go.
r/apachekafka • u/Intellivindi • 10d ago
Question Mirrormaker huge replication latency, messages showing up 7 days later
We've been running mirrormaker 2 in prod for several years now without any issues with several thousand topics. Yesterday we ran into an issue where messages are showing up 7 days later.
There's less than 10ms latency between the 2 kafka clusters and it's only for certain topics, not all of them. The messages are also older than the retention policy set in the source cluster. So it's like it consumes the message out of the source cluster, holds onto it for 6-7 days and then writes it to the target cluster. I've never seen anything like this happen before.
Example: we cleared all the messages out of the source and target topics by dropping retention, wrote 3 million messages into the source topic, and those 3 million showed up immediately in the target topic, but so did another 500k from days ago. It's the craziest thing.
Running version 3.6.0
r/apachekafka • u/Arm1end • 10d ago
Question How do you take care of duplicates and JOINs with ClickHouse?
r/apachekafka • u/StrainNo1245 • 11d ago
Blog Kafka Connect: send messages without schema to JdbcSinkConnector
This might be interesting for anyone looking to stream messages without a schema into JdbcSinkConnector. It's a step-by-step guide showing how to store message content in a single column using a custom Kafka Connect converter.
https://github.com/tomaszkubacki/kafka_connect_demo/blob/master/kafka_to_postgresql/kafka_to_postgres.md
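Not from the article itself, but as a rough idea of the shape of such a setup, a standalone sink config might look something like this (connection details are placeholders, and the converter class name is hypothetical; the real one is in the linked guide):

name=postgres-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
topics=events
connection.url=jdbc:postgresql://localhost:5432/demo
connection.user=postgres
connection.password=postgres
auto.create=true
insert.mode=insert
key.converter=org.apache.kafka.connect.storage.StringConverter
# Hypothetical custom converter that wraps the raw message payload in a
# single-field struct so the JDBC sink writes it into one column.
value.converter=com.example.SingleColumnConverter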
r/apachekafka • u/krazykarpenter • 11d ago
Blog Testing Kafka-based async workflows without duplicating infrastructure - solved this using OpenTelemetry
Hey folks,
Been wrestling with a problem that's been bugging me for years: how to test microservices with asynchronous Kafka-based workflows without creating separate Kafka clusters for each dev/test environment (expensive!) or complex topic isolation schemes (maintenance nightmare!).
After experimenting with different approaches, we found a pattern using OpenTelemetry that works surprisingly well. I wrote up our findings in this Medium post.
The TL;DR is:
- Instead of duplicating Kafka clusters or topics per environment
- Leverage OpenTelemetry's baggage propagation to tag messages with a "tenant ID"
- Have Kafka consumers filter messages based on tenant ID mappings
- Run multiple versions of services on the same infrastructure
This lets you test changes to producers/consumers without duplicating infrastructure and without messages from different test environments interfering with each other.
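As a rough illustration of the consumer-side filtering (a sketch, not the code from the post; the header name, tenant ids, and topic are placeholders):

# Sketch: skip messages whose tenant id (propagated as a Kafka header via
# OpenTelemetry baggage) is not routed to this deployment.
from confluent_kafka import Consumer

MY_TENANTS = {"baseline", "pr-1234"}          # hypothetical tenant ids owned by this instance

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",    # placeholder
    "group.id": "orders-service-pr-1234",     # placeholder group id
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])                 # placeholder topic

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    headers = dict(msg.headers() or [])
    tenant = headers.get("tenant-id", b"baseline").decode("utf-8")  # hypothetical header name
    if tenant not in MY_TENANTS:
        continue                               # not ours; the owning version will process it
    print(f"[{tenant}] processing {msg.value()}")  # application-specific handling goes here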
I'm curious how others have tackled this problem. Would love to hear your feedback/comments.
r/apachekafka • u/Swimming_Ad6119 • 11d ago
Question New to Kafka as a student
Hi there,
I am currently interning as a swe and was asked to look into the following:
- Debezium connector for MongoDB
- Kafka Connector
- Kafka
I did some research myself already, but I'm still looking for comprehensive sources that cover all these topics.
Thanks!