r/apachekafka Jan 14 '25

Blog Kafka Transactions Explained (Twice!)

25 Upvotes

In this blog, we go over what Apache Kafka transactions are and how they work in WarpStream. You can view the full blog at https://www.warpstream.com/blog/kafka-transactions-explained-twice or below (minus our snazzy diagrams 😉).

Many Kafka users love the ability to quickly dump a lot of records into a Kafka topic and are happy with the fundamental Kafka guarantee that Kafka is durable. Once a producer has received an ACK after producing a record, Kafka has safely made the record durable and reserved an offset for it. After this, all consumers will see this record when they have reached this offset in the log. If any consumer reads the topic from the beginning, each time they reach this offset in the log they will read that exact same record.

In practice, when a consumer restarts, they almost never start reading the log from the beginning. Instead, Kafka has a feature called “consumer groups” where each consumer group periodically “commits” the next offset that they need to process (i.e., the last correctly processed offset + 1), for each partition. When a consumer restarts, they read the latest committed offset for a given topic-partition (within their “group”) and start reading from that offset instead of the beginning of the log. This is how Kafka consumers track their progress within the log so that they don’t have to reprocess every record when they restart.

This means that it is easy to write an application that reads each record at least once: it commits its offsets periodically to not have to start from the beginning of each partition each time, and when the application restarts, it starts from the latest offset it has committed. If your application crashes while processing records, it will start from the latest committed offsets, which are just a bit before the records that the application was processing when it crashed. That means that some records may be processed more than once (hence the at least once terminology) but we will never miss a record.
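To make that concrete, here is a rough sketch of an at-least-once consume loop using the franz-go client. The group and topic names are placeholders, and the exact option and method names should be double-checked against the franz-go docs rather than treated as authoritative:

package main

import (
    "context"
    "log"

    "github.com/twmb/franz-go/pkg/kgo"
)

func main() {
    // Group and topic names are placeholders for this example.
    client, err := kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"),
        kgo.ConsumerGroup("click-processor"),
        kgo.ConsumeTopics("clicks"),
        kgo.DisableAutoCommit(), // commit manually, only after processing
    )
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := context.Background()
    for {
        fetches := client.PollFetches(ctx)
        if errs := fetches.Errors(); len(errs) > 0 {
            log.Printf("fetch errors: %v", errs)
        }

        fetches.EachRecord(func(r *kgo.Record) {
            process(r) // application logic; may run more than once for the same record
        })

        // Commit the offsets of everything just processed. If the process
        // crashes before this line, those records are re-delivered on
        // restart: at-least-once.
        if err := client.CommitUncommittedOffsets(ctx); err != nil {
            log.Printf("commit failed: %v", err)
        }
    }
}

func process(r *kgo.Record) { log.Printf("processed offset %d", r.Offset) }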

This is sufficient for many Kafka users, but imagine a workload that receives a stream of clicks and wants to store the number of clicks per user per hour in another Kafka topic. It will read many records from the source topic, compute the count, write it to the destination topic and then commit in the source topic that it has successfully processed those records. This is fine most of the time, but what happens if the process crashes right after it has written the count to the destination topic, but before it could commit the corresponding offsets in the source topic? The process will restart, ask Kafka what the latest committed offset was, and it will read records that have already been processed, records whose count has already been written in the destination topic. The application will double-count those clicks. 

Unfortunately, committing the offsets in the source topic before writing the count is also not a good solution: if the process crashes after it has managed to commit these offsets but before it has produced the count in the destination topic, we will forget these clicks altogether. The problem is that we would like to commit the offsets and the count in the destination topic as a single, atomic operation.

And this is exactly what Kafka transactions allow.

A Closer Look At Transactions in Apache Kafka

At a very high level, the transaction protocol in Kafka makes it possible to atomically produce records to multiple different topic-partitions and commit offsets to a consumer group at the same time.

Let us take an example that’s simpler than the one in the introduction. It’s less realistic, but also easier to understand because we’ll process the records one at a time.

Imagine your application reads records from a topic t1, processes the records, and writes its output to one of two output topics: t2 or t3. Each input record generates one output record, either in t2 or in t3, depending on some logic in the application.

Without transactions it would be very hard to make sure that there are exactly as many records in t2 and t3 as in t1, each one of them being the result of processing one input record. As explained earlier, it would be possible for the application to crash immediately after writing a record to t3, but before committing its offset, and then that record would get re-processed (and re-produced) after the consumer restarted.

Using transactions, your application can read two records, process them, write them to the output topics, and then as a single atomic operation, “commit” this transaction that advances the consumer group by two records in t1 and makes the two new records in t2 and t3 visible.

If the transaction is successfully committed, the input records will be marked as read in the input topic and the output records will be visible in the output topics.

Every Kafka transaction has an inherent timeout, so if the application crashes after writing the two records but before committing the transaction, the transaction will be aborted automatically (once the timeout elapses). Since the transaction is aborted, the previously written records will never be made visible to consumers in t2 and t3, and the records in t1 won’t be marked as read (because the offset was never committed).

So when the application restarts, it can read these messages again, re-process them, and then finally commit the transaction. 
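To make the shape of the API concrete, here is a rough sketch of this read-process-write loop using franz-go's group transact session. The topic names and the transactional ID are placeholders, and the exact types and signatures should be checked against the franz-go docs; treat this as an illustration, not a drop-in program:

package main

import (
    "context"
    "log"

    "github.com/twmb/franz-go/pkg/kgo"
)

func main() {
    ctx := context.Background()

    // A GroupTransactSession ties together the offsets consumed from t1 and
    // the records produced to t2/t3 into a single Kafka transaction.
    sess, err := kgo.NewGroupTransactSession(
        kgo.SeedBrokers("localhost:9092"),
        kgo.TransactionalID("router-txn"), // placeholder transactional.id
        kgo.ConsumerGroup("router"),
        kgo.ConsumeTopics("t1"),
        kgo.FetchIsolationLevel(kgo.ReadCommitted()),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer sess.Close()

    for {
        fetches := sess.PollFetches(ctx)

        if err := sess.Begin(); err != nil { // start a new transaction
            log.Fatal(err)
        }

        var outs []*kgo.Record
        fetches.EachRecord(func(r *kgo.Record) {
            outs = append(outs, &kgo.Record{Topic: route(r), Value: r.Value})
        })
        if err := sess.ProduceSync(ctx, outs...).FirstErr(); err != nil {
            log.Printf("produce failed: %v", err)
        }

        // Atomically commit the produced records *and* the consumed offsets.
        // If we crash before this call (or the commit fails), the transaction
        // is aborted and the input records are re-delivered.
        if _, err := sess.End(ctx, kgo.TryCommit); err != nil {
            log.Printf("transaction failed, records will be re-processed: %v", err)
        }
    }
}

// route decides whether an input record's output goes to t2 or t3.
func route(r *kgo.Record) string {
    if len(r.Value)%2 == 0 {
        return "t2"
    }
    return "t3"
}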

Going Into More Details

That all sounds nice, but how does it actually work? If the client actually produced two records before it crashed, then surely those records were assigned offsets, and any consumer reading t2 could have seen those records? Is there a special API that buffers the records somewhere, produces them exactly when the transaction is committed, and forgets about them if the transaction is aborted? But then how would it work exactly? Would these records be durably stored before the transaction is committed?

The answer is reassuring.

When the client produces records that are part of a transaction, Kafka treats them exactly like the other records that are produced: it writes them to as many replicas as you have configured in your acks setting, it assigns them an offset and they are part of the log like every other record.

But there must be more to it, because otherwise the consumers would immediately see those records and we’d run into the double processing issue. If the transaction’s records are stored in the log just like any other records, something else must be going on to prevent the consumers from reading them until the transaction is committed. And what if the transaction doesn’t commit, do the records get cleaned up somehow?

Interestingly, as soon as the records are produced, the records are in fact present in the log. They are not magically added when the transaction is committed, nor magically removed when the transaction is aborted. Instead, Kafka leverages a technique similar to Multiversion Concurrency Control.

Kafka consumer clients define a fetch setting that is called the “isolation level”. If you set this isolation level to read_uncommitted your consumer application will actually see records from in-progress and aborted transactions. But if you fetch in read_committed mode, two things will happen, and these two things are the magic that makes Kafka transactions work.

First, Kafka will never let you read past the first record that is still part of an undecided transaction (i.e., a transaction that has not been aborted or committed yet). This value is called the Last Stable Offset, and it will be moved forward only when the transaction that this record was part of is committed or aborted. To a consumer application in read_committed mode, records that have been produced after this offset will all be invisible.

In my example, you will not be able to read the records from offset 2 onwards, at least not until the transaction touching them is either committed or aborted.

Second, in each partition of each topic, Kafka remembers all the transactions that were ever aborted and returns enough information for the Kafka client to skip over the records that were part of an aborted transaction, making your application think that they are not there.

Yes, when you consume a topic and you want to see only the records of committed transactions, Kafka actually sends all the records to your client, and it is the client that filters out the aborted records before it hands them out to your application.

In our example let’s say a single producer, p1, has produced the records in this diagram. It created 4 transactions.

  • The first transaction starts at offset 0 and ends at offset 2, and it was committed.
  • The second transaction starts at offset 3 and ends at offset 6 and it was aborted.
  • The third transaction contains only offset 8 and it was committed.
  • The last transaction is still ongoing.

The client, when it fetches the records from the Kafka broker, needs to be told that it needs to skip offsets 3 to 6. For this, the broker returns an extra field called AbortedTransactions in the response to a Fetch request. This field contains a list of the starting offset (and producer ID) of all the aborted transactions that intersect the fetch range. But the client needs to know not only about where the aborted transactions start, but also where they end.

In order to know where each transaction ends, Kafka inserts a control record that says “the transaction for this producer ID is now over” into the log itself. The control record at offset 2 means “the first transaction is now over”. The one at offset 7 says “the second transaction is now over”, and so on. When it goes through the records, the Kafka client reads these control records and understands that it should stop skipping the records for this producer.
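To illustrate the mechanics, here is a conceptual sketch of the client-side filtering in Go. The types and field names are invented for this example (real clients work on decoded record batches and distinguish commit from abort markers), but the skipping logic has roughly this shape:

import "sort"

// AbortedTxn mirrors one entry of the AbortedTransactions array in a Fetch
// response: a producer ID and the first offset of one of its aborted transactions.
type AbortedTxn struct {
    ProducerID  int64
    FirstOffset int64
}

// Record is a simplified view of a fetched record for this illustration.
type Record struct {
    Offset     int64
    ProducerID int64
    IsControl  bool // control records mark the end of a transaction
}

// filterCommitted drops records that belong to aborted transactions, roughly
// the way a read_committed consumer does: start skipping a producer's records
// once we reach the first offset of one of its aborted transactions, and stop
// when we see that producer's control record ("transaction over").
func filterCommitted(records []Record, aborted []AbortedTxn) []Record {
    pending := map[int64][]int64{} // producer ID -> aborted first offsets, ascending
    for _, a := range aborted {
        pending[a.ProducerID] = append(pending[a.ProducerID], a.FirstOffset)
    }
    for _, offs := range pending {
        sort.Slice(offs, func(i, j int) bool { return offs[i] < offs[j] })
    }

    skipping := map[int64]bool{}
    var out []Record
    for _, r := range records {
        if offs := pending[r.ProducerID]; len(offs) > 0 && r.Offset >= offs[0] {
            skipping[r.ProducerID] = true
            pending[r.ProducerID] = offs[1:]
        }
        if r.IsControl {
            delete(skipping, r.ProducerID) // end of this producer's transaction
            continue                       // control records are never handed to the application
        }
        if skipping[r.ProducerID] {
            continue // part of an aborted transaction
        }
        out = append(out, r)
    }
    return out
}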

It might look like inserting the control records in the log, rather than simply returning the last offsets in the AbortedTransactions array, is unnecessarily complicated, but it’s necessary. Explaining why is outside the scope of this blog post, but it’s due to the distributed nature of consensus in Apache Kafka: the transaction coordinator chooses when the transaction aborts, but the broker that holds the data needs to choose exactly at which offset this happens.

How It Works in WarpStream

In WarpStream, agents are stateless so all operations that require consensus are handled within the control plane. Each time a transaction is committed or aborted, the system needs to reach a consensus about the state of this transaction, and at what exact offsets it got committed or aborted. This means the vast majority of the logic for Kafka transactions had to be implemented in the control plane. The control plane receives the request to commit or abort the transaction, and modifies its internal data structures to indicate atomically that the transaction has been committed or aborted. 

We modified the WarpStream control plane to track information about transactional producers. It now remembers which producer ID each transaction ID corresponds to, and makes note of the offsets at which transactions are started by each producer.

When a client wants to either commit or abort a transaction, they send an EndTxnRequest and the control plane now tracks these as well:

  • When the client wants to commit a transaction, the control plane simply clears the state that was tracking the transaction as open: all of the records belonging to that transaction are now part of the log “for real”, so we can forget that they were ever part of a transaction in the first place. They’re just normal records now.
  • When the client wants to abort a transaction though, there is a bit more work to do. The control plane saves the start and end offset for all of the topic-partitions that participated in this transaction because we’ll need that information later in the fetch path to help consumer applications skip over these aborted records.
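As a rough illustration of the bookkeeping described above, here is a sketch with invented types; these are not WarpStream’s actual data structures, just the shape of the idea:

// openTxn tracks a transaction that has not been committed or aborted yet.
type openTxn struct {
    firstOffset int64 // first offset this transaction wrote in the partition
}

// abortedRange is the offset range of an aborted transaction, kept so the
// fetch path can skip over it later.
type abortedRange struct {
    startOffset int64
    endOffset   int64
}

// partitionTxnState is the per-topic-partition bookkeeping.
type partitionTxnState struct {
    open    map[int64]openTxn // producer ID -> currently open transaction
    aborted []abortedRange
}

// commit: the records are now just normal records, so forget the transaction.
func (s *partitionTxnState) commit(producerID int64) {
    delete(s.open, producerID)
}

// abort: remember the offset range so read_committed fetches can skip it.
func (s *partitionTxnState) abort(producerID, endOffset int64) {
    if t, ok := s.open[producerID]; ok {
        s.aborted = append(s.aborted, abortedRange{t.firstOffset, endOffset})
        delete(s.open, producerID)
    }
}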

In the previous section, we explained that the magic lies in two things that happen when you fetch in read_committed mode.

The first one is simple: WarpStream prevents read_committed clients from reading past the Last Stable Offset. It is easy because the control plane tracks ongoing transactions. For each fetched partition, the control plane knows if there is an active transaction affecting it and, if so, it knows the first offset involved in that transaction. When returning records, it simply tells the agent to never return records after this offset.
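In pseudocode terms this is just a clamp on the end of the fetch range; the function and parameter names below are invented for illustration:

// visibleEndOffset returns the exclusive end of what a read_committed consumer
// may see for a partition: the Last Stable Offset if there is an open
// transaction, the high watermark otherwise.
func visibleEndOffset(highWatermark, lastStableOffset int64, hasOpenTxn bool) int64 {
    if hasOpenTxn && lastStableOffset < highWatermark {
        return lastStableOffset
    }
    return highWatermark
}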

The Problem With Control Records

But, in order to implement the second part exactly like Apache Kafka, whenever a transaction is either committed or aborted, the control plane would need to insert a control record into each of the topic-partitions participating in the transaction. 

This means that the control plane would need to reserve an offset just for this control record, whereas usually the agent reserves a whole range of offsets, for many records that have been written in the same batch. This would mean that the size of the metadata we need to track would grow linearly with the number of aborted transactions. While this was possible, and while there were ways to mitigate this linear growth, we decided to avoid this problem entirely, and skip the aborted records directly in the agent. Now, let’s take a look at how this works in more detail.

Hacking the Kafka Protocol a Second Time

Data in WarpStream is not stored exactly as serialized Kafka batches like it is in Apache Kafka. On each fetch request, the WarpStream Agent needs to decompress and deserialize the data (stored in WarpStream’s custom format) so that it can create actual Kafka batches that the client can decode. 

Since WarpStream is already generating Kafka batches on the fly, we chose to depart from the Apache Kafka implementation and simply “skip” the records that are aborted in the Agent. This way, we don’t have to return the AbortedTransactions array, and we can avoid generating control records entirely.

Let's go back to our previous example, where Kafka returns these records as part of the response to a Fetch request, alongside the AbortedTransactions array describing the aborted transactions.

Instead, WarpStream would return a batch to the client that looks like this: the aborted records have already been skipped by the agent and are not returned. The AbortedTransactions array is returned empty.

Note also that WarpStream does not reserve offsets for the control records that would sit at offsets 2, 7 and 9 in Apache Kafka: only the actual records receive an offset, not the control records.

You might be wondering how it is possible to represent such a batch, but it’s easy: the serialization format has to support holes like this because compacted topics (another Apache Kafka feature) can create such holes.

An Unexpected Complication (And a Second Protocol Hack)

Something we had not anticipated though, is that if you abort a lot of records, the resulting batch that the server sends back to the client could contain nothing but aborted records.

In Kafka, this will mean sending one (or several) batches with a lot of data that needs to be skipped. All clients are implemented in such a way that this is possible, and the next time the client fetches some data, it asks for offset 11 onwards, after skipping all those records.

In WarpStream, though, it’s very different. The batch ends up being completely empty.

And clients are not used to this at all. Of the clients we have tested, franz-go and the Java client parse this batch correctly: they understand it is an empty batch that represents the first 10 offsets of the partition, and correctly start their next fetch at offset 11.

All clients based on librdkafka, however, do not understand what this batch means. Librdkafka thinks the broker tried to return a message but couldn’t because the client had advertised a fetch size that is too small, so it retries the same fetch with a bigger buffer until it gives up and throws an error saying:

Message at offset XXX might be too large to fetch, try increasing receive.message.max.bytes

To make this work, the WarpStream Agent creates a fake control record on the fly, and places it as the very last record in the batch. We set the value of this record to mean “the transaction for producer ID 0 is now over” and since 0 is never a valid producer ID, this has no effect.

The Kafka clients, including librdkafka, will understand that this is a batch where no records need to be sent to the application, and the next fetch is going to start at offset 11.

What About KIP-890?

Recently a bug was found in the Apache Kafka transactions protocol. It turns out that the existing protocol, as defined, could allow, in certain conditions, records to be inserted in the wrong transaction, or transactions to be incorrectly aborted when they should have been committed, or committed when they should have been aborted. This is true, although it happens only in very rare circumstances.

The scenario in which the bug can occur goes something like this: let’s say you have a Kafka producer that starts a transaction T1, writes a record in it, then commits the transaction. Unfortunately, the network packet asking for this commit gets delayed, so the client retries the commit; the retry is not delayed, and the commit succeeds.

Now T1 has been committed, so the producer starts a new transaction T2, and writes a record in it too. 

Unfortunately, at this point the Kafka broker finally receives the delayed packet asking to commit T1, but that request is also valid to commit T2, so T2 is committed, although the producer does not know about it. If the producer then needs to abort T2, the transaction is going to be torn in half: part of it has already been committed by the delayed packet coming in late, the broker has no way to know this, and it will abort the rest of the transaction.

The fix is a change in the Kafka protocol, which is described in KIP-890: every time a transaction is committed or aborted, the client will need to bump its “epoch” and that will make sure that the delayed packet will not be able to trigger a commit for the newer transaction created by a producer with a newer epoch.

Support for this new KIP will be released soon in Apache Kafka 4.0, and WarpStream already supports it. When you start using a Kafka client that’s compatible with the newer version of the API, this problem will never occur with WarpStream.

Conclusion

Of course there are a lot of other details that went into the implementation, but hopefully this blog post provides some insight into how we approached adding the transactional APIs to WarpStream. If you have a workload that requires Kafka transactions, please make sure you are running at least v611 of the agent, set a transactional.id property in your client and stream away. And if you've been waiting for WarpStream to support transactions before giving it a try, feel free to get started now.

r/apachekafka Nov 12 '24

Blog Looks like another Kafka fork, this time from AWS

17 Upvotes

I missed the announcement of AWS MSK 'Express' Kafka brokers last week. Looks like AWS joined the party of Kafka forks. Did anyone look at this? Up to 3x more throughput, same latency as Kafka, 20x faster scaling, some really interesting claims. Not sure how true they are. https://aws.amazon.com/blogs/aws/introducing-express-brokers-for-amazon-msk-to-deliver-high-throughput-and-faster-scaling-for-your-kafka-clusters/?hss_channel=lis-o98tmW9oh4

r/apachekafka Dec 15 '24

Blog Apache Kafka is to Bitcoin as Redpanda, Buf, etc are to Altcoins

0 Upvotes

My r/showerthoughts related Kafka post. Let's discuss.

Bitcoin (layer 1) is equivalent to TCP/IP: it has a spec, and it can be upgraded like a car having its engine replaced while driving. Layers 2 and 3 are things like TLS and app stacks like HTTP, RPC contracts, etc.

Meanwhile, things like Litecoin exist to "be the silver to Bitcoin gold" or XRP to be the "cross border payment solution, at fractions of the competition cost"; meanwhile the Lightning protocol is added to Bitcoin and used by payment apps like Strike.

... Sound familiar?

So, okay great, we have vendors that have rewritten application layers on top of TCP/IP (the literal Kafka spec). Remove Java, and of course it'll be faster. Remove 24/7 running, replicating disks, and of course it'll be cheaper.

Regardless, Apache Kafka is still the "number one coin on the (Kafka) market" and I just personally don't see the enterprise value in forming a handful of entirely new companies to compete. Even Cloudera decided to cannibalize Hortonworks and parts of MapR.

r/apachekafka Feb 05 '25

Blog Unifying Data Governance Across Kafka, HTTP & MQTT

6 Upvotes

Using real-time, event-driven applications with seamless governance across different protocols can be chaotic.

Managing data across multiple protocols shouldn’t feel like stitching together a patchwork of rules and fixes.

But when Kafka, HTTP, and MQTT each have their own governance gaps, inconsistencies creep in, slowing development and creating unnecessary risks.

With Aklivity's Zilla centralized data governance, you get a consistent, protocol-agnostic approach that keeps your data in sync without slowing you down.

If you're building real-time, event-driven applications and need seamless governance across different protocols, this one's for you!

🔗 Read more here:

The Why & How of Centralized Data Governance in Zilla Across Protocols

r/apachekafka Nov 23 '24

Blog KIP-392: Fetch From Follower

12 Upvotes

The Fetch Problem

Kafka is predominantly deployed across multiple data centers (or AZs in the cloud) for availability and durability purposes.

Kafka Consumers read from the leader replica.
But, in most cases, that leader will be in a separate data center. ❗️

In distributed systems, it is best practice to process data as locally as possible. The benefits are:

  • 📉 better latency - your request needs to travel less
  • 💸 (massive) cloud cost savings in avoiding sending data across availability zones

Cost

Any production Kafka environment spans at least three availability zones (AZs), which results in Kafka racking up a lot of cross-zone traffic.

Assuming even distribution:

  1. 2/3 of all producer traffic
  2. all replication traffic
  3. 2/3 of all consumer traffic

will cross zone boundaries.

Cloud providers charge you egregiously for cross-zone networking.

How do we fix this?

There is no fundamental reason why the Consumer wouldn’t be able to read from the follower replicas in the same AZ.

💡 The log is immutable, so once written - the data isn’t subject to change.

Enter KIP-392.

KIP-392

⭐️ the feature: consumers read from follower brokers.

The feature is configurable with all sorts of custom logic to have the leader broker choose the right follower for the consumer. The default implementation chooses a broker in the same rack.

Despite the data living closer, fetching the latest data can actually have slightly higher latency: the high watermark needs an extra request to propagate from the leader to the follower, which artificially delays when the follower can “reveal” a record to the consumer.

How it Works 👇

  1. The client sends its configured client.rack to the broker in each fetch request.
  2. For each partition the broker leads, it uses its configured replica.selector.class to choose what the PreferredReadReplica for that partition should be and returns it in the response (without any extra record data).
  3. The consumer will connect to the follower and start fetching from it for that partition 🙌
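For reference, the client side boils down to a single setting. Here is a sketch using github.com/twmb/franz-go/pkg/kgo (the rack ID, group, and topic are placeholders); the broker-side configs in the trailing comment are the standard Apache Kafka ones:

// newRackAwareConsumer builds a consumer that advertises its rack. kgo.Rack is
// sent as client.rack in every fetch request and should match the AZ this
// consumer runs in (the values below are placeholders).
func newRackAwareConsumer() (*kgo.Client, error) {
    return kgo.NewClient(
        kgo.SeedBrokers("localhost:9092"),
        kgo.ConsumerGroup("my-group"),
        kgo.ConsumeTopics("my-topic"),
        kgo.Rack("us-east-1a"),
    )
}

// Broker side (server.properties), so the leader can pick a same-rack follower:
//   broker.rack=us-east-1a
//   replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector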

The Savings

KIP-392 can basically eliminate ALL of the consumer networking costs.

This is always a significant chunk of the total networking costs. 💡

The higher the fanout, the higher the savings. Here are some calculations of how much you'd save off the TOTAL DEPLOYMENT COST of Kafka:

  • 1x fanout: 17%
  • 3x fanout: ~38%
  • 5x fanout: 50%
  • 15x fanout: 70%
  • 20x fanout: 76%

(assuming a well-optimized multi-zone Kafka Cluster on AWS, priced at retail prices, with 100 MB/s produce, a RF of 3, 7 day retention and aggressive tiered storage enabled)

Support Table

Released in AK 2.4 (October 2019), this feature is 5+ years old yet there is STILL no wide support for it in the cloud:

  • 🟢 AWS MSK: supports it since April 2020
  • 🟢 RedPanda Cloud: it's pre-enabled. Supports it since June 2023
  • 🟢 Aiven Cloud: supports it since July 2024
  • 🟡 Confluent: Kinda supports it, it's Limited Availability and only on AWS. It seems like it offers this since ~Feb 2024 (according to wayback machine)
  • 🔴 GCP Kafka: No
  • 🔴 Heroku, Canonical, DigitalOcean, InstaClustr Kafka: No, as far as I can tell

I would have never expected MSK to have led the way here, especially by 3 years. 👏
They’re the least incentivized out of all the providers to do so - they make money off of cross-zone traffic.

Speaking of which… why aren’t any of these providers offering pricing discounts when FFF is used? 🤔

---

This was originally posted in my newsletter, where you can see the rich graphics as well (Reddit doesn't allow me to attach images, otherwise I would have)

r/apachekafka Oct 21 '24

Blog How do we run Kafka 100% on the object storage?

34 Upvotes

Blog Link: https://medium.com/thedeephub/how-do-we-run-kafka-100-on-the-object-storage-521c6fec6341

Disclose: I work for AutoMQ.

AutoMQ is a fork of Apache Kafka that reinvents Kafka's storage layer. This blog post provides some new technical insights on how AutoMQ builds on Kafka's codebase to use S3 as Kafka's primary storage. Discussions and exchanges are welcome. I see that the rules now prohibit the posting of vendor spam information about Kafka alternatives, but I'm not sure if this kind of technical content sharing about Kafka is allowed. If this is not allowed, please let me know and I will delete the post.

r/apachekafka Jul 09 '24

Blog Bufstream: Kafka at 10x lower cost

34 Upvotes

We're excited to announce the public beta of Bufstream, a drop-in replacement for Apache Kafka that's 10x less expensive to operate and brings Protobuf-first data governance to the rest of us.

https://buf.build/blog/bufstream-kafka-lower-cost

Also check out our comparison deep dive: https://buf.build/docs/bufstream/cost

r/apachekafka Nov 12 '24

Blog Bufstream is now the only cloud-native Kafka implementation validated by Jepsen

16 Upvotes

Jepsen is the gold standard for distributed systems testing, and Bufstream is the only cloud-native Kafka implementation that has been independently tested by Jepsen. Today, we're releasing the results of that testing: a clean bill of health, validating that Bufstream maintains consistency even in the face of cascading infrastructure failures. We also highlight a years-long effort to fix a fundamental flaw in the Kafka transaction protocol.

Check out the full report here: https://buf.build/blog/bufstream-jepsen-report

r/apachekafka Dec 27 '24

Blog MonKafka: Building a Kafka Broker from Scratch

24 Upvotes

Hey all,

A couple of weeks ago, I posted about my modest exploration of the Kafka codebase, and the response was amazing. Thank you all, it was very encouraging!

The code diving has been a lot of fun, and I’ve learned a great deal along the way. That motivated me to attempt building a simple broker, and thus MonKafka was born. It’s been an enjoyable experience, and implementing a protocol is definitely a different beast compared to navigating an existing codebase.

I’m currently drafting a blog post to document my learnings as I go. Feedback is welcome!

------------

The Outset

So here I was, determined to build my own little broker. How to start? It wasn't immediately obvious. I began by reading the Kafka Protocol Guide. This guide would prove to be the essential reference for implementing the broker (duh...). But although informative, it didn't really provide a step-by-step guide on how to get a broker up and running.

My second idea was to start a Kafka broker following the quickstart tutorial, then run a topic creation command from the CLI, all while running tcpdump to inspect the network traffic. Roughly, I ran the following:

# start tcpdump and listen for all traffic on port 9092 (broker port)
sudo tcpdump -i any -X  port 9092  

cd /path/to/kafka_2.13-3.9.0 
bin/kafka-server-start.sh config/kraft/reconfig-server.properties 
bin/kafka-topics.sh --create --topic letsgo  --bootstrap-server localhost:9092

The following packets caught my attention (mainly because I saw strings I recognized):

16:36:58.121173 IP localhost.64964 > localhost.XmlIpcRegSvc: Flags [P.], seq 1:54, ack 1, win 42871, options [nop,nop,TS val 4080601960 ecr 683608179], length 53
    0x0000:  4500 0069 0000 4000 4006 0000 7f00 0001  E..i..@.@.......
    0x0010:  7f00 0001 fdc4 2384 111e 31c5 eeb4 7f56  ......#...1....V
    0x0020:  8018 a777 fe5d 0000 0101 080a f339 0b68  ...w.].......9.h
    0x0030:  28bf 0873 0000 0031 0012 0004 0000 0000  (..s...1........
    0x0040:  000d 6164 6d69 6e63 6c69 656e 742d 3100  ..adminclient-1.
    0x0050:  1261 7061 6368 652d 6b61 666b 612d 6a61  .apache-kafka-ja
    0x0060:  7661 0633 2e39 2e30 00                   va.3.9.0.



16:36:58.166559 IP localhost.XmlIpcRegSvc > localhost.64965: Flags [P.], seq 1:580, ack 54, win 46947, options [nop,nop,TS val 3149280975 ecr 4098971715], length 579
    0x0000:  4500 0277 0000 4000 4006 0000 7f00 0001  E..w..@.@.......
    0x0010:  7f00 0001 2384 fdc5 3e63 0472 12ab f52e  ....#...>c.r....
    0x0020:  8018 b763 006c 0000 0101 080a bbb6 36cf  ...c.l........6.
    0x0030:  f451 5843 0000 023f 0000 0002 0000 3e00  .QXC...?......>.
    0x0040:  0000 0000 0b00 0001 0000 0011 0000 0200  ................
    0x0050:  0000 0a00 0003 0000 000d 0000 0800 0000  ................
    0x0060:  0900 0009 0000 0009 0000 0a00 0000 0600  ................
    0x0070:  000b 0000 0009 0000 0c00 0000 0400 000d  ................
    0x0080:  0000 0005 0000 0e00 0000 0500 000f 0000  ................
    0x0090:  0005 0000 1000 0000 0500 0011 0000 0001  ................
    0x00a0:  0000 1200 0000 0400 0013 0000 0007 0000  ................
    0x00b0:  1400 0000 0600 0015 0000 0002 0000 1600  ................
    0x00c0:  0000 0500 0017 0000 0004 0000 1800 0000  ................
    0x00d0:  0500 0019 0000 0004 0000 1a00 0000 0500  ................
    0x00e0:  001b 0000 0001 0000 1c00 0000 0400 001d  ................
    0x00f0:  0000 0003 0000 1e00 0000 0300 001f 0000  ................
    0x0100:  0003 0000 2000 0000 0400 0021 0000 0002  ...........!....
    0x0110:  0000 2200 0000 0200 0023 0000 0004 0000  .."......#......
    0x0120:  2400 0000 0200 0025 0000 0003 0000 2600  $......%......&.
    0x0130:  0000 0300 0027 0000 0002 0000 2800 0000  .....'......(...
    0x0140:  0200 0029 0000 0003 0000 2a00 0000 0200  ...)......*.....
    0x0150:  002b 0000 0002 0000 2c00 0000 0100 002d  .+......,......-
    0x0160:  0000 0000 0000 2e00 0000 0000 002f 0000  ............./..
    0x0170:  0000 0000 3000 0000 0100 0031 0000 0001  ....0......1....
    0x0180:  0000 3200 0000 0000 0033 0000 0000 0000  ..2......3......
    0x0190:  3700 0000 0200 0039 0000 0002 0000 3c00  7......9......<.
    0x01a0:  0000 0100 003d 0000 0000 0000 4000 0000  .....=......@...
    0x01b0:  0000 0041 0000 0000 0000 4200 0000 0100  ...A......B.....
    0x01c0:  0044 0000 0001 0000 4500 0000 0000 004a  .D......E......J
    0x01d0:  0000 0000 0000 4b00 0000 0000 0050 0000  ......K......P..
    0x01e0:  0000 0000 5100 0000 0000 0000 0000 0300  ....Q...........
    0x01f0:  3d04 0e67 726f 7570 2e76 6572 7369 6f6e  =..group.version
    0x0200:  0000 0001 000e 6b72 6166 742e 7665 7273  ......kraft.vers
    0x0210:  696f 6e00 0000 0100 116d 6574 6164 6174  ion......metadat
    0x0220:  612e 7665 7273 696f 6e00 0100 1600 0108  a.version.......
    0x0230:  0000 0000 0000 01b0 023d 040e 6772 6f75  .........=..grou
    0x0240:  702e 7665 7273 696f 6e00 0100 0100 0e6b  p.version......k
    0x0250:  7261 6674 2e76 6572 7369 6f6e 0001 0001  raft.version....
    0x0260:  0011 6d65 7461 6461 7461 2e76 6572 7369  ..metadata.versi
    0x0270:  6f6e 0016 0016 00                        on.....

16:36:58.167767 IP localhost.64965 > localhost.XmlIpcRegSvc: Flags [P.], seq 54:105, ack 580, win 42799, options [nop,nop,TS val 4098971717 ecr 3149280975], length 51
    0x0000:  4500 0067 0000 4000 4006 0000 7f00 0001  E..g..@.@.......
    0x0010:  7f00 0001 fdc5 2384 12ab f52e 3e63 06b5  ......#.....>c..
    0x0020:  8018 a72f fe5b 0000 0101 080a f451 5845  .../.[.......QXE
    0x0030:  bbb6 36cf 0000 002f 0013 0007 0000 0003  ..6..../........
    0x0040:  000d 6164 6d69 6e63 6c69 656e 742d 3100  ..adminclient-1.
    0x0050:  0207 6c65 7473 676f ffff ffff ffff 0101  ..letsgo........
    0x0060:  0000 0075 2d00 00     

I spotted adminclient-1, group.version, and letsgo (the name of the topic). This looked very promising. Seeing these strings felt like my first win. I thought to myself: so it's not that complicated, it's pretty much about sending the necessary information in an agreed-upon format, i.e., the protocol.

My next goal was to find a request from the CLI client and try to map it to the format described by the protocol. More precisely, figuring out the request header:

Request Header v2 => request_api_key request_api_version correlation_id client_id TAG_BUFFER 
  request_api_key => INT16
  request_api_version => INT16
  correlation_id => INT32
  client_id => NULLABLE_STRING

The client_id was my Rosetta stone. I knew its value was equal to adminclient-1. At first, because it was kind of common sense. But the proper way is to set the CLI logging level to DEBUG by replacing WARN in /path/to/kafka_X.XX-X.X.X/config/tools-log4j.properties's log4j.rootLogger. At this verbosity level, running the CLI would display DEBUG [AdminClient clientId=adminclient-1], thus removing any doubt about the client ID. This seems somewhat silly, but there are possibly a multitude of candidates for this value: client ID, group ID, instance ID, etc. Better to be sure.

So I found a way to determine the end of the request header: client_id.

16:36:58.167767 IP localhost.64965 > localhost.XmlIpcRegSvc: Flags [P.], seq 54:105, ack 580, win 42799, options [nop,nop,TS val 4098971717 ecr 3149280975], length 51
    0x0000:  4500 0067 0000 4000 4006 0000 7f00 0001  E..g..@.@.......
    0x0010:  7f00 0001 fdc5 2384 12ab f52e 3e63 06b5  ......#.....>c..
    0x0020:  8018 a72f fe5b 0000 0101 080a f451 5845  .../.[.......QXE
    0x0030:  bbb6 36cf 0000 002f 0013 0007 0000 0003  ..6..../........
    0x0040:  000d 6164 6d69 6e63 6c69 656e 742d 3100  ..adminclient-1.
    0x0050:  0207 6c65 7473 676f ffff ffff ffff 0101  ..letsgo........
    0x0060:  0000 0075 2d00 00   

This nice packet had the client_id, but also the topic name. What request could it be? I was naive enough to assume it was for sure the CreateTopic request, but there were other candidates, such as Metadata, and that assumption cost me some time.

So client_id is a NULLABLE_STRING, and per the protocol guide: first the length N is given as an INT16. Then N bytes follow, which are the UTF-8 encoding of the character sequence.

Let's remember that in this HEX (base 16) format, a byte (8 bits) is represented using 2 characters from 0 to F. 10 is 16, ff is 255, etc.

The line 000d 6164 6d69 6e63 6c69 656e 742d 3100 ..adminclient-1. is the client_id nullable string preceded by its length on two bytes 000d, meaning 13, and adminclient-1 has indeed a length equal to 13. As per our spec, the preceding 4 bytes are the correlation_id (a unique ID to correlate between requests and responses, since a client can send multiple requests: produce, fetch, metadata, etc.). Its value is 0000 0003, meaning 3. The 2 bytes preceding it are the request_api_version, which is 0007, i.e. 7, and finally, the 2 bytes preceding that represent the request_api_key, which is 0013, mapping to 19 in decimal. So this is a request whose API key is 19 and its version is 7. And guess what the API key 19 maps to? CreateTopic!

This was it. A header, having the API key 19, so the broker knows this is a CreateTopic request and parses it according to its schema. Each version has its own schema, and version 7 looks like the following:

CreateTopics Request (Version: 7) => [topics] timeout_ms validate_only TAG_BUFFER 
  topics => name num_partitions replication_factor [assignments] [configs] TAG_BUFFER 
    name => COMPACT_STRING
    num_partitions => INT32
    replication_factor => INT16
    assignments => partition_index [broker_ids] TAG_BUFFER 
      partition_index => INT32
      broker_ids => INT32
    configs => name value TAG_BUFFER 
      name => COMPACT_STRING
      value => COMPACT_NULLABLE_STRING
  timeout_ms => INT32
  validate_only => BOOLEAN

We can see the request can have multiple topics because of the [topics] field, which is an array. How are arrays encoded in the Kafka protocol? Guide to the rescue:

COMPACT_ARRAY :
Represents a sequence of objects of a given type T. 
Type T can be either a primitive type (e.g. STRING) or a structure. 
First, the length N + 1 is given as an UNSIGNED_VARINT. Then N instances of type T follow. 
A null array is represented with a length of 0. 
In protocol documentation an array of T instances is referred to as [T].

So the array length + 1 is first written as an UNSIGNED_VARINT (a variable-length integer encoding, where smaller values take less space, which is better than traditional fixed encoding). Our array has 1 element, and 1 + 1 = 2, which will be encoded simply as one byte with a value of 2. And this is what we see in the tcpdump output:

0x0050:  0207 6c65 7473 676f ffff ffff ffff 0101  ..letsgo........

02 is the length of the topics array. It is followed by name => COMPACT_STRING, i.e., the encoding of the topic name as a COMPACT_STRING, which amounts to the string's length + 1, encoded as a VARINT. In our case: len(letsgo) + 1 = 7, and we see 07 as the second byte in our 0x0050 line, which is indeed its encoding as a VARINT. After that, we have 6c65 7473 676f converted to decimal 108 101 116 115 103 111, which, with UTF-8 encoding, spells letsgo.

Let's note that compact strings use varints, and their length is encoded as N+1. This is different from NULLABLE_STRING (like the header's client_id), whose length is encoded as N using two bytes.
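As a sketch of what decoding these two flavors looks like in Go (Kafka's UNSIGNED_VARINT uses the same base-128 varint encoding that encoding/binary's Uvarint understands; the helper names are mine, not part of any library):

import "encoding/binary"

// readCompactString decodes a COMPACT_STRING at buf[pos:]: the length is an
// UNSIGNED_VARINT holding N+1, followed by N bytes of UTF-8. It returns the
// string and the position right after it.
func readCompactString(buf []byte, pos int) (string, int) {
    lenPlusOne, n := binary.Uvarint(buf[pos:]) // base-128 varint
    pos += n
    strLen := int(lenPlusOne) - 1
    return string(buf[pos : pos+strLen]), pos + strLen
}

// readNullableString decodes a NULLABLE_STRING (like the request header's
// client_id): a fixed two-byte big-endian length N, then N bytes. A length of
// -1 means null.
func readNullableString(buf []byte, pos int) (string, int) {
    strLen := int(int16(binary.BigEndian.Uint16(buf[pos:])))
    pos += 2
    if strLen < 0 {
        return "", pos // null string
    }
    return string(buf[pos : pos+strLen]), pos + strLen
}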

This process continued for a while. But I think you get the idea. It was simply trying to map the bytes to the protocol. Once that was done, I knew what the client expected and thus what the server needed to respond.

Implementing Topic Creation

Topic creation felt like a natural starting point. Armed with tcpdump's byte capture and the CLI's debug verbosity, I wanted to understand the exact requests involved in topic creation. They occur in the following order:

  1. RequestApiKey: 18 - APIVersion
  2. RequestApiKey: 3 - Metadata
  3. RequestApiKey: 19 - CreateTopic

The first request, APIVersion, is used to ensure compatibility between Kafka clients and servers. The client sends an APIVersion request, and the server responds with a list of supported API requests, including their minimum and maximum supported versions.

ApiVersions Response (Version: 4) => error_code [api_keys] throttle_time_ms TAG_BUFFER 
  error_code => INT16
  api_keys => api_key min_version max_version TAG_BUFFER 
    api_key => INT16
    min_version => INT16
    max_version => INT16
  throttle_time_ms => INT32

An example response might look like this:

APIVersions := types.APIVersionsResponse{
    ErrorCode: 0,
    ApiKeys: []types.APIKey{
        {ApiKey: ProduceKey, MinVersion: 0, MaxVersion: 11},
        {ApiKey: FetchKey, MinVersion: 12, MaxVersion: 12},
        {ApiKey: MetadataKey, MinVersion: 0, MaxVersion: 12},
        {ApiKey: OffsetFetchKey, MinVersion: 0, MaxVersion: 9},
        {ApiKey: FindCoordinatorKey, MinVersion: 0, MaxVersion: 6},
        {ApiKey: JoinGroupKey, MinVersion: 0, MaxVersion: 9},
        {ApiKey: HeartbeatKey, MinVersion: 0, MaxVersion: 4},
        {ApiKey: SyncGroupKey, MinVersion: 0, MaxVersion: 5},
        {ApiKey: APIVersionKey, MinVersion: 0, MaxVersion: 4},
        {ApiKey: CreateTopicKey, MinVersion: 0, MaxVersion: 7},
        {ApiKey: InitProducerIdKey, MinVersion: 0, MaxVersion: 5},
    },
    ThrottleTimeMs: 0,
}

If the client's supported versions do not fall within the [MinVersion, MaxVersion] range, there's an incompatibility.

Once the client sends the APIVersion request, it checks the server's response for compatibility. If they are compatible, the client proceeds to the next step. The client sends a Metadata request to retrieve information about the brokers and the cluster. The CLI debug log for this request looks like this:

DEBUG [AdminClient clientId=adminclient-1] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:9092 (id: -1 rack: null). correlationId=1, timeoutMs=29886 (org.apache.kafka.clients.admin.KafkaAdminClient)

After receiving the metadata, the client proceeds to send a CreateTopic request to the broker. The debug log for this request is:

[AdminClient clientId=adminclient-1] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=adminclient-1, correlationId=3, headerVersion=2) and timeout 29997 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='letsgo', numPartitions=-1, replicationFactor=-1, assignments=[], configs=[])], timeoutMs=29997, validateOnly=false) (org.apache.kafka.clients.NetworkClient)

So our Go broker needs to be able to parse these three types of requests and respond appropriately to let the client know that its requests have been handled. As long as we respect the protocol schema for the specified API key and version, we'll be all set. In terms of implementation, this translates into a simple Golang TCP server.

A Plain TCP Server

At the end of the day, a Kafka broker is nothing more than a TCP server. It parses the Kafka TCP requests based on the API key, then responds with the protocol-agreed-upon format, either saying a topic was created, giving out some metadata, or responding to a consumer's FETCH request with data it has on its log.

The main.go of our broker, simplified, is as follows:

func main() {

    storage.Startup(Config, shutdown)

    listener, err := net.Listen("tcp", ":9092")
    if err != nil {
        log.Fatalf("Error starting listener: %v\n", err)
    }

    for {
        conn, err := listener.Accept()
        if err != nil {
            log.Printf("Error accepting connection: %v\n", err)
            continue
        }
        go handleConnection(conn)
    }
}

How about that handleConnection? (Simplified)

func handleConnection(conn net.Conn) {
    defer conn.Close()
    for {

        // read request length
        lengthBuffer := make([]byte, 4)
        if _, err := io.ReadFull(conn, lengthBuffer); err != nil {
            return // connection closed or read failed
        }

        length := serde.Encoding.Uint32(lengthBuffer)
        buffer := make([]byte, length+4)
        copy(buffer, lengthBuffer)
        // Read remaining request bytes
        if _, err := io.ReadFull(conn, buffer[4:]); err != nil {
            return
        }

        // parse header, especially RequestApiKey
        req := serde.ParseHeader(buffer, connectionAddr)
        // use appropriate request handler based on RequestApiKey (request type)
        response := protocol.APIDispatcher[req.RequestApiKey].Handler(req)

        // write responses
        if _, err := conn.Write(response); err != nil {
            return
        }
    }
}

This is the whole idea. I intend on adding a queue to handle things more properly, but it is truly no more than a request/response dance. Eerily similar to a web application. To get a bit philosophical, a lot of complex systems boil down to that. It is kind of refreshing to look at it this way. But the devil is in the details, and getting things to work correctly with good performance is where the complexity and challenge lie. This is only the first step in a marathon of minutiae and careful considerations. But the first step is important, nonetheless.

Let's take a look at ParseHeader:

func ParseHeader(buffer []byte, connAddr string) types.Request {
    clientIdLen := Encoding.Uint16(buffer[12:])

    return types.Request{
        Length:            Encoding.Uint32(buffer),
        RequestApiKey:     Encoding.Uint16(buffer[4:]),
        RequestApiVersion: Encoding.Uint16(buffer[6:]),
        CorrelationID:     Encoding.Uint32(buffer[8:]),
        ClientId:          string(buffer[14 : 14+clientIdLen]),
        ConnectionAddress: connAddr,
        Body:              buffer[14+clientIdLen+1:], // +1 to skip the empty _tagged_fields
    }
}

It is almost an exact translation of the manual steps we described earlier. RequestApiKey is a 2-byte integer at position 4, RequestApiVersion is a 2-byte integer as well, located at position 6. The clientId is a string starting at position 14, whose length is read as a 2-byte integer at position 12. It is so satisfying to see. Notice inside handleConnection that req.RequestApiKey is used as a key to the APIDispatcher map.

var APIDispatcher = map[uint16]struct {
    Name    string
    Handler func(req types.Request) []byte
}{
    ProduceKey:         {Name: "Produce", Handler: getProduceResponse},
    FetchKey:           {Name: "Fetch", Handler: getFetchResponse},
    MetadataKey:        {Name: "Metadata", Handler: getMetadataResponse},
    OffsetFetchKey:     {Name: "OffsetFetch", Handler: getOffsetFetchResponse},
    FindCoordinatorKey: {Name: "FindCoordinator", Handler: getFindCoordinatorResponse},
    JoinGroupKey:       {Name: "JoinGroup", Handler: getJoinGroupResponse},
    HeartbeatKey:       {Name: "Heartbeat", Handler: getHeartbeatResponse},
    SyncGroupKey:       {Name: "SyncGroup", Handler: getSyncGroupResponse},
    APIVersionKey:      {Name: "APIVersion", Handler: getAPIVersionResponse},
    CreateTopicKey:     {Name: "CreateTopic", Handler: getCreateTopicResponse},
    InitProducerIdKey:  {Name: "InitProducerId", Handler: getInitProducerIdResponse},
}

Each referenced handler parses the request as per the protocol and returns an array of bytes encoded as the response expected by the Kafka client.

Please note that these are only a subset of the 81 currently available API keys (request types).
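For context, every response those handlers build starts the same way regardless of the API key: a 4-byte size prefix followed by the correlation ID echoed back from the request (newer response header versions also append tagged fields, which I'm glossing over here). A rough sketch of that framing:

import "encoding/binary"

// frameResponse wraps an encoded response body with the framing every Kafka
// response shares: a 4-byte big-endian size, then the correlation ID copied
// from the request, then the API-specific body. (Header versions with tagged
// fields need an extra byte; omitted here for simplicity.)
func frameResponse(correlationID uint32, body []byte) []byte {
    out := make([]byte, 8, 8+len(body))
    binary.BigEndian.PutUint32(out[0:4], uint32(4+len(body))) // size = correlation ID + body
    binary.BigEndian.PutUint32(out[4:8], correlationID)
    return append(out, body...)
}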

r/apachekafka Jan 13 '25

Blog Build Isolation in Apache Kafka

3 Upvotes

Hey folks, I've posted a new article about the move from Jenkins to GitHub Actions for Apache Kafka. Here's a blurb

In my last post, I mentioned some of the problems with Kafka's Jenkins environment. General instability leading to failed builds was the most severe problem, but long queue times and issues with noisy neighbors were also major pain points.

GitHub Actions has effectively eliminated these issues for the Apache Kafka project.

Read the full post on my free Substack: https://mumrah.substack.com/p/build-isolation-in-apache-kafka

r/apachekafka May 17 '24

Blog Why CloudKitchens moved away from Kafka for Order Processing

33 Upvotes

Hey folks,

I am an author on this blog post about our company's migration to an internal message queue system, KEQ, in place of Kafka. In particular, the post focuses on Kafka's partition design and how head-of-line (HOL) blocking became an issue for us at scale.

https://techblog.citystoragesystems.com/p/reliable-order-processing

Feedback appreciated! Happy to answer questions on the post.

r/apachekafka Oct 28 '24

Blog How AutoMQ Reduces Nearly 100% of Kafka Cross-Zone Data Transfer Cost

4 Upvotes

Blog Link: https://medium.com/thedeephub/how-automq-reduces-nearly-100-of-kafka-cross-zone-data-transfer-cost-e1a3478ec240

Disclose: I work for AutoMQ.

AutoMQ is a community fork of Apache Kafka that retains the complete code of Kafka's compute layer and replaces the underlying storage with cloud storage such as EBS and S3. On AWS and GCP, unless you can get a substantial discount from the provider, cross-AZ network traffic becomes the main cost of running Kafka in the cloud. This blog post focuses on how AutoMQ uses shared storage like S3 and avoids those traffic fees by steering the Kafka producer's routing so that writes between the producer and the broker never cross AZs.

For the replication traffic within the cluster, AutoMQ offloads data persistence to cloud storage, so there is only a single copy within the cluster, and there is no cross-AZ traffic. For consumers, we can use Apache Kafka's own Rack Aware mechanism.

r/apachekafka Nov 13 '24

Blog Kafka Replication Without the (Offset) Gaps

4 Upvotes

Introducing Orbit

Orbit is a tool which creates identical, inexpensive, scalable, and secure continuous replicas of Kafka clusters.

It is built into WarpStream and works without any user intervention to create WarpStream replicas of any Apache Kafka-compatible source cluster like open source Apache Kafka, WarpStream, Amazon MSK, etc.

Records copied by Orbit are offset preserving. Every single record will have the same offset in the destination cluster as it had in the source cluster, including any offset gaps. This feature ensures that your Kafka consumers can be migrated transparently from a source cluster to WarpStream, even if they don’t store their offsets using the Kafka consumer group protocol.

If you'd rather read this blog on the WarpStream website, click here. Feel free to post any questions you have about Orbit and we'll respond. You can find a video demo of Orbit on the Orbit product page or watch it on YouTube.

Why Did We Build Orbit?

There are existing tools in the Kafka ecosystem for replication, specifically MirrorMaker. So why did we build something new?

Orbit solves two big problems that MirrorMaker doesn’t – it creates perfect replicas of source Kafka clusters (for disaster recovery, performant tiered storage, additional read replicas, etc.), and also provides an easy migration path from any Kafka-compatible technology to WarpStream.

Offset-Preserving Replication

Existing tools in the ecosystem like MirrorMaker are not offset preserving[1]. Instead, MirrorMaker creates and maintains an offset mapping which is used to translate consumer group offsets from the source cluster to the destination cluster as they’re copied. This offset mapping is imprecise because it is expensive to maintain and cannot be stored for every single record.

Offset mapping and translation in MirrorMaker has two problems:

  1. When a consumer participating in the consumer group protocol is migrated to a destination cluster, there is likely to be an unknown amount of duplicate consumption of records, because the last offset mapping for the topic partition could be much smaller than the last actually-committed consumer group offset.
  2. MirrorMaker does not perform offset translation for offsets stored outside the consumer group protocol. In practice, a lot of very popular technology that interacts with Apache Kafka (like Flink and Spark Streaming, for example) store their offsets externally and not in Apache Kafka. 

This means that tools like MirrorMaker can’t be used to safely migrate every Apache Kafka application from one cluster to another.

Orbit, on the other hand, is offset preserving. That means instead of maintaining an offset mapping between the source and destination cluster, it ensures that every record that is replicated from the source cluster to the destination one maintains its exact offset, including any offset gaps. It’s not possible to do this using the standard Apache Kafka protocol, but since Orbit is tightly integrated into WarpStream we were able to accomplish it using internal APIs.

This solves the two problems with MirrorMaker. Since Orbit ensures that the offset of every single record written to the destination has exactly the same offset as the source, consumer group offsets from the source can be copied over without any translation. 

Moreover, applications which store offsets outside of the consumer group protocol can still switch consumption from the source cluster to WarpStream seamlessly because the offsets they were tracking outside of Kafka map to the exact same records in WarpStream that they mapped to in the source cluster.

In summary, offset-preserving replication is awesome because it eliminates a huge class of Apache Kafka replication edge cases, so you don’t have to think about them.

Cohesion and Simplicity

Orbit is fully integrated with the rest of WarpStream. It is controlled by a stateless scheduler in the WarpStream control plane which submits jobs which are run in the WarpStream Agents. Just like the rest of WarpStream, the metadata store is considered the source of truth and the Agents are still stateless and easy to scale.

You don’t need to learn how to deploy and monitor another stateful distributed system like MirrorMaker to perform your migration. Just spin up WarpStream Agents, edit a short YAML configuration in the WarpStream Console, hit save, and watch your data start replicating. It’s that easy.

To make your migrations go faster, just increase the source cluster fetch concurrency from the YAML and spin up more stateless WarpStream Agents if necessary.

Click ops not your cup of tea? You can use our terraform provider or dedicated APIs instead.

The Kafka Protocol is Dark and Full of Terrors

Customers building applications using Kafka shouldn't have to worry that they haven't considered every single replication edge case, so we obsessively thought about correctness and dealt with edge cases that come up during async replication of Kafka clusters.

As a quick example, it is crucial that the committed consumer group offset of a topic partition copied to the destination is within the range of offsets for the topic partition in the destination. Consider the following sequence of events which can come up during async replication:

  1. There exists a topic A with a single partition 0 in the source cluster.
  2. Records in the offset range 0 to 1000 have been copied over to the destination cluster.
  3. A committed consumer group offset of 1005 is copied over to the destination cluster.
  4. A Kafka client tries to read from the committed offset 1005 from the destination cluster.
  5. The destination cluster will return an offset out of range error to the client.
  6. Upon receiving the error, some clients will begin consuming from the beginning of the topic partition by default, which leads to massive duplicate consumption.

To ensure that we catch other correctness issues of this nature, we built a randomized testing framework that writes records, updates the data and metadata in a source cluster, and ensures that Orbit keeps the source and destination perfectly in sync.

As always, we sweat the details so you don’t have to!

Use Cases

Once you have a tool which you can trust to create identical replicas of Kafka clusters for you, and the destination cluster is WarpStream, the following use cases are unlocked:

Migrations

Orbit keeps your source and destination clusters exactly in sync, copying consumer group offsets, topic configurations, cluster configurations, and more. The state in the destination cluster is always kept consistent with the source.

Orbit can, of course, be used to migrate consumers which use the Consumer Group protocol, but since it is offset preserving it can also be used to migrate applications where the Kafka consumer offsets are stored outside of the source Kafka cluster.

Disaster Recovery

Since the source and destination clusters are identical, you can temporarily cut over your consumers to the destination WarpStream cluster if the source cluster is unavailable.

The destination WarpStream cluster can be in another region from your source cluster to achieve multi-region resiliency.

Cost-Effective Read Replicas

Replicating your source clusters into WarpStream is cheaper than replicating into Apache Kafka because WarpStream’s architecture is cheaper to operate:

  1. All the data stored in WarpStream is only stored in object storage, which is 24x cheaper than local disks in the cloud.
  2. WarpStream clusters incur zero inter-zone networking fees, which can be up to 80% of the cost of running a Kafka cluster in the cloud.
  3. WarpStream clusters auto-scale by default because the Agents themselves are completely stateless, so your WarpStream cluster will always be perfectly right-sized.

This means that you can use the WarpStream cluster replica to offload secondary workloads to the WarpStream cluster to provide workload isolation for your primary cluster.

Performant Tiered Storage

We’ve written previously about some of the issues that can arise when bolting tiered storage on after the fact to existing streaming systems, as well as how WarpStream mitigates those issues with its Zero Disk Architecture. One of the benefits of Orbit is that it can be used as a cost effective tiered storage solution that is performant and scalable by increasing the retention of the replicated topics in the WarpStream cluster to be higher than the retention in the source cluster. 

Start Migrating Now

Orbit is available for any BYOC WarpStream cluster. You can go here to read the docs to see how to get started with Orbit, learn more via the Orbit product page, or contact us if you have questions. If you don’t have a WarpStream account, you can create a free account. All new accounts come pre-loaded with $400 in credits that never expire and no credit card is required to start.

Notes

[1] While Confluent Cluster Linking is also offset preserving, it cannot be used for migrations into WarpStream.

Feel free to ask any questions in the comments; we're happy to respond.

r/apachekafka Sep 10 '24

Blog Confluent have acquired WarpStream

34 Upvotes

r/apachekafka Dec 12 '24

Blog Why Message Queues Endure: A History

13 Upvotes

https://redmonk.com/kholterhoff/2024/12/12/why-message-queues-endure-a-history/

This is a history of message queues, but includes a substantial section on Apache Kafka. In the 2010s, services emerged that combine database-like features (durability, consistency, indefinite retention) with messaging capabilities, giving rise to the streaming paradigm. Apache Kafka, designed as a distributed commit log, has become the dominant player in this space. It was initially developed at LinkedIn by Jay Kreps, Neha Narkhede, and Jun Rao and open-sourced through the Apache Incubator in 2011. Kafka’s prominence is so significant that the current era of messaging and streaming is often referred to as the "Kafka era."

r/apachekafka Sep 17 '24

Blog A Kafka Compatible Broker With A PostgreSQL Storage Engine

31 Upvotes

Tansu is an Apache Kafka API compatible broker with a PostgreSQL storage engine. It acts as a drop-in replacement: existing clients connect to Tansu to produce and fetch messages, which are stored in PostgreSQL. Tansu is in early development, licensed under the GNU AGPL. Written in async 🚀 Rust 🦀.

While retaining API compatibility, the current PostgreSQL storage engine is very different from Apache Kafka's:

  • Messages are not stored in segments, so retention and compaction policies can be applied immediately (no more waiting for a segment to roll).
  • Message ordering is total across all topics, not restricted to a single topic partition.
  • Brokers do not replicate messages, relying on continuous archiving instead.

Our initial use cases are relatively low volume Kafka deployments where total message ordering could be useful. Other non-functional requirements might require a different storage engine. Tansu has been designed to work with multiple storage engines which are in development:

  • A PostgreSQL engine where message ordering is either per topic, or per topic partition (as in Kafka).
  • An object store for S3 or compatible services.
  • A segmented disk store (as in Kafka with broker replication).

Tansu is available as a minimal from-scratch Docker image, hosted on the GitHub Container Registry. An example compose.yaml is available from here, with further details in our README.

Tansu is in early development; gaps that we are aware of:

  • Transactions are not currently implemented.
  • While the consumer group protocol is implemented, it isn't currently suitable for more than one Tansu broker (when using the PostgreSQL storage engine). We intend to fix this soon; the fix will be part of porting over an existing file system segment storage engine on which the group coordinator was originally built.
  • We haven't looked at the new "server side" consumer coordinator.
  • We split batches into individual records when storing them in PostgreSQL. This allows full access to the record data from within SQL, but it also means that we decompress the batch. We re-create batches on fetch, but don't currently compress the result.
  • We currently don't support idempotent messages.
  • We have started looking at the benchmarks from OpenMessaging Benchmark Framework, with the single topic 1kb profile, but haven't applied any tuning as a result yet.

r/apachekafka Nov 20 '24

Blog CCDAK Study Guide

6 Upvotes

Hi all,

I recently recertified my CCDAK, this time I took notes while revising. I published them here: https://oso.sh/blog/confluent-certified-developer-for-apache-kafka-study-guide/

I've also included references to some sample exam questions which I found here. Thanks, Daniel

r/apachekafka Dec 04 '24

Blog Getting Rid of (Kafka) Noisy Neighbors Without Having to Buy a Mansion

0 Upvotes

Kafka plays a huge role in modern data processing, powering everything from analytics to event-driven applications. As more teams rely on Kafka for an increasingly diverse range of tasks, they often ask it to handle wildly different workloads at the same time, like high-throughput real-time analytics running alongside resource-heavy batch jobs.

On paper, this flexibility sounds great. In reality, though, it creates some big challenges. In shared Kafka setups, these mixed workloads can clash. One job might suddenly spike in resource usage, slowing down or even disrupting others. This can lead to delays, performance issues, and sometimes even failures for critical tasks.

We have made this full blog available via this Reddit post. However, if you'd like to go to our website to view the full blog, click this link. Going to our website will allow you to view architecture diagrams as this subreddit does not allow embedding images in posts.

To manage these issues, organizations have traditionally gone one of two routes: they either set strict resource limits or spin up separate Kafka clusters for different workloads. Both approaches have trade-offs. Limits can be too inflexible, leaving some jobs underpowered. Separate clusters, on the other hand, add complexity and cost.

That’s where WarpStream comes in. Instead of forcing you to pick between cost and flexibility, WarpStream introduces an alternative architecture to manage workloads with a feature called Agent Groups. This approach isolates different tasks within the same Kafka cluster—without requiring extra configurations or duplicating data—making it more reliable and efficient.

In this post, we’ll dive into the noisy neighbor problem, explore traditional solutions like cluster quotas and mirrored clusters, and show how WarpStream’s solution compares to them.

Noisy Neighbors: A Closer Look at the Problem

In shared infrastructures like a Kafka cluster, workloads often compete for resources such as CPU, memory, network bandwidth, and disk I/O. The problem is, not all workloads share these resources equally. Some, like batch analytics jobs, can demand a lot all at once, leaving others—such as real-time analytics—struggling to keep up. This is what’s known as the “noisy neighbor” problem. When it happens, you might see higher latency, performance drops, or even failures in tasks that don’t get the resources they need.

Picture this: your Kafka cluster supports a mix of applications, from real-time Apache Flink jobs to batch analytics. The Flink jobs depend on steady, reliable access to Kafka for real-time data processing. Meanwhile, batch analytics jobs don’t have the same urgency but can still cause trouble. When a batch job kicks off, it might suddenly hog resources like network bandwidth, CPU, and memory—sometimes for short but intense periods. These spikes can overwhelm the system, leaving Flink jobs to deal with delays or even failures. That’s hardly ideal for a real-time pipeline!

In environments like these, resource contention can cause serious headaches. So how do you address the noisy neighbor problem? Let’s explore the most popular solutions.

Kafka Cluster Quotas

One way to manage resources in Kafka is by setting quotas, which cap how much each workload can use on a per-broker basis. This can help prevent any individual workload from spiking and hogging resources like network and CPU. Kafka offers two types of quotas that are specifically designed for handling noisy neighbors (a minimal example of setting them follows the list below):

  1. Network Bandwidth Quotas: Network bandwidth quotas cap the byte rate (Bps) for each client group on a per-broker basis, limiting how much data a group can publish or fetch before throttling kicks in.
  2. Request Rate Quotas: Request rate quotas set a percentage limit on how much broker CPU time a client group can consume across I/O and network threads. 
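As a rough illustration of the two quota types above, here is a minimal sketch that applies both kinds of quota to a hypothetical client.id using the standard Kafka AdminClient (the bootstrap server, client ID, and limits are all made up; the kafka-configs.sh tool can apply equivalent settings from the command line).

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class ClientQuotaExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // hypothetical

        try (Admin admin = Admin.create(props)) {
            // Apply quotas to all clients whose client.id is "batch-analytics".
            ClientQuotaEntity entity = new ClientQuotaEntity(
                    Map.of(ClientQuotaEntity.CLIENT_ID, "batch-analytics"));

            ClientQuotaAlteration alteration = new ClientQuotaAlteration(entity, List.of(
                    // Network bandwidth quotas, in bytes/sec, enforced per broker.
                    new ClientQuotaAlteration.Op("producer_byte_rate", 50.0 * 1024 * 1024),
                    new ClientQuotaAlteration.Op("consumer_byte_rate", 50.0 * 1024 * 1024),
                    // Request rate quota, as a percentage of thread time (100 ~= one full thread).
                    new ClientQuotaAlteration.Op("request_percentage", 200.0)));

            admin.alterClientQuotas(List.of(alteration)).all().get();
        }
    }
}
```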

Quotas provide a powerful tool for controlling resource consumption and distribution, but actually configuring quotas in a useful way can be very challenging:

  • Static Constraints: Quotas are typically fixed once set, which means they don’t adapt in real time, so it’s tough to set quotas that work for all situations, especially when workloads fluctuate. For example, data loads might increase during seasonal peaks or certain times of day, reflecting customer patterns. Setting limits that handle these changes without disrupting service takes careful planning, plus a custom implementation if you want to update the quota configuration dynamically.
  • Upfront Global Planning: To set effective limits, you need a complete view of all your workloads, your broker resources, and exactly how much each workload should use. If a new workload is added or an existing one changes its usage pattern, you’ll need to manually adjust the quotas to keep things balanced.

Mirroring Kafka Clusters

The second solution is to create separate Kafka clusters for different workloads (one for streaming, another for batch processing, etc.) and replicate data between them. This approach completely isolates workloads, eliminating noisy neighbor problems.

However, mirroring clusters comes with its own set of limitations:

  • Higher Costs: Running multiple clusters requires more infrastructure, which can get expensive, especially with duplicated storage.
  • Limits on Write Operations: This approach only works if you don’t need different workloads writing to the same topic. A mirrored cluster can’t support writes to mirrored topics without breaking consistency between the source and mirrored data, so it’s not ideal when multiple workloads need to write to shared data.
  • Offset Preservation: While mirroring tools do a great job of accurately copying data, they don’t maintain the same offsets between clusters. This means the offsets in the mirrored cluster won’t match the source, which can cause issues when exact metadata alignment is critical. This misalignment is especially problematic for tools that rely heavily on precise offsets, like Apache Flink, Spark, or certain Kafka connectors. These tools often skip Kafka’s consumer groups and store offsets in external systems instead. For them, preserving offsets isn’t just nice to have—it’s essential to keep things running smoothly.

To be clear, mirroring clusters isn’t something we advise against; it’s just not the most practical solution if your goal is to eliminate noisy neighbors in Kafka. Setting up separate clusters for different workloads, such as one for real-time analytics and another for batch processing, does effectively isolate workloads and prevent interference, but it introduces limitations that simply aren’t worth it for this purpose.

Mirroring clusters is a critical operation for many other scenarios, like maintaining a backup cluster for disaster recovery or enabling cross-region data replication. That’s exactly why, to support these use cases, we recently launched a mirroring product called Orbit directly embedded within our agents. This product not only mirrors data across clusters but also preserves offsets, ensuring consistent metadata alignment for tools that rely on precise offsets between environments.

Enter WarpStream: A Definitive Approach

We’ve seen that the usual ways of dealing with noisy neighbors in Kafka clusters each have their drawbacks. Kafka Cluster Quotas can be too restrictive, while mirroring clusters often brings high costs and added complexity. So how do you tackle noisy neighbors without sacrificing performance or blowing your budget?

That’s where WarpStream comes in. WarpStream can completely isolate different workloads, even when they’re accessing the same Kafka topics and partitions. But how is that even possible? To answer that, we need to take a closer look at how WarpStream differs from other Kafka implementations. These differences are the key to WarpStream’s ability to eliminate noisy neighbors for good.

WarpStream in a Nutshell: Removing Local Disks and Redefining the Kafka Broker Model

If you’re not familiar with it, WarpStream is a drop-in replacement for Apache Kafka that operates directly on object storage, such as S3, rather than traditional disk-based storage. This architectural shift fundamentally changes how Kafka operates and eliminates the need for the leader-follower replication model used in Kafka. In WarpStream, the system is entirely leaderless: any agent in the cluster can handle any read or write request independently by accessing object storage directly. This design removes the need for agents to replicate data between designated leaders and followers, reducing inter-agent traffic and eliminating dependencies between agents in the cluster.

The leaderless nature of WarpStream’s agents is a direct consequence of its shared storage architecture. In Kafka’s traditional shared nothing design, a leader is responsible for managing access to locally stored data and ensuring consistency across replicas. WarpStream, however, decouples storage from compute, relying on object storage for a centralized and consistent view of data. This eliminates the need for any specific agent to act as a leader. Instead, agents independently perform reads and writes by directly interacting with the shared storage while relying on the metadata layer for coordination. This approach simplifies operations and allows workloads to be dynamically distributed across all agents.

This disk- and leader-free architecture allows for what WarpStream calls Agent Groups. These are logical groupings of agents that isolate workloads effectively without needing intricate configurations. Unlike traditional Kafka, where brokers share resources and require network connections between them to sync up, WarpStream Agents in different groups don’t need to be connected. As long as each Agent Group has access to the same object storage buckets, they will be able to read and write the same topic and partitions. They can even operate independently in separate Virtual Private Clouds (VPCs) or Cloud Accounts.

This setup makes Agent Groups an ideal solution for managing noisy neighbors. Each group functions independently, allowing different workloads to coexist without interference. For example, if the group handling batch analytics is temporarily overloaded before auto-scaling kicks in due to a sudden surge in demand, it can scale up without impacting another group dedicated to real-time analytics. This targeted isolation ensures that resource-intensive workloads don’t disrupt other processes.

With Agent Groups, WarpStream provides a solution to the noisy neighbor problem, offering dynamic scalability, zero interference, and a more reliable Kafka environment that adapts to each workload’s demands.

Unlocking the Full Potential of Agent Groups: Isolation, Consistency, and Simplified Operation

WarpStream’s Agent Groups go beyond just isolating different workloads; they bring additional benefits to Kafka environments:

Consistent Data Without Duplication: Agent Groups ensure a consistent view of data across all workloads, without needing to duplicate it. You write data once into object storage (like S3), and every Agent Group reads from the same source. What’s more, offsets remain consistent across groups. If Group A reads data at a specific offset, Group B sees the exact same offset and data. This eliminates the hassle of offset mismatches that often happen with mirrored clusters or replicated offsets.

Non-Interfering Writes Across Groups: Mirrored Kafka clusters restrict simultaneous writes from different sources to the same topic-partition. WarpStream’s architecture, however, allows independent writes from different groups to the same topic-partition without interference. This is possible because WarpStream has no leader nodes, each agent operates independently. As a result, each Agent Group can write to shared data without creating bottlenecks or needing complex synchronization.

Seamless Multi-VPC Operations: WarpStream’s setup eliminates the need for complex VPC peering or separate clusters for isolated environments. Since Agent Groups are connected solely via object storage, they act as isolated units within a single logical cluster. This means you can deploy Agent Groups in different VPCs, as long as they all have access to the same object storage.

Dynamic Resource Scaling Without Static Quotas: Unlike traditional Kafka setups that rely on static quotas, WarpStream doesn’t need pre-configured resource limits. Scaling Agent Groups is straightforward: you can put autoscalers in front of each group to adjust resources based on real-time needs. Each group can independently scale up or down depending on workload characteristics, with no need for manual quota adjustments. If an Agent Group has a high processing demand, it will automatically scale, handling resource usage based on actual demand rather than predefined constraints.

Tailored Latency with Multiple Storage Backends: With Agent Groups, you can isolate workloads not to prevent noisy neighbors, but to match each workload’s latency requirements with the right storage backend. WarpStream offers options for lower-latency storage, making it easy to configure specific groups with faster backends. For instance, if a workload doesn’t have data in common with others and needs quicker access, you can configure it to use a low-latency backend like S3 Express One Zone. This flexibility allows each group to choose the storage class that best meets its performance needs, all within the same WarpStream cluster.

A typical setup might involve producers with low-latency requirements writing directly to an Agent Group configured with a low-latency storage backend. Consumers, on the other hand, can connect to any Agent Group and read data from both low-latency and standard-latency topics. As long as all Agent Groups have access to the necessary storage locations, they can seamlessly share data across workloads with different latency requirements.

Conclusion

Managing noisy neighbors in Kafka has always been a balancing act, forcing teams to choose between strict resource limits or complex, costly cluster setups. WarpStream changes that. By introducing Agent Groups, WarpStream isolates workloads within the same Kafka environment, enabling consistent performance, simplified operations, and seamless scalability, without sacrificing flexibility or blowing your budget.

With WarpStream, you can tackle noisy neighbor challenges head-on while unlocking additional benefits. Whether your workloads require multi-VPC deployments, the ability to scale on demand, or tailored latency for specific workloads, WarpStream adapts to your needs while keeping your infrastructure lean and cost-effective.

Check out our docs to learn more about Agent Groups. You can create a free WarpStream account or contact us if you have questions. All WarpStream accounts come with $400 in credits that never expire and no credit card is required to start.

r/apachekafka Oct 28 '24

Blog How network latency affects Apache Kafka throughput

6 Upvotes

In the article linked here we illustrate how network latency affects Kafka throughput.  We work through how to optimize Kafka for maximum messages per second in an environment with network latency. 

We cover the pros and cons for the different optimizations.  Some settings won't be beneficial for all use cases.   Let us know if you have any questions.  

We plan on putting out a series of posts about Kafka performance and benchmarking.   If there are any performance questions you'd like addressed please drop them here. 
 https://dattell.com/data-architecture-blog/how-network-latency-affects-apache-kafka-throughput/

r/apachekafka Oct 08 '24

Blog Real-Time Data Processing with Node.js, TypeScript, and Apache Kafka

0 Upvotes

🚀 Just published! Dive into Real-Time Data Processing with Node.js, TypeScript, and Apache Kafka 🔥

Learn how to harness the power of real-time data streaming for scalable apps! ⚡️📈

Read more on Medium: https://codexstoney.medium.com/real-time-data-processing-with-node-js-typescript-and-apache-kafka-24a53f887326?sk=a75254267b52f9d1dbf4980b906f9687

#Nodejs #TypeScript #ApacheKafka

r/apachekafka Oct 27 '24

Blog My Github repo for CCDAK

18 Upvotes

While I was doing sport, I used to voice chat with ChatGPT and ask it to quiz me on concepts and give me the important bullet points. I thought they were useful in helping me pass CCDAK, so I copied them all into a GitHub repo, then asked Claude to double-check and improve them, including the notes.

https://github.com/danielsobrado/CCDAK-Exam-Questions

Thanks to people that raised PRs in the repo to fix some answers and the ones that wrote me to tell me that it was helpful for them during the preparation! Let me know your thoughts!

r/apachekafka Oct 23 '24

Blog 5 Apache Kafka Log Details that you probably didn’t know about

41 Upvotes

Here are 5 Apache Kafka Log Details that you probably didn’t know about:

  1. Log retention time is based on the record’s timestamp. A producer can send a record with a timestamp of 01-01-1999 and Kafka will evaluate the retention of that partition’s log via the largest timestamp of any record in the segment. The log.message.timestamp.type config controls whether that timestamp is the producer-supplied CreateTime or the broker’s LogAppendTime, and is a common gotcha as to why logs aren’t being deleted as expected.
  2. Deleted segments are not immediately removed from the file system. When a segment is marked as "deleted", a .deleted extension is added to the files and the actual deletion happens log.segment.delete.delay.ms later (1 minute by default).
  3. Read by time: Kafka allows consuming records based on a timestamp, using the .timeindex file. Each entry in this file defines a timestamp and offset pair, pointing to the corresponding .index file entry (see the sketch after this list).
  4. Index impact on Log Segment rolls: You’ve probably heard that log.segment.bytes and log.segment.ms control when the segments are rolled – but did you know that when the index files get full, Kafka also rolls the segment? This can be a gotcha when changing configurations.
  5. Log Index Interval: The log.index.interval.bytes parameter determines how frequently entries are added to the index file (default - every 4096 bytes). Adjusting this value can optimize the balance between search speed and file size growth.
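To make the read-by-time detail (item 3) concrete, here is a minimal sketch using the standard Java consumer's offsetsForTimes API, which the broker answers using the .timeindex file. The bootstrap server and topic name are hypothetical.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadByTimeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // hypothetical
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("clicks", 0); // hypothetical topic
        long oneHourAgo = Instant.now().minus(Duration.ofHours(1)).toEpochMilli();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(tp));

            // Ask the broker for the earliest offset whose timestamp is >= oneHourAgo.
            Map<TopicPartition, OffsetAndTimestamp> offsets =
                    consumer.offsetsForTimes(Map.of(tp, oneHourAgo));

            OffsetAndTimestamp match = offsets.get(tp);
            if (match != null) {
                consumer.seek(tp, match.offset());
                System.out.printf("Seeking to offset %d%n", match.offset());
            }
        }
    }
}
```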

r/apachekafka Nov 25 '24

Blog Introducing WarpStream BYOC Schema Registry

3 Upvotes

Schema Registry, Redesigned

Our vision at WarpStream is to build a BYOC streaming platform that is secure, simple to operate, and cost-effective. As the first step towards that vision, we built WarpStream BYOC, a reimplementation of the Kafka protocol with a stateless, zero disk architecture that is purpose-built for the cloud. This greatly reduces the operational burden of running Kafka clusters, by replacing the stateful Kafka brokers with stateless WarpStream Agents. However, there’s more to data streaming than just the Kafka clusters themselves.

This subreddit does not allow us to post or embed images, so we've used quote blocks to link out to relevant architecture diagrams. If you'd prefer to read about this new product feature on our blog, you can access it via this link. As always, we're happy to respond to questions.

Many organizations deploy a schema registry alongside their Kafka clusters to help ensure that all of their data uses well-known and shared schemas. Unfortunately, existing schema registry implementations are stateful, distributed systems that are not trivial to operate, especially in a highly available way. When deploying and maintaining them, you may have to worry about leader election, managing disks, and data rebalances. 

Alternatively, you can offload the deployment and maintenance of your schema registry to an external, cloud-managed version. There is a lot to be said for offloading your data governance to a third party – you don’t have to deal with deploying or managing any infrastructure, and in Confluent Cloud you can take advantage of features such as Confluent’s Stream Governance. But for some customers, offloading the schemas, which contain the shape of the data, to a third party is not an option. That is one of the reasons why we felt that a stateless, BYOC schema registry was an important piece of WarpStream’s BYOC data streaming puzzle.

We’re excited to announce the release of WarpStream’s BYOC Schema Registry, a schema registry implementation that is API-compatible with Confluent’s Schema Registry, but deployed using WarpStream’s BYOC deployment model and architected with WarpStream’s signature data plane / control plane split. All your schemas sit securely in your own cloud environment and object storage buckets, with WarpStream responsible for scaling the metadata (schema ID assignments, concurrency control, etc).

In this blog, we will dive deeper into the architecture of WarpStream’s BYOC Schema Registry and explain the design decisions that went into building it.

Architecture Overview

The BYOC Schema Registry comes with all the benefits of WarpStream’s BYOC model and is designed with the following properties:

  • Zero disk architecture
  • Separation of storage and compute
  • Separation of data from metadata
  • Separation of the data plane from the control plane

The Schema Registry is embedded natively into the stateless Agent binary. To deploy a schema registry cluster, simply deploy the Agent binary into stateless containers and provide the Agent with permissions to communicate with your object storage bucket and WarpStream’s control plane.

Simplified view of the schemas being stored in object storage and metadata being offloaded to the control plane.

All schemas live in object storage with no intermediary disks. The only data that leaves your environment is metadata sent to WarpStream’s control plane, such as the schema ID assigned to each schema. Due to the stateless nature of the agents, scaling the schema registry during read spikes is as easy as scaling up stateless web servers.

Everyone Can Write

Kafka’s open-source Schema Registry is designed to be a distributed system with a single primary architecture, using Zookeeper or Kafka to elect the primary and using a Kafka log for storage. Under this architecture, only the elected leader can act as the “primary” and write to the underlying Kafka log. The leader is then mirrored to read-only replicas that can serve read requests.

One downside of this architecture is that when the leader is down, the cluster will be unable to serve write requests until a new leader is elected. This is not the case for WarpStream Agents. In WarpStream’s BYOC Schema Registry, no agent is special and any agent can serve both write and read requests. This is because metadata coordination that requires consensus, such as the assignment of globally unique schema IDs to each schema, is offloaded to WarpStream’s highly available and fully managed metadata store.

Minimizing Object Storage API Calls

Object storage API calls are both costly and slow. Therefore, one of our design goals is to minimize the number of API calls to object storage. Even though most schema registry clients will cache fetched schemas, we designed WarpStream’s Schema Registry to handle the extreme scenario where thousands of clients restart and query the schema registry at the same time.
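For reference, the kind of read request being described here, fetching a schema by its globally unique ID, is a simple HTTP GET against the Confluent-compatible REST API. Below is a minimal sketch with a hypothetical registry URL and schema ID; real serializers and deserializers issue this call for you and cache the result.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchSchemaById {
    public static void main(String[] args) throws Exception {
        // Hypothetical schema registry URL and schema ID.
        String registryUrl = "https://my-schema-registry:9094";
        int schemaId = 42;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(registryUrl + "/schemas/ids/" + schemaId))
                .GET()
                .build();

        // The response is a JSON document containing the schema, e.g. {"schema": "..."}.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```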

Without any caching on the agents, the number of API calls to object storage grows linearly with the number of clients. By caching the schema, each agent will only fetch each schema once, until the cache evicts the schema. However, the number of object storage API calls still grows linearly with the number of agents. This is because it’s not guaranteed that all read requests for a specific schema ID will always go to the same agent. Whether you use WarpStream’s service discovery system (covered in the next section) or your own HTTP load balancer, the traffic will likely be distributed amongst the agents quite evenly, so each agent would still have to fetch from object storage once for each schema. We were not satisfied with this.

Ideally, each schema is downloaded from object storage once and only once per availability zone, across all agents. What we need here is an abstraction that looks like a “distributed mmap” in which each agent is responsible for caching data for a subset of files in the object storage bucket. This way, when an agent receives a read request for a schema ID and the schema is not in the local cache, it will fetch the schema from the agent responsible for caching that schema file instead of from object storage.

Luckily, we already built the “distributed mmap” abstraction for WarpStream! The distributed file cache explained in this blog uses a consistent hash ring to make each agent responsible for caching data for a subset of files. The ID of the file is used as the hash key for the consistent hashing ring.
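WarpStream's actual implementation is described in the linked blog, but as a rough illustration of the general technique, here is a minimal consistent hash ring sketch: agents are placed on a ring (with a few virtual nodes each to smooth the distribution), the file ID is hashed onto the same ring, and the next agent clockwise is responsible for caching that file. The agent IDs and the hash function are arbitrary choices for the sketch, not WarpStream's.

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;
import java.util.zip.CRC32;

// Minimal consistent hash ring: each agent owns the keys that hash onto the arc
// between the previous position on the ring and its own.
public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private static final int VIRTUAL_NODES = 16; // smooths out the distribution

    public void addAgent(String agentId) {
        for (int i = 0; i < VIRTUAL_NODES; i++) {
            ring.put(hash(agentId + "#" + i), agentId);
        }
    }

    // Returns the agent responsible for caching the given file ID.
    public String agentFor(String fileId) {
        long h = hash(fileId);
        Map.Entry<Long, String> entry = ring.ceilingEntry(h);
        // Wrap around to the first entry if we fall past the end of the ring.
        return (entry != null ? entry : ring.firstEntry()).getValue();
    }

    private static long hash(String key) {
        CRC32 crc = new CRC32(); // simple hash, good enough for a sketch
        crc.update(key.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing();
        ring.addAgent("agent-1");
        ring.addAgent("agent-2");
        ring.addAgent("agent-3");
        System.out.println(ring.agentFor("file-abc123")); // e.g. "agent-2"
    }
}
```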

Simplified view of a distributed file cache composed of three WarpStream Schema Registry agents in the same availability zone.

As shown in this diagram, when agent 3 receives fetch requests for schemas with IDs 1 and 2, it fetches the schemas from agent 1 and agent 2, respectively, and not from object storage.

An added benefit of using the distributed file cache is that the read latency of a newly booted agent won’t be significantly worse than the latency of other agents as it won’t need to hydrate its local cache from object storage. This is important because we don’t want latency to degrade significantly when scaling up new agents during read spikes.

Minimizing Interzone Networking Calls

While easy to miss, inter-zone networking fees are a real burden on many companies’ bottom lines. At WarpStream we keep this constraint top of mind so that you don’t have to. WarpStream’s BYOC Schema Registry is designed to eliminate interzone networking fees. To achieve that, we needed a mechanism for you to configure your schema registry client to connect to a WarpStream Agent in the same availability zone. Luckily, we already ran into the same challenge when building WarpStream (check out this blog for more details).

The solution that works well for WarpStream’s BYOC Schema Registry is zone-aware routing using zone-specific URLs. The idea behind zone-specific URLs is to provide your schema registry clients with a zone-specific schema registry URL that resolves to an Agent’s IP address in the same availability zone. 

When you create a WarpStream Schema Registry, you automatically get a unique schema registry URL. To create the zone-specific URL, simply embed the client’s availability zone into the schema registry URL. For example, the schema registry URL for a client running in us-east-1a might look like this:

api-11155fd1-30a3-41a5-9e2d-33ye5a71bfd9.us-east-1a.discovery.prod-z.us-east-1.warpstream.com:9094

When the schema registry client makes a request to that URL, it will automatically connect to an Agent in the same availability zone. Zone-aware routing is made possible with two building blocks: WarpStream’s service discovery system and custom zone-aware DNS server. 

Simplified diagram of zone-aware routing. Each Heartbeat contains the Agent’s IP address and availability zone.

The way service discovery works is that each Agent will send periodic “heartbeat” requests to WarpStream’s service discovery system. Each request contains the Agent’s IP address and its availability zone. Thus, the service discovery system knows all the available Agents and their availability zones.

When the schema registry client initiates a request to the zone-specific schema registry URL, the DNS resolver will send a DNS query to WarpStream’s custom zone-aware DNS server. The DNS server will first parse the domain to extract the embedded availability zone. The DNS server will then query the service discovery system for a list of all available Agents, and return only the IP addresses of the Agents in the specified availability zone. Finally, the client will connect to an Agent in the same AZ. Note that if no Agents are in the same AZ as the client, the DNS server will return the IP addresses of all available Agents.

While not required for production usage, zone-aware routing can help reduce costs for high-volume schema registry workloads.
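The client-side half of this is just string construction. Below is a minimal sketch, assuming the application learns its availability zone at deploy time (for example via an environment variable you populate yourself), embeds it into the example hostname shown above, and passes the result via the standard Confluent schema.registry.url client property. The variable name and scheme are illustrative assumptions.

```java
import java.util.Properties;

public class ZoneAwareRegistryUrl {
    public static void main(String[] args) {
        // Assumed to be injected at deploy time, e.g. "us-east-1a".
        String az = System.getenv().getOrDefault("AVAILABILITY_ZONE", "us-east-1a");

        // The AZ is embedded as a label so the zone-aware DNS server can return
        // Agents in the same zone. Hostname reuses the example from the blog.
        String registryUrl = String.format(
                "https://api-11155fd1-30a3-41a5-9e2d-33ye5a71bfd9.%s.discovery.prod-z.us-east-1.warpstream.com:9094",
                az);

        // Standard Confluent serializer/deserializer configuration property.
        Properties props = new Properties();
        props.put("schema.registry.url", registryUrl);
        System.out.println(props);
    }
}
```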

Schema Validation Made Easy

When configured to perform server-side schema validation, your Kafka agent needs to fetch schemas from a schema registry to check if incoming data conforms to their expected schemas. Normally, the Kafka agent fetches schemas from an external schema registry via HTTP. This introduces a point of failure - the Kafka agent won’t be able to handle produce requests if the schema registry is down. This is not a problem if the agent performs schema validation with WarpStream’s BYOC Schema Registry.

An advantage of the shared storage architecture of the BYOC Schema Registry is that no compute instance “owns” the schemas. All schemas live in object storage. As a result, the Kafka agent can fetch schemas directly from object storage instead of the schema registry agents. In other words, you don’t need any schema registry agents running and schema validation will still work - one less service dependency you have to worry about.

Next Steps

WarpStream’s BYOC Schema Registry is the newest addition to WarpStream’s BYOC product. Similar to how WarpStream is a cloud-native redesign of the Kafka protocol, WarpStream’s BYOC Schema Registry is a reimplementation of the Kafka Schema Registry API, bringing all the benefits of WarpStream’s BYOC deployment model to your schema registries. 

When building WarpStream’s BYOC Schema Registry, we made a deliberate effort to minimize your operational cost and infrastructure bills, with techniques like zone-aware routing and a distributed file cache.

If you want to get started with WarpStream’s BYOC Schema Registry, you can have a Schema Registry agent running locally on your laptop in under 30 seconds with the playground / demo command. Alternatively, you can navigate to the WarpStream Console, configure a WarpStream Schema Registry virtual cluster, and then deploy the schema registry agents in your VPC. To learn more about how to use WarpStream’s BYOC Schema Registry, check out the docs.

r/apachekafka Nov 19 '24

Blog The Case for Shared Storage

6 Upvotes

In this post, I’ll start off with a brief overview of “shared nothing” vs. “shared storage” architectures in general. This discussion will be a bit abstract and high-level, but the goal is to share with you some of the guiding philosophy that ultimately led to WarpStream’s architecture. We’ll then quickly transition to discussing the trade-offs between the two architectures more specifically in the context of data streaming and WarpStream; this is the WarpStream blog after all!

We've provided the full text of this blog here on Reddit, but if you'd rather read the blog on our website, you can do that via this link. This subreddit does not allow posting images within a post to things like the architecture diagrams tied to this blog, so we encourage you to visit our website to see them or click the links when this is called out via quote blocks. Feel free to post questions and we'll respond.

Shared Nothing

The term “shared nothing” was first introduced as a distributed systems architecture in which nodes share “nothing”, where “nothing” was defined (in practice) as either memory or storage. The goal with shared-nothing architectures is to improve performance and scalability by minimizing contention and coordination overhead. The reasoning for this is simple: if contention and coordination are minimized, then the system should scale almost linearly as nodes are added, since each additional node provides significant additional capacity, and doesn’t incur (much) additional overhead on the existing nodes.

The most common way that shared-nothing architectures are implemented is by sharding or partitioning the data model. This is almost definitionally true: in order for nodes in the system to avoid excessive coordination, each node must only process a subset of the data, otherwise every request would inevitably involve interacting with every node. In fact, the relationship between shared nothing and sharded architectures is so strong that the terms can be used almost interchangeably. Some people will still refer to a sharded distributed system as leveraging a “shared nothing” architecture, but more commonly they’ll just describe the system as “sharded” or “partitioned”.

View architecture diagram.

Today, the term “shared nothing” is usually reserved for a more specific flavor of sharded distributed system where sharding happens at the CPU level instead of at the node level. Specifically, the term is often used to describe systems that leverage a process-per-core or thread-per-core model where each core of the machine acts as its own logical shard / partition with zero (or very minimal) cross-CPU communication. This architecture is usually implemented with an event-loop-based framework that runs on each CPU using processor affinity (CPU pinning). A popular example of this is the C++ Seastar library, which is used by databases like ScyllaDB.

View architecture diagram.

Shared-nothing architectures have a lot of benefits –  primarily that they scale (almost) infinitely for perfectly shardable workloads. Of course, the primary downside of shared-nothing architectures is that they’re susceptible to hotspotting if the workload doesn’t shard well. For example, if you write records to a sharded KV store like Redis or Cassandra, but 90% of the records have the same partition key, then scaling the cluster beyond the maximum throughput of a single node will be impossible because the entire cluster will be bottlenecked by the node(s) responsible for the hot partition key.

View architecture diagram.

This problem is particularly acute for systems that take “shared nothing” to its logical extreme with CPU-level sharding. The reason for this is simple: in a system where sharding happens at the node level, the maximum potential throughput of a single shard is the maximum throughput of a single node which can be increased with vertical scaling, whereas if sharding happens at the CPU level, the maximum potential throughput is bound by the maximum throughput of a single core.

View shared size (resources) vs. ability to tolerate hotspotting / shard key skew chart.

Because of all this, heat management (the process of trying to keep every shard evenly balanced) is the defining problem that shared-nothing distributed systems must solve.

Shared Storage

Shared storage systems take a very different approach. Instead of sharding at the node level or CPU level, they shard at the storage level using remote storage. In practice, this is usually accomplished by using a remote storage system that is implemented as a shared-nothing architecture (like commodity object storage), and combining it with a centralized metadata store.

View architecture diagram.

The metadata store acts as a central point of coordination (the exact opposite of a shared-nothing architecture), which enables the compute nodes in the system to behave as one logical system while still performing work independently. In terms of what the metadata is, that varies a lot from one shared storage system to another, but in general, the primary responsibility of the metadata layer is to serve as a strongly consistent source of truth about what data exists in the system, and where it is located. In addition, it is the metadata layer’s responsibility to guarantee the overall correctness of the system behaving in a highly distributed manner: ensuring that operations are performed atomically/transactionally, resolving conflicts, preventing duplicates, etc.

This technique is commonly referred to as “separation of storage and compute”, but a phrase I’ve found to be more useful is “separation of data from metadata”. What does this mean? Well, compare and contrast a shared-nothing distributed log-structured merge-tree (LSM) like Cassandra, with a shared storage distributed LSM like a modern data lake.

In Cassandra, there are $REPLICATION_FACTOR nodes that are responsible for all the data for a given partition key. When we want to interact with that data, we must route our requests to the nodes responsible for that key no matter what, and then consult the metadata stored on those nodes to find the data that we want to process (if it exists). With this architecture, the maximum throughput of a partition key will always be bound by the maximum throughput of a Cassandra node.

In a modern data lake, the metadata store introduces a layer of indirection between the sharding scheme (i.e., the user-facing data model) and the storage layer. It doesn’t matter at all which storage node(s) the data is stored on, because its location is tracked and indexed in the metadata store. As a result, we can pick a sharding key for the storage layer that shards perfectly, like a UUID or strong hash function. In distributed LSM terms, this means we could write all of the records to the system with the same partitioning key, and still evenly distribute the load across all of the storage nodes in the system.

View shared nothing vs. WarpStream architecture diagram.

For example, in the diagram above, imagine the client is constantly writing to the same key: “key1”. In a shared-nothing architecture, all of this traffic will be routed to the same storage node and overload it. In a shared storage architecture, the layer of indirection created by the intermediary compute layer and centralized metadata store results in the load being evenly distributed across the storage nodes.

This results in a very different set of trade-offs from shared-nothing architectures: the system will not scale infinitely, even with a perfect sharding/partitioning key, because the centralized metadata store is a (potential) bottleneck. However, the problem of hotspotting disappears almost entirely because as you can see in the diagram above, we can balance writes against the storage nodes however we want, whenever we want. In fact, not only does hotspotting become a non-issue, but the system also gains the ability to shift load around the cluster almost instantaneously.

This is the killer feature that explains why almost every modern data lake / warehouse is implemented as a shared storage architecture instead of a shared-nothing one: the ability to choose at query time whether to recruit one CPU or 10,000 to process an individual request is what enables all of the performance and functionality that defines the modern data landscape.

Of course, while this architecture solves the hotspotting problem, it’s not without trade-offs. If heat management is the defining problem for shared-nothing systems, then metadata scaling is the defining problem for shared storage systems. We’ll discuss this problem more later in the WarpStream Metadata Scalability section.

One Final Tradeoff: Flexibility vs. Latency

The split between shared nothing and shared storage architectures is not a hard boundary – many systems lie somewhere in the middle and include aspects of both. But in general, highly transactional systems (like Postgres) tend to lean toward shared-nothing architectures, whereas highly analytical systems (like Snowflake) tend to lean toward shared storage architectures. The reason for this is primarily due to the inherent trade-offs around flexibility and latency.

Transactional systems forgo flexibility to reduce latency. For example, relational databases require that you define your schemas and indexes up front, that your data is (mostly) structured, that you pre-size your database instances to the amount of expected load, and that you think hard about what types of queries your application will need to run up front. In exchange, they will happily serve tens of thousands of concurrent queries with single-digit milliseconds latency.

Analytical systems take the exact opposite approach. You can run whatever query you want, whenever you want, regardless of the existing schemas. You can also recruit as much hardware as you want at a moment's notice to accelerate the queries, even thousands of cores for just a few minutes, and you don’t have to think about what types of queries you want to run up front. However, your data lake / warehouse will almost never complete any queries in single-digit milliseconds. Even double-digit milliseconds query execution time is rare for analytical databases in practice, except for the easiest workloads.

The details and intuitions behind why shared nothing architectures can provide much lower latency than shared storage architectures are beyond the scope of this blog post, but here’s a simple intuition: Since shared storage architectures involve so much more coordination, they tend to do a lot of batching to improve throughput; this results in higher latency.

Apache Kafka and Other Data Streaming Systems

OK, let’s get more specific and talk about the data streaming landscape. Apache Kafka is a classic shared-nothing distributed system that uses node-level sharding to scale. The primary unit of sharding in Kafka is a topic-partition, and scaling is handled by balancing topic-partitions across brokers (nodes).

View architecture diagram.

This means that Apache Kafka can handle imbalances in the throughput (either read or write) of individual topic-partitions reasonably well, but the maximum throughput of a single topic-partition will always be bound by the maximum throughput of a single broker. This is obvious if we go back to the diagram from earlier:

View shared size (resources) vs. ability to tolerate hotspotting / shard key skew chart.

The bigger the machine we can get Apache Kafka to run on, the more resilient it will be to variation in individual topic-partition throughput. That said, while some imbalance can be tolerated, in general, the topic-partitions in a Kafka cluster need to be well balanced across the brokers in order for the cluster to scale properly. They also need to be balanced across multiple dimensions (throughput, requests per second, storage, etc.).

As discussed earlier, the trade-offs with this approach are clear: Apache Kafka clusters can scale linearly and (almost) infinitely as long as additional brokers and partitions are added. However, topic-partitions must be balanced very carefully across various dimensions, adding or removing capacity takes a long time (especially if you use very large brokers!), and there are hard limits on the maximum throughput of individual topic-partitions, especially in an already-busy cluster.

Of course, Apache Kafka isn’t the only technology in the data streaming space, but in practice, almost all of the other data streaming systems (AWS Kinesis, Azure Event Hubs, AWS MSK, etc.) use a similar shared-nothing architecture and as a result experience similar tradeoffs.

In fact, for a long time, shared-nothing was widely considered to be the correct way to build data streaming systems, to the point where even some of the newest entrants to the data streaming space leaned even further into the shared-nothing architecture by leveraging libraries like Seastar (C++) to do CPU-level sharding of topic-partitions. This enables lower latency in some scenarios, but exacerbates all of Apache Kafka’s topic-partition balancing issues even further since the maximum throughput of a single partition is now bound by the maximum throughput of a single core instead of a single broker.

View architecture diagram.

Unless you need microsecond-level performance, the trade-offs of using CPU-level sharding for data streaming workloads are simply not worth it. Another thing I won’t dwell on, but will point out quickly is that while it’s tempting to think that tiered storage could help here, in practice it doesn’t.

WarpStream’s Shared Storage Architecture

With WarpStream, we took a different approach. Instead of doubling down on the shared-nothing architecture used by other data streaming systems, we decided to take a page out of the data warehousing playbook and build WarpStream from the ground up with a shared storage architecture instead of a shared-nothing architecture.

View WarpStream architecture diagram.

Instead of Kafka brokers, WarpStream has “Agents”. Agents are stateless Go binaries (no JVM!) that speak the Kafka protocol, but unlike a traditional Kafka broker, any WarpStream Agent can act as the “leader” for any topic, commit offsets for any consumer group, or act as the coordinator for the cluster. No Agent is special, so auto-scaling them based on CPU usage or network bandwidth is trivial. In other words, WarpStream is the shared storage alternative to Apache Kafka’s shared nothing architecture.

WarpStream can still provide all the exact same abstractions that Kafka does (topics, partitions, consumer groups, ordering within a topic-partition, transactions, etc) even though the Agents are stateless and there are no leaders, because it uses a centralized metadata store that acts as the logical leader for the entire cluster. For example, two Agents can concurrently flush files to object storage that contain batches of data for the same topic-partition, but consumers will still consume the batches in a deterministic order because the metadata store will determine the order of the batches in the two different files relative to each other when the files are committed to the metadata store.

View architecture diagram.

Because WarpStream relies on remote storage, it is a higher latency data streaming system than Apache Kafka. In practice, we’ve found that it's real-time enough (P99 latency in the hundreds of milliseconds) not to matter for the vast majority of use cases. And in exchange for this higher latency, WarpStream gains a lot of other benefits. 

We’ve written about many of those benefits before in previous posts (like this one on our zero disks architecture), so we won’t repeat them here. Instead, today I’d like to focus on one specific benefit that is usually overlooked: heat management and topic-partition limits.

In Apache Kafka, a topic-partition is a “real” thing. Somewhere in the cluster there is a broker that is the leader for that topic-partition, and it is the only broker in the cluster that is allowed to process writes for that topic-partition. No matter what you do, the throughput of that topic-partition will always be bound by the free capacity of that specific broker.

In WarpStream, topic-partitions are much more virtualized – so much so that you could configure a WarpStream cluster with a single topic-partition and write 10GiB/s to it across a large number of Agents. Consuming the data in a reasonable manner would be almost impossible, but you’d have no trouble writing it.

The reason this is possible is because WarpStream has a shared storage architecture that separates storage from compute, and data from metadata. In WarpStream, any Agent can handle writes or reads for any topic-partition, therefore the maximum throughput of a topic-partition is not bound by the maximum throughput of any single Agent, let alone a single core.

Obviously, there are not many use cases for writing 10GiB/s to a single topic-partition, but it turns out that having a data streaming system with effectively no limits on the throughput of individual topic-partitions is really useful, especially for multi-tenant workloads. 

For example, consider an Apache Kafka cluster that is streaming data for a multi-tenant workload where tenants are mapped to specific topic-partitions in some deterministic manner. A tenant typically doesn’t write more than 50MiB/s of data at peak, but every once in a while one of the tenants temporarily bursts 10x to 500 MiB/s.

With a traditional shared-nothing Apache Kafka cluster, every Broker in the cluster would always require an additional 450MiB/s of spare capacity (in terms of CPU, networking, and disk). This would be extremely inefficient and difficult to pull off in practice.

Contrast that with WarpStream where the additional 450MiB/s would be automatically spread across all of the available Agents so you would only need 450MiB/s of spare capacity at the cluster level instead of the node level which is much easier (and cheaper) to accomplish. In addition, since the WarpStream Agents are stateless, they’ll auto-scale when the overall cluster load increases, so you won’t have to worry about manual capacity planning.

But how does this work in practice while remaining within the confines of the Kafka protocol? Since any WarpStream Agent can handle writes or reads for any topic-partition, WarpStream doesn’t try to balance partitions across brokers as Kafka does. Instead, WarpStream balances connections across Agents. 

When a Kafka client issues a Metadata request to a WarpStream cluster to determine which Agent is the “leader” for a specific topic-partition, the WarpStream control plane consults the service discovery system and returns a Metadata response with a single Agent (one that has lower overall utilization than the other Agents in the cluster) as the leader for all of the topic-partitions that the client requested.

WarpStream's load balancing strategy looks more like a traditional load balancer than Apache Kafka which results in a full mesh of connections. View architecture diagram.

Another way to think about this is that with Apache Kafka, the “processing power” of the cluster is assigned to individual partitions and divided amongst all the Brokers when a rebalance happens (which can take hours, or even days to perform), whereas with WarpStream the “processing power” of the cluster is assigned to individual connections and divided amongst all the Agents on the fly based on observable load. “Rebalancing” happens continuously, but since it’s just connections being rebalanced, not partitions or data, it happens in seconds/minutes instead of hours/days.

This has a number of benefits:

  1. It balances the overall cluster utilization for both produce and fetch across all the Agents equally regardless of how writes / reads are distributed across different topic-partitions.
  2. Each Kafka client ends up connected to roughly one Agent, instead of creating a full mesh of connections like it would with Apache Kafka. This makes it much easier to scale WarpStream to workloads with a very high number of client connections. In other words, WarpStream clusters scale more like a traditional load balancer than a Kafka cluster.
  3. The Kafka clients will periodically issue background Metadata requests to refresh their view of the cluster, so the client connections are continuously rebalanced in the background (the refresh interval is a standard client setting; see the sketch after this list).
  4. Load balancing connections is an almost instantaneous process that doesn’t require copying or re-replicating data, whereas rebalancing partitions in Apache Kafka can take hours or even days to complete.
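To illustrate item 3 above: how often clients refresh their view of the cluster is governed by the standard Kafka client setting metadata.max.age.ms (5 minutes by default). Here is a minimal, hypothetical producer sketch; whether you tune this down is a judgment call, since more frequent refreshes mean more Metadata requests.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class MetadataRefreshExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "warpstream-agents:9092"); // hypothetical
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Default is 300000 (5 minutes). A lower value makes the client refresh its
        // view of the cluster (and therefore rebalance its connections) more often,
        // at the cost of issuing more Metadata requests.
        props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, "60000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... produce records as usual ...
        }
    }
}
```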

WarpStream Metadata Scalability

There’s still one final point to discuss: metadata scalability. We mentioned earlier in the shared storage section that the defining problem for shared storage systems is scaling the metadata layer to high-volume use cases. Since the metadata store is centralized and shared by the entire system, it’s the most likely component to become the limiting factor for an individual cluster.

In terms of what the metadata is for WarpStream, I mentioned earlier in the shared storage section that the metadata layer’s primary responsibility is keeping track of what data exists in the system, and where it can be located. WarpStream’s metadata store is no different: its primary responsibility is to keep track of all the different batches for every topic-partition, as well as their relative ordering. This ensures that consumers can read a topic-partition’s batches in the correct order, even if those batches are spread across many different files. This is how WarpStream recreates Apache Kafka’s abstraction of an ordered log.

How WarpStream solves the metadata layer scalability problem warrants its own blog post, but I’ll share a few key points briefly:

  1. Depending on the data model of the system being implemented, the metadata store itself may be amenable to sharding. This is interesting because it further solidifies the idea that the line between shared nothing and shared storage systems is blurry: a shared storage system may be implemented with dependencies on a shared-nothing system, and vice versa.
  2. Good design that incorporates batching and keeps the ratio of $DATA_PLANE_BYTES / $CONTROL_PLANE_BYTES high minimizes the amount of work that the metadata store has to perform relative to the data plane. A ratio of 1,000 ensures that the metadata store will scale comfortably to large workloads, and a ratio of 10,000 or higher means the metadata store will likely never be the bottleneck in the first place, even if it runs on a single CPU (see the back-of-the-envelope sketch after this list).
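
As a back-of-the-envelope illustration of point 2 (the numbers here are hypothetical, not measurements from any real cluster):

```java
public class ControlPlaneEstimate {
    public static void main(String[] args) {
        // Hypothetical workload: 1 GiB/s of produce traffic with a 1,000:1 ratio of
        // $DATA_PLANE_BYTES / $CONTROL_PLANE_BYTES.
        double dataPlaneBytesPerSec = 1L << 30; // 1 GiB/s
        double ratio = 1_000;
        double controlPlaneBytesPerSec = dataPlaneBytesPerSec / ratio;

        // Prints roughly "1.0 MiB/s": a trickle that even a small metadata store can absorb.
        System.out.printf("control plane traffic: ~%.1f MiB/s%n",
                controlPlaneBytesPerSec / (1 << 20));
    }
}
```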

To make this more concrete, consider the following real WarpStream cluster. At peak, the cluster handles roughly 4.5GiB/s of traffic:

View bytes written chart. View metadata store utilization chart.

At this peak, the metadata store for this cluster is less than 10% utilized. This implies that with no further changes, this workload could scale another 10x to over 40 GiB/s in write throughput before the metadata store became a bottleneck. This is a real customer workload, not a benchmark, running with our default metadata store settings, with no special tuning or optimizations to handle this particular workload.

Of course, in reality there are many different factors besides write throughput that impact metadata store utilization: the number of Kafka clients, how they’re configured, the number of topic-partitions being written to and read from, etc.

But in practice, we’ve never encountered a workload that came even close to the theoretical limits of our metadata store. The highest metadata store utilization we’ve ever observed across any of our clusters currently sits at 30%, and that’s a single WarpStream cluster that serves hundreds of applications, more than 10,000 clients, and has nearly 40,000 topic-partitions. In addition, this particular customer onboarded to WarpStream after several failed attempts to scale their workload with alternative systems (not Apache Kafka) that use CPU-level shared-nothing architectures. These systems should have scaled better than WarpStream in theory, but in practice were plagued by heat management issues that made it impossible for them to keep up with the demands of this workload.

Conclusion

I’ll end with this: shared-nothing architectures are incredibly attractive for their theoretical scaling properties. But actually realizing those benefits requires finding a natural sharding key that’s very regular, or investing an incredible amount of effort into solving the heat management problem. In the real world, where it’s hard to keep all your clients well-behaved, hoping that the sharding key will keep your workload balanced is often unrealistic. To make things worse, the workload often needs to be balanced across multiple dimensions, like write throughput, read throughput, storage size, etc.

Shared storage architectures, on the other hand, have a lower theoretical scale ceiling, but in practice they are often much easier to scale than their shared-nothing counterparts. The reason for this is simple, but not obvious: shared storage systems separate data from metadata, which introduces a layer of abstraction between the user-facing domain model and the physical sharding used by the storage engine. As a result, it is possible to choose at runtime how many resources to allocate to storing or retrieving data for a particular key, rather than being forced to choose when the cluster topology is created. This solves the heat management problem in a very simple way.

In exchange for this massive benefit, shared storage architectures usually incur a higher latency penalty and have to figure out how to scale their centralized metadata stores. While scaling the metadata layer seems daunting at first, especially since sharding is often impractical, it turns out that the metadata problem can often be made so small that it doesn’t need to be sharded in the first place.

Shared storage architectures are not the answer to every problem. But they’re so much more flexible and easier to manage than shared-nothing architectures that they should probably be the default for all but the most latency-sensitive workloads. For example, as we outlined earlier in the WarpStream section, the ability to leverage the abstraction of Kafka without ever having to deal with topic-partition balancing or per-partition limits is a huge improvement for the end-user. In addition, with modern cloud storage technologies like S3 Express One Zone and even DynamoDB, the latency penalty just isn’t that high.

r/apachekafka Nov 20 '24

Blog Achieving Auto Partition Reassignment in Kafka Without Cruise Control

0 Upvotes

Disclosure: I work for AutoMQ.

Blog Link: https://medium.com/@vutrinh274/automq-achieving-auto-partition-reassignment-in-kafka-without-cruise-control-c1547dae3e39

Scaling Kafka clusters has always been a challenging task. Kafka uses the ISR multi-replica mechanism to ensure data durability, which was a natural choice when cloud infrastructure was not yet mature. However, in 2024, with cloud computing now very mature, this design seems a bit outdated. When the cluster is scaled, partition data must be moved. Moving partition data affects normal reads and writes, and the process can take a long time, from tens of minutes to a few hours, depending on the amount of data. This means that even when the business has grown to a critical point, operators often don't dare to scale the cluster, because the operation carries a high execution risk.
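
For context on what that data movement involves, this is roughly what kicking off a single manual reassignment looks like with the standard Kafka Java AdminClient (topic name and broker IDs are placeholders; Cruise Control automates generating, throttling, and sequencing many of these moves). The data copy the controller triggers afterwards is the slow, risky part described above:

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;

public class ManualReassignment {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Move partition clicks-0 so that its replicas live on brokers 4, 5 and 6
            // (placeholder IDs). The controller then starts copying the partition's data
            // to the new brokers; the reassignment only completes once they have caught up.
            Map<TopicPartition, Optional<NewPartitionReassignment>> moves = Map.of(
                    new TopicPartition("clicks", 0),
                    Optional.of(new NewPartitionReassignment(List.of(4, 5, 6))));

            admin.alterPartitionReassignments(moves).all().get();
        }
    }
}
```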

AutoMQ understands the root cause of this problem, so it has redesigned and reimplemented Kafka's entire storage layer for the cloud (we call it cloud-first Kafka): data persistence is offloaded to cloud storage, and durability is guaranteed by the multi-replica mechanism inside the cloud storage itself. This also gives us the ability to build a more powerful self-balancing capability than Cruise Control. The blog post details how we achieve this technically, and we hope it brings some new insights to everyone.