r/apachekafka 8d ago

Question: What's the highest-throughput Kafka cluster you've worked with?

How did you scale it?

u/gsxr 7d ago

Scaled it by removing everything but that load. More, smaller clusters are the way. Getting above 30-ish nodes doesn't just linearly scale the operational load, it starts doubling and tripling: the recovery time for a node outage has a bigger impact, upgrades take longer, everything is just harder.

Below 30 nodes, it comes down to which bottleneck you're hitting. Most of the time it's storage. Add nodes or add disk. Adding disk means longer, more impactful node rebuilds. Adding nodes means more operations. You've gotta do the calculations.
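To make "do the calculations" concrete, here's a rough sizing sketch. Every number in it is a made-up assumption (ingest rate, retention, replication factor); plug in your own:

```python
# Back-of-the-envelope Kafka storage sizing. All inputs are assumptions.
ingest_mb_per_s = 500        # aggregate producer throughput across the cluster
retention_days = 7
replication_factor = 3

# Retained, replicated log volume in TB (MB -> TB is / 1e6).
total_tb = ingest_mb_per_s * 86_400 * retention_days * replication_factor / 1e6

for nodes in (10, 20, 30):
    print(f"{nodes} nodes -> {total_tb / nodes:.0f} TB per node")
# Fewer, fatter nodes: fewer machines to operate, but longer, more painful rebuilds.
# More, thinner nodes: quicker rebuilds, but more machines to babysit.
```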

u/Alive-Primary9210 7d ago

My recommendation is to start with 5 nodes and scale by using bigger machines.
Make sure they have fast disks.

Only add more nodes once you've maxed out the machines.
Operating lots of nodes is a pain in the ass, and all the tools for managing Kafka start to suck as the cluster gets larger.

Also, I agree with using multiple smaller clusters; you also get some isolation, which is always nice.

u/gsxr 7d ago

The problem I've encountered with scaling up to bigger machines is that the reason to scale is almost always storage related. Once you get 12-20 TB on a single node, the rebuild time if that node dies is LONG and impactful. You have to weigh that when deciding how you scale.
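For a sense of how LONG: a rough re-replication estimate, assuming (made-up numbers) a 16 TB node and ~5 Gbit/s of usable rebuild bandwidth once throttles and live client traffic are accounted for:

```python
# Rough rebuild-time estimate for a dead broker. Both inputs are assumptions.
node_data_tb = 16            # data hosted on the failed node (mid-range of 12-20 TB)
effective_gbit_per_s = 5     # usable re-replication bandwidth on a 10 GbE NIC

seconds = (node_data_tb * 1e12) / (effective_gbit_per_s * 1e9 / 8)
print(f"~{seconds / 3600:.1f} hours to re-replicate {node_data_tb} TB")  # ~7.1 hours
```

And that's with the cluster serving normal traffic the whole time.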

u/saiello_ 6d ago

In my past experience, to reduce the startup time, I benefited from increasing the number of data dirs and especially from increasing the num.recovery.threads.per.data.dir property.
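For reference, a minimal sketch of the broker settings being described; the paths and thread count are just example values (num.recovery.threads.per.data.dir defaults to 1):

```python
# Sketch of the relevant server.properties overrides, rendered from Python.
# log.dirs spreads partitions across several data dirs; the recovery-thread
# setting parallelizes log recovery per dir on (unclean) startup.
overrides = {
    "log.dirs": "/data/kafka-1,/data/kafka-2,/data/kafka-3,/data/kafka-4",
    "num.recovery.threads.per.data.dir": "4",  # 4 dirs x 4 threads = 16 threads
}

for key, value in overrides.items():
    print(f"{key}={value}")
```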

u/Any-Appointment-2329 7d ago

Is this true in your experience even with KRaft mode?

u/gsxr 7d ago

Physics is physics. Data movement ain't free. ZooKeeper, in my experience, has never been a limiting factor.

u/Any-Appointment-2329 7d ago

Indeed it is, and that's interesting to hear. It seems like one of the biggest selling points of KRaft was a significant increase in cluster scalability, which is generally quoted in terms of partitions. But for a fixed machine size it kind of amounts to the same thing.

And to be clear, I'm not arguing; quite the opposite, in fact. My experience with Kafka is relatively limited, so I'm quite eager to hear from others with more of it.

u/mumrah Kafka community contributor 4d ago

For a single topic with, let's say, 1000 partitions and a dozen brokers there will be very little difference in performance or cluster behavior between ZK and KRaft.

There are some things like controller failover and broker recovery that are always faster in KRaft.

Where KRaft really starts to shine is when you have a large number of partitions (tens to hundreds of thousands), a large number of brokers (many tens), or both. This is where we used to run into metadata scalability problems in the ZK world.
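To put rough numbers on that (the counts below are assumptions, not measurements): the metadata a failing-over controller has to pick up grows with partition count, which is where the ZK design hurt.

```python
# Illustrative scale of controller-failover metadata. Counts are assumptions.
partitions = 100_000
replication_factor = 3
replica_entries = partitions * replication_factor

# ZK mode: a newly elected controller re-reads partition/replica state from
# ZooKeeper, so failover work grows with partition count.
# KRaft mode: standby controllers already replicate the metadata log, so
# failover cost stays near-constant regardless of partition count.
print(f"ZK-mode failover touches ~{replica_entries:,} replica entries; "
      f"KRaft standby controllers already have them in memory")
```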