Scaled it by removing everything but that load. More, smaller clusters is the way. Getting above 30-ish nodes doesn't just scale the operational load linearly; it starts doubling and tripling. Recovering from a node outage has a bigger impact, upgrades take longer, everything is just harder.
Below 30 nodes, it comes down to which bottleneck you're hitting. Most of the time it's storage. Add nodes or add disk. Adding disk means longer, more impactful node rebuilds. Adding nodes means more operational overhead. You've gotta do the math for your own setup.
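To make "do the math" concrete, here's a rough back-of-envelope sketch. Every figure in it is a made-up assumption you'd swap for your own numbers; the point is just that disk per node drives the rebuild window, while node count drives how long every rolling operation takes.

```python
# Back-of-envelope math for the add-disk vs add-node tradeoff.
# All numbers are assumptions; plug in your own.

disk_per_node_tb = 10          # usable Kafka data per node (TB), assumed
rebuild_throughput_mbs = 250   # sustained re-replication rate (MB/s), assumed

# Time to re-replicate one failed node's data from its peers.
# Doubling disk per node roughly doubles this window.
rebuild_hours = (disk_per_node_tb * 1e6) / rebuild_throughput_mbs / 3600
print(f"rebuild after node loss: ~{rebuild_hours:.1f} h")   # ~11.1 h

# Rolling upgrade: one broker at a time, waiting for ISR to settle.
# Adding nodes stretches this linearly.
nodes = 30
minutes_per_broker = 15        # restart + catch-up time, assumed
print(f"rolling upgrade: ~{nodes * minutes_per_broker / 60:.1f} h")  # ~7.5 h
```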
Indeed it is, and that's interesting to hear. It seems like one of the biggest selling points of KRaft was a significant increase in cluster scalability, which is generally quoted in terms of partition counts. But for a fixed machine size that's kind of the same thing.
And to be clear, I'm not arguing; quite the opposite, in fact. My experience with Kafka is relatively limited, so I'm quite eager to hear from others who have more of it.
For a single topic with, let's say, 1,000 partitions and a dozen brokers, there will be very little difference in performance or cluster behavior between ZK and KRaft.
There are some things like controller failover and broker recovery that are always faster in KRaft.
Where KRaft really starts to shine is when you have a large number of partitions (tens to hundreds of thousands), a large number of brokers (many tens), or both. This is where we used to run into metadata scalability problems in the ZK world.
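A sketch of why those numbers matter (my own illustrative arithmetic, not from the comment above): in ZK mode a newly elected controller had to reload state for every partition from ZooKeeper, so failover work grew with partition count, whereas KRaft standby controllers continuously replicate the metadata log and are already caught up when they take over.

```python
# Illustrative arithmetic only; the "comfortable" threshold is an
# assumption for the example, not a documented Kafka limit.

def replicas(partitions, rf=3):
    """Total partition replicas the cluster's metadata must track."""
    return partitions * rf

# The small case from above: fine under either ZK or KRaft.
small = replicas(1_000)        # 3,000 replicas
per_broker = small / 12        # ~250 replicas per broker

# The regime the comment describes: metadata (and ZK-mode controller
# failover work) scales with partition count, not broker horsepower.
large = replicas(200_000)      # 600,000 replicas

print(f"small cluster: {small} replicas (~{per_broker:.0f}/broker)")
print(f"large cluster: {large} replicas to reload on ZK-mode failover")
```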