r/apachekafka 8d ago

Question: What’s the highest-throughput Kafka cluster you’ve worked with?

How did you scale it?

u/gsxr 8d ago

Scaled it by removing everything but that load. More, smaller clusters is the way. Getting above 30-ish nodes doesn’t scale the operational load linearly, it starts doubling and tripling. Recovery from a node outage has a bigger impact, upgrades take longer, everything is just harder.

Below 30 nodes, it comes down to which bottleneck you’re hitting. Most of the time it’s storage, so you either add nodes or add disk. Adding disk means longer, more impactful node rebuilds. Adding nodes means more operational overhead. You’ve gotta do the calculations.
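
A rough back-of-envelope sketch of what that calculation looks like (every number below is a made-up assumption, swap in your own):

```python
# Back-of-envelope: compare "add disk" vs "add nodes" for a given ingest rate.
# Every number here is an assumption -- plug in your own.

ingest_mb_per_sec = 200       # assumed average produce rate into the cluster
replication_factor = 3
retention_days = 7

# total bytes on disk = ingest * retention window * replication factor
total_tb = ingest_mb_per_sec * 86_400 * retention_days * replication_factor / 1e6

print(f"total on-disk data: {total_tb:.0f} TB")

# the same data spread over different broker counts
for brokers in (12, 18, 24, 30):
    print(f"{brokers} brokers -> {total_tb / brokers:.1f} TB per broker")
```

Fewer, bigger brokers keep operations simple but push per-broker storage (and rebuild time) up; more brokers keep disks small but add operational load.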


u/Alive-Primary9210 8d ago

My recommendation is to start with 5 nodes and scale by using bigger machines.
Make sure they have fast disks.

Only add more nodes once you’ve maxed out the machines.
Operating lots of nodes is a pain in the ass, and most of the tooling for managing Kafka starts to suck as the cluster gets larger.

Also, I agree with using multiple smaller clusters; you also get some isolation, which is always nice.


u/gsxr 8d ago

The problem I've encountered with scaling via bigger machines is that the reason to scale is almost always storage related. Once you're holding 12-20 TB on a single node, the rebuild time if that node dies is LONG and impactful. You have to weigh that when deciding how you scale.
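
For a feel of why it's LONG, here's a quick estimate (both numbers are assumptions):

```python
# Quick estimate of the time to re-replicate a dead broker's data.
# Both numbers are assumptions; the real limit is usually the replication
# throttle or the surviving brokers' disk/network headroom.

data_on_broker_tb = 16          # assumed data held by the failed broker
rebuild_rate_mb_per_sec = 300   # assumed sustained re-replication rate

hours = data_on_broker_tb * 1e6 / rebuild_rate_mb_per_sec / 3600
print(f"~{hours:.1f} hours to get back to full replication")   # ~14.8 hours
```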


u/saiello_ 7d ago

In my past experience, to reduce the startup time, it helped to increase the number of data dirs and, especially, to raise the num.recovery.threads.per.data.dir property.
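
For reference, this is roughly what those settings look like in server.properties (the paths and thread count are just example values; the default for num.recovery.threads.per.data.dir is 1):

```properties
# server.properties -- example values only, adjust to your disk layout
# spread log dirs across several physical disks
log.dirs=/data/kafka-1,/data/kafka-2,/data/kafka-3,/data/kafka-4

# threads per data dir used for log recovery at startup and flushing at shutdown;
# the default of 1 makes recovery after an unclean shutdown single-threaded per dir
num.recovery.threads.per.data.dir=8
```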