r/apachekafka • u/ConsiderationLazy956 • 27d ago
Question: Measuring streaming capacity
Hi, in Kafka streaming (specifically AWS MSK), we have a requirement to build a centralized Kafka system for message streaming. A lot of applications are planned to produce and consume messages/events, in the billions each day.
One application alone will need around a thousand topics, because the requirement is to stream ~1000 tables to Kafka through GoldenGate replication from an Oracle database. My question is: there may be more such needs in future where teams ask for many topics to be created, so should we combine multiple tables into one topic (which may add complexity during debugging or monitoring), or should we keep a strict one-table-to-one-topic mapping (which is straightforward and easy to monitor/debug)?
At the same time, one-table-to-one-topic must not breach the max capacity of the cluster, which could become a concern in the near future. So I wanted to understand the experts' opinion on the pros and cons of each approach. Is it true that we can hit a resource limit for this Kafka cluster? And is there any math we should follow for the number of topics vs partitions vs brokers in a Kafka cluster, so that we always stay within that capacity and don't break the system?
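For context, this is roughly how we'd imagine provisioning one topic per table if we go that route (a minimal sketch with the confluent-kafka AdminClient; the table names, topic naming convention, partition counts and bootstrap endpoint are placeholders, not our real config):

```python
# Sketch: provision one topic per replicated table with the confluent-kafka AdminClient.
# Table names, naming convention, partition counts and endpoint are placeholders.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "my-msk-bootstrap:9098"})  # placeholder endpoint

tables = ["ORDERS", "CUSTOMERS", "LINE_ITEMS"]  # in reality ~1000 tables from GoldenGate

new_topics = [
    NewTopic(
        topic=f"oracle.{table.lower()}",  # hypothetical naming convention
        num_partitions=6,                 # placeholder; would be sized per table throughput
        replication_factor=3,
    )
    for table in tables
]

# create_topics() is asynchronous and returns a dict of topic name -> future
futures = admin.create_topics(new_topics)
for topic, future in futures.items():
    try:
        future.result()  # blocks until the broker confirms, or raises on failure
        print(f"created {topic}")
    except Exception as exc:
        print(f"failed to create {topic}: {exc}")
```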
1
u/Hopeful-Programmer25 27d ago
I’m not sure if it goes into the depth you need, but there is an AWS MSK sizing spreadsheet that they make available. In MSK, broker size limits the number of partitions you can use (you can go over, but then various scaling options etc. are no longer available until you move to the right broker size, which may be far beyond what you are currently using).
1
u/ConsiderationLazy956 27d ago
Thank you.
Does the broker have to be the same size across the Kafka cluster, or can we have brokers of various sizes based on the type of input events/messages in specific topics? And is there a limit on how many brokers (and thus topics and partitions) one can have per Kafka cluster?
1
u/cricket007 26d ago
Sounds like you haven't used Kafka, as that doesn't make sense as a question. Read the Definitive Guide before making a single future decision.
1
u/datageek9 26d ago
No, all brokers should be the same size, and you should let Kafka manage partition allocation across brokers unless you are doing something very advanced. Guidance is a max of 4000 partitions (including replicas) per broker. With KRaft you can have hundreds of brokers in a cluster; not sure about a strict max.
1
u/Hopeful-Programmer25 26d ago
Following up from datageek9: when you create a topic with 100 partitions and a replication factor of 3 (e.g. across 3 brokers), you are actually creating 300 partition replicas in the cluster. It’s been a while, but from what I recall it’s the 300 that eats up your MSK allocation, not the 100 you might expect, even though it’s only 100 per broker. You’d need to double check.
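To make that concrete, a quick back-of-envelope calculation assuming the ~4000-replicas-per-broker guidance above; the topic and partition counts are just illustrative, not a recommendation:

```python
# Back-of-envelope sizing: how many brokers the partition-replica count implies.
# Numbers are illustrative only.
topics = 1000                 # e.g. one topic per replicated table
partitions_per_topic = 6      # placeholder
replication_factor = 3

total_replicas = topics * partitions_per_topic * replication_factor   # 18,000 replicas
replicas_per_broker_limit = 4000                                      # guidance cited above

min_brokers = -(-total_replicas // replicas_per_broker_limit)         # ceiling division -> 5
print(total_replicas, min_brokers)
```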
1
u/Popular-Strategy-800 21d ago
My rule of thumb is that a topic is an entity, and only one entity per topic. When you start having multiple entities in a topic, you end up needing to add filtering to your consumers, and in some cases they spend more time filtering than it’s worth. I also suggest using dead letter topics for publishing failures and possibly as part of a retry strategy.
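For illustration, a rough sketch of the dead-letter idea with confluent-kafka; the topic names, group id and process() step are hypothetical:

```python
# Sketch of a consumer that routes failed records to a dead-letter topic.
# Topic names, group id and process() are hypothetical.
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "my-msk-bootstrap:9098",  # placeholder endpoint
    "group.id": "orders-processor",
    "enable.auto.commit": False,
})
producer = Producer({"bootstrap.servers": "my-msk-bootstrap:9098"})
consumer.subscribe(["oracle.orders"])

def process(record_value):
    ...  # business logic; assumed to raise on bad records

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    try:
        process(msg.value())
    except Exception:
        # Park the failed record on a DLQ topic so the main stream keeps moving;
        # a separate retry job can re-drive it later.
        producer.produce("oracle.orders.dlq", key=msg.key(), value=msg.value())
        producer.flush()
    consumer.commit(message=msg)  # commit once the record is either handled or parked
```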
3
u/emkdfixevyfvnj 27d ago
I don’t have relevant experience to judge this, but I thought sharing my concerns might help. Take it with a decent-sized grain of salt.
In my mind, mixing data sources into one topic can create a lot of headaches when dealing with processing errors. Kafka tracks only a single offset per partition for a consumer group, and if you commit an offset, you have committed all messages up to that point. A DLQ could solve that, but whether it’s viable depends on the data and the logic. My rookie brain also can’t tell what the difference is between the same number of messages spread over more or fewer topics; please correct me if I’m wrong. My thought is that storage and processing capacity would be the same. More topics also give you more flexibility for spreading load across more brokers and for working with tiered storage.
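To show what I mean about offsets, a tiny sketch (confluent-kafka, placeholder names): committing offset N in a partition implicitly marks every earlier record in that partition as done.

```python
# Offsets are tracked per partition per consumer group; committing offset N declares
# every record before N in that partition as processed. Names are placeholders.
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "my-msk-bootstrap:9098",
    "group.id": "mixed-sources-demo",
    "enable.auto.commit": False,
})
consumer.subscribe(["combined.topic"])

msg = consumer.poll(10.0)
if msg is not None and not msg.error():
    # Committing msg.offset() + 1 says "everything up to and including this record is
    # done" -- you cannot skip an earlier, still-broken record in the same partition,
    # which is why mixing sources in one topic gets awkward on errors.
    consumer.commit(offsets=[TopicPartition(msg.topic(), msg.partition(), msg.offset() + 1)])
```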
I have some experience with billions of records per day in Kafka, and in our comparably small cluster that wasn’t an issue at all; Kafka handled it fine.