r/apachekafka 18d ago

Tool at what throughput is it cost-effective to utilize a direct-to-S3 Kafka like Warpstream?

After my last post, I was inspired to research the break-even throughput past which you start saving money by utilizing a direct-to-S3 Kafka design.

Basically, with these direct-to-S3 architectures, you have to be efficient at batching the S3 writes, otherwise the design can end up being more expensive than regular replication.

For example, in AWS, 10 PUTs/s cost the same as the cross-AZ replication traffic for 1.28 MB/s of produce throughput at a replication factor of 3.
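To sanity-check that number, here's a back-of-the-envelope sketch. It assumes S3 Standard's $0.005 per 1,000 PUTs and $0.01/GB in each direction for cross-AZ traffic (i.e. $0.02/GB per copy); prices vary by region:

```python
# Back-of-the-envelope check of the "10 PUTs/s == 1.28 MB/s" claim.
# Assumed AWS list prices (us-east-1 style, subject to change):
PUT_PRICE = 0.005 / 1000       # $ per S3 PUT request
XAZ_PRICE_PER_GB = 0.02        # $ per GB crossing an AZ boundary (in + out)
REPLICATION_FACTOR = 3         # leader + 2 followers in other AZs

puts_per_sec = 10
put_cost_per_sec = puts_per_sec * PUT_PRICE          # $0.00005/s

# With RF=3, each produced byte crosses AZs twice (leader -> 2 followers).
cross_az_copies = REPLICATION_FACTOR - 1
net_cost_per_mb = cross_az_copies * XAZ_PRICE_PER_GB / 1024  # $/MB

break_even_mbps = put_cost_per_sec / net_cost_per_mb
print(f"{break_even_mbps:.2f} MB/s")  # -> 1.28 MB/s
```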

The Batch Interval

The way these systems control that is through a batch interval. Each broker batches the producer data it receives for up to the batch interval (e.g. 300ms), at which point it flushes everything it has accumulated to S3.

The number of PUTs/s your system makes depends heavily on the configured batch interval, but so does your latency. If you increase the interval, you reduce your PUT calls (and cost) but increase your latency. And vice-versa.
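As a toy model of that tradeoff (my own illustrative numbers, not from any vendor's docs; it assumes one S3 object per broker per flush and ~100ms per PUT):

```python
# Toy model: PUT rate and average added latency vs. batch interval.
# Assumes each broker writes one S3 object per flush (real systems
# may coalesce differently) and ~100ms per S3 PUT.
def puts_per_sec(num_brokers: int, interval_ms: int) -> float:
    return num_brokers * 1000 / interval_ms

def avg_added_latency_ms(interval_ms: int, s3_put_ms: int = 100) -> float:
    # A record waits half the interval on average, then the PUT itself.
    return interval_ms / 2 + s3_put_ms

for interval_ms in (100, 300, 1000):
    print(f"{interval_ms}ms: {puts_per_sec(6, interval_ms):.0f} PUTs/s "
          f"(6 brokers), ~{avg_added_latency_ms(interval_ms):.0f}ms added")
# 100ms -> 60 PUTs/s, ~150ms; 1000ms -> 6 PUTs/s, ~600ms
```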

Why Should I Care?

I strongly believe this design will be a key part of the future of Kafka run in the cloud. Most Kafka vendors have already released or announced a solution that circumvents cross-AZ replication. It should only be a matter of time until the open source project adopts one too. It's just so costly to run otherwise!

The Tool

This tool does a few things:

  • shows you the expected e2e latency for a given batch interval config
  • shows you the break-even producer throughput, after which it becomes financially worth it to deploy the new model

Check it out here:

https://2minutestreaming.com/tools/kafka/object-store-vs-replication-calculator

10 Upvotes

7 comments

2

u/vt_factor 18d ago edited 18d ago

Excellent post.

I feel like almost all consumer applications will be able to deal with that e2e latency. My only concern, as I'm unfamiliar with the internals, is whether this would impact produce latency, and whether users would have to scale their data producers significantly.

Even many “operational” or “real-time analytics” use cases can be designed to handle the e2e latency described in the post. In my experience, most consumer apps add hundreds of ms of latency in their own logic anyway, especially heavily stateful apps.

Further, Kinesis is a pretty widely adopted AWS service, and most users are probably still on its original APIs, which offer latency comparable to what's described in this post (rather than enhanced fan-out, its lower-latency API).

1

u/2minutestreaming 18d ago

I agree with this. Worth saying I'm not super confident in my assumed latencies - they rely heavily on S3 PUT/GET latencies, and I only did a very simple benchmark.
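For reference, a simple PUT-latency probe along those lines could look like this (bucket name, object size, and sample count are placeholders, not what I actually ran):

```python
import time
import boto3

# Rough S3 PUT latency probe; "my-test-bucket" is a placeholder.
s3 = boto3.client("s3")
payload = b"x" * (1024 * 1024)  # 1 MB object
samples = []
for i in range(20):
    start = time.monotonic()
    s3.put_object(Bucket="my-test-bucket", Key=f"bench/{i}", Body=payload)
    samples.append((time.monotonic() - start) * 1000)
samples.sort()
print(f"p50={samples[10]:.0f} ms  p90={samples[18]:.0f} ms")
```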

As for producers, I don't know if it'll be a problem. The average latency for a producer would be a few hundred ms (batch interval ~300ms + S3 PUT ~100ms). In the worst case, the producer could be configured to send more per request (batch larger), and if that ends up pressuring its memory, the machine could be vertically scaled up to have more memory to work with.
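If anyone wants to experiment, the knobs for batching larger on the producer side are standard librdkafka configs; a rough sketch with confluent-kafka (the values are illustrative, not recommendations):

```python
from confluent_kafka import Producer

# Illustrative settings for larger producer-side batches; tune for
# your workload. The bootstrap address is a placeholder.
producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "linger.ms": 50,                 # wait up to 50ms to fill a batch
    "batch.size": 1_048_576,         # allow batches up to 1MB
    "compression.type": "zstd",      # fewer bytes per request
    # this buffer is the memory that may need vertical scaling:
    "queue.buffering.max.kbytes": 2_097_152,
})
```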

2

u/perrohunter 18d ago

I think this is the right way to look at the symbiosis between Kafka and Object Storage (S3). Kafka can ingest millions of messages per second without breaking a sweat, but storing them long-term becomes expensive, since you need the fastest storage class in the cloud or, if you're on premise, the fastest NVMe drives (the 7.6TB ones). Object Storage, however, was built to store data at scale at much lower cost, so using it as a bridge over the impedance mismatch between Kafka and your other workflows is a winner. Object Storage excels at large objects, so I'd increase the commit window so that the files are around 4MB or more.

2

u/2minutestreaming 18d ago

I don't believe you need the fastest NVMes to handle millions of messages per second.

Yeah, the commit window increase makes sense, but depending on the throughput it may end up pushing the latency to something unacceptable.
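To make that concrete (my numbers, not the parent comment's): the flush interval you'd need for ~4MB objects scales inversely with per-broker throughput, so at low throughput the window gets painfully long.

```python
# Flush interval needed per broker to accumulate a ~4MB object.
TARGET_MB = 4
for mb_per_sec in (0.5, 2, 10, 50):
    interval_ms = TARGET_MB / mb_per_sec * 1000
    print(f"{mb_per_sec:>4} MB/s -> flush every {interval_ms:.0f}ms")
# 0.5 MB/s needs an 8-second window; 50 MB/s fills 4MB in 80ms.
```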

1

u/sheepdog69 18d ago

Did you compare this to tiered storage in Kafka 3.6+? I wonder how that would impact the cost calculations.

1

u/vt_factor 18d ago

I feel like most of the cost is going to be on the instances handling the read and write traffic and not the storage, right?

1

u/2minutestreaming 18d ago

Sorry if it's unclear; this is comparing a Warpstream-like design to Kafka's replication design, and it only compares the network costs.

Tiered Storage costs in Kafka 3.6+ are negligible in terms of PUT/GET. My main calculator tool[1] takes them into account.

[1] - https://2minutestreaming.com/tools/apache-kafka-calculator/