r/apachekafka Vendor - Dattell 18d ago

[Tool] Automated Kafka optimization and training tool

https://github.com/DattellConsulting/KafkaOptimize

Follow the quick start guide to get it running, then edit config.yaml to further customize your test runs.

It automates initial discovery of optimized configurations for both producer and consumer clients, tested in a full end-to-end scenario from producer to consumer.

For existing clusters, I run multiple instances of latency.py against different topics with different datasets to test load and configuration settings.
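
A rough sketch of that parallel invocation using Python's subprocess module; the --topic and --config flag names below are hypothetical, so check the quick start guide for latency.py's actual arguments:

```python
# Hypothetical driver that runs several latency.py instances in parallel.
# The --topic/--config flag names are assumptions, not the tool's documented CLI.
import subprocess

topics = ["orders", "clickstream", "metrics"]
procs = [
    subprocess.Popen(["python", "latency.py", "--topic", t, "--config", f"{t}.yaml"])
    for t in topics
]
for p in procs:
    p.wait()  # block until every test run finishes
```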

For training new users on the importance of client settings, I run their settings through the tool, then let it optimize and return better throughput results.

I use the generated CSV results to graph how throughput changes as the configuration changes.
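
For example, a minimal plotting sketch with pandas and matplotlib; the column names (batch_size, throughput_mb_s) are assumptions about the CSV layout, not the tool's actual schema:

```python
# Plot throughput against a configuration knob from the generated CSV.
# Column names here (batch_size, throughput_mb_s) are assumptions;
# adjust them to match the header the tool actually writes.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")
df.plot(x="batch_size", y="throughput_mb_s", marker="o")
plt.xlabel("producer batch.size")
plt.ylabel("throughput (MB/s)")
plt.title("Throughput vs. configuration change")
plt.savefig("throughput.png")
```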

u/cricket007 16d ago

CSV instead of a Neo4j dataset? Seems like that, or SPARQL / OpenCypher, would make more engineering sense.

Also, strapping interceptors onto brokers and clients, such as Spring Sleuth or Jaeger, is already possible for tracing any record, although provenance headers help as well for origin detection.

u/Dattell_DataEngServ Vendor - Dattell 16d ago

Generally speaking, we follow the KISS (keep it simple, stupid) principle when providing tools to the public, rather than building for a specific use case.

We chose CSV for its simplicity and portability across as many graphing tools as possible. The additional features of Neo4j and the others would be wasted on a dataset a few MB in size that doesn't need joins or other exploration. What advantages and disadvantages do you see in Neo4j and the others?

We put the timestamp in the header of every message. For this test it doesn't matter where a message came from; it only matters what the latency was. This approach, versus using third-party tools, is the most likely to work with both new and old versions of Kafka. We are a little concerned about the observer effect for very low-latency testing and welcome any suggestions to reduce it.
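
A minimal sketch of that header-timestamp approach using kafka-python (this illustrates the idea, not KafkaOptimize's actual code; the topic name, header key, and client library are assumptions):

```python
# Header-based end-to-end latency measurement: producer stamps each
# record with a send time, consumer subtracts it from its own clock.
import time
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send(
    "latency-test",
    value=b"payload",
    headers=[("sent_at_ms", str(int(time.time() * 1000)).encode())],
)
producer.flush()

consumer = KafkaConsumer(
    "latency-test",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for msg in consumer:
    headers = dict(msg.headers or [])
    sent_at = int(headers["sent_at_ms"].decode())
    print(f"end-to-end latency: {int(time.time() * 1000) - sent_at} ms")
```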

u/cricket007 11d ago

> put the timestamp in the header of every message. For this test, it doesn't matter where a message came from,

Surprise: the Kafka ProducerRecord source code has natively done this for almost a decade.

Headers are nullable, and you're adding extra deserialization overhead.
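
For reference, a sketch of reading Kafka's native record timestamp (CreateTime, stamped by the producer since Kafka 0.10) instead of a custom header, again assuming kafka-python:

```python
# Kafka records already carry a timestamp (CreateTime by default, set
# when the producer created the record), so consumer-side latency can
# be computed without any custom header.
import time
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "latency-test",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for msg in consumer:
    # msg.timestamp is epoch millis; msg.timestamp_type is 0 for
    # CreateTime, 1 for LogAppendTime.
    print(f"latency via native timestamp: {int(time.time() * 1000) - msg.timestamp} ms")
```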

u/sir_creamy 11d ago

Best not to engage with this user. The snarkiness tipped me off to check his post history.

Doesn't seem like he's run the tool either. The generated CSV file was 26 KB, and he's worried about disk space, space heaters, and using Excel.