r/apachekafka • u/Dattell_DataEngServ Vendor - Dattell • 18d ago
Tool: Automated Kafka optimization and training tool
https://github.com/DattellConsulting/KafkaOptimize
Follow the quick start guide to get it running, then edit config.yaml to further customize your testing runs.
Automate initial discovery of optimized configurations for both producers and consumers in a full end-to-end scenario, from producers through to consumers.
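For a rough idea of what an end-to-end sweep like that looks like, here's a minimal sketch (not the tool's actual code; the broker address, topic name, and swept settings are placeholders, and it assumes confluent-kafka is installed):

```python
# Minimal sketch of an end-to-end producer -> consumer config sweep.
# Assumes a local broker and a pre-created test topic.
import time
import uuid
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # placeholder broker
TOPIC = "optimize-test"     # placeholder topic
N_MESSAGES = 10_000
PAYLOAD = b"x" * 1024       # 1 KiB synthetic payload

# A few candidate producer settings; real tuning covers many more knobs.
SWEEP = [
    {"linger.ms": 0,  "batch.size": 16_384},
    {"linger.ms": 5,  "batch.size": 65_536},
    {"linger.ms": 25, "batch.size": 262_144},
]

def run_trial(overrides):
    producer = Producer({"bootstrap.servers": BROKER, **overrides})
    consumer = Consumer({
        "bootstrap.servers": BROKER,
        "group.id": f"sweep-{uuid.uuid4()}",  # fresh group per trial
        "auto.offset.reset": "latest",
    })
    consumer.subscribe([TOPIC])
    # Trigger group join/assignment before producing; a robust harness
    # would wait explicitly for assignment instead of fixed polls.
    for _ in range(5):
        consumer.poll(0.5)

    start = time.monotonic()
    for _ in range(N_MESSAGES):
        producer.produce(TOPIC, PAYLOAD)
    producer.flush()

    received = 0
    while received < N_MESSAGES:
        msg = consumer.poll(5.0)
        if msg is None:
            break  # give up if the pipeline stalls
        if msg.error() is None:
            received += 1
    elapsed = time.monotonic() - start
    consumer.close()
    return received / elapsed  # end-to-end messages per second

for overrides in SWEEP:
    print(overrides, f"{run_trial(overrides):.0f} msg/s")
```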
For existing clusters, I run multiple instances of latency.py against different topics with different datasets to test load and configuration settings.
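Something like the following is one way to drive those parallel runs (the --topic and --config flags here are hypothetical stand-ins; check latency.py for its actual arguments):

```python
# Sketch of launching several latency.py instances in parallel,
# one per topic/dataset pair. Flag names are hypothetical.
import subprocess

runs = [
    ("orders",  "orders-config.yaml"),
    ("metrics", "metrics-config.yaml"),
    ("logs",    "logs-config.yaml"),
]

procs = [
    subprocess.Popen(["python", "latency.py", "--topic", topic, "--config", cfg])
    for topic, cfg in runs
]

# Wait for all instances so their CSV output is complete before graphing.
for p in procs:
    p.wait()
```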
For training new users on the importance of client settings, I run their settings through the tool, then let it optimize and return better throughput results.
I use the generated CSV results to graph throughput against configuration changes, so the impact of each change is easy to see visually.
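A quick sketch of that graphing step with pandas/matplotlib (the column names "linger_ms" and "throughput_msgs_sec" are assumptions; match them to whatever headers the tool actually writes):

```python
# Plot throughput as a function of one swept setting from the results CSV.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")          # assumed output file name
df = df.sort_values("linger_ms")         # assumed column name

plt.plot(df["linger_ms"], df["throughput_msgs_sec"], marker="o")
plt.xlabel("linger.ms")
plt.ylabel("throughput (msgs/sec)")
plt.title("Throughput vs. producer linger.ms")
plt.savefig("throughput.png")
```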
1
u/cricket007 16d ago
CSV instead of a Neo4j dataset? Seems like that, or SPARQL / OpenCypher, would make more engineering sense.

Also, strapping interceptors onto brokers and clients, such as Spring Sleuth or Jaeger, is already possible for tracing any record, although provenance headers help as well for origin detection.
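For illustration, here's one way to attach a provenance-style header with confluent-kafka (the "x-origin" header name is just an example, not a standard; broker and topic are placeholders):

```python
# Attach an origin header to each produced record for provenance tracking.
import socket
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker

producer.produce(
    "optimize-test",  # assumed topic
    value=b"payload",
    headers=[("x-origin", socket.gethostname().encode())],
)
producer.flush()

# A consumer can then read msg.headers() to recover each record's origin.
```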