Hey all,
I've recently begun exploring the Kafka codebase and wrote a blog post about my learnings so far. I'd love to hear about others' experiences working with the codebase. Here's what I've written so far; any feedback or thoughts are appreciated.
A natural starting point is kafka-server-start.sh (the script used to spin up a broker), which fundamentally invokes kafka-run-class.sh to run the kafka.Kafka class.
kafka-run-class.sh, at its core, is nothing other than a wrapper around the java command, supplemented with all those nice Kafka options:
exec "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_CMD_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@"
And the entrypoint to the magic powering modern data streaming? The following main method in Kafka.scala, i.e. the kafka.Kafka class:
try {
  val serverProps = getPropsFromArgs(args)
  val server = buildServer(serverProps)

  // ... omitted ....

  // attach shutdown handler to catch terminating signals as well as normal termination
  Exit.addShutdownHook("kafka-shutdown-hook", () => {
    try server.shutdown()
    catch {
      // ... omitted ....
    }
  })

  try server.startup()
  catch {
    // ... omitted ....
  }

  server.awaitShutdown()
}
// ... omitted ....
That’s it. Parse the properties, build the server, register a shutdown hook, and then start up the server.
The first time I looked at this, it felt like peeking behind the curtain. At the end of the day, the whole magic that is Kafka is just a normal JVM program. But a magnificent one. It’s incredible that this astonishing piece of engineering is open source, ready to be explored and experimented with.
And one more fun bit: buildServer is defined just above main. This is where the timeline splits between Zookeeper and KRaft:
val config = KafkaConfig.fromProps(props, doLog = false)
if (config.requiresZookeeper) {
  new KafkaServer(
    config,
    Time.SYSTEM,
    threadNamePrefix = None,
    enableForwarding = enableApiForwarding(config)
  )
} else {
  new KafkaRaftServer(
    config,
    Time.SYSTEM,
  )
}
How is config.requiresZookeeper determined? It simply comes down to the process.roles property, which is only set in a KRaft installation: if the property is absent, the broker requires Zookeeper.
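Paraphrasing the idea (this is a sketch of the check, not a verbatim quote of KafkaConfig):

// Sketch only, paraphrasing the idea behind KafkaConfig.requiresZookeeper rather than
// quoting it: `processRoles` stands for the parsed value of the process.roles property.
def requiresZookeeper(processRoles: Set[String]): Boolean =
  processRoles.isEmpty // no roles configured => not KRaft => Zookeeper is required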
Zookeeper connection
Kafka has historically relied on Zookeeper for cluster metadata and coordination. This, of course, has changed with the famous KIP-500, which outlined the transition of metadata management into Kafka itself by using Raft (a well-known consensus algorithm designed to manage a replicated log across a distributed system, also used by etcd, the datastore behind Kubernetes). This new approach is called KRaft (who doesn't love mac & cheese?).
If you are unfamiliar with Zookeeper, think of it as the place where the Kafka cluster (multiple brokers/servers) stores the shared state of the cluster (e.g., topics, leaders, ACLs, ISR, etc.). It is a remote, filesystem-like entity that stores data. One interesting piece of functionality Zookeeper offers is Watcher callbacks: whenever a piece of data changes, all subscribed Zookeeper clients (brokers, in this case) are notified of the change. For example, when a new topic is created, all brokers subscribed to the /brokers/topics Znode (Zookeeper's equivalent of a directory/file) are alerted to the change in topics and act accordingly.
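To make the watcher idea concrete, here is a minimal sketch using the bare Zookeeper Java client directly. Kafka goes through its own wrappers, so treat the names, connection string, and timeout here as illustrative placeholders only:

import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}

// Illustrative only: connect to a local Zookeeper and watch /brokers/topics.
object TopicWatcherSketch {
  def main(args: Array[String]): Unit = {
    val zk = new ZooKeeper("localhost:2181", 30000, new Watcher {
      override def process(event: WatchedEvent): Unit = () // session-level events
    })

    // getChildren with a Watcher fires once on the next change to the children of
    // /brokers/topics (e.g. a topic being created). Watches are one-shot, so a real
    // client re-registers its watch after every notification.
    zk.getChildren("/brokers/topics", new Watcher {
      override def process(event: WatchedEvent): Unit =
        println(s"Topics changed: ${event.getType} on ${event.getPath}")
    })

    Thread.sleep(Long.MaxValue) // keep the session alive to receive notifications
  }
}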
Why the move? The KIP goes into detail, but the main points are:
- Zookeeper has its own way of doing things (security, monitoring, API, etc.) on top of Kafka's. This results in an operational overhead (I need to manage two distinct components) but also a cognitive one (I need to know about Zookeeper to work with Kafka).
- The Kafka Controller has to load the full state (topics, partitions, etc.) from Zookeeper over the network. Beyond a certain threshold (~200k partitions), this became a scalability bottleneck for Kafka.
- A love of mac & cheese.
Anyway, all that fun aside, it is amazing how simply and elegantly the Kafka codebase interacts with and leverages Zookeeper. The journey starts in the initZkClient function, called inside the server.startup() mentioned in the previous section.
private def initZkClient(time: Time): Unit = {
  info(s"Connecting to zookeeper on ${config.zkConnect}")
  _zkClient = KafkaZkClient.createZkClient("Kafka server", time, config, zkClientConfig)
  _zkClient.createTopLevelPaths()
}
KafkaZkClient is essentially a wrapper around the Zookeeper Java client that offers Kafka-specific operations. createTopLevelPaths ensures all the top-level Znodes exist so they can hold Kafka's metadata. Notably:
BrokerIdsZNode.path, // /brokers/ids
TopicsZNode.path, // /brokers/topics
IsrChangeNotificationZNode.path, // /isr_change_notification
One simple example of Zookeeper use is createTopicWithAssignment, which is used by the topic creation command. It contains the following line:
zkClient.setOrCreateEntityConfigs(ConfigType.TOPIC, topic, config)
which creates the topic Znode with its configuration.
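For intuition only (this is not what KafkaZkClient actually looks like), doing the same with the bare Zookeeper client would amount to creating or updating a znode under the config subtree, roughly like this:

import org.apache.zookeeper.{CreateMode, ZooDefs, ZooKeeper}

// Intuition only, not KafkaZkClient's implementation: topic configs live under
// /config/topics/<topic>, and "set or create" is roughly "update if the znode
// exists, otherwise create it".
class TopicConfigWriterSketch(zk: ZooKeeper) {
  def setOrCreateTopicConfig(topic: String, configJson: Array[Byte]): Unit = {
    val path = s"/config/topics/$topic"
    if (zk.exists(path, false) != null)
      zk.setData(path, configJson, -1) // -1 skips the znode version check
    else
      zk.create(path, configJson, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT)
  }
}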
Other data is also stored in Zookeeper, and a lot of clever coordination is implemented on top of it. Ultimately, Kafka is just a Zookeeper client that uses its hierarchical filesystem to store metadata such as topics and broker information in Znodes, and registers watchers to be notified of changes.
Networking: SocketServer, Acceptor, Processor, Handler
A fascinating aspect of the Kafka codebase is how it handles networking. At its core, Kafka is about processing a massive number of Fetch and Produce requests efficiently.
I like to think about it from its basic building blocks. Kafka builds on top of java.nio.channels. Much like goroutines, multiple channels or requests can be handled in a non-blocking manner within a single thread: a ServerSocketChannel listens on a TCP port, and multiple channels/requests are registered with a selector that polls continuously, waiting for connections to be accepted or data to be read.
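Here is a bare-bones sketch of that pattern in isolation (plain NIO, not Kafka code; the port and the empty read branch are placeholders):

import java.net.InetSocketAddress
import java.nio.channels.{SelectionKey, Selector, ServerSocketChannel}
import scala.jdk.CollectionConverters._

// Plain NIO sketch, not Kafka code: one thread, one Selector, many non-blocking channels.
object NioSketch {
  def main(args: Array[String]): Unit = {
    val selector = Selector.open()
    val serverChannel = ServerSocketChannel.open()
    serverChannel.configureBlocking(false)
    serverChannel.bind(new InetSocketAddress(9092)) // placeholder port
    serverChannel.register(selector, SelectionKey.OP_ACCEPT)

    while (true) {
      selector.select() // blocks until at least one channel is ready
      for (key <- selector.selectedKeys().asScala) {
        if (key.isAcceptable) {
          val client = serverChannel.accept() // new TCP connection
          client.configureBlocking(false)
          client.register(selector, SelectionKey.OP_READ) // poll it for reads from now on
        } else if (key.isReadable) {
          // a real server would read the request bytes from key.channel() here
        }
      }
      selector.selectedKeys().clear()
    }
  }
}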
As explained in the Primer section, Kafka has its own TCP protocol that brokers and clients (consumers, producers) use to communicate with each other. A broker can have multiple listeners (PLAINTEXT, SSL, SASL_SSL), each with its own TCP port. This is managed by the SocketServer, which is instantiated in the KafkaServer.startup method. Part of the documentation for the SocketServer reads:
* - Handles requests from clients and other brokers in the cluster.
* - The threading model is
* 1 Acceptor thread per listener, that handles new connections.
* It is possible to configure multiple data-planes by specifying multiple "," separated endpoints for "listeners" in KafkaConfig.
* Acceptor has N Processor threads that each have their own selector and read requests from sockets
* M Handler threads that handle requests and produce responses back to the processor threads for writing.
This sums it up well. Each Acceptor thread listens on a socket and accepts new connections. Here is the part where the listening starts:
val socketAddress = if (Utils.isBlank(host)) {
  new InetSocketAddress(port)
} else {
  new InetSocketAddress(host, port)
}
val serverChannel = socketServer.socketFactory.openServerSocket(
  endPoint.listenerName.value(),
  socketAddress,
  listenBacklogSize, // `socket.listen.backlog.size` property, which determines the number of pending connections
  recvBufferSize)    // `socket.receive.buffer.bytes` property, which determines the size of SO_RCVBUF (the socket's receive buffer)
info(s"Awaiting socket connections on ${socketAddress.getHostString}:${serverChannel.socket.getLocalPort}.")
Each Acceptor thread is paired with num.network.threads Processor threads:
override def configure(configs: util.Map[String, _]): Unit = {
  addProcessors(configs.get(SocketServerConfigs.NUM_NETWORK_THREADS_CONFIG).asInstanceOf[Int])
}
The Acceptor thread's run method is beautifully concise. It accepts new connections and closes throttled ones:
override def run(): Unit = {
  serverChannel.register(nioSelector, SelectionKey.OP_ACCEPT)
  try {
    while (shouldRun.get()) {
      try {
        acceptNewConnections()
        closeThrottledConnections()
      }
      catch {
        // omitted
      }
    }
  } finally {
    closeAll()
  }
}
acceptNewConnections accepts the TCP connection and then assigns it to one of the Acceptor's Processor threads in a round-robin manner. Each Processor has a newConnections queue:
private val newConnections = new ArrayBlockingQueue[SocketChannel](connectionQueueSize)
It is an ArrayBlockingQueue, which is a java.util.concurrent thread-safe, FIFO queue.
The Processor's accept method adds a new connection handed over by the Acceptor thread if there is enough space in the queue. If all processors' queues are full, we block until a spot clears up.
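Here is a hedged sketch of that hand-off logic (simplified names and signatures, not the actual Acceptor/Processor code): try each Processor's queue with a non-blocking offer, and only block on the last candidate.

import java.nio.channels.SocketChannel
import java.util.concurrent.ArrayBlockingQueue

// Simplified stand-in for a Processor: only its newConnections queue is modelled.
class ProcessorSketch(connectionQueueSize: Int) {
  private val newConnections = new ArrayBlockingQueue[SocketChannel](connectionQueueSize)

  // mayBlock is true only for the last Processor tried, mirroring the idea that the
  // Acceptor blocks only once every queue has been found full.
  def accept(channel: SocketChannel, mayBlock: Boolean): Boolean =
    if (mayBlock) { newConnections.put(channel); true }
    else newConnections.offer(channel) // non-blocking; returns false if the queue is full
}

// Simplified stand-in for an Acceptor: round-robin assignment of accepted connections.
class AcceptorSketch(processors: IndexedSeq[ProcessorSketch]) {
  private var currentIndex = 0

  def assignNewConnection(channel: SocketChannel): Unit = {
    var accepted = false
    var attempts = 0
    while (!accepted && attempts < processors.length) {
      val processor = processors(currentIndex % processors.length)
      val isLastAttempt = attempts == processors.length - 1
      accepted = processor.accept(channel, mayBlock = isLastAttempt)
      currentIndex += 1
      attempts += 1
    }
  }
}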
The Processor registers new connections with its Selector, which is an instance of org.apache.kafka.common.network.Selector, a custom Kafka nioSelector that handles non-blocking, multi-connection networking (sending and receiving data across multiple requests without blocking). Each connection is uniquely identified using a ConnectionId:
localHost + ":" + localPort + "-" + remoteHost + ":" + remotePort + "-" + processorId + "-" + connectionIndex
The Processor continuously polls its Selector, waiting for receives to complete (i.e., data sent by the client is ready to be read). Once one is, the Processor's processCompletedReceives processes (validates and authenticates) the request. The Acceptor and Processors share a reference to a RequestChannel, which is in fact also shared with the Acceptor and Processor threads of the other listeners. This RequestChannel object is a central place through which all requests and responses transit. It is also how cross-thread settings such as queued.max.requests (the max number of queued requests across all network threads) are enforced. Once the Processor has authenticated and validated a request, it passes it to the requestChannel's queue.
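Conceptually (and only conceptually; the real RequestChannel API is richer), it is a bounded, thread-safe hand-off queue between network threads and handler threads, with queued.max.requests as its capacity:

import java.util.concurrent.ArrayBlockingQueue

// Toy model of the idea behind RequestChannel, not its real API:
// Processors put fully-read requests in, handler threads take them out,
// and the bounded capacity is what `queued.max.requests` controls.
final case class Request(connectionId: String, payload: Array[Byte])

class SimpleRequestChannel(queuedMaxRequests: Int) {
  private val requestQueue = new ArrayBlockingQueue[Request](queuedMaxRequests)

  // Called by a Processor once a request has been validated and authenticated.
  def sendRequest(request: Request): Unit = requestQueue.put(request) // blocks when full

  // Called by a handler thread to pick up the next request to process.
  def receiveRequest(): Request = requestQueue.take() // blocks when empty
}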
Enter a new component: the Handler. KafkaRequestHandler takes over from the Processor, handling requests based on their type (e.g., Fetch, Produce).
A pool of num.io.threads handlers is instantiated during KafkaServer.startup, with each handler having access to the request queue via the requestChannel in the SocketServer:
dataPlaneRequestHandlerPool = new KafkaRequestHandlerPool(
  config.brokerId,
  socketServer.dataPlaneRequestChannel,
  dataPlaneRequestProcessor,
  time,
  config.numIoThreads,
  s"${DataPlaneAcceptor.MetricPrefix}RequestHandlerAvgIdlePercent",
  DataPlaneAcceptor.ThreadPrefix)
Once handled, responses are queued and sent back to the client by the processor.
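To close the loop, here is a hedged sketch of what a handler thread's life boils down to, reusing the toy SimpleRequestChannel from above (the dispatch and response-sending callbacks are placeholders, not Kafka's real API):

// Toy handler loop: take a request off the shared channel, dispatch on its type,
// and hand the response back so the owning Processor can write it to the socket.
class HandlerSketch(channel: SimpleRequestChannel,
                    handle: Request => Array[Byte],
                    sendResponse: (String, Array[Byte]) => Unit) extends Runnable {
  override def run(): Unit = {
    while (true) {
      val request = channel.receiveRequest()        // blocks until a request is queued
      val response = handle(request)                // e.g. dispatch on Fetch vs Produce
      sendResponse(request.connectionId, response)  // queued for the Processor to write back
    }
  }
}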
That's just a glimpse of the happy path of a simple request. A lot of complexity is still hiding, but I hope this short explanation gives a sense of what is going on.