Apache Kafka Optimization: 4 Best Practices to Achieve & Maintain Performance


Apache Kafka is the world’s most widely used distributed event store and stream-processing platform. But despite its processing power and reliability at peak load, there are several things you should keep in mind to make the best possible use of this powerful data streaming platform. Above all, remember that optimising Apache Kafka is an ongoing job: millions of data sets feed into its data streams, and many processes must work in tandem to keep everything running smoothly.

  1. Set Log Configuration Parameters To Keep Logs Manageable: 

Apache Kafka gives you many options when it comes to log configuration, and the default settings are a reasonable starting point for most users. However, it would be prudent to set up a sound long-term policy for log retention, cleanup, compaction, and compression. Log behaviour can be controlled using the log.segment.bytes and log.cleanup.policy parameters, among others.

But suppose your business organisation does not require past logs. In that case, you can have Apache Kafka delete log files once they reach a specific size or age by setting log.cleanup.policy to delete. This matters because running log cleanup consumes precious CPU and RAM; if you use Apache Kafka as a commit log for any length of time, make sure to balance the cleanup frequency against that cost.
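As a sketch, a broker’s server.properties might combine retention and cleanup settings like this (the specific values are illustrative, not recommendations):

```properties
# server.properties (illustrative values)
log.cleanup.policy=delete        # delete old segments instead of compacting them
log.retention.hours=168          # keep data for roughly 7 days...
log.retention.bytes=1073741824   # ...or until a partition reaches 1 GB, whichever comes first
log.segment.bytes=1073741824     # roll a new log segment every 1 GB
```

Retention is evaluated per segment, so smaller segment sizes let cleanup reclaim space sooner at the cost of more frequent segment rolls.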

  2. Know Your Hardware Capabilities:

Although Apache Kafka has been around for close to a decade, some highly optimistic users still overestimate their hardware’s capabilities when deploying it. Meanwhile, Kafka keeps growing in popularity among application developers, IT professionals, and data managers.

Here is what you need to know:

  • CPU: If your company needs to deal with Secure Sockets Layer (SSL) encryption or log compression, you will require a multicore CPU to run Apache Kafka. Otherwise, in most cases, the LZ4 codec should be able to provide you with optimal performance.
  • RAM: In most scenarios, 6 GB of RAM should suffice, but if your company runs heavy production workloads, add extra RAM. Extra RAM bolsters the operating system’s page cache and also improves client throughput. Apache Kafka is actually more dependent upon CPU than RAM; however, its processing ability is hampered once free RAM falls below a certain threshold.
  • Storage: Apache Kafka typically spreads load across multiple drives, and because its disk I/O is largely sequential, SSDs deliver little extra advantage over spinning disks. At all costs, you should avoid NAS.
  • File System: The recommended file system is XFS. Keep your cluster within a single data centre if your company’s circumstances allow, and try to provide as much network bandwidth as possible.
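Since the CPU point above mentions the LZ4 codec, here is a minimal illustration of enabling it; compression is a producer-side setting, and the batch values shown are illustrative assumptions, not tuned recommendations:

```properties
# producer configuration (illustrative values)
compression.type=lz4   # cheap on CPU with a good throughput trade-off
batch.size=65536       # larger batches compress better
linger.ms=5            # wait briefly so batches can fill before sending
```

Because compression happens per batch, letting batches fill (via batch.size and linger.ms) usually improves the compression ratio as well as throughput.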
  3. Apache ZooKeeper: A running Apache ZooKeeper cluster is a critical dependency for running Kafka, but when you are using ZooKeeper alongside Kafka, there are some key things to keep in mind:
  • The number of ZooKeeper nodes should not exceed five. One node is suitable for a development environment, and three nodes are enough for most production Kafka clusters.
  • With six or more nodes synced and handling requests, the load becomes immense and performance can take a significant hit. 
  • Also, make sure that you provide ZooKeeper with the strongest network bandwidth possible, use the best disks available, store logs separately, and disable swapping. This keeps latency to a negligible amount.
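A three-node ensemble along the lines described above might be sketched in zookeeper.properties like this; the hostnames and the data directory are hypothetical placeholders:

```properties
# zookeeper.properties (illustrative three-node ensemble)
dataDir=/var/lib/zookeeper          # put this on a fast, dedicated disk
clientPort=2181
tickTime=2000                        # base time unit in milliseconds
initLimit=10                         # ticks a follower may take to connect and sync
syncLimit=5                          # ticks a follower may lag behind the leader
server.1=zk1.example.com:2888:3888   # hypothetical hostnames; peer and election ports
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

With three nodes the ensemble tolerates one failure while keeping the sync overhead low, which matches the guidance above for most production clusters.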

  4. Topic Configurations: Topic configurations have a high impact on the performance of Apache Kafka clusters. Because altering settings like replication factor or partition count later is a challenging task in itself, you will want to set these configurations the right way the first time. Use a replication factor of at least three, and be very careful about handling large messages. 

You can try breaking large messages into ordered pieces, or store the payload elsewhere and pass pointers to the data. The default log segment size is 1 GB; if your messages are larger, consider whether you really need them at that size or can downsize them.

Topic configurations also have a ‘server default’ property, which can be overridden at topic creation time, or later, to give a topic its own configuration. The replication factor is one of the most important of these settings you will need to deal with in Apache Kafka.
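Setting the replication factor at creation time, along with a per-topic override of a server default, can be sketched with the kafka-topics.sh tool; the topic name, partition count, and override value here are illustrative assumptions:

```shell
# Create a topic with replication factor 3 (illustrative values)
bin/kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic orders \
  --partitions 6 \
  --replication-factor 3 \
  --config max.message.bytes=1048576   # per-topic override of the broker default
```

Any topic-level setting passed via --config takes precedence over the corresponding server default for that topic only.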

There is another thing you may have noticed if you have used an older version of Apache Kafka. In older Kafka versions, the default receive.buffer.bytes parameter was 64 kB; in newer versions, the parameter is socket.receive.buffer.bytes, with 100 kB as the default. Why is this so? 

This is because, for high-throughput Kafka environments, the default buffer sizes are extremely small and thus not very useful. This is exactly the situation that arises when the bandwidth-delay product between the broker and the consumer is larger than that of a local area network, popularly known as a LAN.

When there is not enough disk bandwidth left, the data threads slow down and become a bottleneck. This is why you should consider increasing the buffer sizes, at least for network requests, to improve your overall network throughput.

For example, suppose your network runs at 10 Gbps or faster with a latency of 1 millisecond or more. In that case, you should increase your socket buffer size to at least 8 MB, or 16 MB if possible.
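The reasoning behind that sizing can be sketched with the bandwidth-delay product (BDP): the number of bytes that must be in flight to keep the link full. The figures below are the 10 Gbps / 1 ms example from the text:

```python
# Sketch: sizing socket buffers from the bandwidth-delay product (BDP).
# The BDP is how many bytes must be "in flight" to saturate the link.

def bandwidth_delay_product(bandwidth_bps: float, rtt_seconds: float) -> int:
    """Bytes needed to fill the pipe: bandwidth (bits/s) / 8 * round-trip time."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# 10 Gbps link with a 1 ms round trip, as in the example above
bdp = bandwidth_delay_product(10e9, 0.001)
print(f"Bandwidth-delay product: {bdp / 1e6:.2f} MB")  # 1.25 MB
```

The raw BDP here is only 1.25 MB, but bursts and multiple in-flight requests push the real requirement higher, which is why rounding up to 8–16 MB, as suggested above, leaves sensible headroom.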


Apache Kafka is designed for parallel processing of data, and exploiting that parallelism fully requires a balancing act of its own. Partition count is a topic-level setting: the more partitions, the greater the parallelism and throughput. The practices above are some of the optimisation approaches you can implement to improve Kafka performance, but you can always check out our other blogs for more information.




