How we increased performance of Flume by another 700%!

Posted by Shubham Shirude on January 23, 2017 in Blog, Gazelle

In my last blog, I wrote about how we used an appropriate regular expression in the Flume script and achieved a significant improvement in Flume's performance.
In another project, we ran into a different problem and realized that, in addition to using the right regex, Flume's performance can be increased drastically in another way.
Context: We were receiving data in the form of CSV files. Each file was very small, just a few KB in size. We were passing the files to Flume one by one. Initially, when only a few files were available for testing, we did not notice a problem; the files were loaded in a matter of seconds.
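For reference, this kind of one-file-at-a-time ingestion is typically wired up with a spooling-directory source feeding an HDFS sink. The config below is a minimal sketch of such an agent, not our exact setup; the agent name, directories, and channel sizing are illustrative:

```
# Minimal Flume agent: watch a spool directory for CSV files, write to HDFS
agent.sources  = csvSrc
agent.channels = memCh
agent.sinks    = hdfsSink

agent.sources.csvSrc.type     = spooldir
agent.sources.csvSrc.spoolDir = /data/spool
agent.sources.csvSrc.channels = memCh

agent.channels.memCh.type     = memory
agent.channels.memCh.capacity = 10000

agent.sinks.hdfsSink.type          = hdfs
agent.sinks.hdfsSink.hdfs.path     = hdfs://namenode/flume/csv
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.channel       = memCh
```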

In the performance testing phase, we started receiving good volumes of data and needed to load thousands of files in a few seconds. That is when we noticed that Flume could not load the files as fast as we expected: we could load barely 60-70 files per minute, which was woefully inadequate.
We were aware that HDFS prefers dealing with a small number of large files rather than a large number of small files. After some analysis, we realized that the same principle might apply to Flume as well.
Approach to the problem: We introduced a pre-processing step in which we combined multiple small files into a single big file before passing it to Flume (a sketch of the idea follows). The results were astonishing. Concatenating smaller files into a bigger file before passing it to Flume improved loading times significantly; in one instance, throughput shot up by more than 700%!
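Our pre-processing step was essentially a small merge job. Here is a minimal sketch of the idea in Java; the directory paths and the batch size of 2,000 are illustrative (tune them as discussed in the conclusion), and it assumes each CSV file ends with a newline:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CsvBatcher {
    // Files per output batch; ~2000 worked well for our ~10 KB inputs
    private static final int BATCH_SIZE = 2000;

    public static void main(String[] args) throws IOException {
        Path inputDir = Paths.get("/data/incoming"); // hypothetical: where small CSVs land
        Path spoolDir = Paths.get("/data/spool");    // hypothetical: directory Flume watches
        Files.createDirectories(spoolDir);

        // Collect the small CSV files in a deterministic order
        List<Path> smallFiles;
        try (Stream<Path> s = Files.list(inputDir)) {
            smallFiles = s.filter(p -> p.toString().endsWith(".csv"))
                          .sorted()
                          .collect(Collectors.toList());
        }

        // Concatenate every BATCH_SIZE small files into one large file
        for (int start = 0; start < smallFiles.size(); start += BATCH_SIZE) {
            int end = Math.min(start + BATCH_SIZE, smallFiles.size());
            Path batch = spoolDir.resolve("batch-" + (start / BATCH_SIZE) + ".csv");
            try (OutputStream out = Files.newOutputStream(batch,
                    StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING)) {
                for (Path f : smallFiles.subList(start, end)) {
                    Files.copy(f, out); // append contents; assumes a trailing newline per file
                }
            }
        }
    }
}
```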
Here's a summary of what we achieved using different combinations:

[Summary table: loading times for different file-concatenation levels]

Conclusion:
1) When the files being loaded via Flume are small, concatenate them into bigger files before passing them to Flume. Loading times are reduced significantly, because the JVM overhead of opening and closing a file is incurred far fewer times when fewer, larger files are passed.
2) It is essential to test which level of concatenation gives optimum results. In our case, concatenating batches of around 2,000 files, each averaging under 10 KB, gave good results. Concatenating beyond that is not expected to give any additional benefit; in fact, past a threshold, it might degrade loading performance.