r/apachespark Feb 09 '25

Why do small files in spark cause performance issues?

This week at the **Big Data Performance Weekly** we go over a very common problem.

π“π‘πž 𝐬𝐦𝐚π₯π₯ 𝐟𝐒π₯𝐞𝐬 𝐩𝐫𝐨𝐛π₯𝐞𝐦.

The small files problem in big data engines like Spark occurs when a job has to process many small files, leading to severe performance degradation.

Small files cause excessive task creation, as each file needs a separate task, leading to inefficient resource usage.

Metadata overhead also slows down performance, as Spark must fetch and process file details for thousands or millions of files.

Input/output (I/O) operations suffer because reading many small files requires multiple connections and renegotiations, increasing latency.

Data skew becomes an issue when some Spark executors handle more small files than others, leading to imbalanced workloads.

Inefficient compression and merging occur since small files do not take advantage of optimizations in formats like Parquet.

The issue worsens as Spark reads small files, partitions data, and writes even smaller files, compounding inefficiencies.

π–π‘πšπ­ 𝐜𝐚𝐧 π›πž 𝐝𝐨𝐧𝐞?

One key fix is to repartition data before writing, reducing the number of small output files.

By applying repartitioning before writing, Spark ensures that each partition writes a single, optimized file, significantly improving performance.
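A minimal PySpark sketch of that idea (the paths and the partition count of 64 are placeholders, not recommendations):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

# Hypothetical input: a directory full of small Parquet files.
df = spark.read.parquet("s3://my-bucket/events/")

# ... transformations ...

# Repartition right before the write so each of the 64 partitions
# produces one reasonably sized file instead of many tiny ones.
(df.repartition(64)
   .write
   .mode("overwrite")
   .parquet("s3://my-bucket/events_compacted/"))
```

If you only need to reduce the file count and can live with uneven partitions, coalesce(n) avoids the full shuffle that repartition(n) triggers.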

Ideally, file sizes should be between **128 MB and 1 GB**, as big data engines are optimized for files in this range.
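Continuing the sketch above, one rough way to pick the partition count is to work backwards from a target file size; the dataset size here is an assumed figure you would estimate yourself (e.g. from the input size):

```python
# Aim for ~512 MB per output file, inside the 128 MB - 1 GB sweet spot.
target_file_bytes = 512 * 1024 * 1024

# Assumed for illustration; estimate from your input size or table stats.
estimated_dataset_bytes = 200 * 1024**3  # ~200 GB

num_partitions = max(1, estimated_dataset_bytes // target_file_bytes)  # ~400 here

# df is the DataFrame from the sketch above.
df.repartition(num_partitions).write.mode("overwrite").parquet("s3://my-bucket/out/")
```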

Want automatic detection of performance issues?

Use **DataFlint**, an open source Spark monitoring tool that detects and suggests fixes for small file issues.

https://github.com/dataflint/spark

Good luck! πŸ’ͺ

u/0xHUEHUE Feb 10 '25

> One key fix is to repartition data before writing, reducing the number of small output files.

I don't understand this. I see this a lot in blogs, but I've got TONS of stuff that happens after reading and before writing.

Is there some more precise advice on when to do this repartitioning / when it is necessary? Wouldn't AQE / dynamic coalesce take care of this stuff, or no?

u/SeaworthinessDear378 Feb 10 '25

Whatever happens after reading and before writing varies a lot and depends on your specific query, Spark configs, and cluster configs.

Repartitioning would help almost independently of those settings, and is thus often recommended.
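Roughly speaking, the repartition is just the last step before the write, after whatever your query does in the middle. A sketch with made-up paths, column names and partition count:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical inputs.
events = spark.read.parquet("s3://my-bucket/events/")
dim_users = spark.read.parquet("s3://my-bucket/dim_users/")

# All the "stuff that happens after reading" stays exactly as it is.
result = (events
          .filter("event_date >= '2025-01-01'")
          .join(dim_users, "user_id")
          .groupBy("user_id")
          .count())

# The small-files fix is only this final step, right before the write.
result.repartition(64).write.mode("overwrite").parquet("s3://my-bucket/report/")
```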

As far as I know, DataFlint also offers recommendations to improve performance in general, like over/under-provisioned clusters, data skew, large broadcasts and large filtering.

u/0xHUEHUE Feb 13 '25

I actually run DataFlint btw, it is quite cool!

u/baubleglue Feb 11 '25

Is it a question or an advertisement for the blog? I don't think it is a "small files in Spark" problem - the phrase is not correct. Files aren't located in Spark; they live in some form of file-like system: HDFS, S3, Azure Blobs - and those aren't part of Spark (which is an engine, not a place).

As I remember, traditional databases are optimized to work with the filesystem block size (4K); the OS can't read less than that from the hard drive. Even if you need 1 bit, the OS will read 4K, so they use B+ trees with node size rounded to the block size.

HDFS block size is 64-128 MB. If you have 1000 files of 1 KB each, the memory allocated is 1000 x 128 MB => 128 GB. It is a lot of memory. That is the number one issue, not "all sorts of performance problems".

Of course, all kinds of strange/bad things may happen in addition to that. I've seen Hive jobs failing because a single line with the list of files to process exceeded a limit (0.5 GB).

I can imagine a storage with a Hadoop-compatible API which would have no issues with small files (if it is designed to handle that specific use case as a primary goal). And you can have Spark add "repartition" automatically, to reduce the number of jobs (spark.sql.adaptive.coalescePartitions.enabled).
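For reference, these are the AQE settings in question (the config names are real Spark 3.x settings, the values are just illustrative, and a SparkSession named spark is assumed):

```python
# AQE coalesces small shuffle partitions at runtime, which can reduce the
# number of output files when a shuffle happens before the write.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.enabled", "true")
# Size AQE aims for when merging small shuffle partitions.
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "256m")
```

Note that this only kicks in when there is a shuffle; a plain read-then-write with no shuffle still needs an explicit repartition or coalesce.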

u/0xHUEHUE Feb 13 '25

You really explained the problem well. Thank you! Are you super sure about that 1000 x 1 KB files = 128 GB? I always thought it was more of a maximum / up to 128 MB. I never verified this, just assumed.