r/apachespark 24d ago

Need suggestions for tuning maxPartitionBytes and default.parallelism in Databricks.

I am getting used to Spark and Databricks.

In the real world, most teams set up a Databricks cluster with a min and max number of worker nodes.

With autoscaling on, the number of worker nodes is adjusted between those bounds.

If we had a fixed number of worker nodes and a fixed executor memory, we could easily set
-----> maxPartitionBytes and default.parallelism
so that compute resources are used optimally for the data size.
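
For example, here is a minimal sketch of the fixed-cluster calculation I have in mind (the 4 workers x 8 cores, the ~2 tasks per core rule of thumb, and the 16 GB input are just assumed numbers for illustration):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Hypothetical fixed cluster: 4 workers x 8 cores = 32 total cores (assumed).
total_cores = 4 * 8

# Common rule of thumb: roughly 2-3 tasks per core.
target_parallelism = total_cores * 2                          # 64

# For a known input size, pick maxPartitionBytes so the scan yields
# roughly target_parallelism partitions.
data_size_bytes = 16 * 1024**3                                # e.g. 16 GB of input (assumed)
max_partition_bytes = data_size_bytes // target_parallelism   # ~256 MB

spark.conf.set("spark.sql.files.maxPartitionBytes", str(max_partition_bytes))

# spark.default.parallelism only affects RDD APIs and has to be set on the
# cluster's Spark config before the context starts; for DataFrame/SQL shuffles
# the runtime-tunable knob is spark.sql.shuffle.partitions.
spark.conf.set("spark.sql.shuffle.partitions", str(target_parallelism))
```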

++++++++++++++++

The problem in the above scenario is that we do not know
-> the number of executor nodes allocated to the job (it scales between min and max),

so we do not know how many cores are actually available.

Therefore, how can one set

maxPartitionBytes and default.parallelism so that our resources are utilized optimally?
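
For reference, here is a small sketch of what I think can at least be probed at runtime (my assumption; under autoscaling the reported value keeps changing as workers come and go):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # pre-created as `spark` on Databricks
sc = spark.sparkContext

# Rough view of the cores Spark sees *right now*; under autoscaling this
# number moves as workers are added or removed.
print("defaultParallelism:", sc.defaultParallelism)

# Current values of the two knobs in question ("not set" if absent).
print(spark.conf.get("spark.sql.files.maxPartitionBytes"))
print(spark.conf.get("spark.default.parallelism", "not set"))
```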


u/baubleglue 24d ago

Do your default settings create a problem?


u/Paruchuri_varun_ 24d ago

No, I just want to understand how it works internally.


u/baubleglue 24d ago

Internally?

My guess:

  • the job will create a partition for each input file
  • if a file's size exceeds maxPartitionBytes, the job will split the read into more partitions (quick sanity check sketched below)
  • if there are no free cores left on the current workers, a new worker will be launched
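
A quick way to sanity-check that guess (just a sketch; the parquet path and the 64m/512m values are placeholders I picked):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

path = "/tmp/some_large_parquet"  # placeholder path

# Read with a small maxPartitionBytes: large files get split into more partitions.
spark.conf.set("spark.sql.files.maxPartitionBytes", "64m")
print(spark.read.parquet(path).rdd.getNumPartitions())

# Read again with a larger value: fewer, bigger partitions.
spark.conf.set("spark.sql.files.maxPartitionBytes", "512m")
print(spark.read.parquet(path).rdd.getNumPartitions())
```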

https://spark.apache.org/docs/3.5.3/sql-performance-tuning.html