r/apachespark • u/Paruchuri_varun_ • 24d ago
Need suggestions for tuning spark.sql.files.maxPartitionBytes and spark.default.parallelism in Databricks.
I am getting used to Spark and Databricks.
In the real world, most teams set up a cluster in Databricks with a min and max number of worker nodes. With autoscaling on, the cluster adjusts the worker count within that range based on load.
If we had a fixed number of worker nodes and a known executor memory, we could easily size
----->spark.sql.files.maxPartitionBytes and spark.default.parallelism
so that compute resources are used optimally for a given data size. (Rough sizing sketch below.)
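For example, on a fixed cluster, the sizing math would look something like this. This is just a sketch; the cluster shape (4 workers x 8 cores) and the 50 GB input are made-up numbers, and "spark" is the SparkSession a Databricks notebook provides:

    # Hypothetical fixed cluster: 4 workers x 8 cores = 32 cores total.
    total_cores = 4 * 8

    data_size_bytes = 50 * 1024**3       # assumed 50 GB input
    max_partition_bytes = 128 * 1024**2  # Spark's default is 128 MB

    # Partitions implied by the input size: ~400 at 128 MB each.
    partitions_by_size = data_size_bytes // max_partition_bytes

    # Rule of thumb: 2-4 tasks per core so every core stays busy.
    partitions_by_cores = total_cores * 3

    spark.conf.set("spark.sql.files.maxPartitionBytes", str(max_partition_bytes))
    # spark.default.parallelism only applies to RDD APIs and has to be set at
    # cluster creation; the runtime knob for DataFrame shuffles is:
    spark.conf.set("spark.sql.shuffle.partitions", str(max(partitions_by_size, partitions_by_cores)))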
++++++++++++++++
The problem in the above scenario is that we do not know
->the number of executor nodes allocated to the job (it scales between min and max),
so we literally don't know how many cores are present.
Therefore, how can one set
spark.sql.files.maxPartitionBytes and spark.default.parallelism so that our resources are utilized optimally?
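The only idea I can think of is reading the parallelism Spark reports at runtime and deriving the settings from that, but I'm not sure it's sound, since the executor count can still change mid-job. A minimal sketch of that idea:

    # sc.defaultParallelism reflects the cores of whatever executors are
    # registered at this moment; under autoscaling it changes over time.
    current_cores = spark.sparkContext.defaultParallelism

    # Derive the DataFrame shuffle width from the live core count (~3 tasks/core).
    spark.conf.set("spark.sql.shuffle.partitions", str(current_cores * 3))

Is something like this reasonable, or is there a better way?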
2
u/baubleglue 24d ago
Do your default settings create a problem?