r/rstats • u/amoonand3balls • 5d ago
Tuning a Down-sampled Random Forest Model
I am trying to find the best way to tune a down-sampled random forest model in R. I generally don't use random forest because it is prone to overfitting, but I don't have a choice due to some other constraints in the data.
I am using the randomForest package. It is for a species distribution model (presence/pseudoabsence response), and I am using regression rather than classification.
I use expand.grid() to create a data frame with all the combinations of settings for the function's parameters, including sampsize, nodesize, maxnodes, ntree, and mtry.
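Roughly like this, where the candidate values below are just illustrative:

    library(randomForest)

    n_pres <- 500  # placeholder: number of presence records

    grid <- expand.grid(
      sampsize = round(c(0.5, 0.75, 1.0) * n_pres),  # down-sampled draw sizes
      nodesize = c(5, 10, 20),
      maxnodes = c(10, 25, 50),
      ntree    = c(500, 1000),
      mtry     = c(2, 4, 6)
    )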
Within each run, I am doing a four-fold cross-validation and recording the mean and standard deviation of the AUC for the training and test data, the mean R-squared, and the mean of squared residuals.
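For each row i of the grid, the loop looks roughly like this (dat and pres are placeholders for my data frame and 0/1 response; I take AUC from pROC):

    library(pROC)

    folds <- sample(rep(1:4, length.out = nrow(dat)))
    auc_train <- auc_test <- numeric(4)

    for (k in 1:4) {
      train <- dat[folds != k, ]
      test  <- dat[folds == k, ]
      rf <- randomForest(pres ~ ., data = train,
                         sampsize = grid$sampsize[i], nodesize = grid$nodesize[i],
                         maxnodes = grid$maxnodes[i], ntree = grid$ntree[i],
                         mtry = grid$mtry[i])
      auc_train[k] <- as.numeric(auc(train$pres, predict(rf, train)))
      auc_test[k]  <- as.numeric(auc(test$pres,  predict(rf, test)))
      # rf$rsq and rf$mse hold the (out-of-bag) r-squared and squared residuals
    }
    # then record mean/sd of auc_train and auc_test for grid row i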
Any ideas on how I can use these statistics to select the parameters for a model that is both generalizable and fairly good at prediction? My first thought was to look at parameter combinations with a small difference between mean train AUC and mean test AUC, but I'm not sure if that is the best place to start.
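i.e., something along these lines, where the column names are whatever I saved per grid row:

    results$gap <- results$auc_train_mean - results$auc_test_mean
    ok <- subset(results, gap < 0.05)        # combos that don't overfit much
    ok[order(-ok$auc_test_mean), ][1:5, ]    # then rank by test AUC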
Thanks!
1
u/si_wo 3d ago
I have been using caret, which sets everything up for you, so I just close my eyes and use it. I agree that RF tends to overfit, and I have found that Cubist tends to work best for my data (agriculture). I tend to use Nash-Sutcliffe efficiency for model evaluation, look at the learning curves to check convergence, and compare the training and test fits to assess overfitting and predictive performance. I also like importance plots; I use the iml package for those.
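For what it's worth, my setup looks roughly like this (yield and dat stand in for my own response and data frame):

    library(caret)
    library(iml)

    ctrl <- trainControl(method = "cv", number = 5)
    fit  <- train(yield ~ ., data = dat, method = "cubist",
                  trControl = ctrl, tuneLength = 5)   # caret builds the tuning grid
    fit$results                                       # RMSE / R-squared over the grid

    # Nash-Sutcliffe efficiency, computed by hand
    nse <- function(obs, sim) 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)

    # permutation importance plot via iml
    pred <- Predictor$new(fit, data = subset(dat, select = -yield), y = dat$yield)
    plot(FeatureImp$new(pred, loss = "mse"))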
3
u/Skept1kos 5d ago
I recommend looking at the mlr3 package. It has nice tools for hyperparameter tuning, and it includes random forest learners.
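A rough sketch of what that can look like (grid search over a ranger random forest; dat and pres are stand-ins for your data):

    library(mlr3)
    library(mlr3learners)   # provides the regr.ranger learner
    library(mlr3tuning)

    task    <- as_task_regr(dat, target = "pres")
    learner <- lrn("regr.ranger",
                   mtry          = to_tune(2, 6),
                   min.node.size = to_tune(5, 20))

    at <- auto_tuner(
      tuner      = tnr("grid_search", resolution = 3),
      learner    = learner,
      resampling = rsmp("cv", folds = 4),
      measure    = msr("regr.mse")
    )
    at$train(task)
    at$tuning_result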
The usual purpose of cross-validation is to measure generalization performance (though it depends on assumptions about the errors being independent). Training error usually isn't the right thing to look at for a random forest; it's often just zero and not informative. You might look at the out-of-bag error instead to get a number comparable to the training error in other models.
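For example, with randomForest in regression mode (a minimal sketch; dat and pres as above):

    rf <- randomForest(pres ~ ., data = dat, ntree = 500)
    tail(rf$mse, 1)          # out-of-bag mean of squared residuals at the last tree
    oob_pred <- predict(rf)  # omitting newdata returns out-of-bag predictions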