- Does the Number of Folds Matter in Cross Validation?
K-Fold Cross Validation is an essential technique in machine learning, particularly when a model's performance varies significantly depending on the train-test split. While many practitioners default to 5 or 10 folds, there is no strict rule dictating these numbers; in principle, you can use as many folds as is appropriate for the dataset and problem at hand. As a rule of thumb, more folds give each model more training data per fit, at the cost of more computation and noisier per-fold scores.
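To make the mechanics concrete, here is a minimal sketch of how k-fold splitting assigns every sample to exactly one test fold. The helper name `kfold_indices` is a hypothetical illustration, not part of any particular library:

```python
# Hypothetical helper: split n sample indices into k folds.
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs; each sample is tested exactly once."""
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n))
        yield train_idx, test_idx
        start += size

# Example: 10 samples, 5 folds -> each fold holds out 2 samples.
folds = list(kfold_indices(10, 5))
```

In practice a library routine (such as scikit-learn's `KFold`) would also handle shuffling, but the index bookkeeping above is the core idea.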
- Experimentation with Different Fold Numbers:
I conducted an experiment using 5-fold cross validation and obtained an R-Squared value of 0.34 (34%).
To understand the model's behavior more deeply, I plan to repeat the experiment with different numbers of folds and observe how the R-Squared value changes as the number of folds changes.
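One minimal way to run this kind of experiment is to loop over several fold counts and average the per-fold R-Squared scores. The sketch below uses synthetic data and a closed-form simple linear regression; the helper names (`fit_line`, `cv_r2`) and the data-generating parameters are illustrative assumptions, not taken from the original experiment:

```python
import random

def fit_line(xs, ys):
    # Closed-form ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def r_squared(ys, preds):
    # R^2 = 1 - SS_res / SS_tot, computed on held-out data.
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

def cv_r2(xs, ys, k):
    # Average test-fold R^2 across k contiguous folds.
    n = len(xs)
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    scores, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i not in set(test)]
        a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
        preds = [a + b * xs[i] for i in test]
        scores.append(r_squared([ys[i] for i in test], preds))
        start += size
    return sum(scores) / k

# Synthetic linear data with noise (illustrative, not the author's dataset).
random.seed(0)
xs = [random.uniform(0, 10) for _ in range(120)]
ys = [2 * x + random.gauss(0, 4) for x in xs]
for k in (3, 5, 10):
    print(f"{k}-fold mean R^2: {cv_r2(xs, ys, k):.3f}")
```

With a fixed dataset, the mean R-Squared typically shifts only modestly with k, while the variance of the per-fold scores tends to grow as folds shrink.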
Beyond this, I intend to use training and test sets of unequal sizes, fit the regression on each split, and compare the resulting R-Squared values. By doing so, I hope to gain a deeper understanding of how the model behaves under different conditions.
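The unequal-split idea can be sketched by varying the held-out fraction on a single shuffled dataset. As above, the code uses synthetic data and a simple closed-form regression; `holdout_r2` and the split fractions are illustrative assumptions:

```python
import random

def fit_line(pairs):
    # Closed-form ordinary least squares for y = a + b*x.
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
    return my - b * mx, b

def r_squared(ys, preds):
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

def holdout_r2(data, test_frac):
    # Unequal split: the first test_frac of the shuffled data is held out.
    n_test = int(len(data) * test_frac)
    test, train = data[:n_test], data[n_test:]
    a, b = fit_line(train)
    preds = [a + b * x for x, _ in test]
    return r_squared([y for _, y in test], preds)

# Synthetic linear data with noise (illustrative, not the author's dataset).
random.seed(1)
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, 2 * x + random.gauss(0, 4)) for x in xs]
random.shuffle(data)
for frac in (0.2, 0.4, 0.6):
    print(f"test fraction {frac}: R^2 = {holdout_r2(data, frac):.3f}")
```

Comparing scores across fractions shows the trade-off directly: a larger test set gives a more stable R-Squared estimate, but leaves less data to fit the model.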