Hyperparameter Tuning Process
What is Hyperparameter Tuning?
Hyperparameter tuning is the process of finding the best settings for a machine learning algorithm. Unlike model parameters, which are learned from the data during training, hyperparameters are set before training begins.
Common Hyperparameters
- Random Forest: n_estimators, max_depth, min_samples_split
- XGBoost: learning_rate, max_depth, n_estimators
- SVM: C, kernel, gamma
- Neural Network: learning_rate, layers, neurons
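As a concrete illustration of the distinction above, here is a minimal sketch (using scikit-learn, which is not mentioned in the original text) of fixing a Random Forest's hyperparameters before training, while the trees themselves are learned from data:

```python
# Minimal sketch: hyperparameters are chosen up front, parameters are learned.
# Assumes scikit-learn; the dataset here is synthetic, for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)

# These three settings are hyperparameters: fixed before .fit() is called.
model = RandomForestClassifier(n_estimators=100, max_depth=5, min_samples_split=4)
model.fit(X, y)  # the split thresholds inside each tree are learned parameters

print(model.get_params()["max_depth"])  # -> 5
```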
Tuning Methods
Grid Search
- Tests all combinations of specified values
- Exhaustive but slow: the number of combinations multiplies with every parameter added
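A grid search over two Random Forest parameters might look like the following sketch (scikit-learn's GridSearchCV; the parameter values and synthetic data are illustrative assumptions):

```python
# Grid search tries every combination: 3 x 2 = 6 settings here,
# each evaluated with 3-fold cross-validation (18 model fits in total).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)

print(search.best_params_)  # the combination with the best mean CV score
```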
Random Search
- Randomly samples parameter combinations
- Faster: evaluates a fixed budget of combinations and often finds near-optimal settings with far fewer fits
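The same search expressed as random sampling might look like this sketch (scikit-learn's RandomizedSearchCV with scipy distributions; the ranges and budget of 8 draws are illustrative assumptions):

```python
# Random search samples n_iter combinations from the given distributions
# instead of enumerating the full grid.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)

param_dist = {"n_estimators": randint(50, 300), "max_depth": randint(2, 10)}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_dist,
    n_iter=8,        # only 8 sampled combinations, regardless of range sizes
    cv=3,
    random_state=0,
)
search.fit(X, y)

print(search.best_params_)
```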
In CMMI-DCC
- Select tuning method when creating ML pipeline
- Specify parameter ranges to search
- View best parameters in results
Related Terms
- Grid Search: Exhaustive parameter search
- Random Search: Stochastic parameter search
- Cross-Validation: Used to evaluate each combination
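To show how cross-validation scores a single candidate setting, here is a brief sketch (scikit-learn's cross_val_score; the model and fold count are illustrative assumptions). Inside a grid or random search, this scoring step runs once per candidate, and the setting with the best mean score wins:

```python
# k-fold cross-validation: fit and score the model k times,
# each time holding out a different fold as the validation set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, random_state=0)

candidate = RandomForestClassifier(max_depth=5, random_state=0)
scores = cross_val_score(candidate, X, y, cv=5)  # one accuracy score per fold

print(scores.mean())  # the mean score is what a tuner compares across candidates
```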