3. Model selection and evaluation

3.1. Cross-validation: evaluating estimator performance
    3.1.1. Computing cross-validated metrics
    3.1.2. Cross validation iterators
    3.1.3. A note on shuffling
    3.1.4. Cross validation and model selection
    3.1.5. Permutation test score
3.2. Tuning the hyper-parameters of an estimator
    3.2.1. Exhaustive Grid Search
    3.2.2. Randomized Parameter Optimization
    3.2.3. Searching for optimal parameters with successive halving
    3.2.4. Tips for parameter search
    3.2.5. Alternatives to brute force parameter search
3.3. Tuning the decision threshold for class prediction
    3.3.1. Post-tuning the decision threshold
3.4. Metrics and scoring: quantifying the quality of predictions
    3.4.1. Which scoring function should I use?
    3.4.2. Scoring API overview
    3.4.3. The scoring parameter: defining model evaluation rules
    3.4.4. Classification metrics
    3.4.5. Multilabel ranking metrics
    3.4.6. Regression metrics
    3.4.7. Clustering metrics
    3.4.8. Dummy estimators
3.5. Validation curves: plotting scores to evaluate models
    3.5.1. Validation curve
    3.5.2. Learning curve