Getting Started
Tutorial
What's new
Glossary
Development
FAQ
Support
Related packages
Roadmap
Governance
About us
GitHub
Other Versions and Download
scikit-learn 1.3.2
Please cite us if you use the software.
User Guide
1. Supervised learning
2. Unsupervised learning
3. Model selection and evaluation
3.1. Cross-validation: evaluating estimator performance
3.2. Tuning the hyper-parameters of an estimator
3.3. Metrics and scoring: quantifying the quality of predictions
3.4. Validation curves: plotting scores to evaluate models
4. Inspection
5. Visualizations
6. Dataset transformations
7. Dataset loading utilities
8. Computing with scikit-learn
9. Model persistence
10. Common pitfalls and recommended practices
11. Dispatching
3. Model selection and evaluation
3.1. Cross-validation: evaluating estimator performance
3.1.1. Computing cross-validated metrics
3.1.2. Cross validation iterators
3.1.3. A note on shuffling
3.1.4. Cross validation and model selection
3.1.5. Permutation test score
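Section 3.1 covers computing cross-validated metrics. As a minimal sketch of that workflow using `cross_val_score` (the iris dataset and linear SVC here are illustrative stand-ins, not prescribed by the guide):

```python
# Minimal cross-validation sketch: score an estimator on 5 folds.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(kernel="linear", C=1.0)

# cv=5 performs 5-fold cross-validation and returns one score per fold.
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean(), scores.std())
```

Section 3.1.2 describes alternative iterators (e.g. `KFold`, `StratifiedKFold`) that can be passed via the `cv` parameter instead of an integer.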
3.2. Tuning the hyper-parameters of an estimator
3.2.1. Exhaustive Grid Search
3.2.2. Randomized Parameter Optimization
3.2.3. Searching for optimal parameters with successive halving
3.2.4. Tips for parameter search
3.2.5. Alternatives to brute force parameter search
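Section 3.2.1's exhaustive grid search can be sketched with `GridSearchCV`; the parameter grid below is an arbitrary illustration:

```python
# Exhaustive grid search: try every combination in param_grid,
# scoring each by cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}

search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

`RandomizedSearchCV` (Section 3.2.2) and the successive-halving searchers (Section 3.2.3) follow the same fit/`best_params_` interface.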
3.3. Metrics and scoring: quantifying the quality of predictions
3.3.1. The
scoring
parameter: defining model evaluation rules
3.3.2. Classification metrics
3.3.3. Multilabel ranking metrics
3.3.4. Regression metrics
3.3.5. Clustering metrics
3.3.6. Dummy estimators
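Section 3.3.2's classification metrics can be computed directly from label arrays; the toy labels below are made up for illustration:

```python
# A few classification metrics on hand-written toy labels.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)   # 5 of 6 predictions correct
f1 = f1_score(y_true, y_pred)          # harmonic mean of precision and recall
cm = confusion_matrix(y_true, y_pred)  # rows: true class, columns: predicted
print(acc, f1)
print(cm)
```

Any such metric can also be used during model selection through the `scoring` parameter described in Section 3.3.1.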
3.4. Validation curves: plotting scores to evaluate models
3.4.1. Validation curve
3.4.2. Learning curve
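Section 3.4.1's validation curve scores an estimator across a range of one hyper-parameter; a minimal sketch with `validation_curve` (dataset, estimator, and `C` range chosen here for illustration):

```python
# Validation curve: cross-validated train/test scores as one
# hyper-parameter (here SVC's C) is varied.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import validation_curve
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_range = np.logspace(-3, 2, 6)

train_scores, test_scores = validation_curve(
    SVC(), X, y, param_name="C", param_range=param_range, cv=5
)
# One row per parameter value, one column per CV fold;
# the per-parameter means are what a validation-curve plot shows.
print(test_scores.mean(axis=1))
```

`learning_curve` (Section 3.4.2) has an analogous interface, varying the training-set size instead of a hyper-parameter.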