User Guide
- 1. Supervised learning
- 1.1. Generalized Linear Models
- 1.1.1. Ordinary Least Squares
- 1.1.2. Ridge Regression
- 1.1.3. Lasso
- 1.1.4. Multi-task Lasso
- 1.1.5. Elastic-Net
- 1.1.6. Multi-task Elastic-Net
- 1.1.7. Least Angle Regression
- 1.1.8. LARS Lasso
- 1.1.9. Orthogonal Matching Pursuit (OMP)
- 1.1.10. Bayesian Regression
- 1.1.11. Logistic regression
- 1.1.12. Stochastic Gradient Descent - SGD
- 1.1.13. Perceptron
- 1.1.14. Passive Aggressive Algorithms
- 1.1.15. Robustness regression: outliers and modeling errors
- 1.1.16. Polynomial regression: extending linear models with basis functions
- 1.2. Linear and Quadratic Discriminant Analysis
- 1.3. Kernel ridge regression
- 1.4. Support Vector Machines
- 1.5. Stochastic Gradient Descent
- 1.6. Nearest Neighbors
- 1.7. Gaussian Processes
- 1.8. Cross decomposition
- 1.9. Naive Bayes
- 1.10. Decision Trees
- 1.11. Ensemble methods
- 1.11.1. Bagging meta-estimator
- 1.11.2. Forests of randomized trees
- 1.11.3. AdaBoost
- 1.11.4. Gradient Tree Boosting
- 1.11.5. Voting Classifier
- 1.11.6. Voting Regressor
- 1.12. Multiclass and multilabel algorithms
- 1.13. Feature selection
- 1.14. Semi-Supervised
- 1.15. Isotonic regression
- 1.16. Probability calibration
- 1.17. Neural network models (supervised)
- 2. Unsupervised learning
- 2.1. Gaussian mixture models
- 2.2. Manifold learning
- 2.2.1. Introduction
- 2.2.2. Isomap
- 2.2.3. Locally Linear Embedding
- 2.2.4. Modified Locally Linear Embedding
- 2.2.5. Hessian Eigenmapping
- 2.2.6. Spectral Embedding
- 2.2.7. Local Tangent Space Alignment
- 2.2.8. Multi-dimensional Scaling (MDS)
- 2.2.9. t-distributed Stochastic Neighbor Embedding (t-SNE)
- 2.2.10. Tips on practical use
- 2.3. Clustering
- 2.3.1. Overview of clustering methods
- 2.3.2. K-means
- 2.3.3. Affinity Propagation
- 2.3.4. Mean Shift
- 2.3.5. Spectral clustering
- 2.3.6. Hierarchical clustering
- 2.3.7. DBSCAN
- 2.3.8. OPTICS
- 2.3.9. Birch
- 2.3.10. Clustering performance evaluation
- 2.4. Biclustering
- 2.5. Decomposing signals in components (matrix factorization problems)
- 2.5.1. Principal component analysis (PCA)
- 2.5.2. Truncated singular value decomposition and latent semantic analysis
- 2.5.3. Dictionary Learning
- 2.5.4. Factor Analysis
- 2.5.5. Independent component analysis (ICA)
- 2.5.6. Non-negative matrix factorization (NMF or NNMF)
- 2.5.7. Latent Dirichlet Allocation (LDA)
- 2.6. Covariance estimation
- 2.7. Novelty and Outlier Detection
- 2.8. Density Estimation
- 2.9. Neural network models (unsupervised)
- 3. Model selection and evaluation
- 3.1. Cross-validation: evaluating estimator performance
- 3.1.1. Computing cross-validated metrics
- 3.1.2. Cross validation iterators
- 3.1.3. A note on shuffling
- 3.1.4. Cross validation and model selection
- 3.2. Tuning the hyper-parameters of an estimator
- 3.2.1. Exhaustive Grid Search
- 3.2.2. Randomized Parameter Optimization
- 3.2.3. Tips for parameter search
- 3.2.4. Alternatives to brute force parameter search
- 3.2.4.1. Model specific cross-validation
- 3.2.4.1.1. sklearn.linear_model.ElasticNetCV
- 3.2.4.1.2. sklearn.linear_model.LarsCV
- 3.2.4.1.3. sklearn.linear_model.LassoCV
- 3.2.4.1.4. sklearn.linear_model.LassoLarsCV
- 3.2.4.1.5. sklearn.linear_model.LogisticRegressionCV
- 3.2.4.1.6. sklearn.linear_model.MultiTaskElasticNetCV
- 3.2.4.1.7. sklearn.linear_model.MultiTaskLassoCV
- 3.2.4.1.8. sklearn.linear_model.OrthogonalMatchingPursuitCV
- 3.2.4.1.9. sklearn.linear_model.RidgeCV
- 3.2.4.1.10. sklearn.linear_model.RidgeClassifierCV
- 3.2.4.2. Information Criterion
- 3.2.4.3. Out of Bag Estimates
- 3.2.4.3.1. sklearn.ensemble.RandomForestClassifier
- 3.2.4.3.2. sklearn.ensemble.RandomForestRegressor
- 3.2.4.3.3. sklearn.ensemble.ExtraTreesClassifier
- 3.2.4.3.4. sklearn.ensemble.ExtraTreesRegressor
- 3.2.4.3.5. sklearn.ensemble.GradientBoostingClassifier
- 3.2.4.3.6. sklearn.ensemble.GradientBoostingRegressor
- 3.3. Model evaluation: quantifying the quality of predictions
- 3.3.1. The scoring parameter: defining model evaluation rules
- 3.3.2. Classification metrics
- 3.3.2.1. From binary to multiclass and multilabel
- 3.3.2.2. Accuracy score
- 3.3.2.3. Balanced accuracy score
- 3.3.2.4. Cohen’s kappa
- 3.3.2.5. Confusion matrix
- 3.3.2.6. Classification report
- 3.3.2.7. Hamming loss
- 3.3.2.8. Precision, recall and F-measures
- 3.3.2.9. Jaccard similarity coefficient score
- 3.3.2.10. Hinge loss
- 3.3.2.11. Log loss
- 3.3.2.12. Matthews correlation coefficient
- 3.3.2.13. Multi-label confusion matrix
- 3.3.2.14. Receiver operating characteristic (ROC)
- 3.3.2.15. Zero one loss
- 3.3.2.16. Brier score loss
- 3.3.3. Multilabel ranking metrics
- 3.3.4. Regression metrics
- 3.3.5. Clustering metrics
- 3.3.6. Dummy estimators
- 3.4. Model persistence
- 3.5. Validation curves: plotting scores to evaluate models
- 4. Inspection
- 5. Dataset transformations
- 5.1. Pipelines and composite estimators
- 5.2. Feature extraction
- 5.2.1. Loading features from dicts
- 5.2.2. Feature hashing
- 5.2.3. Text feature extraction
- 5.2.3.1. The Bag of Words representation
- 5.2.3.2. Sparsity
- 5.2.3.3. Common Vectorizer usage
- 5.2.3.4. Tf–idf term weighting
- 5.2.3.5. Decoding text files
- 5.2.3.6. Applications and examples
- 5.2.3.7. Limitations of the Bag of Words representation
- 5.2.3.8. Vectorizing a large text corpus with the hashing trick
- 5.2.3.9. Performing out-of-core scaling with HashingVectorizer
- 5.2.3.10. Customizing the vectorizer classes
- 5.2.4. Image feature extraction
- 5.3. Preprocessing data
- 5.4. Imputation of missing values
- 5.5. Unsupervised dimensionality reduction
- 5.6. Random Projection
- 5.7. Kernel Approximation
- 5.8. Pairwise metrics, Affinities and Kernels
- 5.9. Transforming the prediction target (y)
- 6. Dataset loading utilities
- 6.1. General dataset API
- 6.2. Toy datasets
- 6.3. Real world datasets
- 6.4. Generated datasets
- 6.5. Loading other datasets
- 7. Computing with scikit-learn