5. Dataset transformations
scikit-learn provides a library of transformers, which may clean (see Preprocessing data), reduce (see Unsupervised dimensionality reduction), expand (see Kernel Approximation) or generate (see Feature extraction) feature representations.
Like other estimators, these are represented by classes with a fit method, which learns model parameters (e.g. mean and standard deviation for normalization) from a training set, and a transform method, which applies this transformation model to unseen data. fit_transform may be more convenient and efficient for modelling and transforming the training data simultaneously.
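As a minimal sketch of this fit/transform/fit_transform pattern, the snippet below uses StandardScaler (mean and standard-deviation normalization) on made-up data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# toy data for illustration only
X_train = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
X_test = np.array([[1.0, 2.0]])

scaler = StandardScaler()
scaler.fit(X_train)                        # learns mean_ and scale_ from the training set
X_train_scaled = scaler.transform(X_train) # applies the learned parameters
X_test_scaled = scaler.transform(X_test)   # same parameters reused on unseen data

# fit_transform combines both steps on the training data
assert np.allclose(X_train_scaled, StandardScaler().fit_transform(X_train))
```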
Combining such transformers, either in parallel or in series, is covered in Pipelines and composite estimators (a brief sketch follows below). Pairwise metrics, Affinities and Kernels covers transforming feature spaces into affinity matrices, while Transforming the prediction target (y) considers transformations of the target space (e.g. categorical labels) for use in scikit-learn.
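For example, a transformer can be chained in series with a final estimator; this sketch uses make_pipeline with the iris dataset, with full details in the Pipelines and composite estimators section:

```python
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# chain a transformer and a classifier; fit() runs fit_transform on the
# scaler, then fits the classifier on the transformed data
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X, y)
print(pipe.score(X, y))  # accuracy of the composed model on the training data
```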
- 5.1. Pipelines and composite estimators
- 5.2. Feature extraction
- 5.2.1. Loading features from dicts
- 5.2.2. Feature hashing
- 5.2.3. Text feature extraction
- 5.2.3.1. The Bag of Words representation
- 5.2.3.2. Sparsity
- 5.2.3.3. Common Vectorizer usage
- 5.2.3.4. Tf–idf term weighting
- 5.2.3.5. Decoding text files
- 5.2.3.6. Applications and examples
- 5.2.3.7. Limitations of the Bag of Words representation
- 5.2.3.8. Vectorizing a large text corpus with the hashing trick
- 5.2.3.9. Performing out-of-core scaling with HashingVectorizer
- 5.2.3.10. Customizing the vectorizer classes
- 5.2.4. Image feature extraction
- 5.3. Preprocessing data
- 5.4. Imputation of missing values
- 5.5. Unsupervised dimensionality reduction
- 5.6. Random Projection
- 5.7. Kernel Approximation
- 5.8. Pairwise metrics, Affinities and Kernels
- 5.9. Transforming the prediction target (y)