sklearn.preprocessing.maxabs_scale(X, *, axis=0, copy=True)[source]

Scale each feature to the [-1, 1] range without breaking the sparsity.

This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0.

This scaler can also be applied to sparse CSR or CSC matrices.

Parameters

X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The data.

axis : int, default=0
    Axis used to scale along. If 0, independently scale each feature, otherwise (if 1) scale each sample.

copy : bool, default=True
    Set to False to perform inplace scaling and avoid a copy (if the input is already a numpy array).

Returns

X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features)
    The transformed data.
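A minimal usage sketch; the small dense and sparse inputs below are illustrative only, and the printed values are what these calls would return:

>>> import numpy as np
>>> from scipy import sparse
>>> from sklearn.preprocessing import maxabs_scale
>>> X = np.array([[-2., 1., 2.],
...               [-1., 0., 1.]])
>>> maxabs_scale(X, axis=0)  # each column is divided by its maximum absolute value
array([[-1. ,  1. ,  1. ],
       [-0.5,  0. ,  0.5]])
>>> X_sparse = sparse.csr_matrix(X)
>>> maxabs_scale(X_sparse).toarray()  # sparsity is preserved: zero entries stay zero
array([[-1. ,  1. ,  1. ],
       [-0.5,  0. ,  0.5]])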


Risk of data leakage: Do not use maxabs_scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This will bias the model evaluation because information would have leaked from the test set to the training set. In general, we recommend using MaxAbsScaler within a Pipeline in order to prevent most risks of data leakage: pipe = make_pipeline(MaxAbsScaler(), LogisticRegression()).
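A minimal sketch of that recommended pattern; the iris dataset and the train/test split are only stand-ins for your own data:

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import MaxAbsScaler
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> pipe = make_pipeline(MaxAbsScaler(), LogisticRegression())
>>> pipe = pipe.fit(X_train, y_train)      # scaling statistics are learned on the training split only
>>> score = pipe.score(X_test, y_test)     # evaluated on the untouched test split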

See also

MaxAbsScaler
    Performs scaling to the [-1, 1] range using the ``Transformer`` API (e.g. as part of a preprocessing Pipeline).


Notes

NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation.
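A small sketch of this NaN handling; the column maxima are computed ignoring the NaN, and the NaN itself is passed through unchanged:

>>> import numpy as np
>>> from sklearn.preprocessing import maxabs_scale
>>> X = np.array([[ 1., np.nan],
...               [-4.,  2.  ]])
>>> maxabs_scale(X)  # max abs of column 0 is 4, of column 1 is 2; the NaN is kept
array([[ 0.25,   nan],
       [-1.  ,  1.  ]])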

For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/