sklearn.preprocessing.maxabs_scale(X, *, axis=0, copy=True)

Scale each feature to the [-1, 1] range without breaking the sparsity.

This estimator scales each feature individually such that the maximal absolute value of each feature in the training set will be 1.0.

This scaler can also be applied to sparse CSR or CSC matrices.
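As a quick sketch of the sparse case: because each column is only divided by its maximum absolute value (no centering), zero entries stay zero and the sparsity pattern is preserved. The toy matrix below is illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.preprocessing import maxabs_scale

# A small CSR matrix with an all-zero column; scaling divides each
# column by its max absolute value, so zeros remain zeros.
X_sparse = csr_matrix([[-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]])
X_tr = maxabs_scale(X_sparse)

print(X_tr.toarray())
# The number of stored nonzeros is unchanged by the transformation.
print(X_tr.nnz)
```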

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
    The data.

axis : {0, 1}, default=0
    Axis used to scale along. If 0, independently scale each feature; otherwise (if 1) scale each sample.

copy : bool, default=True
    If False, try to avoid a copy and scale in place. This is not guaranteed to always work in place; e.g. if the data is a numpy array with an int dtype, a copy will be returned even with copy=False.

Returns:

X_tr : {ndarray, sparse matrix} of shape (n_samples, n_features)
    The transformed data.


Warning: Risk of data leak

Do not use maxabs_scale unless you know what you are doing. A common mistake is to apply it to the entire data before splitting into training and test sets. This biases the model evaluation because information has leaked from the test set to the training set. In general, we recommend using MaxAbsScaler within a Pipeline in order to prevent most risks of data leaking: pipe = make_pipeline(MaxAbsScaler(), LogisticRegression()).
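A hedged sketch of the recommended pattern: the synthetic dataset below is illustrative only; the point is that the scaler's statistics are fitted on the training split alone, inside the pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MaxAbsScaler

# Toy classification data (illustrative only).
X, y = make_classification(n_samples=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The scaler is fitted on X_train only; X_test is scaled with the
# training-set statistics, so no test information leaks into fitting.
pipe = make_pipeline(MaxAbsScaler(), LogisticRegression())
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))
```

Used inside cross_val_score, this pipeline also refits the scaler on each training fold, avoiding the same leak during cross-validation.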

See also

MaxAbsScaler
    Performs scaling to the [-1, 1] range using the Transformer API (e.g. as part of a preprocessing Pipeline).


Notes

NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation.
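A small sketch of this NaN behavior: in the array below, the NaN in the second column is ignored when computing that column's maximum absolute value, and is passed through unchanged in the output.

```python
import numpy as np
from sklearn.preprocessing import maxabs_scale

# Second column contains a NaN; its max absolute value is computed
# from the remaining value (3.0), and the NaN survives the transform.
X = np.array([[-2.0, np.nan],
              [-1.0, 3.0]])
X_tr = maxabs_scale(X)
print(X_tr)
```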

For a comparison of the different scalers, transformers, and normalizers, see: Compare the effect of different scalers on data with outliers.


Examples

>>> from sklearn.preprocessing import maxabs_scale
>>> X = [[-2, 1, 2], [-1, 0, 1]]
>>> maxabs_scale(X, axis=0)  # scale each column independently
array([[-1. ,  1. ,  1. ],
       [-0.5,  0. ,  0.5]])
>>> maxabs_scale(X, axis=1)  # scale each row independently
array([[-1. ,  0.5,  1. ],
       [-1. ,  0. ,  1. ]])