sklearn.ensemble.partial_dependence.partial_dependence

sklearn.ensemble.partial_dependence.partial_dependence(gbrt, target_variables, grid=None, X=None, percentiles=(0.05, 0.95), grid_resolution=100)

Partial dependence of target_variables.

Partial dependence plots show the dependence between the joint values of the target_variables and the function represented by the gbrt.

Read more in the User Guide.
Parameters:

gbrt : BaseGradientBoosting
    A fitted gradient boosting model.

target_variables : array-like, dtype=int
    The target features for which the partial dependence should be computed (size should be smaller than 3 for visual renderings).

grid : array-like, shape=(n_points, len(target_variables))
    The grid of target_variables values for which the partial dependence should be evaluated (either grid or X must be specified).

X : array-like, shape=(n_samples, n_features)
    The data on which gbrt was trained. It is used to generate a grid for the target_variables. The grid comprises grid_resolution equally spaced points between the two percentiles.

percentiles : (low, high), default=(0.05, 0.95)
    The lower and upper percentiles used to create the extreme values for the grid. Only used if X is not None.

grid_resolution : int, default=100
    The number of equally spaced points on the grid.

Returns:

pdp : array, shape=(n_classes, n_points)
    The partial dependence function evaluated on the grid. For regression and binary classification, n_classes == 1.

axes : seq of ndarray or None
    The axes with which the grid has been created, or None if the grid has been given.
Examples
>>> samples = [[0, 0, 2], [1, 0, 0]]
>>> labels = [0, 1]
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> from sklearn.ensemble.partial_dependence import partial_dependence
>>> gb = GradientBoostingClassifier(random_state=0).fit(samples, labels)
>>> kwargs = dict(X=samples, percentiles=(0, 1), grid_resolution=2)
>>> partial_dependence(gb, [0], **kwargs)
(array([[-4.52..., 4.52...]]), [array([ 0., 1.])])
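
The doctest above covers the binary classification case on a toy dataset. As an illustrative sketch (not part of the original documentation; the data, estimator settings, and variable names below are made up), the same function can be applied to a fitted regressor with two target variables, and the flat pdp values can then be reshaped against the returned axes, similar to what plot_partial_dependence does internally for two-way plots:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble.partial_dependence import partial_dependence

# Synthetic regression data: the target depends on features 0 and 1.
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = X[:, 0] * X[:, 1] + rng.rand(100)

est = GradientBoostingRegressor(random_state=0).fit(X, y)

# Joint partial dependence of features 0 and 1 on a grid built from X.
pdp, axes = partial_dependence(est, target_variables=[0, 1], X=X,
                               grid_resolution=20)

# For regression n_classes == 1, so pdp has shape (1, n_points), where
# n_points is the product of the axis lengths (here 20 * 20 = 400).
# axes holds one 1-D array of grid values per target variable.
surface = pdp[0].reshape(axes[0].size, axes[1].size)

Reshaping against the axis sizes recovers a 2-D surface of partial dependence values that can be handed to a contour or surface plot.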