
class sklearn.linear_model.MultiTaskElasticNet(alpha=1.0, l1_ratio=0.5, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, rho=None)

Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer

The optimization objective for MultiTaskElasticNet is:

(1 / (2 * n_samples)) * ||Y - XW||_Fro^2
+ alpha * l1_ratio * ||W||_21
+ 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2

Where:

||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}

i.e. the sum of the Euclidean norms of each row.
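To make the notation concrete, here is a minimal numpy sketch (not part of the original docs; the data and W below are arbitrary) that evaluates each term of the objective:

import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(5, 2)   # (n_samples, n_features)
Y = rng.randn(5, 3)   # (n_samples, n_tasks)
W = rng.randn(3, 2)   # (n_tasks, n_features)
alpha, l1_ratio = 1.0, 0.5

# ||W||_21: Euclidean norm of each row of W, summed
l21 = np.sum(np.sqrt(np.sum(W ** 2, axis=1)))
# ||W||_Fro^2: squared Frobenius norm
fro2 = np.sum(W ** 2)
# ||Y - XW||_Fro^2: squared residuals (predictions are X.dot(W.T))
resid = np.sum((Y - np.dot(X, W.T)) ** 2)

objective = (resid / (2.0 * X.shape[0])
             + alpha * l1_ratio * l21
             + 0.5 * alpha * (1 - l1_ratio) * fro2)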

Parameters :

alpha : float, optional
    Constant that multiplies the L1/L2 term. Defaults to 1.0.

l1_ratio : float
    The ElasticNet mixing parameter, with 0 < l1_ratio <= 1. For l1_ratio = 1 the penalty is a pure L1/L2 mixed-norm penalty; for 0 < l1_ratio < 1 it is a combination of the L1/L2 mixed norm and the squared Frobenius (L2) norm.

fit_intercept : boolean
    Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. the data is expected to be already centered).

normalize : boolean, optional, default False
    If True, the regressors X will be normalized before regression.

copy_X : boolean, optional, default True
    If True, X will be copied; else, it may be overwritten.

max_iter : int, optional
    The maximum number of iterations.

tol : float, optional
    The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.

warm_start : bool, optional
    When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution.
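As a hedged sketch of how l1_ratio and warm_start behave in practice (the toy data is made up; only constructor arguments from the signature above are used):

from sklearn import linear_model

X = [[0, 0], [1, 1], [2, 2]]
Y = [[0, 0], [1, 1], [2, 2]]

# l1_ratio close to 1: penalty is almost purely the L1/L2 mixed norm
clf_sparse = linear_model.MultiTaskElasticNet(alpha=0.1, l1_ratio=0.95)
clf_sparse.fit(X, Y)

# l1_ratio close to 0: penalty is almost purely the squared Frobenius norm
clf_ridge = linear_model.MultiTaskElasticNet(alpha=0.1, l1_ratio=0.05)
clf_ridge.fit(X, Y)

# warm_start=True reuses the previous solution when refitting, which can
# speed up a sweep over decreasing alpha values
clf = linear_model.MultiTaskElasticNet(warm_start=True)
for a in [1.0, 0.5, 0.1]:
    clf.set_params(alpha=a)
    clf.fit(X, Y)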

Notes

The algorithm used to fit the model is coordinate descent.

To avoid unnecessary memory duplication, the X argument of the fit method should be passed directly as a Fortran-contiguous numpy array.
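For example (a sketch; np.asfortranarray is plain numpy and makes a Fortran-ordered copy up front):

import numpy as np
from sklearn import linear_model

rng = np.random.RandomState(0)
X = np.asfortranarray(rng.randn(20, 5))   # Fortran-contiguous from the start
Y = rng.randn(20, 3)

clf = linear_model.MultiTaskElasticNet(alpha=0.1)
clf.fit(X, Y)   # avoids an extra conversion copy inside fit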

Examples

>>> from sklearn import linear_model
>>> clf = linear_model.MultiTaskElasticNet(alpha=0.1)
>>> clf.fit([[0, 0], [1, 1], [2, 2]], [[0, 0], [1, 1], [2, 2]])
MultiTaskElasticNet(alpha=0.1, copy_X=True, fit_intercept=True,
        l1_ratio=0.5, max_iter=1000, normalize=False, rho=None, tol=0.0001,
        warm_start=False)
>>> print clf.coef_
[[ 0.45663524  0.45612256]
 [ 0.45663524  0.45612256]]
>>> print clf.intercept_
[ 0.0872422  0.0872422]


Attributes

intercept_ : array, shape = (n_tasks,)
    Independent term in decision function.

coef_ : array, shape = (n_tasks, n_features)
    Parameter vector (W in the cost function formula). If a 1D y is passed in at fit (non-multi-task usage), coef_ is then a 1D array.

Methods

decision_function(X)
    Decision function of the linear model.
fit(X, y[, Xy, coef_init])
    Fit MultiTaskElasticNet model with coordinate descent.
get_params([deep])
    Get parameters for the estimator.
predict(X)
    Predict using the linear model.
score(X, y)
    Returns the coefficient of determination R^2 of the prediction.
set_params(**params)
    Set the parameters of the estimator.
__init__(alpha=1.0, l1_ratio=0.5, fit_intercept=True, normalize=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, rho=None)
decision_function(X)

Decision function of the linear model

Parameters :

X : numpy array or scipy.sparse matrix of shape (n_samples, n_features)

Returns :

T : array, shape = (n_samples,)
    The predicted decision function.
fit(X, y, Xy=None, coef_init=None)

Fit MultiTaskElasticNet model with coordinate descent

Parameters :

X : ndarray, shape = (n_samples, n_features)
    Data.
y : ndarray, shape = (n_samples, n_tasks)
    Target.
coef_init : ndarray of shape n_features
    The initial coefficients to warm-start the optimization.

Notes

Coordinate descent is an algorithm that considers each column of the data in turn; hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary.

To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
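A hedged sketch of warm-starting via coef_init (the docstring above gives the shape as n_features; for the multi-task case we assume the (n_tasks, n_features) shape of coef_, so treat this as a sketch, not a guaranteed API contract):

import numpy as np
from sklearn import linear_model

rng = np.random.RandomState(0)
X = np.asfortranarray(rng.randn(50, 10))   # Fortran-contiguous, as advised above
Y = rng.randn(50, 3)

clf = linear_model.MultiTaskElasticNet(alpha=0.5)
clf.fit(X, Y)

# re-fit at a nearby alpha, starting from the previous coefficients;
# coef_init shape assumed to match coef_, i.e. (n_tasks, n_features)
clf2 = linear_model.MultiTaskElasticNet(alpha=0.4)
clf2.fit(X, Y, coef_init=clf.coef_)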

get_params(deep=True)

Get parameters for the estimator

Parameters :

deep : boolean, optional
    If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns :

params : mapping of string to any
    Parameter names mapped to their values.
predict(X)

Predict using the linear model

Parameters :

X : numpy array of shape [n_samples, n_features]

Returns :

C : array, shape = [n_samples]
    Returns predicted values.
score(X, y)

Returns the coefficient of determination R^2 of the prediction.

The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). Best possible score is 1.0; lower values are worse.
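As a minimal sketch of the definition itself (made-up 1D arrays, not tied to this estimator):

import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])

u = ((y_true - y_pred) ** 2).sum()          # residual sum of squares
v = ((y_true - y_true.mean()) ** 2).sum()   # total sum of squares
r2 = 1.0 - u / v                            # coefficient of determination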

Parameters :

X : array-like, shape = [n_samples, n_features]
    Training set.
y : array-like, shape = [n_samples]

Returns :

z : float
    R^2 of self.predict(X) w.r.t. y.
set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
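For illustration (a sketch; the pipeline and its step names 'pca' and 'enet' are made up for this example):

from sklearn import linear_model
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline

# simple estimator: parameters are set directly by name
enet = linear_model.MultiTaskElasticNet()
enet.set_params(alpha=0.5, l1_ratio=0.8)

# nested object: <component>__<parameter> reaches into the named step
pipe = Pipeline([('pca', PCA(n_components=2)),
                 ('enet', linear_model.MultiTaskElasticNet())])
pipe.set_params(enet__alpha=0.5)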

Returns : self
sparse_coef_

Sparse representation of the fitted coef_.
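For example (a sketch reusing the toy data from the Examples section above; the scipy.sparse return type is an assumption based on the attribute's description):

from sklearn import linear_model

clf = linear_model.MultiTaskElasticNet(alpha=0.1)
clf.fit([[0, 0], [1, 1], [2, 2]], [[0, 0], [1, 1], [2, 2]])

# assumed: a scipy.sparse matrix holding the same values as clf.coef_
print(clf.sparse_coef_)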