
Class RidgeScikitsLearnNode



Linear least squares with l2 regularization.

This node has been automatically generated by wrapping the ``sklearn.linear_model.ridge.Ridge`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

This model solves a regression model where the loss function is
the linear least squares function and regularization is given by
the l2-norm. Also known as Ridge Regression or Tikhonov regularization.
This estimator has built-in support for multi-variate regression
(i.e., when y is a 2d-array of shape [n_samples, n_targets]).
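
Concretely, the fitted coefficients minimize the usual ridge objective
over the weights ``w`` (the standard formulation, stated here for
reference):

    ||X w - y||_2^2 + alpha * ||w||_2^2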

Read more in the :ref:`User Guide <ridge_regression>`.

**Parameters**

alpha : {float, array-like}, shape (n_targets,)
    Regularization strength; larger values specify stronger
    regularization. Positive values improve the conditioning of the
    problem and reduce the variance of the estimates. Alpha corresponds
    to ``C^-1`` in other linear models such as LogisticRegression or
    LinearSVC. If an array is passed, penalties are assumed to be
    specific to the targets, and must therefore match the number of
    targets (see the sketch below).
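
    For example, a minimal sketch (illustrative, not part of the wrapped
    docstring) of per-target penalties with a two-dimensional ``y``:

    >>> import numpy as np
    >>> from sklearn.linear_model import Ridge
    >>> X, Y = np.random.randn(20, 5), np.random.randn(20, 3)
    >>> reg = Ridge(alpha=np.array([0.1, 1.0, 10.0]))  # one alpha per target
    >>> reg.fit(X, Y)  # doctest: +SKIP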

copy_X : boolean, optional, default True
    If True, X will be copied; else, it may be overwritten.

fit_intercept : boolean
    Whether to calculate the intercept for this model. If set
    to False, no intercept will be used in calculations
    (e.g. the data is expected to be already centered).

max_iter : int, optional
    Maximum number of iterations for conjugate gradient solver.
    For 'sparse_cg' and 'lsqr' solvers, the default value is determined
    by scipy.sparse.linalg. For 'sag' solver, the default value is 1000.

normalize : boolean, optional, default False
    If True, the regressors X will be normalized before regression.

solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag'}
    Solver to use in the computational routines:


    - 'auto' chooses the solver automatically based on the type of data.

    - 'svd' uses a Singular Value Decomposition of X to compute the Ridge
      coefficients. More stable for singular matrices than
      'cholesky'.

    - 'cholesky' uses the standard scipy.linalg.solve function to
      obtain a closed-form solution.

    - 'sparse_cg' uses the conjugate gradient solver as found in
      scipy.sparse.linalg.cg. As an iterative algorithm, this solver is
      more appropriate than 'cholesky' for large-scale data
      (possibility to set `tol` and `max_iter`).

    - 'lsqr' uses the dedicated regularized least-squares routine
      scipy.sparse.linalg.lsqr. It is the fastest solver but may not be
      available in older scipy versions. It also uses an iterative
      procedure.

    - 'sag' uses Stochastic Average Gradient descent. It also uses an
      iterative procedure, and is often faster than the other solvers
      when both n_samples and n_features are large. Note that the fast
      convergence of 'sag' is only guaranteed on features with
      approximately the same scale; you can preprocess the data with a
      scaler from sklearn.preprocessing (see the sketch below).

    The last four solvers support both dense and sparse data. However,
    only 'sag' supports sparse input when `fit_intercept` is True.

    .. versionadded:: 0.17
       Stochastic Average Gradient descent solver.
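
    A minimal sketch (not from the wrapped docstring) of standardizing
    features before using 'sag', as recommended above:

    >>> import numpy as np
    >>> from sklearn.preprocessing import StandardScaler
    >>> from sklearn.linear_model import Ridge
    >>> X = np.random.randn(1000, 3) * [1.0, 100.0, 0.01]  # mixed scales
    >>> y = np.random.randn(1000)
    >>> Xs = StandardScaler().fit_transform(X)  # zero mean, unit variance
    >>> reg = Ridge(solver='sag', max_iter=1000, random_state=0)
    >>> reg.fit(Xs, y)  # doctest: +SKIP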

tol : float
    Precision of the solution.

random_state : int seed, RandomState instance, or None (default)
    The seed of the pseudo random number generator to use when
    shuffling the data. Used in 'sag' solver.

    .. versionadded:: 0.17
       *random_state* to support Stochastic Average Gradient.

**Attributes**

``coef_`` : array, shape (n_features,) or (n_targets, n_features)
    Weight vector(s).

``intercept_`` : float | array, shape = (n_targets,)
    Independent term in decision function. Set to 0.0 if
    ``fit_intercept = False``.

``n_iter_`` : array or None, shape (n_targets,)
    Actual number of iterations for each target. Available only for
    sag and lsqr solvers. Other solvers will return None.
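
A short sketch (illustrative) of inspecting these attributes after
fitting a single-target model:

>>> import numpy as np
>>> from sklearn.linear_model import Ridge
>>> reg = Ridge(alpha=1.0).fit(np.random.randn(10, 5), np.random.randn(10))
>>> reg.coef_.shape  # one weight per feature
(5,)
>>> reg.n_iter_ is None  # direct solvers do not report iteration counts
True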

**See also**

RidgeClassifier, RidgeCV, KernelRidge

**Examples**

>>> from sklearn.linear_model import Ridge
>>> import numpy as np
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> clf = Ridge(alpha=1.0)
>>> clf.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
Ridge(alpha=1.0, copy_X=True, fit_intercept=True, max_iter=None,
      normalize=False, random_state=None, solver='auto', tol=0.001)
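
The MDP node itself is used through the usual train/stop-training/execute
cycle; a sketch (assuming constructor keyword arguments are forwarded to
the wrapped ``Ridge`` instance, as with the other scikits wrapper nodes):

>>> import mdp
>>> import numpy as np
>>> X, y = np.random.randn(10, 5), np.random.randn(10, 1)
>>> node = mdp.nodes.RidgeScikitsLearnNode(alpha=1.0)
>>> node.train(X, y)          # collect the training data
>>> node.stop_training()      # concatenate it and fit the wrapped model
>>> y_pred = node.execute(X)  # predict with the fitted model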

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Linear least squares with l2 regularization.
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
Predict using the linear model.
 
stop_training(self, **kwargs)
Fit Ridge regression model.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of (training-phase, stop-training-phase) tuples.
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Linear least squares with l2 regularization.

The constructor accepts the standard node arguments (``input_dim``,
``output_dim``, ``dtype``) together with the parameters of the wrapped
``sklearn.linear_model.ridge.Ridge`` class, which are passed on through
``**kwargs``. See the class documentation above for the full parameter,
attribute, and example listings; they are identical for the constructor.

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

Predict using the linear model.

This node has been automatically generated by wrapping the ``sklearn.linear_model.ridge.Ridge`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

**Parameters**

X : {array-like, sparse matrix}, shape = (n_samples, n_features)
    Samples.

**Returns**

C : array, shape = (n_samples,)
    Returns predicted values.
Overrides: Node.execute
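
A minimal usage sketch (hypothetical ``node`` and ``x_new``; calling the
node is equivalent to calling ``execute``, per the Node interface above):

>>> y1 = node.execute(x_new)  # doctest: +SKIP
>>> y2 = node(x_new)          # __call__ delegates to execute  # doctest: +SKIP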

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Fit Ridge regression model.

This node has been automatically generated by wrapping the ``sklearn.linear_model.ridge.Ridge`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

**Parameters**

X : {array-like, sparse matrix}, shape = [n_samples, n_features]
    Training data.
y : array-like, shape = [n_samples] or [n_samples, n_targets]
    Target values.
sample_weight : float or numpy array of shape [n_samples]
    Individual weights for each sample.

**Returns**

self : returns an instance of self.

Overrides: Node.stop_training
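
Since the listed ``sample_weight`` parameter belongs to the wrapped
``fit``, per-sample weights can presumably be passed as keyword
arguments to ``stop_training`` (an assumption about the wrapper, shown
as a sketch):

>>> import numpy as np
>>> w = np.ones(10)
>>> w[:3] = 5.0  # upweight the first three samples (illustrative)
>>> node.train(X, y)                     # doctest: +SKIP
>>> node.stop_training(sample_weight=w)  # doctest: +SKIP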