
Class GradientBoostingRegressorScikitsLearnNode



Gradient Boosting for regression.

This node has been automatically generated by wrapping the ``sklearn.ensemble.gradient_boosting.GradientBoostingRegressor`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

GB builds an additive model in a forward stage-wise fashion;
it allows for the optimization of arbitrary differentiable loss functions.
In each stage a regression tree is fit on the negative gradient of the
given loss function.

Read more in the :ref:`User Guide <gradient_boosting>`.
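
A minimal usage sketch follows. The toy data, the hyperparameter values, and
the convention of passing the regression targets as the second argument to
``train`` are assumptions for illustration, not taken from this page; the
constructor keywords are forwarded to the wrapped sklearn estimator::

    import numpy as np
    import mdp

    # Toy regression problem (assumed: plain 2-D float arrays).
    x = np.random.rand(200, 5)
    y = np.sin(3.0 * x[:, :1]) + 0.1 * np.random.randn(200, 1)

    # Constructor keywords are passed through to the wrapped
    # sklearn.ensemble.GradientBoostingRegressor instance.
    node = mdp.nodes.GradientBoostingRegressorScikitsLearnNode(
        n_estimators=200, learning_rate=0.05, max_depth=3)

    node.train(x, y)        # assumption: targets as the second train argument
    node.stop_training()    # fits the wrapped model (see stop_training below)

    gbr = node.scikits_alg   # the fitted sklearn estimator
    y_pred = gbr.predict(x)  # standard sklearn prediction call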

**Parameters**

loss : {'ls', 'lad', 'huber', 'quantile'}, optional (default='ls')
    loss function to be optimized. 'ls' refers to least squares
    regression. 'lad' (least absolute deviation) is a highly robust
    loss function solely based on order information of the input
    variables. 'huber' is a combination of the two. 'quantile'
    allows quantile regression (use `alpha` to specify the quantile).

learning_rate : float, optional (default=0.1)
    learning rate shrinks the contribution of each tree by `learning_rate`.
    There is a trade-off between learning_rate and n_estimators.

n_estimators : int (default=100)
    The number of boosting stages to perform. Gradient boosting
    is fairly robust to over-fitting so a large number usually
    results in better performance.

max_depth : integer, optional (default=3)
    maximum depth of the individual regression estimators. The maximum
    depth limits the number of nodes in the tree. Tune this parameter
    for best performance; the best value depends on the interaction
    of the input variables.
    Ignored if ``max_leaf_nodes`` is not None.

min_samples_split : integer, optional (default=2)
    The minimum number of samples required to split an internal node.

min_samples_leaf : integer, optional (default=1)
    The minimum number of samples required to be at a leaf node.

min_weight_fraction_leaf : float, optional (default=0.)
    The minimum weighted fraction of the input samples required to be at a
    leaf node.

subsample : float, optional (default=1.0)
    The fraction of samples to be used for fitting the individual base
    learners. If smaller than 1.0 this results in Stochastic Gradient
    Boosting. `subsample` interacts with the parameter `n_estimators`.
    Choosing `subsample < 1.0` leads to a reduction of variance
    and an increase in bias.

max_features : int, float, string or None, optional (default=None)
    The number of features to consider when looking for the best split:

      - If int, then consider `max_features` features at each split.
      - If float, then `max_features` is a percentage and
        `int(max_features * n_features)` features are considered at each
        split.
      - If "auto", then `max_features=n_features`.
      - If "sqrt", then `max_features=sqrt(n_features)`.
      - If "log2", then `max_features=log2(n_features)`.
      - If None, then `max_features=n_features`.


    Choosing `max_features < n_features` leads to a reduction of variance
    and an increase in bias.

    Note: the search for a split does not stop until at least one
    valid partition of the node samples is found, even if it requires to
    effectively inspect more than ``max_features`` features.
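
    To make the int/float/string cases above concrete, here is a small sketch
    (the helper ``resolve_max_features`` is hypothetical, written only to
    restate the rules listed above)::

        import math

        def resolve_max_features(max_features, n_features):
            """Illustrative only: mirrors the rules in the list above."""
            if max_features is None or max_features == "auto":
                return n_features
            if max_features == "sqrt":
                return int(math.sqrt(n_features))
            if max_features == "log2":
                return int(math.log2(n_features))
            if isinstance(max_features, float):
                # a float is treated as a percentage of the input features
                return int(max_features * n_features)
            return max_features  # already an int

        # With 100 input features:
        #   resolve_max_features(0.3, 100)    -> 30
        #   resolve_max_features("sqrt", 100) -> 10
        #   resolve_max_features(None, 100)   -> 100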

max_leaf_nodes : int or None, optional (default=None)
    Grow trees with ``max_leaf_nodes`` in best-first fashion.
    Best nodes are defined as relative reduction in impurity.
    If None then unlimited number of leaf nodes.

alpha : float (default=0.9)
    The alpha-quantile of the huber loss function and the quantile
    loss function. Only if ``loss='huber'`` or ``loss='quantile'``.

init : BaseEstimator, None, optional (default=None)
    An estimator object that is used to compute the initial
    predictions. ``init`` has to provide ``fit`` and ``predict``.
    If None it uses ``loss.init_estimator``.

verbose : int, default: 0
    Enable verbose output. If 1 then it prints progress and performance
    once in a while (the more trees the lower the frequency). If greater
    than 1 then it prints progress and performance for every tree.

warm_start : bool, default: False
    When set to ``True``, reuse the solution of the previous call to fit
    and add more estimators to the ensemble, otherwise, just erase the
    previous solution.

random_state : int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator;
    If RandomState instance, random_state is the random number generator;
    If None, the random number generator is the RandomState instance used
    by `np.random`.

presort : bool or 'auto', optional (default='auto')
    Whether to presort the data to speed up the finding of best splits in
    fitting. Auto mode by default will use presorting on dense data and
    default to normal sorting on sparse data. Setting presort to true on
    sparse data will raise an error.

    .. versionadded:: 0.17
       optional parameter *presort*.

**Attributes**

``feature_importances_`` : array, shape = [n_features]
    The feature importances (the higher, the more important the feature).

``oob_improvement_`` : array, shape = [n_estimators]
    The improvement in loss (= deviance) on the out-of-bag samples
    relative to the previous iteration.
    ``oob_improvement_[0]`` is the improvement in
    loss of the first stage over the ``init`` estimator.

``train_score_`` : array, shape = [n_estimators]
    The i-th score ``train_score_[i]`` is the deviance (= loss) of the
    model at iteration ``i`` on the in-bag sample.
    If ``subsample == 1`` this is the deviance on the training data.

``loss_`` : LossFunction
    The concrete ``LossFunction`` object.

``init`` : BaseEstimator
    The estimator that provides the initial predictions.
    Set via the ``init`` argument or ``loss.init_estimator``.

``estimators_`` : ndarray of DecisionTreeRegressor, shape = [n_estimators, 1]
    The collection of fitted sub-estimators.
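
Once training is finished, these attributes live on the wrapped estimator and
can be read through ``scikits_alg``; a short sketch, reusing the hypothetical
``node`` from the usage example above::

    # Assumes node.stop_training() has already been called.
    gbr = node.scikits_alg

    importances = gbr.feature_importances_   # shape (n_features,)
    train_deviance = gbr.train_score_        # shape (n_estimators,)

    # Rank the input features by importance (illustrative only).
    for idx in importances.argsort()[::-1][:3]:
        print("feature %d: importance %.3f" % (idx, importances[idx]))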

**See also**

DecisionTreeRegressor, RandomForestRegressor

**References**

J. Friedman, Greedy Function Approximation: A Gradient Boosting
Machine, The Annals of Statistics, Vol. 29, No. 5, 2001.

J. Friedman, Stochastic Gradient Boosting, 1999.

T. Hastie, R. Tibshirani and J. Friedman,
The Elements of Statistical Learning, Ed. 2, Springer, 2009.

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Gradient Boosting for regression.
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
DEPRECATED: Support to use estimators as feature selectors will be removed in version 0.19.
 
stop_training(self, **kwargs)
Fit the gradient boosting model.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples.
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Gradient Boosting for regression.

See the class description above for the full **Parameters**, **Attributes**,
and **References** documentation; keyword arguments given here are forwarded
to the wrapped ``sklearn.ensemble.gradient_boosting.GradientBoostingRegressor``
instance, which is afterwards available through the ``scikits_alg`` attribute.

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

DEPRECATED: Support to use estimators as feature selectors will be removed in version 0.19. Use SelectFromModel instead.

This node has been automatically generated by wrapping the ``sklearn.ensemble.gradient_boosting.GradientBoostingRegressor`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

Reduce X to its most important features.

Uses ``coef_`` or ``feature_importances_`` to determine the most
important features.  For models with a ``coef_`` for each class, the
absolute sum over the classes is used.

**Parameters**

X : array or scipy sparse matrix of shape [n_samples, n_features]
    The input samples.

threshold : string, float or None, optional (default=None)
    The threshold value to use for feature selection. Features whose
    importance is greater or equal are kept while the others are
    discarded. If "median" (resp. "mean"), then the threshold value is
    the median (resp. the mean) of the feature importances. A scaling
    factor (e.g., "1.25*mean") may also be used. If None and if
    available, the object attribute ``threshold`` is used. Otherwise,
    "mean" is used by default.

**Returns**

X_r : array of shape [n_samples, n_selected_features]
    The input samples with only the selected features.
Overrides: Node.execute

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Fit the gradient boosting model.

This node has been automatically generated by wrapping the ``sklearn.ensemble.gradient_boosting.GradientBoostingRegressor`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

**Parameters**

X : array-like, shape = [n_samples, n_features]
    Training vectors, where n_samples is the number of samples
    and n_features is the number of features.

y : array-like, shape = [n_samples]
    Target values (integers in classification, real numbers in
    regression). For classification, labels must correspond to classes.

sample_weight : array-like, shape = [n_samples] or None
    Sample weights. If None, then samples are equally weighted. Splits
    that would create child nodes with net zero or negative weight are
    ignored while searching for a split in each node. In the case of
    classification, splits are also ignored if they would result in any
    single class carrying a negative weight in either child node.

monitor : callable, optional
    The monitor is called after each iteration with the current
    iteration, a reference to the estimator and the local variables of
    ``_fit_stages`` as keyword arguments ``callable(i, self, locals())``.
    If the callable returns ``True`` the fitting procedure is stopped.
    The monitor can be used for various things such as computing
    held-out estimates, early stopping, model introspection, and
    snapshotting (see the sketch below).

**Returns**

self : object
    Returns self.
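
The ``monitor`` hook can implement simple early stopping; a minimal sketch in
which the window size, stopping rule, and hyperparameter values are arbitrary
assumptions::

    def monitor(i, est, local_vars):
        """Illustrative only: stop once the out-of-bag improvement has been
        negative for five consecutive stages."""
        if not hasattr(est, "oob_improvement_"):  # only tracked when subsample < 1.0
            return False
        recent = est.oob_improvement_[max(0, i - 4):i + 1]
        return len(recent) == 5 and all(v < 0 for v in recent)

    node = mdp.nodes.GradientBoostingRegressorScikitsLearnNode(
        n_estimators=500, subsample=0.8)
    node.train(x, y)                      # x, y as in the earlier sketch
    node.stop_training(monitor=monitor)   # keyword forwarded to the sklearn fit call
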
Overrides: Node.stop_training