
Class SGDClassifierScikitsLearnNode



Linear classifiers (SVM, logistic regression, a.o.) with SGD training.

This node has been automatically generated by wrapping the ``sklearn.linear_model.stochastic_gradient.SGDClassifier`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.
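
As a usage sketch (the node follows MDP's standard train/stop_training/label
cycle; the toy data mirrors the sklearn example further below):

>>> import mdp
>>> import numpy as np
>>> x = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
>>> node = mdp.nodes.SGDClassifierScikitsLearnNode(loss='hinge', random_state=0)
>>> node.train(x, [1, 1, 2, 2])
>>> node.stop_training()
>>> pred = node.label(np.array([[-0.8, -1.]]))  # predicted class labels
>>> clf = node.scikits_alg                      # the wrapped SGDClassifier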

This estimator implements regularized linear models with stochastic
gradient descent (SGD) learning: the gradient of the loss is estimated
one sample at a time and the model is updated along the way with a
decreasing strength schedule (aka learning rate). SGD allows minibatch
(online/out-of-core) learning; see the partial_fit method.
For best results using the default learning rate schedule, the data should
have zero mean and unit variance.
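
A short standardization sketch using sklearn's ``StandardScaler`` (the
training array here is illustrative):

>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> x_train = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
>>> scaler = StandardScaler().fit(x_train)
>>> x_train_std = scaler.transform(x_train)  # zero mean, unit variance per feature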

This implementation works with data represented as dense or sparse arrays
of floating point values for the features. The model it fits can be
controlled with the loss parameter; by default, it fits a linear support
vector machine (SVM).

The regularizer is a penalty added to the loss function that shrinks model
parameters towards the zero vector using either the squared euclidean norm
L2 or the absolute norm L1 or a combination of both (Elastic Net). If the
parameter update crosses the 0.0 value because of the regularizer, the
update is truncated to 0.0 to allow for learning sparse models and achieve
online feature selection.
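
For instance, an 'l1' penalty can truncate uninformative weights exactly to
zero; a sketch on synthetic data (parameter values are arbitrary):

>>> import numpy as np
>>> from sklearn.linear_model import SGDClassifier
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(200, 20)
>>> y = (X[:, 0] > 0).astype(int)   # only feature 0 carries signal
>>> clf = SGDClassifier(penalty='l1', alpha=0.01, random_state=0).fit(X, y)
>>> n_zero = int((clf.coef_ == 0).sum())  # weights truncated exactly to 0.0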

Read more in the :ref:`User Guide <sgd>`.

**Parameters**

loss : str, 'hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron', or a regression loss: 'squared_loss', 'huber', 'epsilon_insensitive', or 'squared_epsilon_insensitive'
    The loss function to be used. Defaults to 'hinge', which gives a
    linear SVM.
    The 'log' loss gives logistic regression, a probabilistic classifier.
    'modified_huber' is another smooth loss that brings tolerance to
    outliers as well as probability estimates.
    'squared_hinge' is like hinge but is quadratically penalized.
    'perceptron' is the linear loss used by the perceptron algorithm.
    The other losses are designed for regression but can be useful in
    classification as well; see SGDRegressor for a description.

penalty : str, 'none', 'l2', 'l1', or 'elasticnet'
    The penalty (aka regularization term) to be used. Defaults to 'l2'
    which is the standard regularizer for linear SVM models. 'l1' and
    'elasticnet' might bring sparsity to the model (feature selection)
    not achievable with 'l2'.

alpha : float
    Constant that multiplies the regularization term. Defaults to 0.0001.
    Also used to compute the learning rate when learning_rate is set to
    'optimal'.

l1_ratio : float
    The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1.
    l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1.
    Defaults to 0.15.

fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the
    data is assumed to be already centered. Defaults to True.

n_iter : int, optional
    The number of passes over the training data (aka epochs). The number
    of iterations is set to 1 if using partial_fit.
    Defaults to 5.

shuffle : bool, optional
    Whether or not the training data should be shuffled after each epoch.
    Defaults to True.

random_state : int seed, RandomState instance, or None (default)
    The seed of the pseudo random number generator to use when
    shuffling the data.

verbose : integer, optional
    The verbosity level.

epsilon : float
    Epsilon in the epsilon-insensitive loss functions; only if `loss` is
    'huber', 'epsilon_insensitive', or 'squared_epsilon_insensitive'.
    For 'huber', determines the threshold at which it becomes less
    important to get the prediction exactly right.
    For epsilon-insensitive, any differences between the current prediction
    and the correct label are ignored if they are less than this threshold.

n_jobs : integer, optional
    The number of CPUs to use to do the OVA (One Versus All, for
    multi-class problems) computation. -1 means 'all CPUs'. Defaults
    to 1.

learning_rate : string, optional
    The learning rate schedule:

    - constant: eta = eta0
    - optimal: eta = 1.0 / (alpha * (t + t0)) [default]
    - invscaling: eta = eta0 / pow(t, power_t)

    where t0 is chosen by a heuristic proposed by Leon Bottou
    (see the configuration sketch after this parameter list).


eta0 : double
    The initial learning rate for the 'constant' or 'invscaling'
    schedules. The default value is 0.0 as eta0 is not used by the
    default schedule 'optimal'.

power_t : double
    The exponent for inverse scaling learning rate [default 0.5].

class_weight : dict, {class_label: weight} or "balanced" or None, optional
    Preset for the class_weight fit parameter.

    Weights associated with classes. If not given, all classes
    are supposed to have weight one.

    The "balanced" mode uses the values of y to automatically adjust
    weights inversely proportional to class frequencies in the input data
    as ``n_samples / (n_classes * np.bincount(y))``

warm_start : bool, optional
    When set to True, reuse the solution of the previous call to fit as
    initialization; otherwise, just erase the previous solution.

average : bool or int, optional
    When set to True, computes the averaged SGD weights and stores the
    result in the ``coef_`` attribute. If set to an int greater than 1,
    averaging will begin once the total number of samples seen reaches
    average. So average=10 will begin averaging after seeing 10 samples.
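
As a configuration sketch combining several of the parameters above (the
values are arbitrary, chosen only for illustration):

>>> import mdp
>>> node = mdp.nodes.SGDClassifierScikitsLearnNode(
...     loss='log', penalty='elasticnet', l1_ratio=0.15,
...     learning_rate='invscaling', eta0=0.01, power_t=0.5, random_state=0)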

**Attributes**

``coef_`` : array, shape (1, n_features) if n_classes == 2 else (n_classes, n_features)
    Weights assigned to the features.

``intercept_`` : array, shape (1,) if n_classes == 2 else (n_classes,)
    Constants in decision function.

**Examples**

>>> import numpy as np
>>> from sklearn import linear_model
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> Y = np.array([1, 1, 2, 2])
>>> clf = linear_model.SGDClassifier()
>>> clf.fit(X, Y)
... #doctest: +NORMALIZE_WHITESPACE
SGDClassifier(alpha=0.0001, average=False, class_weight=None, epsilon=0.1,
        eta0=0.0, fit_intercept=True, l1_ratio=0.15,
        learning_rate='optimal', loss='hinge', n_iter=5, n_jobs=1,
        penalty='l2', power_t=0.5, random_state=None, shuffle=True,
        verbose=0, warm_start=False)
>>> print(clf.predict([[-0.8, -1]]))
[1]

See also

LinearSVC, LogisticRegression, Perceptron
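
Probability estimates require a probabilistic loss ('log' or
'modified_huber'). A sketch that goes through the documented ``scikits_alg``
attribute and sklearn's ``predict_proba``:

>>> import mdp
>>> import numpy as np
>>> x = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
>>> node = mdp.nodes.SGDClassifierScikitsLearnNode(loss='log', random_state=0)
>>> node.train(x, [1, 1, 2, 2])
>>> node.stop_training()
>>> proba = node.scikits_alg.predict_proba(np.array([[-0.8, -1.]]))  # shape (1, 2)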

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Linear classifiers (SVM, logistic regression, a.o.) with SGD training.
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_label(self, x)
 
_stop_training(self, **kwargs)
Transform the data and labels lists to array objects and reshape them.
 
label(self, x)
Predict class labels for samples in X.
 
stop_training(self, **kwargs)
Fit linear model with Stochastic Gradient Descent.

Inherited from PreserveDimNode (private): _set_input_dim, _set_output_dim

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from ClassifierCumulator
 
_check_train_args(self, x, labels)
 
_train(self, x, labels)
Cumulate all input data in a one dimensional list.
 
train(self, x, labels)
Cumulate all input data in a one dimensional list.
    Inherited from ClassifierNode
 
_execute(self, x)
 
_prob(self, x, *args, **kargs)
 
execute(self, x)
Process the data contained in x.
 
prob(self, x, *args, **kwargs)
Predict probability for each possible outcome.
 
rank(self, x, threshold=None)
Returns ordered list with all labels ordered according to prob(x) (e.g., [[3 1 2], [2 1 3], ...]).
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Linear classifiers (SVM, logistic regression, a.o.) with SGD training.

See the class documentation above for the full description, parameters,
attributes, and examples; keyword arguments given to the constructor are
used to instantiate the wrapped
``sklearn.linear_model.stochastic_gradient.SGDClassifier``.

Overrides: object.__init__

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_label(self, x)

 
Overrides: ClassifierNode._label

_stop_training(self, **kwargs)

 
Transform the data and labels lists to array objects and reshape them.

Overrides: Node._stop_training

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

label(self, x)

 

Predict class labels for samples in X.

This node has been automatically generated by wrapping the sklearn.linear_model.stochastic_gradient.SGDClassifier class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Parameters

X : {array-like, sparse matrix}, shape = [n_samples, n_features]
    Samples.

Returns

C : array, shape = [n_samples]
    Predicted class label per sample.
Overrides: ClassifierNode.label

stop_training(self, **kwargs)

 

Fit linear model with Stochastic Gradient Descent.

This node has been automatically generated by wrapping the sklearn.linear_model.stochastic_gradient.SGDClassifier class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Parameters

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training data.

y : numpy array, shape (n_samples,)
    Target values.

coef_init : array, shape (n_classes, n_features)
    The initial coefficients to warm-start the optimization.

intercept_init : array, shape (n_classes,)
    The initial intercept to warm-start the optimization.

sample_weight : array-like, shape (n_samples,), optional
    Weights applied to individual samples. If not provided, uniform
    weights are assumed. These weights will be multiplied with
    class_weight (passed through the constructor) if class_weight is
    specified.

Returns

self : returns an instance of self.

Overrides: Node.stop_training
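
A sketch of the full cycle, assuming the keyword arguments listed above are
forwarded to the wrapped estimator's fit (the sample weights are
illustrative):

>>> import mdp
>>> import numpy as np
>>> x = np.array([[-1., -1.], [-2., -1.], [1., 1.], [2., 1.]])
>>> node = mdp.nodes.SGDClassifierScikitsLearnNode(random_state=0)
>>> node.train(x, [1, 1, 2, 2])      # cumulate data; may be called repeatedly
>>> node.stop_training(sample_weight=np.ones(4))  # fits the wrapped SGDClassifier
>>> pred = node.label(x)             # predicted class label per sample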