
Class PerceptronScikitsLearnNode



Perceptron

This node has been automatically generated by wrapping the ``sklearn.linear_model.perceptron.Perceptron`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

Read more in the :ref:`User Guide <perceptron>`.

**Parameters**


penalty : None, 'l2' or 'l1' or 'elasticnet'
    The penalty (aka regularization term) to be used. Defaults to None.

alpha : float
    Constant that multiplies the regularization term if regularization is
    used. Defaults to 0.0001.

fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the
    data is assumed to be already centered. Defaults to True.

n_iter : int, optional
    The number of passes over the training data (aka epochs).
    Defaults to 5.

shuffle : bool, optional, default True
    Whether or not the training data should be shuffled after each epoch.

random_state : int seed, RandomState instance, or None (default)
    The seed of the pseudo random number generator to use when
    shuffling the data.

verbose : integer, optional
    The verbosity level.

n_jobs : integer, optional
    The number of CPUs to use to do the OVA (One Versus All, for
    multi-class problems) computation. -1 means 'all CPUs'. Defaults
    to 1.

eta0 : double
    Constant by which the updates are multiplied. Defaults to 1.

class_weight : dict, {class_label: weight} or "balanced" or None, optional
    Preset for the class_weight fit parameter.

    Weights associated with classes. If not given, all classes
    are supposed to have weight one.

    The "balanced" mode uses the values of y to automatically adjust
    weights inversely proportional to class frequencies in the input data
    as ``n_samples / (n_classes * np.bincount(y))`` (a short numerical
    sketch follows this parameter list).

warm_start : bool, optional
    When set to True, reuse the solution of the previous call to fit as
    initialization, otherwise, just erase the previous solution.
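
A short numerical sketch of the "balanced" weighting above; the label
vector ``y`` is purely illustrative::

    import numpy as np

    y = np.array([0, 0, 0, 0, 1, 1])      # 4 samples of class 0, 2 of class 1
    n_samples = float(y.shape[0])          # 6
    n_classes = len(np.unique(y))          # 2
    weights = n_samples / (n_classes * np.bincount(y))
    # weights -> [0.75, 1.5]: the rarer class receives the larger weight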

**Attributes**

``coef_`` : array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features]
    Weights assigned to the features.

``intercept_`` : array, shape = [1] if n_classes == 2 else [n_classes]
    Constants in decision function.
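
A minimal sketch of how these two attributes enter the linear decision
function, shown against the plain scikit-learn estimator (the toy data
is illustrative only)::

    import numpy as np
    from sklearn.linear_model import Perceptron

    X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 3.0], [3.0, 2.0]])
    y = np.array([0, 0, 1, 1])
    est = Perceptron().fit(X, y)

    # For this binary problem coef_ has shape (1, n_features) and
    # intercept_ has shape (1,); the score is the familiar linear form.
    scores = np.dot(X, est.coef_.T).ravel() + est.intercept_
    print(scores)            # positive scores map to the second class
    print(est.predict(X))    # predicted class label per sample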

**Notes**


`Perceptron` and `SGDClassifier` share the same underlying implementation.
In fact, `Perceptron()` is equivalent to `SGDClassifier(loss="perceptron",
eta0=1, learning_rate="constant", penalty=None)`.
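
A sketch of this equivalence written directly against the scikit-learn
API (independent of the MDP wrapper)::

    from sklearn.linear_model import Perceptron, SGDClassifier

    # Both estimators share the same underlying SGD implementation; with
    # these settings SGDClassifier reproduces Perceptron's behaviour.
    clf_a = Perceptron()
    clf_b = SGDClassifier(loss="perceptron", eta0=1,
                          learning_rate="constant", penalty=None)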

See also


SGDClassifier

**References**


http://en.wikipedia.org/wiki/Perceptron and references therein.
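
A minimal usage sketch of this node inside MDP; the toy data, its
dimensions and all variable names are illustrative only::

    import numpy as np
    import mdp

    # Two well-separated blobs: 20 samples with 2 features each.
    x_train = np.concatenate([np.random.randn(10, 2) - 2.0,
                              np.random.randn(10, 2) + 2.0])
    labels = [0] * 10 + [1] * 10

    node = mdp.nodes.PerceptronScikitsLearnNode()
    node.train(x_train, labels)    # cumulate training data and labels
    node.stop_training()           # fit the wrapped sklearn Perceptron

    x_test = np.array([[-2.0, -2.0], [2.0, 2.0]])
    print(node.label(x_test))      # one predicted class label per sample

    # The fitted sklearn estimator is reachable through ``scikits_alg``.
    print(node.scikits_alg.coef_)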

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Perceptron
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_label(self, x)
 
_stop_training(self, **kwargs)
Transform the data and labels lists to array objects and reshape them.
 
label(self, x)
Predict class labels for samples in X.
 
stop_training(self, **kwargs)
Fit linear model with Stochastic Gradient Descent.

Inherited from PreserveDimNode (private): _set_input_dim, _set_output_dim

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from ClassifierCumulator
 
_check_train_args(self, x, labels)
 
_train(self, x, labels)
Cumulate all input data in a one dimensional list.
 
train(self, x, labels)
Cumulate all input data in a one dimensional list.
    Inherited from ClassifierNode
 
_execute(self, x)
 
_prob(self, x, *args, **kargs)
 
execute(self, x)
Process the data contained in x.
 
prob(self, x, *args, **kwargs)
Predict probability for each possible outcome.
 
rank(self, x, threshold=None)
Returns ordered list with all labels ordered according to prob(x) (e.g., [[3 1 2], [2 1 3], ...]).
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Perceptron

This node has been automatically generated by wrapping the ``sklearn.linear_model.perceptron.Perceptron`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

Read more in the :ref:`User Guide <perceptron>`.

**Parameters**


penalty : None, 'l2' or 'l1' or 'elasticnet'
    The penalty (aka regularization term) to be used. Defaults to None.

alpha : float
    Constant that multiplies the regularization term if regularization is
    used. Defaults to 0.0001.

fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the
    data is assumed to be already centered. Defaults to True.

n_iter : int, optional
    The number of passes over the training data (aka epochs).
    Defaults to 5.

shuffle : bool, optional, default True
    Whether or not the training data should be shuffled after each epoch.

random_state : int seed, RandomState instance, or None (default)
    The seed of the pseudo random number generator to use when
    shuffling the data.

verbose : integer, optional
    The verbosity level.

n_jobs : integer, optional
    The number of CPUs to use to do the OVA (One Versus All, for
    multi-class problems) computation. -1 means 'all CPUs'. Defaults
    to 1.

eta0 : double
    Constant by which the updates are multiplied. Defaults to 1.

class_weight : dict, {class_label: weight} or "balanced" or None, optional
    Preset for the class_weight fit parameter.

    Weights associated with classes. If not given, all classes
    are supposed to have weight one.

    The "balanced" mode uses the values of y to automatically adjust
    weights inversely proportional to class frequencies in the input data
    as ``n_samples / (n_classes * np.bincount(y))``.

warm_start : bool, optional
    When set to True, reuse the solution of the previous call to fit as
    initialization, otherwise, just erase the previous solution.

**Attributes**

``coef_`` : array, shape = [1, n_features] if n_classes == 2 else [n_classes, n_features]
    Weights assigned to the features.

``intercept_`` : array, shape = [1] if n_classes == 2 else [n_classes]
    Constants in decision function.

**Notes**


`Perceptron` and `SGDClassifier` share the same underlying implementation.
In fact, `Perceptron()` is equivalent to `SGDClassifier(loss="perceptron",
eta0=1, learning_rate="constant", penalty=None)`.

See also


SGDClassifier

**References**


http://en.wikipedia.org/wiki/Perceptron and references therein.

Overrides: object.__init__

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_label(self, x)

 
Overrides: ClassifierNode._label

_stop_training(self, **kwargs)

 
Transform the data and labels lists to array objects and reshape them.

Overrides: Node._stop_training

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

label(self, x)

 

Predict class labels for samples in X.

This node has been automatically generated by wrapping the sklearn.linear_model.perceptron.Perceptron class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

**Parameters**

X : {array-like, sparse matrix}, shape = [n_samples, n_features]
    Samples.

**Returns**

C : array, shape = [n_samples]
    Predicted class label per sample.
Overrides: ClassifierNode.label

stop_training(self, **kwargs)

 

Fit linear model with Stochastic Gradient Descent.

This node has been automatically generated by wrapping the sklearn.linear_model.perceptron.Perceptron class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

**Parameters**

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training data

y : numpy array, shape (n_samples,)
    Target values

coef_init : array, shape (n_classes, n_features)
    The initial coefficients to warm-start the optimization.

intercept_init : array, shape (n_classes,)
    The initial intercept to warm-start the optimization.

sample_weight : array-like, shape (n_samples,), optional
    Weights applied to individual samples. If not provided, uniform
    weights are assumed. These weights will be multiplied with
    class_weight (passed through the constructor) if class_weight is
    specified.

**Returns**

self : returns an instance of self.

Overrides: Node.stop_training