Package mdp :: Package nodes :: Class MinMaxScalerScikitsLearnNode

Class MinMaxScalerScikitsLearnNode



Transforms features by scaling each feature to a given range.

This node has been automatically generated by wrapping the ``sklearn.preprocessing.data.MinMaxScaler`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

This estimator scales and translates each feature individually such
that it is in the given range on the training set, e.g. between
zero and one.

The transformation is given by::


    X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    X_scaled = X_std * (max - min) + min

where min, max = feature_range.

This transformation is often used as an alternative to zero mean,
unit variance scaling.
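As a sanity check, the transformation above can be reproduced with plain NumPy; the array values below are illustrative:

```python
import numpy as np

# Toy data: 4 samples, 2 features (values are illustrative).
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0],
              [4.0, 40.0]])

f_min, f_max = 0.0, 1.0  # the default feature_range

# The documented transformation, applied per feature (axis=0).
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (f_max - f_min) + f_min

# Each feature now spans exactly [0, 1] on this data.
print(X_scaled[:, 0])  # [0.         0.33333333 0.66666667 1.        ]
```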

Read more in the :ref:`User Guide <preprocessing_scaler>`.

**Parameters**

feature_range: tuple (min, max), default=(0, 1)
    Desired range of transformed data.

copy : boolean, optional, default True
    Set to False to perform in-place scaling and avoid a copy
    (if the input is already a numpy array).

**Attributes**

``min_`` : ndarray, shape (n_features,)
    Per feature adjustment for minimum.

``scale_`` : ndarray, shape (n_features,)
    Per feature relative scaling of the data.

    .. versionadded:: 0.17
       *scale_* attribute.

``data_min_`` : ndarray, shape (n_features,)
    Per feature minimum seen in the data.

    .. versionadded:: 0.17
       *data_min_* instead of deprecated *data_min*.

``data_max_`` : ndarray, shape (n_features,)
    Per feature maximum seen in the data.

    .. versionadded:: 0.17
       *data_max_* instead of deprecated *data_max*.

``data_range_`` : ndarray, shape (n_features,)
    Per feature range ``(data_max_ - data_min_)`` seen in the data.

    .. versionadded:: 0.17
       *data_range_* instead of deprecated *data_range*.
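The attributes above are tied together by one affine map per feature. The sketch below uses plain NumPy with illustrative data; the formulas mirror what sklearn's ``MinMaxScaler`` computes at fit time, with variable names matching the attributes above:

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [4.0, 40.0]])
feature_range = (0.0, 1.0)

# Quantities corresponding to the fitted attributes.
data_min_ = X.min(axis=0)
data_max_ = X.max(axis=0)
data_range_ = data_max_ - data_min_
scale_ = (feature_range[1] - feature_range[0]) / data_range_
min_ = feature_range[0] - data_min_ * scale_

# The transform is then a single affine map per feature.
X_scaled = X * scale_ + min_
```

This is equivalent to the ``X_std``/``X_scaled`` formula in the class description, just precomputed so that new data can be transformed without revisiting the training set.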

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Transforms features by scaling each feature to a given range.
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
Scaling features of X according to feature_range.
 
stop_training(self, **kwargs)
Compute the minimum and maximum to be used for later scaling.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Transforms features by scaling each feature to a given range. See the
class documentation above for the full description, including the
**Parameters** and **Attributes** sections.

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

Scaling features of X according to feature_range.

This node has been automatically generated by wrapping the ``sklearn.preprocessing.data.MinMaxScaler`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

**Parameters**

X : array-like, shape [n_samples, n_features]
    Input data that will be transformed.
Overrides: Node.execute

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Compute the minimum and maximum to be used for later scaling.

This node has been automatically generated by wrapping the ``sklearn.preprocessing.data.MinMaxScaler`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

**Parameters**

X : array-like, shape [n_samples, n_features]
    The data used to compute the per-feature minimum and maximum
    used for later scaling along the features axis.
Overrides: Node.stop_training
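The training cycle described above (``train`` collects chunks, ``stop_training`` concatenates them and fits the per-feature minimum and maximum, ``execute`` applies the scaling) can be sketched in plain NumPy. The helper names here are hypothetical and only mimic the node's behavior:

```python
import numpy as np

def fit_minmax(chunks, feature_range=(0.0, 1.0)):
    """Concatenate collected chunks and fit per-feature min/max
    (mimics what stop_training does)."""
    X = np.concatenate(chunks, axis=0)
    data_min_ = X.min(axis=0)
    data_range_ = X.max(axis=0) - data_min_
    scale_ = (feature_range[1] - feature_range[0]) / data_range_
    min_ = feature_range[0] - data_min_ * scale_
    return scale_, min_

def scale(x, scale_, min_):
    """Apply the fitted affine map (mimics what execute does)."""
    return x * scale_ + min_

# train() would be called once per chunk; the chunks are only
# concatenated when training stops.
chunks = [np.array([[0.0], [5.0]]), np.array([[10.0]])]
scale_, min_ = fit_minmax(chunks)
print(scale(np.array([[2.5]]), scale_, min_))  # [[0.25]]
```

With the actual node, the equivalent calls would be ``node.train(chunk)`` once per chunk, then ``node.stop_training()``, then ``node.execute(x)``; the fitted sklearn instance is reachable via ``node.scikits_alg``.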