Package mdp :: Package nodes :: Class KMeansScikitsLearnNode

Class KMeansScikitsLearnNode



K-Means clustering

This node has been automatically generated by wrapping the ``sklearn.cluster.k_means_.KMeans`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

Read more in the :ref:`User Guide <k_means>`.

**Parameters**


n_clusters : int, optional, default: 8
    The number of clusters to form as well as the number of
    centroids to generate.

max_iter : int, default: 300
    Maximum number of iterations of the k-means algorithm for a
    single run.

n_init : int, default: 10
    Number of times the k-means algorithm will be run with different
    centroid seeds. The final result will be the best output of
    n_init consecutive runs in terms of inertia.

init : {'k-means++', 'random' or an ndarray}
    Method for initialization, defaults to 'k-means++':


    'k-means++' : selects initial cluster centers for k-means
    clustering in a smart way to speed up convergence. See section
    Notes in k_init for more details.

    'random': choose k observations (rows) at random from data for
    the initial centroids.

    If an ndarray is passed, it should be of shape (n_clusters, n_features)
    and gives the initial centers.

precompute_distances : {'auto', True, False}
    Precompute distances (faster but takes more memory).

    'auto' : do not precompute distances if n_samples * n_clusters > 12
    million. This corresponds to about 100MB overhead per job using
    double precision.

    True : always precompute distances

    False : never precompute distances

tol : float, default: 1e-4
    Relative tolerance with regard to inertia to declare convergence.

n_jobs : int
    The number of jobs to use for the computation. This works by computing
    each of the n_init runs in parallel.

    If -1 all CPUs are used. If 1 is given, no parallel computing code is
    used at all, which is useful for debugging. For n_jobs below -1,
    (n_cpus + 1 + n_jobs) are used. Thus for n_jobs = -2, all CPUs but one
    are used.

random_state : integer or numpy.RandomState, optional
    The generator used to initialize the centers. If an integer is
    given, it fixes the seed. Defaults to the global numpy random
    number generator.

verbose : int, default 0
    Verbosity mode.

copy_x : boolean, default True
    When pre-computing distances it is more numerically accurate to center
    the data first.  If copy_x is True, then the original data is not
    modified.  If False, the original data is modified, and put back before
    the function returns, but small numerical differences may be introduced
    by subtracting and then adding the data mean.

**Attributes**

``cluster_centers_`` : array, [n_clusters, n_features]
    Coordinates of cluster centers

``labels_`` :
    Labels of each point

``inertia_`` : float
    Sum of distances of samples to their closest cluster center.

**Notes**

The k-means problem is solved using Lloyd's algorithm.

The average complexity is given by O(k n T), where n is the number of
samples and T is the number of iterations.

The worst case complexity is given by O(n^(k+2/p)) with
n = n_samples, p = n_features. (D. Arthur and S. Vassilvitskii,
'How slow is the k-means method?' SoCG2006)

In practice, the k-means algorithm is very fast (one of the fastest
clustering algorithms available), but it may fall into local minima. That is
why it can be useful to restart it several times.
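The Lloyd iteration just described can be sketched in a few lines of NumPy (a didactic sketch only, not sklearn's optimized implementation; the helper name ``lloyd_kmeans`` is ours):

```python
import numpy as np

def lloyd_kmeans(x, k, n_iter=100, seed=0):
    """Didactic sketch of Lloyd's algorithm: assign, then update."""
    rng = np.random.RandomState(seed)
    centers = x[rng.choice(len(x), k, replace=False)]  # 'random'-style init
    for _ in range(n_iter):
        # Assignment step: label each sample with its nearest center.
        dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        # Update step: move each center to the mean of its samples;
        # keep a center in place if its cluster went empty.
        new_centers = np.array([
            x[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```

Restarting with several seeds and keeping the lowest-inertia result is exactly what the ``n_init`` parameter automates.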

See also

MiniBatchKMeans:
    Alternative online implementation that does incremental updates
    of the centers' positions using mini-batches.
    For large scale learning (say n_samples > 10k) MiniBatchKMeans is
    probably much faster than the default batch implementation.

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
K-Means clustering
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
Transform X to a cluster-distance space.
 
stop_training(self, **kwargs)
Compute k-means clustering.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

K-Means clustering

The constructor docstring is identical to the class docstring above; see the
**Parameters** and **Attributes** sections there for the accepted keyword
arguments and resulting attributes.

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

Transform X to a cluster-distance space.

This node has been automatically generated by wrapping the sklearn.cluster.k_means_.KMeans class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

In the new space, each dimension is the distance to the cluster centers. Note that even if X is sparse, the array returned by transform will typically be dense.

Parameters

X : {array-like, sparse matrix}, shape = [n_samples, n_features]
New data to transform.

Returns

X_new : array, shape [n_samples, k]
X transformed in the new space.
Overrides: Node.execute
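Since ``execute`` delegates to the wrapped estimator's ``transform``, the cluster-distance output can be reproduced with sklearn directly (a sketch assuming a standard sklearn installation; data and parameters are arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
x = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 10.0])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x)
# Each column of the result is the distance to one cluster center.
x_new = km.transform(x)
print(x_new.shape)  # (100, 2)
```

Even for sparse input, the returned array is typically dense, as noted above.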

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Compute k-means clustering.

This node has been automatically generated by wrapping the sklearn.cluster.k_means_.KMeans class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Parameters

X : array-like or sparse matrix, shape=(n_samples, n_features)

Overrides: Node.stop_training
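``stop_training`` corresponds to sklearn's ``fit``; once it runs, the attributes listed in the class docstring are populated on the wrapped estimator. A sklearn-only sketch (assuming a standard sklearn installation; data and parameters are arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
x = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 10.0])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x)
print(km.cluster_centers_.shape)  # (2, 2)
print(km.labels_.shape)           # (100,)
print(km.inertia_ > 0)            # True
```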