Package mdp :: Package nodes :: Class DPGMMScikitsLearnNode

Class DPGMMScikitsLearnNode



Variational Inference for the Infinite Gaussian Mixture Model.

This node has been automatically generated by wrapping the ``sklearn.mixture.dpgmm.DPGMM`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

DPGMM stands for Dirichlet Process Gaussian Mixture Model: an
infinite mixture model with the Dirichlet Process as a prior
distribution on the number of clusters. In practice the
approximate inference algorithm uses a truncated distribution with
a fixed maximum number of components, but the number of components
actually used almost always depends on the data.

This node uses the stick-breaking representation of a Gaussian
mixture model probability distribution. It allows for easy and
efficient inference of an approximate posterior distribution over
the parameters of a Gaussian mixture model with a variable number
of components (smaller than the truncation parameter n_components).

The model is initialized with normally distributed means and
identity covariances to ensure proper convergence.

Read more in the :ref:`User Guide <dpgmm>`.
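The truncated stick-breaking construction behind this model can be sketched in plain NumPy. This is a hypothetical illustration of the prior over mixture weights, not the node's actual inference code; ``stick_breaking_weights`` is an invented helper name::

```python
import numpy as np

def stick_breaking_weights(alpha, n_components, rng):
    """Draw truncated stick-breaking weights for a Dirichlet process.

    Break a fraction v_k ~ Beta(1, alpha) off the stick remaining
    after the previous k-1 breaks; the k-th weight is that fraction.
    """
    v = rng.beta(1.0, alpha, size=n_components)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return v * remaining

rng = np.random.default_rng(0)
w = stick_breaking_weights(alpha=1.0, n_components=10, rng=rng)
# w is nonnegative and sums to slightly less than 1; the leftover
# mass belongs to the components truncated away.
```

Smaller ``alpha`` concentrates mass on the first few components, which is why the truncation at ``n_components`` is usually harmless in practice.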

**Parameters**

n_components: int, default 1
    Number of mixture components.

covariance_type: string, default 'diag'
    String describing the type of covariance parameters to
    use.  Must be one of 'spherical', 'tied', 'diag', 'full'.

alpha: float, default 1
    Real number representing the concentration parameter of
    the Dirichlet process. Intuitively, the Dirichlet process
    is as likely to start a new cluster for a point as it is
    to add that point to a cluster with alpha elements. A
    higher alpha yields more clusters, as the expected number
    of clusters is ``alpha*log(N)``.

tol : float, default 1e-3
    Convergence threshold.

n_iter : int, default 10
    Maximum number of iterations to perform before convergence.

params : string, default 'wmc'
    Controls which parameters are updated in the training
    process.  Can contain any combination of 'w' for weights,
    'm' for means, and 'c' for covars.

init_params : string, default 'wmc'
    Controls which parameters are updated in the initialization
    process.  Can contain any combination of 'w' for weights,
    'm' for means, and 'c' for covars.  Defaults to 'wmc'.

verbose : int, default 0
    Controls output verbosity.
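The role of ``alpha`` can be illustrated with a quick Chinese-restaurant-process simulation of the prior over cluster counts. This is a sketch of the prior only, unrelated to the node's implementation, and ``crp_num_clusters`` is an invented name::

```python
import random

def crp_num_clusters(n_points, alpha, rng):
    """Simulate a Chinese restaurant process; return the number of
    occupied clusters after seating n_points customers."""
    counts = []
    for n in range(n_points):
        # start a new cluster with probability alpha / (n + alpha)
        if rng.random() < alpha / (n + alpha):
            counts.append(1)
        else:
            # join an existing cluster with probability proportional to its size
            r = rng.random() * n
            acc = 0
            for j, c in enumerate(counts):
                acc += c
                if r <= acc:
                    counts[j] += 1
                    break
    return len(counts)

rng = random.Random(0)
avg = sum(crp_num_clusters(1000, 2.0, rng) for _ in range(50)) / 50
# avg lands on the order of alpha * log(N), roughly 13 for these values
```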

**Attributes**

covariance_type : string
    String describing the type of covariance parameters used by
    the DP-GMM.  Must be one of 'spherical', 'tied', 'diag', 'full'.

n_components : int
    Number of mixture components.

``weights_`` : array, shape (`n_components`,)
    Mixing weights for each mixture component.

``means_`` : array, shape (`n_components`, `n_features`)
    Mean parameters for each mixture component.

``precs_`` : array
    Precision (inverse covariance) parameters for each mixture
    component.  The shape depends on `covariance_type`::

        (`n_components`, `n_features`)                if 'spherical',
        (`n_features`, `n_features`)                  if 'tied',
        (`n_components`, `n_features`)                if 'diag',
        (`n_components`, `n_features`, `n_features`)  if 'full'
``converged_`` : bool
    True when convergence was reached in fit(), False otherwise.

See Also

GMM : Finite Gaussian mixture model fit with EM

VBGMM : Finite Gaussian mixture model fit with a variational
    algorithm, better for situations where there might be too little
    data to get a good estimate of the covariance matrix.

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Variational Inference for the Infinite Gaussian Mixture Model.
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
Predict label for data.
 
stop_training(self, **kwargs)
Estimate model parameters with the EM algorithm.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of (train, stop_training) tuples
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Variational Inference for the Infinite Gaussian Mixture Model.
The full parameter and attribute documentation is given in the
class description above.

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

Predict label for data.

This node has been automatically generated by wrapping the sklearn.mixture.dpgmm.DPGMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

Parameters

X : array-like, shape = [n_samples, n_features]
    List of n_features-dimensional data points. Each row
    corresponds to a single data point.

Returns

C : array, shape = (n_samples,)
    Predicted component membership for each data point.

Overrides: Node.execute
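The component-membership prediction can be sketched in NumPy. This is a hypothetical simplification that assumes diagonal covariances and ignores the mixture weights; ``predict_components`` is an invented name, not the wrapped estimator's method::

```python
import numpy as np

def predict_components(X, means, precs):
    """Hard-assign each row of X to the component with the highest
    diagonal-covariance Gaussian log-density (mixture weights ignored)."""
    diff2 = (X[:, None, :] - means[None, :, :]) ** 2          # (n, k, d)
    log_dens = 0.5 * (np.log(precs) - diff2 * precs).sum(axis=2)
    return np.argmax(log_dens, axis=1)

means = np.array([[0.0, 0.0], [5.0, 5.0]])
precs = np.ones_like(means)      # unit precisions, i.e. unit variances
X = np.array([[0.1, -0.2], [4.8, 5.3]])
labels = predict_components(X, means, precs)   # one label per row of X
```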

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Estimate model parameters with the EM algorithm.

This node has been automatically generated by wrapping the sklearn.mixture.dpgmm.DPGMM class from the sklearn library. The wrapped instance can be accessed through the scikits_alg attribute.

An initialization step is performed before entering the expectation-maximization (EM) algorithm. If you want to avoid this step, set the keyword argument init_params to the empty string '' when creating the GMM object. Likewise, if you would just like to do an initialization, set n_iter=0.

Parameters

X : array_like, shape (n, n_features)
List of n_features-dimensional data points. Each row corresponds to a single data point.

Returns

self

Overrides: Node.stop_training
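The EM loop that stop_training triggers in the wrapped estimator can be sketched with a toy unit-covariance GMM. This is a hypothetical simplification, not the DPGMM's variational updates; ``e_step`` and ``m_step`` are invented names::

```python
import numpy as np

def e_step(X, means, weights):
    """E-step: posterior responsibilities under unit-covariance Gaussians."""
    d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    log_r = np.log(weights) - 0.5 * d2
    log_r -= log_r.max(axis=1, keepdims=True)   # stabilize the exp
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

def m_step(X, r):
    """M-step: re-estimate mixing weights and means from responsibilities."""
    nk = r.sum(axis=0)
    return nk / len(X), (r.T @ X) / nk[:, None]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(5.0, 0.5, (50, 2))])
weights, means = np.array([0.5, 0.5]), np.array([[1.0, 1.0], [4.0, 4.0]])
for _ in range(10):
    r = e_step(X, means, weights)
    weights, means = m_step(X, r)
# means converge near the true cluster centers (0, 0) and (5, 5)
```

The variational algorithm used by DPGMM replaces these point updates with updates to posterior distributions over the parameters, but the alternating expectation/maximization structure is the same.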