Package mdp :: Package nodes :: Class TruncatedSVDScikitsLearnNode

Class TruncatedSVDScikitsLearnNode



Dimensionality reduction using truncated SVD (aka LSA).

This node has been automatically generated by wrapping the ``sklearn.decomposition.truncated_svd.TruncatedSVD`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.

This transformer performs linear dimensionality reduction by means of
truncated singular value decomposition (SVD). It is very similar to PCA,
but operates on sample vectors directly, instead of on a covariance matrix.
This means it can work with scipy.sparse matrices efficiently.

In particular, truncated SVD works on term count/tf-idf matrices as
returned by the vectorizers in sklearn.feature_extraction.text. In that
context, it is known as latent semantic analysis (LSA).

This estimator supports two algorithms: a fast randomized SVD solver, and
a "naive" algorithm that uses ARPACK as an eigensolver on (X * X.T) or
(X.T * X), whichever is more efficient.
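The idea of operating on the sample vectors directly can be sketched with plain NumPy (a conceptual illustration only, not the solver the wrapped class actually uses):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))  # 100 samples, 20 features

# Truncated SVD keeps only the top-k right singular vectors of X
# itself -- no centering, and no covariance matrix is ever formed.
k = 5
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_reduced = X @ Vt[:k].T  # shape (100, 5)

# Projecting onto the top-k components equals U[:, :k] * s[:k]
assert np.allclose(X_reduced, U[:, :k] * s[:k])
```

Because X is never centered, a scipy.sparse input can stay sparse throughout; the dense ``np.linalg.svd`` call above is for illustration only.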

Read more in the :ref:`User Guide <LSA>`.

**Parameters**

n_components : int, default = 2
    Desired dimensionality of output data.
    Must be strictly less than the number of features.
    The default value is useful for visualisation. For LSA, a value of
    100 is recommended.

algorithm : string, default = "randomized"
    SVD solver to use. Either "arpack" for the ARPACK wrapper in SciPy
    (scipy.sparse.linalg.svds), or "randomized" for the randomized
    algorithm due to Halko (2009).

n_iter : int, optional
    Number of iterations for randomized SVD solver. Not used by ARPACK.

random_state : int or RandomState, optional
    (Seed for) pseudo-random number generator. If not given, the
    numpy.random singleton is used.

tol : float, optional
    Tolerance for ARPACK. 0 means machine precision. Ignored by randomized
    SVD solver.

**Attributes**

``components_`` : array, shape (n_components, n_features)

``explained_variance_ratio_`` : array, [n_components]
    Percentage of variance explained by each of the selected components.

``explained_variance_`` : array, [n_components]
    The variance of the training samples transformed by a projection to
    each component.

**Examples**

>>> from sklearn.decomposition import TruncatedSVD
>>> from sklearn.random_projection import sparse_random_matrix
>>> X = sparse_random_matrix(100, 100, density=0.01, random_state=42)
>>> svd = TruncatedSVD(n_components=5, random_state=42)
>>> svd.fit(X) # doctest: +NORMALIZE_WHITESPACE
TruncatedSVD(algorithm='randomized', n_components=5, n_iter=5,
        random_state=42, tol=0.0)
>>> print(svd.explained_variance_ratio_) # doctest: +ELLIPSIS
[ 0.0782... 0.0552... 0.0544... 0.0499... 0.0413...]
>>> print(svd.explained_variance_ratio_.sum()) # doctest: +ELLIPSIS
0.279...

See also

PCA
RandomizedPCA

**References**

Finding structure with randomness: Stochastic algorithms for constructing
approximate matrix decompositions
Halko et al., 2009 (arXiv:0909.4061) http://arxiv.org/pdf/0909.4061

**Notes**

SVD suffers from a problem called "sign indeterminacy", which means the
sign of the ``components_`` and the output from transform depend on the
algorithm and random state. To work around this, fit instances of this
class to data once, then keep the instance around to do transformations.
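The workaround can be sketched with NumPy (a hedged illustration: ``np.linalg.svd`` stands in for the actual solver, whose component signs depend on the algorithm and random state):

```python
import numpy as np

rng = np.random.default_rng(42)
X_train = rng.standard_normal((50, 10))
X_new = rng.standard_normal((5, 10))

# "Fit" once: compute the components a single time...
_, _, Vt = np.linalg.svd(X_train, full_matrices=False)
components = Vt[:3]  # keep 3 components

# ...then reuse the SAME components for every later transform, so all
# outputs share one (arbitrary but fixed) sign convention.
train_proj = X_train @ components.T
new_proj = X_new @ components.T

# A fresh fit could legally return -components, flipping the sign of
# every projection -- hence keeping the fitted instance around.
assert train_proj.shape == (50, 3) and new_proj.shape == (5, 3)
```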

Instance Methods
 
__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
Dimensionality reduction using truncated SVD (aka LSA).
 
_execute(self, x)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
 
_stop_training(self, **kwargs)
Concatenate the collected data in a single array.
 
execute(self, x)
Perform dimensionality reduction on X.
 
stop_training(self, **kwargs)
Fit LSI model on training data X.

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Cumulator
 
_train(self, *args)
Collect all input data in a list.
 
train(self, *args)
Collect all input data in a list.
    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of tuples:
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, input_dim=None, output_dim=None, dtype=None, **kwargs)
(Constructor)

 

Dimensionality reduction using truncated SVD (aka LSA).

The full class docstring is reproduced at the top of this page; see there for the description, parameters, attributes, examples, references, and notes.

Overrides: object.__init__

_execute(self, x)

 
Overrides: Node._execute

_get_supported_dtypes(self)

 
Return the list of dtypes supported by this node. The types can be specified in any format allowed by numpy.dtype.
Overrides: Node._get_supported_dtypes

_stop_training(self, **kwargs)

 
Concatenate the collected data in a single array.
Overrides: Node._stop_training

execute(self, x)

 

Perform dimensionality reduction on X.

This node has been automatically generated by wrapping the ``sklearn.decomposition.truncated_svd.TruncatedSVD`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

**Parameters**

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    New data.

**Returns**

X_new : array, shape (n_samples, n_components)
    Reduced version of X. This will always be a dense array.
Overrides: Node.execute

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

is_trainable()
Static Method

 
Return True if the node can be trained, False otherwise.
Overrides: Node.is_trainable

stop_training(self, **kwargs)

 

Fit LSI model on training data X.

This node has been automatically generated by wrapping the ``sklearn.decomposition.truncated_svd.TruncatedSVD`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

**Parameters**

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training data.

**Returns**

self : object
    Returns the transformer object.
Overrides: Node.stop_training