Principal component analysis (PCA) using randomized SVD
This node has been automatically generated by wrapping the ``sklearn.decomposition.pca.RandomizedPCA`` class
from the ``sklearn`` library.  The wrapped instance can be accessed
through the ``scikits_alg`` attribute.
Linear dimensionality reduction using approximated Singular Value
Decomposition of the data and keeping only the most significant
singular vectors to project the data to a lower dimensional space.
Read more in the :ref:`User Guide <RandomizedPCA>`.
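The underlying idea is easy to sketch in plain NumPy. The helper below is illustrative only, not part of this node's API: it samples the range of ``X`` with a random test matrix, runs a few power iterations (the role of the ``iterated_power`` parameter), and then computes an exact SVD of the much smaller projected problem.

import numpy as np

def randomized_svd_sketch(X, n_components, n_iter=3, seed=None):
    # Illustrative randomized SVD: range finder + power iteration.
    rng = np.random.RandomState(seed)
    # Sample the range of X with a Gaussian test matrix.
    Y = X @ rng.randn(X.shape[1], n_components)
    # Power iterations sharpen the estimate of the leading singular
    # vectors (production code re-orthonormalizes between iterations).
    for _ in range(n_iter):
        Y = X @ (X.T @ Y)
    Q, _ = np.linalg.qr(Y)            # orthonormal basis for the sampled range
    B = Q.T @ X                       # small projected problem
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_hat, s, Vt           # approximate leading singular triplets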
**Parameters**
n_components : int, optional
    Maximum number of components to keep. When not given or None, this
    is set to n_features (the second dimension of the training data).
copy : bool
    If False, data passed to fit is overwritten, and running
    fit(X).transform(X) will not yield the expected results;
    use fit_transform(X) instead.
iterated_power : int, optional
    Number of iterations for the power method. 3 by default.
whiten : bool, optional
    When True (False by default) the `components_` vectors are divided
    by the singular values to ensure uncorrelated outputs with unit
    component-wise variances.
    Whitening will remove some information from the transformed signal
    (the relative variance scales of the components) but can sometimes
    improve the predictive accuracy of the downstream estimators by
    making their data respect some hard-wired assumptions (see the
    sketch after this parameter list).
random_state : int or RandomState instance or None (default)
    Seed for the pseudo-random number generator. If None, the
    numpy.random singleton is used.
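To illustrate the effect of ``whiten``, a small sketch (the printed values are approximate and may vary slightly across versions):

import numpy as np
from sklearn.decomposition import RandomizedPCA

rng = np.random.RandomState(0)
X = rng.randn(200, 5) * np.array([5.0, 2.0, 1.0, 0.5, 0.1])  # anisotropic data

pca = RandomizedPCA(n_components=3, whiten=True, random_state=0)
Z = pca.fit_transform(X)
print(Z.std(axis=0))   # roughly [1. 1. 1.]: unit component-wise variance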
**Attributes**
``components_`` : array, [n_components, n_features]
    Components with maximum variance.
``explained_variance_ratio_`` : array, [n_components]
    Percentage of variance explained by each of the selected components.
    If ``n_components`` is not set, all components are stored and the
    sum of explained variance ratios is equal to 1.0.
``mean_`` : array, [n_features]
    Per-feature empirical mean, estimated from the training set.
**Examples**
>>> import numpy as np
>>> from sklearn.decomposition import RandomizedPCA
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> pca = RandomizedPCA(n_components=2)
>>> pca.fit(X)                 # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE
RandomizedPCA(copy=True, iterated_power=3, n_components=2,
       random_state=None, whiten=False)
>>> print(pca.explained_variance_ratio_) # doctest: +ELLIPSIS
[ 0.99244...  0.00755...]
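Since this class is exposed as an MDP node, the same computation can also be driven through the node interface. The node name below is an assumption based on MDP's usual ``<ClassName>ScikitsLearnNode`` naming convention; only the ``scikits_alg`` attribute is documented above.

import mdp
import numpy as np

X = np.array([[-1., -1.], [-2., -1.], [-3., -2.], [1., 1.], [2., 1.], [3., 2.]])

# Hypothetical node name, following MDP's wrapper naming convention.
node = mdp.nodes.RandomizedPCAScikitsLearnNode(n_components=2)
node.train(X)
node.stop_training()   # triggers the wrapped estimator's fit()
Y = node.execute(X)    # projects X, like transform()
print(node.scikits_alg.explained_variance_ratio_)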
**See also**
``PCA``, ``TruncatedSVD``
**References**
.. [Halko2009] `Finding structure with randomness: Probabilistic algorithms
  for constructing approximate matrix decompositions`, Halko et al., 2009
  (arXiv:0909.4061)
.. [MRT] `A randomized algorithm for the decomposition of matrices`,
  Per-Gunnar Martinsson, Vladimir Rokhlin and Mark Tygert
Additional methods are inherited from ``Cumulator`` and ``Node``.
**Instance Variables** (inherited from ``Node``)

``_train_seq``
    List of tuples.
``dtype``
    dtype
``input_dim``
    Input dimensions
``output_dim``
    Output dimensions
``supported_dtypes``
    Supported dtypes
**Methods**
``transform(X)``
    Apply dimensionality reduction on X: X is projected onto the first
    principal components previously extracted from a training set.

    **Parameters**

    X : array-like, shape (n_samples, n_features)
        New data to project.

    **Returns**

    X_new : array-like, shape (n_samples, n_components)
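For example, continuing the class doctest above after ``pca.fit(X)``:

>>> X_new = pca.transform(X)
>>> X_new.shape
(6, 2)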
``fit(X)``
    Fit the model with X by extracting the first principal components.

    **Parameters**

    X : array-like, shape (n_samples, n_features)
        Training data.

    **Returns**

    self : object
        The fitted estimator.
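Because ``fit`` returns the estimator itself, construction, fitting, and projection can be chained in the usual sklearn style:

>>> X_new = RandomizedPCA(n_components=2).fit(X).transform(X)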