dict:
dict() -> new empty dictionary
dict(mapping) -> new dictionary initialized from a mapping object's
(key, value) pairs
dict(iterable) -> new dictionary initialized as if via:
    d = {}
    for k, v in iterable:
        d[k] = v
dict(**kwargs) -> new dictionary initialized with the name=value pairs
in the keyword argument list.
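For instance, the following constructions all build the same dictionary:
    a = dict(one=1, two=2)                    # keyword arguments
    b = dict({'one': 1, 'two': 2})            # from a mapping
    c = dict([('one', 1), ('two', 2)])        # from an iterable of pairs
    d = {}
    for k, v in [('one', 1), ('two', 2)]:
        d[k] = v
    assert a == b == c == d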
mdp.CheckpointSaveFunction:
This checkpoint function saves the node in pickle format.
The pickle dump can be done either before the training phase has finished or
right after it. This makes it possible, for example, to reload the node in a
later session and continue the training.
mdp.utils.CovarianceMatrix:
This class stores an empirical covariance matrix that can be updated
incrementally. A call to the 'fix' method returns the current state of
the covariance matrix, the average and the number of observations, and
resets the internal data.
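A minimal sketch of the incremental-update pattern described above, assuming the update/fix interface and NumPy-array chunks:
    import numpy as np
    import mdp

    cov = mdp.utils.CovarianceMatrix()
    for _ in range(5):
        chunk = np.random.random((1000, 3))   # 1000 observations, 3 variables
        cov.update(chunk)                     # accumulate one chunk at a time
    # 'fix' returns the covariance matrix, the mean and the number of
    # observations, and resets the internal data.
    cov_mtx, avg, tlen = cov.fix()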
mdp.utils.DelayCovarianceMatrix:
This class stores an empirical covariance matrix between the signal and the
time-delayed signal that can be updated incrementally.
mdp.Flow:
A 'Flow' is a sequence of nodes that are trained and executed
together to form a more complex algorithm. Input data is sent to the
first node and is successively processed by the subsequent nodes along
the sequence.
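A minimal example of building, training and executing a flow (the choice of nodes is illustrative):
    import numpy as np
    import mdp

    x = np.random.random((1000, 10))
    flow = mdp.Flow([mdp.nodes.PCANode(output_dim=5),
                     mdp.nodes.PolynomialExpansionNode(2)])
    flow.train(x)   # trainable nodes are trained one after the other
    y = flow(x)     # data is processed by all nodes in sequence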
mdp.CheckpointFlow:
Subclass of Flow class that allows user-supplied checkpoint functions
to be executed at the end of each phase, for example to
save the internal structures of a node for later analysis.
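A hedged sketch of the idea: a user-supplied function is called with the node that has just finished a phase (the train signature with one data source and one checkpoint function per node is an assumption):
    import numpy as np
    import mdp

    def checkpoint(node):
        # called at the end of a phase; could pickle the node here
        print('phase finished for', node)

    x = np.random.random((1000, 10))
    flow = mdp.CheckpointFlow([mdp.nodes.PCANode(output_dim=3),
                               mdp.nodes.SFANode()])
    flow.train([x, x], [checkpoint, checkpoint])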
mdp.utils.MultipleCovarianceMatrices:
Container class for multiple covariance matrices to easily
execute operations on all matrices at the same time.
Note: all operations are done in place where possible.
mdp.nodes.GaussianRandomProjectionHashScikitsLearnNode:
This node has been automatically generated by wrapping the sklearn.neighbors.approximate.GaussianRandomProjectionHash class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
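Illustrative sketch of the wrapper convention shared by all ScikitsLearnNode classes (the particular node and the no-argument constructor are chosen only as an example):
    import mdp

    node = mdp.nodes.GaussianRandomProjectionHashScikitsLearnNode()
    # the underlying sklearn estimator is reachable as stated above
    print(node.scikits_alg)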
mdp.nodes.ICANode:
ICANode is a general class to handle different batch-mode algorithms for
Independent Component Analysis. More information about ICA can be found,
among other places, in Hyvarinen A., Karhunen J., Oja E. (2001),
Independent Component Analysis, Wiley.
mdp.nodes.CuBICANode:
Perform Independent Component Analysis using the CuBICA algorithm.
Note that CuBICA is a batch algorithm: it needs all the input data before it
can compute the independent components. The algorithm is provided as a Node
for convenience, but it accumulates all the input it receives; keep this in
mind to avoid running out of memory when you have many components and many
time samples.
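A minimal sketch of batch ICA with this node: two synthetic sources are mixed, the data is accumulated with train, and the independent components are recovered on execution (illustrative only):
    import numpy as np
    import mdp

    t = np.linspace(0, 100, 5000)
    sources = np.c_[np.sin(t), np.sign(np.cos(3 * t))]   # two independent signals
    mixed = sources.dot(np.array([[1.0, 0.5], [0.3, 2.0]]))

    ica = mdp.nodes.CuBICANode()
    ica.train(mixed)        # batch algorithm: the data is only accumulated here
    estimated = ica(mixed)  # training stops and the ICs are computed on execution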
mdp.nodes.FastICANode:
Perform Independent Component Analysis using the FastICA algorithm.
Note that FastICA is a batch algorithm: it needs all the input data before it
can compute the independent components. The algorithm is provided as a Node
for convenience, but it accumulates all the input it receives; keep this in
mind to avoid running out of memory when you have many components and many
time samples.
mdp.nodes.JADENode:
Perform Independent Component Analysis using the JADE algorithm.
Note that JADE is a batch algorithm: it needs all the input data before it
can compute the independent components. The algorithm is provided as a Node
for convenience, but it accumulates all the input it receives; keep this in
mind to avoid running out of memory when you have many components and many
time samples.
mdp.nodes.LinearModelCVScikitsLearnNode:
This node has been automatically generated by wrapping the sklearn.linear_model.coordinate_descent.LinearModelCV class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
mdp.nodes.LogOddsEstimatorScikitsLearnNode:
This node has been automatically generated by wrapping the sklearn.ensemble.gradient_boosting.LogOddsEstimator class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
mdp.nodes.MeanEstimatorScikitsLearnNode:
This node has been automatically generated by wrapping the sklearn.ensemble.gradient_boosting.MeanEstimator class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
mdp.nodes.NIPALSNode:
Perform Principal Component Analysis using the NIPALS algorithm.
This algorithm is particularly useful if you have more variables than
observations, or in general when the number of variables is so large that
calculating a full covariance matrix may be unfeasible. It is also more
efficient than the standard PCANode if you expect the number of significant
principal components to be small. In that case, setting output_dim to a
certain fraction of the total variance, say 90%, may help.
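A short sketch of the use case described above (more variables than observations, only a few components kept):
    import numpy as np
    import mdp

    x = np.random.random((100, 300))           # 100 observations, 300 variables
    pca = mdp.nodes.NIPALSNode(output_dim=5)   # or e.g. output_dim=0.9 for 90% variance
    pca.train(x)
    y = pca(x)                                 # shape (100, 5)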
mdp.nodes.PLSCanonicalScikitsLearnNode:
PLSCanonical implements the 2-block canonical PLS of the original Wold
algorithm [Tenenhaus 1998] p. 204, referred to as PLS-C2A in [Wegelin 2000].
mdp.nodes.QuantileEstimatorScikitsLearnNode:
This node has been automatically generated by wrapping the sklearn.ensemble.gradient_boosting.QuantileEstimator class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
mdp.nodes.ScaledLogOddsEstimatorScikitsLearnNode:
This node has been automatically generated by wrapping the sklearn.ensemble.gradient_boosting.ScaledLogOddsEstimator class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
mdp.nodes.ZeroEstimatorScikitsLearnNode:
This node has been automatically generated by wrapping the sklearn.ensemble.gradient_boosting.ZeroEstimator class
from the sklearn library. The wrapped instance can be accessed
through the scikits_alg attribute.
unreachable.Cumulator
mdp.ClassifierCumulator:
A ClassifierCumulator is a Node whose training phase simply collects
all input data and labels. In this way it is possible to easily implement
batch-mode learning.
mdp.nodes.FDANode:
Perform a (generalized) Fisher Discriminant Analysis of its
input. It is a supervised node that implements FDA using a
generalized eigenvalue approach.
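A hedged sketch of supervised use, assuming train accepts the data together with one label per observation and that FDA needs two passes over the data (one per training phase):
    import numpy as np
    import mdp

    x = np.concatenate([np.random.random((500, 4)),
                        np.random.random((500, 4)) + 1.0])
    labels = [0] * 500 + [1] * 500

    fda = mdp.nodes.FDANode(output_dim=1)
    fda.train(x, labels)     # first phase
    fda.stop_training()
    fda.train(x, labels)     # second phase
    fda.stop_training()
    y = fda(x)               # discriminative projection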
mdp.nodes.GrowingNeuralGasNode:
Learn the topological structure of the input data by building a
corresponding graph approximation.
mdp.nodes.GrowingNeuralGasExpansionNode:
Perform a trainable radial basis expansion, where the centers and
sizes of the basis functions are learned through a growing neural
gas.
mdp.nodes.NeuralGasNode:
Learn the topological structure of the input data by building a
corresponding graph approximation (original Neural Gas algorithm).
mdp.nodes.ISFANode:
Perform Independent Slow Feature Analysis on the input data.
mdp.nodes.TDSEPNode:
Perform Independent Component Analysis using the TDSEP algorithm.
Note that TDSEP, as implemented in this Node, is an online algorithm,
i.e. it is suited to be trained on huge data sets, provided that the
training is done by sending small chunks of data at a time.
mdp.hinet.Layer:
Layers are nodes which consist of multiple horizontally parallel nodes.
mdp.hinet.CloneLayer:
Layer with a single node instance that is used multiple times.
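A small sketch of both layer types: two PCA nodes working side by side on different halves of an 8-dimensional input, and a CloneLayer reusing a single node instance for both slots (the constructor arguments are assumptions):
    import numpy as np
    import mdp

    layer = mdp.hinet.Layer([mdp.nodes.PCANode(input_dim=4, output_dim=2),
                             mdp.nodes.PCANode(input_dim=4, output_dim=2)])
    x = np.random.random((1000, 8))
    layer.train(x)
    y = layer(x)    # 4 columns: 2 from each internal node

    shared = mdp.nodes.PCANode(input_dim=4, output_dim=2)
    clone = mdp.hinet.CloneLayer(shared, n_nodes=2)   # same instance used twice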
mdp.nodes.LinearRegressionNode:
Compute least-squares, multivariate linear regression on the input
data, i.e., learn coefficients b_j so that y = b_0 + b_1 x_1 + ... + b_N x_N
minimizes the squared error on the training data.
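A minimal supervised example, assuming train takes the inputs together with the target values:
    import numpy as np
    import mdp

    x = np.random.random((1000, 2))
    y = 2.0 * x[:, [0]] - 1.0 * x[:, [1]] + 0.5   # noiseless linear target

    reg = mdp.nodes.LinearRegressionNode()
    reg.train(x, y)
    prediction = reg(x)   # close to y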
mdp.nodes.PCANode:
Filter the input data through the most significant of its
principal components.
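For example, keeping the two leading principal components of 10-dimensional data:
    import numpy as np
    import mdp

    x = np.random.random((1000, 10))
    pca = mdp.nodes.PCANode(output_dim=2)
    pca.train(x)
    y = pca(x)      # projected data, shape (1000, 2)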
mdp.nodes.WhiteningNode:
Whiten the input data by filtering it through the most
significant of its principal components. All output
signals have zero mean, unit variance and are decorrelated.
mdp.PreserveDimNode:
Abstract base class with output_dim == input_dim.
mdp.ClassifierNode:
A ClassifierNode can be used for classification tasks that should not
interfere with the normal execution flow. One reason is that the labels
used for classification do not form a vector space, so they do not fit
naturally into a flow.
mdp.nodes.SignumClassifier:
This classifier node labels a data point as 1 if the sum of its components
is positive and as -1 if it is negative.
mdp.nodes.SimpleMarkovClassifier:
A simple version of a Markov classifier.
It can be trained on a vector of tuples, the label of each element being
the next element in the data sequence.
mdp.nodes.RBMNode:
Restricted Boltzmann Machine node. An RBM is an undirected
probabilistic network with binary variables. The graph is bipartite,
with one set of observed (visible) and one set of hidden (latent) variables.
mdp.nodes.RBMWithLabelsNode:
Restricted Boltzmann Machine with softmax labels. An RBM is an
undirected probabilistic network with binary variables. In this
case, the node is partitioned into a set of observed (visible)
variables, a set of hidden (latent) variables, and a set of
label variables (also observed), only one of which is active at
any time. The node is able to learn associations between the
visible variables and the labels.
mdp.nodes.SFANode:
Extract the slowly varying components from the input data.
More information about Slow Feature Analysis can be found in
Wiskott, L. and Sejnowski, T.J., Slow Feature Analysis: Unsupervised
Learning of Invariances, Neural Computation, 14(4):715-770 (2002).
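A small example: a slowly varying sine is mixed with a fast, noise-like component, and the node extracts the slow direction:
    import numpy as np
    import mdp

    t = np.linspace(0, 4 * np.pi, 5000)
    x = np.c_[np.sin(t) + 0.1 * np.random.random(5000),
              np.random.random(5000)]

    sfa = mdp.nodes.SFANode(output_dim=1)
    sfa.train(x)
    slow = sfa(x)   # the slowest varying signal found in x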
mdp.nodes.SFA2Node:
Get an input signal, expand it in the space of
inhomogeneous polynomials of degree 2 and extract its slowly varying
components. The get_quadratic_form method returns the input-output
function of one of the learned units as a QuadraticForm object.
See the documentation of mdp.utils.QuadraticForm for additional
information.
mdp.hinet.Switchboard:
Does the routing associated with the connections between layers.
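A small sketch of the routing, assuming the switchboard is configured with the input dimension and a list of connections where output column i is taken from input column connections[i]:
    import numpy as np
    import mdp

    sb = mdp.hinet.Switchboard(input_dim=3, connections=[2, 0, 0, 1])
    x = np.array([[10.0, 20.0, 30.0]])
    print(sb(x))   # columns reordered and duplicated: [[30. 10. 10. 20.]]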
mdp.nodes.TimeDelaySlidingWindowNode:
TimeDelaySlidingWindowNode is an alternative to TimeDelayNode
which should be used for online learning/execution. Whereas the
TimeDelayNode works in a batch manner, online applications require
a sliding window that yields only one row per call.
mdp.nodes.XSFANode:
Perform Non-linear Blind Source Separation using Slow Feature Analysis.
mdp.nodes.QuadraticExpansionNode:
Perform expansion in the space formed by all linear and quadratic
monomials.
QuadraticExpansionNode() is equivalent to a
PolynomialExpansionNode(2).
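For a 2-dimensional input (x1, x2), the expanded space consists of the five monomials x1, x2, x1^2, x1*x2, x2^2:
    import numpy as np
    import mdp

    expnode = mdp.nodes.QuadraticExpansionNode()
    x = np.array([[2.0, 3.0]])
    print(expnode(x).shape)   # (1, 5): two linear and three quadratic monomials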
unreachable.ProjectMatrixMixin:
Mixin class to be inherited by all ICA-like algorithms
mdp.utils.QuadraticForm:
Define an inhomogeneous quadratic form as 1/2 x'Hx + f'x + c .
This class also implements several analysis methods for such quadratic forms.
mdp.utils.TemporaryDirectory:
Create and return a temporary directory. This has the same
behavior as mkdtemp but can be used as a context manager. For
example:
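    import os
    from mdp.utils import TemporaryDirectory

    # assuming the context manager yields the directory path (as mkdtemp would)
    with TemporaryDirectory() as tmpdir:
        with open(os.path.join(tmpdir, 'scratch.txt'), 'w') as f:
            f.write('temporary data')
    # the directory and everything in it has been removed at this point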
mdp.nodes._OneDimensionalHitParade:
Class to produce hit-parades (i.e., a list of the largest
and smallest values) out of a one-dimensional time-series.
mdp.caching.cache:
Context manager for the 'cache_execute' extension.
mdp.config:
Provide information about optional dependencies.