Package mdp :: Package nodes :: Class RBMNode

Class RBMNode


Restricted Boltzmann Machine node. An RBM is an undirected probabilistic network with binary variables. Its graph is bipartite: one part contains the observed (visible) variables, the other the hidden (latent) variables.

By default, the execute method returns the probability of each of the hidden variables being equal to 1 given the input.

Use the sample_v method to sample from the observed variables given a setting of the hidden variables, and sample_h to do the opposite. The energy method can be used to compute the energy of a given setting of all variables.

The network is trained by Contrastive Divergence, as described in Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800.

Internal variables of interest:

self.w
Generative weights between hidden and observed variables
self.bv
bias vector of the observed variables
self.bh
bias vector of the hidden variables

For more information on RBMs, see Geoffrey E. Hinton (2007) Boltzmann machine. Scholarpedia, 2(5):1668
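The hidden activation probabilities that execute returns are the standard RBM conditionals, P(h[n,i] = 1 | v[n,:]) = sigmoid(v·w + bh). A minimal NumPy sketch of this computation (illustrative only, not MDP's implementation; the names mirror the internal variables listed above):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy RBM with 4 observed and 3 hidden variables; the names mirror
# the internal variables above (self.w, self.bv, self.bh).
rng = np.random.RandomState(0)
w = rng.randn(4, 3) * 0.1   # generative weights
bv = np.zeros(4)            # bias vector of the observed variables
bh = np.zeros(3)            # bias vector of the hidden variables

# What execute computes by default: P(h[n,i] = 1 | v[n,:])
v = np.array([[1., 0., 1., 1.],
              [0., 1., 0., 0.]])
prob_h = sigmoid(np.dot(v, w) + bh)   # shape (2, 3), one row per observation
```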

Instance Methods
 
__init__(self, hidden_dim, visible_dim=None, dtype=None)
If the input dimension and the output dimension are unspecified, they will be set when the train or execute method is called for the first time. If dtype is unspecified, it will be inherited from the data it receives at the first call of train or execute.
 
_energy(self, v, h)
 
_execute(self, v, return_probs=True)
If return_probs is True, returns the probability of the hidden variables h[n,i] being 1 given the observations v[n,:]. If return_probs is False, return a sample from that probability.
 
_init_weights(self)
 
_pre_inversion_checks(self, y)
This method contains all pre-inversion checks.
 
_sample_h(self, v)
 
_sample_v(self, h)
 
_stop_training(self)
 
_train(self, v, n_updates=1, epsilon=0.1, decay=0.0, momentum=0.0, update_with_ph=True, verbose=False)
Update the internal structures according to the input data v. The training is performed using Contrastive Divergence (CD).
 
energy(self, v, h)
Compute the energy of the RBM given observed variables state v and hidden variables state h.
 
execute(self, v, return_probs=True)
If return_probs is True, returns the probability of the hidden variables h[n,i] being 1 given the observations v[n,:]. If return_probs is False, return a sample from that probability.
 
sample_h(self, v)
Sample the hidden variables given observations v.
 
sample_v(self, h)
Sample the observed variables given hidden variable state h.
 
stop_training(self)
Stop the training phase.
 
train(self, v, n_updates=1, epsilon=0.1, decay=0.0, momentum=0.0, update_with_ph=True, verbose=False)
Update the internal structures according to the input data v. The training is performed using Contrastive Divergence (CD).

Inherited from unreachable.newobject: __long__, __native__, __nonzero__, __unicode__, next

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __setattr__, __sizeof__, __subclasshook__

    Inherited from Node
 
__add__(self, other)
 
__call__(self, x, *args, **kwargs)
Calling an instance of Node is equivalent to calling its execute method.
 
__repr__(self)
repr(x)
 
__str__(self)
str(x)
 
_check_input(self, x)
 
_check_output(self, y)
 
_check_train_args(self, x, *args, **kwargs)
 
_get_supported_dtypes(self)
Return the list of dtypes supported by this node.
 
_get_train_seq(self)
 
_if_training_stop_training(self)
 
_inverse(self, x)
 
_pre_execution_checks(self, x)
This method contains all pre-execution checks.
 
_refcast(self, x)
Helper function to cast arrays to the internal dtype.
 
_set_dtype(self, t)
 
_set_input_dim(self, n)
 
_set_output_dim(self, n)
 
copy(self, protocol=None)
Return a deep copy of the node.
 
get_current_train_phase(self)
Return the index of the current training phase.
 
get_dtype(self)
Return dtype.
 
get_input_dim(self)
Return input dimensions.
 
get_output_dim(self)
Return output dimensions.
 
get_remaining_train_phase(self)
Return the number of training phases still to accomplish.
 
get_supported_dtypes(self)
Return dtypes supported by the node as a list of dtype objects.
 
has_multiple_training_phases(self)
Return True if the node has multiple training phases.
 
inverse(self, y, *args, **kwargs)
Invert y.
 
is_training(self)
Return True if the node is in the training phase, False otherwise.
 
save(self, filename, protocol=-1)
Save a pickled serialization of the node to filename. If filename is None, return a string.
 
set_dtype(self, t)
Set internal structures' dtype.
 
set_input_dim(self, n)
Set input dimensions.
 
set_output_dim(self, n)
Set output dimensions.
Static Methods
 
is_invertible()
Return True if the node can be inverted, False otherwise.
    Inherited from Node
 
is_trainable()
Return True if the node can be trained, False otherwise.
Properties

Inherited from object: __class__

    Inherited from Node
  _train_seq
List of (training-phase, stop-training-phase) tuples.
  dtype
dtype
  input_dim
Input dimensions
  output_dim
Output dimensions
  supported_dtypes
Supported dtypes
Method Details

__init__(self, hidden_dim, visible_dim=None, dtype=None)
(Constructor)

 

If the input dimension and the output dimension are unspecified, they will be set when the train or execute method is called for the first time. If dtype is unspecified, it will be inherited from the data it receives at the first call of train or execute.

Every subclass must take care of up- or down-casting its internal structures to match this argument (use the _refcast private method when possible).

Parameters:
  • hidden_dim - number of hidden variables
  • visible_dim - number of observed variables
Overrides: object.__init__

_energy(self, v, h)

 

_execute(self, v, return_probs=True)

 
If return_probs is True, returns the probability of the hidden variables h[n,i] being 1 given the observations v[n,:]. If return_probs is False, return a sample from that probability.
Overrides: Node._execute

_init_weights(self)

 

_pre_inversion_checks(self, y)

 

This method contains all pre-inversion checks.

It can be used when a subclass defines multiple inversion methods.

Overrides: Node._pre_inversion_checks
(inherited documentation)

_sample_h(self, v)

 

_sample_v(self, h)

 

_stop_training(self)

 
Overrides: Node._stop_training

_train(self, v, n_updates=1, epsilon=0.1, decay=0.0, momentum=0.0, update_with_ph=True, verbose=False)

 
Update the internal structures according to the input data v. The training is performed using Contrastive Divergence (CD).
Parameters:
  • v - a binary matrix with one variable per column and one observation per row
  • n_updates - number of CD iterations. Default value: 1
  • epsilon - learning rate. Default value: 0.1
  • decay - weight decay term. Default value: 0.
  • momentum - momentum term. Default value: 0.
  • update_with_ph - in his original code, G. Hinton updates the hidden biases using the probabilities of the hidden unit activations instead of samples from them, which speeds up sequential learning of RBMs. Set this to False to use the samples instead.
Overrides: Node._train
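For reference, a single CD-1 parameter update follows the pattern below. This is a hedged NumPy sketch of the textbook algorithm (without the decay and momentum terms), not MDP's actual implementation; the function name cd1_update is hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(w, bv, bh, v, rng, epsilon=0.1):
    """One CD-1 step on a batch of binary observation rows v (updates in place)."""
    # Positive phase: hidden probabilities and samples given the data.
    ph = sigmoid(np.dot(v, w) + bh)
    h = (rng.rand(*ph.shape) < ph).astype(v.dtype)
    # Negative phase: one Gibbs step back through the visibles.
    pv1 = sigmoid(np.dot(h, w.T) + bv)
    ph1 = sigmoid(np.dot(pv1, w) + bh)
    n = float(v.shape[0])
    # Data correlations minus reconstruction correlations.
    w += epsilon * (np.dot(v.T, ph) - np.dot(pv1.T, ph1)) / n
    bv += epsilon * (v - pv1).mean(axis=0)
    bh += epsilon * (ph - ph1).mean(axis=0)  # probabilities, as with update_with_ph=True

rng = np.random.RandomState(0)
v = np.array([[1., 1., 0.], [1., 0., 0.]])
w = np.zeros((3, 2)); bv = np.zeros(3); bh = np.zeros(2)
for _ in range(10):
    cd1_update(w, bv, bh, v, rng)
```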

energy(self, v, h)

 
Compute the energy of the RBM given observed variables state v and hidden variables state h.
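The energy of a joint configuration is E(v, h) = -bv·v - bh·h - v·w·h. A hedged NumPy sketch, assuming row-per-observation matrices as elsewhere in this class (rbm_energy is an illustrative name, not MDP API):

```python
import numpy as np

def rbm_energy(v, h, w, bv, bh):
    """Energy of each row n: E(v, h) = -v.bv - h.bh - sum_ij v_i w_ij h_j."""
    return (-np.dot(v, bv) - np.dot(h, bh)
            - np.sum(np.dot(v, w) * h, axis=1))

w = np.zeros((3, 2)); bv = np.zeros(3); bh = np.zeros(2)
v = np.array([[1., 0., 1.]])
h = np.array([[1., 1.]])
e = rbm_energy(v, h, w, bv, bh)   # one energy value per observation row
```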

execute(self, v, return_probs=True)

 
If return_probs is True, returns the probability of the hidden variables h[n,i] being 1 given the observations v[n,:]. If return_probs is False, return a sample from that probability.
Overrides: Node.execute

is_invertible()
Static Method

 
Return True if the node can be inverted, False otherwise.
Overrides: Node.is_invertible
(inherited documentation)

sample_h(self, v)

 
Sample the hidden variables given observations v.
Returns:
a tuple (prob_h, h), where prob_h[n,i] is the probability that variable i is one given the observations v[n,:], and h[n,i] is a sample from the posterior probability.

sample_v(self, h)

 
Sample the observed variables given hidden variable state h.
Returns:
a tuple (prob_v, v), where prob_v[n,i] is the probability that variable i is one given the hidden variables h[n,:], and v[n,i] is a sample from that conditional probability.
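Together, sample_h and sample_v support alternating Gibbs sampling between the two conditionals. A self-contained NumPy sketch of the pattern (the local sample_h/sample_v below mirror the (probability, sample) return convention documented above but are not the node's methods):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.RandomState(0)
w = rng.randn(3, 2) * 0.1; bv = np.zeros(3); bh = np.zeros(2)

def sample_h(v):
    # Mirrors RBMNode.sample_h: returns (prob_h, h).
    prob_h = sigmoid(np.dot(v, w) + bh)
    return prob_h, (rng.rand(*prob_h.shape) < prob_h).astype(v.dtype)

def sample_v(h):
    # Mirrors RBMNode.sample_v: returns (prob_v, v).
    prob_v = sigmoid(np.dot(h, w.T) + bv)
    return prob_v, (rng.rand(*prob_v.shape) < prob_v).astype(h.dtype)

# Alternate between the two conditionals (Gibbs sampling).
v = np.array([[1., 0., 1.]])
for _ in range(5):
    _, h = sample_h(v)
    _, v = sample_v(h)
```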

stop_training(self)

 

Stop the training phase.

By default, subclasses should overwrite _stop_training to implement this functionality. The docstring of the _stop_training method overwrites this docstring.

Overrides: Node.stop_training

train(self, v, n_updates=1, epsilon=0.1, decay=0.0, momentum=0.0, update_with_ph=True, verbose=False)

 
Update the internal structures according to the input data v. The training is performed using Contrastive Divergence (CD).
Parameters:
  • v - a binary matrix with one variable per column and one observation per row
  • n_updates - number of CD iterations. Default value: 1
  • epsilon - learning rate. Default value: 0.1
  • decay - weight decay term. Default value: 0.
  • momentum - momentum term. Default value: 0.
  • update_with_ph - in his original code, G. Hinton updates the hidden biases using the probabilities of the hidden unit activations instead of samples from them, which speeds up sequential learning of RBMs. Set this to False to use the samples instead.
Overrides: Node.train