Perform Independent Component Analysis using the TDSEP algorithm.

Note that TDSEP, as implemented in this node, is an online algorithm,
i.e. it is suited to be trained on huge data sets, provided that the
training is done by sending small chunks of data at a time.

Reference: Ziehe, Andreas and Müller, Klaus-Robert (1998). TDSEP - an
efficient algorithm for blind separation using time structure. In
Niklasson, L., Boden, M., and Ziemke, T. (editors), Proc. 8th Int.
Conf. Artificial Neural Networks (ICANN 1998).

**Internal variables of interest**

  ``self.white``
      The whitening node used for preprocessing.

  ``self.filters``
      The ICA filters matrix (this is the transpose of the projection
      matrix after whitening).

  ``self.convergence``
      The value of the convergence threshold.
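As an illustration of such chunked (online) training, here is a minimal
sketch. It assumes the class is exposed as ``mdp.nodes.TDSEPNode`` and
uses a synthetic two-source mixture with time structure; the data, the
mixing matrix, and the parameter values are made up for illustration:

>>> import numpy as np
>>> import mdp
>>> # two sources with time structure, mixed by a fixed matrix
>>> t = np.linspace(0, 100, 10000)
>>> s = np.c_[np.sin(t), np.cos(3 * t)]
>>> x = s.dot(np.array([[1.0, 0.5], [0.3, 2.0]]))
>>> node = mdp.nodes.TDSEPNode(lags=10, limit=1e-5)
>>> # online training: send the data in small chunks
>>> for chunk in np.array_split(x, 10):
...     node.train(chunk)
>>> node.stop_training()
>>> y = node.execute(x)        # estimated sources
>>> filters = node.filters     # the ICA filters matrix learned above

After ``stop_training``, the internal variables ``self.white``,
``self.filters``, and ``self.convergence`` are available as described
above.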
Methods and properties inherited from ISFANode and Node are omitted
here.
**Instance Variables**

Inherited from Node:

  ``_train_seq``
      List of tuples.

  ``dtype``
      dtype

  ``input_dim``
      Input dimensions

  ``output_dim``
      Output dimensions

  ``supported_dtypes``
      Supported dtypes
Input arguments:

  ``lags``
      List of time-lags used to generate the time-delayed covariance
      matrices. If lags is an integer, the time-lags 1, 2, ..., lags
      are used. Note that time-lag == 0 (instantaneous correlation) is
      always implicitly used.

  ``whitened``
      Set whitened to True if the input data are already whitened.
      Otherwise the node will whiten the data itself.

  ``white_comp``
      If whitened is False, you can set white_comp to the number of
      whitened components to keep during the calculation (i.e., the
      input dimensions are reduced to white_comp by keeping the
      components of largest variance).

  ``white_parm``
      A dictionary with additional parameters for whitening. It is
      passed directly to the WhiteningNode constructor. Example:
      white_parm = { 'svd' : True }

  ``limit``
      Convergence threshold.

  ``max_iter``
      If the algorithm does not achieve convergence within max_iter
      iterations, an Exception is raised. Should be larger than 100.
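For instance, a node that whitens its input itself, keeps the four
whitened components of largest variance, and uses SVD during whitening
could be created as follows. This is only a sketch: the parameter
values are arbitrary, and it assumes the class is exposed as
``mdp.nodes.TDSEPNode``:

>>> import mdp
>>> node = mdp.nodes.TDSEPNode(lags=5,              # time-lags 1, 2, ..., 5
...                            whitened=False,      # input is not yet white
...                            white_comp=4,        # keep 4 whitened components
...                            white_parm={'svd': True},
...                            limit=1e-5,          # convergence threshold
...                            max_iter=1000)       # should be larger than 100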
Stop the training phase.

If the node is used on large datasets it may be wise to first learn
the covariance matrices, and then tune the parameters until a suitable
parameter set has been found (learning the covariance matrices is the
slowest part in this case). This could be done for example in the
following way (assuming the data is already white):

>>> covs = [mdp.utils.DelayCovarianceMatrix(dt, dtype=dtype)
...         for dt in lags]
>>> for block in data:
...     [covs[i].update(block) for i in range(len(lags))]

You can then initialize the ISFANode with the desired parameters, do a
fake training with some random data to set the internal node structure
and then call stop_training with the stored covariance matrices. For
example:

>>> isfa = ISFANode(lags, .....)
>>> x = mdp.numx_rand.random((100, input_dim)).astype(dtype)
>>> isfa.train(x)
>>> isfa.stop_training(covs=covs)

This trick has been used in the paper to apply ISFA to surrogate
matrices, i.e. covariance matrices that were not learnt on a real
dataset.