Reading the following paper:

ICA based on a Smooth Estimation of the Differential Entropy

Lev Faivishevsky, Jacob Goldberger, School of Engineering, Bar-Ilan University

See http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2008_0513.pdf

I found myself translating. Recovering latent variables felt like Tunny wheel breaking. Observations of unknown linear functions (of the wheel bits) felt like observing Tunny cipher. The D independent sources felt like the n Tunny machines in a cryptonet, sharing a daily key. A, the unknown square matrix, felt like the Tunny rectangling process. Repeated observations felt like the counting of pulses from cipher tapes to make the depths (of similarity information), from which the rectangling convergence process would start.
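The ICA setup being translated here can be written down in a few lines. This is a minimal sketch, not the paper's method: the source count, the Laplace source distribution, and all variable names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# D independent sources (the analogue of the n Tunny machines sharing
# a daily key), mixed by an unknown square matrix A, observed repeatedly.
D, T = 3, 1000
S = rng.laplace(size=(D, T))   # independent non-Gaussian sources
A = rng.normal(size=(D, D))    # the unknown square mixing matrix
X = A @ S                      # the repeated observations we actually see

# ICA's task: recover S from X alone, up to scaling and permutation,
# without ever being told A.
```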

Only one key (setting solution) would show up as the appropriate Gaussian.

Then note the characterisation of the general process: “a search over the rotation matrices”.

The general theory of the Tunny wheel-breaking attack then seems to be outlined in the same terms – finding non-linear “contrast” functions. Of course, we recall how the Tunny documents noted that convergence done crudely using signs, once reduced to decibannage (the mutual-information unit of measure), had a non-linear relationship to the decibannage computed using accurate convergence.
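The “search over the rotation matrices” guided by a non-linear contrast can be made concrete in two dimensions, where the rotation is a single angle. A sketch, assuming uniform sources and an excess-kurtosis contrast (illustrative choices; the paper itself uses an entropy estimator as the contrast):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian (uniform) sources, linearly mixed.
n = 5000
S = rng.uniform(-1, 1, size=(2, n))          # sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown mixing matrix
X = A @ S                                    # observations

# Whiten the observations: after this step only an unknown
# rotation separates us from the sources.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)) @ E.T @ X               # whitened data, cov ~ identity

def contrast(Y):
    # Non-linear contrast: total |excess kurtosis| of the rows,
    # which is far from zero exactly when the rows are non-Gaussian.
    return np.sum(np.abs(np.mean(Y**4, axis=1)
                         - 3 * np.mean(Y**2, axis=1) ** 2))

# Brute-force search over rotation matrices (one angle suffices in 2-D;
# the contrast is periodic in 90 degrees because of sign/permutation
# ambiguity, so [0, pi/2) covers every distinct unmixing).
angles = np.linspace(0, np.pi / 2, 180)
scores = [contrast(np.array([[np.cos(t), -np.sin(t)],
                             [np.sin(t),  np.cos(t)]]) @ Z)
          for t in angles]
best = angles[int(np.argmax(scores))]
R = np.array([[np.cos(best), -np.sin(best)],
              [np.sin(best),  np.cos(best)]])
Y = R @ Z                                    # recovered sources
```

The recovered rows of `Y` match the original sources up to sign and order, which is the usual ICA ambiguity.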

