I didn’t know where else to put this, so I’m posting it to the blog just for the record.

  • https://charlesmartin14.wordpress.com/2015/03/25/why-does-deep-learning-work/ There is a lot of buzz nowadays about why deep learning works at all (there is no deep theory underlying it today), and I love reading these hypothetical explanations, even though I’m absolutely sure all of them are wrong: a good explanation of success should give you new ideas about what will work next.

    In this pair of articles, the author argues that the action of an RBM can be derived as the action of a renormalization group. By the way, this is not the first physical analogy in neural networks: apart from RBMs, which define a Gibbs-like distribution, there have been explanations of Hopfield networks via spin glasses and derivations of their update rules from mean-field theory.
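
    As a reminder (a minimal sketch in standard notation, not taken from the linked articles), the Gibbs-like distribution an RBM defines over visible units v and hidden units h is

      E(v, h) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i W_{ij} h_j,
      \qquad P(v, h) = \frac{e^{-E(v, h)}}{Z},

    and the Hopfield energy -\tfrac{1}{2}\sum_{i \ne j} w_{ij} s_i s_j has the same quadratic form as an Ising / spin-glass Hamiltonian, which is where that analogy comes from.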

  • http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/ This impressive post was written a year ago and surveys fresh ideas about deep representations of objects. In particular, I was surprised by how a shared representation space for different kinds of objects can help with translation; a toy sketch of the idea is below.
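
    To make that concrete, here is a purely hypothetical toy sketch (the words, vectors, and languages are made up, not from the post): if words from two languages are embedded into one shared space, translation candidates fall out of a nearest-neighbour search.

      import numpy as np

      # Toy, made-up embeddings of English and French words that are assumed
      # to have been trained into a single shared vector space.
      shared_space = {
          ("en", "cat"):   np.array([0.90, 0.10, 0.00]),
          ("en", "dog"):   np.array([0.80, 0.20, 0.10]),
          ("fr", "chat"):  np.array([0.88, 0.12, 0.02]),
          ("fr", "chien"): np.array([0.79, 0.21, 0.11]),
      }

      def translate(word, src, dst):
          # Pick the dst-language word whose embedding is closest (cosine) to the query.
          query = shared_space[(src, word)]
          best, best_sim = None, -1.0
          for (lang, w), vec in shared_space.items():
              if lang != dst:
                  continue
              sim = query @ vec / (np.linalg.norm(query) * np.linalg.norm(vec))
              if sim > best_sim:
                  best, best_sim = w, sim
          return best

      print(translate("cat", "en", "fr"))  # -> "chat" in this toy setup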

  • http://ai.stanford.edu/~ang/papers/icml09-ConvolutionalDeepBeliefNetworks.pdf Finally, a link to the paper that introduced convolutional RBMs. Using a softmax to tie the detection layer to the pooling layer (their probabilistic max-pooling) is a nice idea.
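
    As I read it, probabilistic max-pooling constrains each pooling block so that at most one detection unit in it is active, which turns the block into a softmax with an extra “off” state, roughly: with I(h_{i,j}) the bottom-up input to detection unit (i, j) in block B_\alpha,

      P(h_{i,j} = 1 \mid v) = \frac{\exp(I(h_{i,j}))}{1 + \sum_{(i',j') \in B_\alpha} \exp(I(h_{i',j'}))},
      \qquad
      P(p_\alpha = 0 \mid v) = \frac{1}{1 + \sum_{(i',j') \in B_\alpha} \exp(I(h_{i',j'}))},

    where p_\alpha is the pooling unit sitting on top of the block.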

P.S. I found a list of recommended reading for new LISA lab students: http://www.datakit.cn/blog/2014/09/22/Reading_lists_for_new_LISA_students.html