Links on deep learning
I didn't know where else to put these, so I'm posting them to the blog for the record.
- https://charlesmartin14.wordpress.com/2015/03/25/why-does-deep-learning-work/
There is much fuss nowadays about why deep learning works at all (there is no deep theory behind it today), and I love reading these hypothetical explanations, though I'm absolutely sure all of them are wrong: a good explanation of success should give you new ideas about what will work next.
In this couple of articles the author argues that the action of an RBM can be derived as the action of a renormalization group. By the way, this is not the first physical analogy in neural networks: apart from RBMs, which use a Gibbs-like distribution, there were explanations of Hopfield networks via spin glasses and a derivation of their update rule from mean-field theory.
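The Gibbs-like distribution in question is just the Boltzmann form p(v, h) ∝ exp(−E(v, h)) with a bilinear energy. Here is a minimal NumPy sketch of a binary RBM with block Gibbs sampling; the sizes and random weights are made-up stand-ins, not anything from the articles:

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 4          # toy sizes, chosen for illustration
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b = np.zeros(n_visible)             # visible biases
c = np.zeros(n_hidden)              # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # E(v, h) = -v.W.h - b.v - c.h; p(v, h) is proportional to exp(-E)
    return -(v @ W @ h) - b @ v - c @ h

def gibbs_step(v):
    # Sample hidden units given visible, then visible given hidden.
    h = (rng.random(n_hidden) < sigmoid(v @ W + c)).astype(float)
    v_new = (rng.random(n_visible) < sigmoid(W @ h + b)).astype(float)
    return v_new, h

v = rng.integers(0, 2, n_visible).astype(float)
for _ in range(100):
    v, h = gibbs_step(v)
print("energy after 100 Gibbs steps:", energy(v, h))
```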
- http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
This impressive post was written a year ago and presents fresh ideas about deep representations of objects. In particular, I was surprised by how a shared representation space for different kinds of objects can help with translation.
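To make the shared-space idea concrete, here is a toy sketch of one way such a space can be built: fit a linear map between two embedding spaces from a small seed dictionary, then translate by nearest neighbor in the target space. All words and vectors below are invented stand-ins, not data from the post:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5                                # toy embedding dimension

# Pretend embeddings for a small seed dictionary of word pairs.
# In reality X and Y would come from two separately trained embedding models.
src_words = ["cat", "dog", "house"]
tgt_words = ["chat", "chien", "maison"]
X = rng.normal(size=(3, d))          # source-language vectors (stand-ins)
W_true = rng.normal(size=(d, d))
Y = X @ W_true + 0.01 * rng.normal(size=(3, d))   # roughly linearly related

# Fit a linear map from source space to target space by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Translate: map a source vector, then take the nearest target vector.
query = X[0]                          # "cat"
mapped = query @ W
nearest = np.argmin(np.linalg.norm(Y - mapped, axis=1))
print(src_words[0], "->", tgt_words[nearest])     # expected: chat
```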
- http://ai.stanford.edu/~ang/papers/icml09-ConvolutionalDeepBeliefNetworks.pdf
Finally, a link to the paper where convolutional RBMs were introduced. Using a softmax to tie the detection units to the pooling layer (their probabilistic max-pooling) is a good idea.
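If I read the paper right, the pooling works like this: the detection units of each pooling block compete through a softmax that includes an "all off" state, and the pooling unit is on exactly when some detection unit fires. A sketch for a single 2x2 block, with made-up bottom-up inputs in place of the real convolution outputs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Bottom-up inputs I(h_ij) to the detection units of one 2x2 pooling block.
# In the paper these come from convolving the input with the layer's filter;
# here they are random stand-ins.
block_inputs = rng.normal(size=(2, 2))

# Softmax over the block's detection units plus an implicit "all off" state:
# at most one detection unit in the block fires, and the pooling unit is
# on iff any of them does.
e = np.exp(block_inputs)
denom = 1.0 + e.sum()
p_detect = e / denom                  # P(h_ij = 1) for each unit in the block
p_pool_off = 1.0 / denom              # P(pooling unit = 0): no unit fired

print("detection probs:\n", p_detect)
print("P(pool on) =", 1.0 - p_pool_off)
```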
PS. I found a reading list recommended for new LISA-lab students: http://www.datakit.cn/blog/2014/09/22/Reading_lists_for_new_LISA_students.html