
The convolution operation is often implemented via matrix multiplication

Jul 1, 2015 • Alex

Here you can find a detailed description of how this is done by chunking the data into patches (the im2col approach):

http://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/
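
To make the idea concrete, here is a minimal NumPy sketch of that approach (the helper names `im2col` and `conv2d_gemm` are mine, and it covers only a single-channel 2D "valid" cross-correlation):

```python
import numpy as np

def im2col(image, kh, kw):
    # unroll every kh x kw patch of a 2D image into one row of a matrix
    h, w = image.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = image[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_gemm(image, kernel):
    # 'valid' cross-correlation expressed as a single matrix product
    kh, kw = kernel.shape
    cols = im2col(image, kh, kw)       # shape: (out_h * out_w, kh * kw)
    out = cols @ kernel.ravel()        # this is the GEMM step
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    return out.reshape(out_h, out_w)

# quick check against scipy's reference implementation
from scipy.signal import correlate2d
img, ker = np.random.randn(8, 8), np.random.randn(3, 3)
assert np.allclose(conv2d_gemm(img, ker), correlate2d(img, ker, mode='valid'))
```

With batches and multiple channels the unrolled matrix has more columns, but the structure is the same: all the arithmetic lands in one large GEMM call, which is exactly what the post above explains.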


The Theano library, by the way, provides several implementations of convolution, among them CPUCorrMM (though this is not the main one):
http://deeplearning.net/software/theano/library/tensor/nnet/conv.html
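
For completeness, a minimal usage sketch based on the `conv2d` interface documented at the link above; Theano picks a concrete implementation under the hood, so the caller does not choose it explicitly:

```python
import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv2d

# inputs are 4D: (batch, channels, rows, cols);
# filters are 4D: (out_channels, in_channels, filter_rows, filter_cols)
x = T.tensor4('x')
w = T.tensor4('w')
f = theano.function([x, w], conv2d(x, w, border_mode='valid'))

out = f(np.random.randn(1, 1, 8, 8).astype('float32'),
        np.random.randn(1, 1, 3, 3).astype('float32'))
print(out.shape)  # (1, 1, 6, 6)
```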

Today convolutions typically use small filters and are implemented with the Winograd optimization; a one-dimensional sketch of the idea follows.
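
As an illustration, here is the smallest Winograd minimal-filtering instance, F(2,3), in one dimension: it produces two outputs of a 3-tap filter with 4 multiplications instead of the 6 that direct evaluation needs (the helper name `winograd_f23` is mine):

```python
import numpy as np

def winograd_f23(d, g):
    # Winograd F(2,3): 4 input values d, 3 filter taps g -> 2 outputs,
    # using 4 multiplications instead of 6
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    m1 = (d0 - d2) * g0
    # the filter-side sums below are computed once per filter and reused
    m2 = (d1 + d2) * (g0 + g1 + g2) / 2
    m3 = (d2 - d1) * (g0 - g1 + g2) / 2
    m4 = (d1 - d3) * g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])

d, g = np.random.randn(4), np.random.randn(3)
direct = np.array([d[:3] @ g, d[1:] @ g])  # plain sliding-window correlation
assert np.allclose(winograd_f23(d, g), direct)
```

The saving grows in 2D: the common F(2x2, 3x3) tile needs 16 multiplications where direct computation needs 36, which is why this pays off precisely for the small filters that dominate modern networks.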
