The convolution operation is often implemented via matrix multiplication.

Here you can find a detailed description of how it is done by rearranging the data into patch columns (the im2col approach) and calling a GEMM routine:
http://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/
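To make the idea concrete, here is a minimal NumPy sketch of the im2col + GEMM approach described in the article above (single channel, stride 1, valid padding; the function names `im2col` and `conv2d_gemm` are my own, not from the article):

```python
import numpy as np

def im2col(x, kh, kw):
    """Copy each kh-by-kw patch of x into a column (valid padding, stride 1)."""
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_gemm(x, k):
    """Cross-correlation via im2col: one matrix multiply does all the work."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return (k.ravel() @ im2col(x, kh, kw)).reshape(oh, ow)
```

The patch copying adds memory overhead, but the multiplication itself then runs through a highly tuned BLAS GEMM kernel, which is the whole point of the trick.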
The Theano library, by the way, provides many options for implementing convolution, among them CPUCorrMM (this is not the default path, however):
http://deeplearning.net/software/theano/library/tensor/nnet/conv.html
Today convolutions typically have small filters and are implemented with the Winograd optimization, which trades additions for multiplications to reduce the arithmetic cost of small-kernel convolutions.
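As an illustration of the Winograd idea, here is the classic 1-D F(2,3) case: two outputs of a 3-tap correlation computed with 4 multiplications instead of the direct method's 6 (a sketch for intuition only; real libraries apply the 2-D tiled variant):

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap cross-correlation
    over the 4 input samples d, using only 4 multiplications."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    # y0 = d0*g0 + d1*g1 + d2*g2, y1 = d1*g0 + d2*g1 + d3*g2
    return np.array([m1 + m2 + m3, m2 - m3 - m4])
```

The filter-side factors (the three sums of `g`) depend only on the kernel, so in a network they are precomputed once per filter, leaving four multiplications per pair of outputs at inference time.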