Artificial neural networks have become very popular in recent years, mostly due to their success in image and speech recognition tasks. While research in this area started more than 60 years ago and many different network architectures were developed during the first decades of research, the only architecture that became widely popular in applications is the MLP (multilayer perceptron): a parametrized multilayer function trained (optimized) with variations of gradient descent. Application-specific architectures based on the MLP training approach (such as convolutional and recurrent networks) emerged later.
A good way to build intuition about a neural network is to visualize it. While the dependencies modelled in machine learning, in particular by neural networks, are multidimensional, our visualization abilities are limited to three dimensions.
In the demo below you can play with a very small MLP with three inputs (x, y, z) and observe the resulting functions (as a reminder, an MLP is just a function) to see how flexible this model is.
The visualization requires a reasonably recent browser and GPU acceleration. It works best on laptops and PCs, but also runs on modern mobile devices.
If you see this message, your browser / device
doesn't support this beautiful demo.
Try a more powerful device!
In this demonstration you can play with a simple neural network in three spatial dimensions and visualize the functions the network produces (these are quite interesting despite the simplicity of the network; just click the 'randomize weights' button several times).
The animated surfaces are level surfaces of the neural network.
You can stop the animation and choose the level of the surface yourself. Note that the demo shows surfaces at all equidistant levels that differ by an integer:
f(x) = ..., level − 1, level, level + 1, ...
which is why the animation is periodic.
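The integer spacing of the displayed levels can be sketched as a small membership test (a hypothetical helper, not code from the demo): a point lies on some displayed surface when the function value differs from the animated level by approximately an integer, which is exactly what makes the animation periodic with period 1.

```python
def is_on_displayed_surface(value, level, eps=1e-2):
    """True when `value` (the network output at a point) differs from
    the animated `level` by approximately an integer, i.e. the point
    lies on one of the displayed surfaces: level - 1, level, level + 1, ..."""
    frac = (value - level) % 1.0
    return min(frac, 1.0 - frac) < eps

# f = 2.3 and level = 0.3 differ by exactly 2, so the point is shown
assert is_on_displayed_surface(2.3, 0.3)
# shifting the level by a whole period (1) shows the same set of surfaces
assert is_on_displayed_surface(2.3, 1.3)
```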
The sparks that follow the level surfaces highlight regions of rapid change (large gradient); a spark's color indicates the level of the surface it is following (red is higher). As the level changes, the color of the sparks changes as well (from blue to red).
The network has 4 inputs: 3 spatial inputs (x, y, z) plus an intercept term (1), 8 hidden neurons in a single hidden layer (h1, ..., h8), and a single output. The weights of the network are the connections between the inputs and the hidden neurons, and between the hidden neurons and the output. Tanh is used as the activation in the hidden layer.
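The architecture described above can be written out as a few lines of plain Python; this is a minimal CPU sketch of the function the demo evaluates in shaders (the weights here are illustrative, not taken from the demo):

```python
import math

def mlp(x, y, z, W_hidden, w_out):
    """Forward pass of the demo-sized MLP: 4 inputs (x, y, z and a
    constant 1 for the intercept), one tanh hidden layer of 8 neurons,
    and a single linear output."""
    inputs = [x, y, z, 1.0]
    hidden = [math.tanh(sum(w * v for w, v in zip(row, inputs)))
              for row in W_hidden]                    # 8 hidden activations
    return sum(w * h for w, h in zip(w_out, hidden))  # single output

# illustrative weights: 8 hidden neurons, each with 4 input weights
W_hidden = [[0.5, -0.3, 0.1, 0.0]] * 8
w_out = [0.25] * 8
value = mlp(1.0, 2.0, 0.0, W_hidden, w_out)
```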
The circles in the scheme correspond to the weights of the network and show the strength of the connection between neurons: red is +1, blue is −1. You can change the weights: hover the cursor over any circle and use the mouse wheel.
This demonstration employs a variation of the raymarching technique from computer graphics (also known as volume ray casting).
Good GPU acceleration is highly recommended to enjoy the picture, because in raymarching everything is computed in shaders (including the evaluation of the neural network itself).
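The idea behind raymarching can be sketched in a few lines; this is a simplified CPU version (fixed-step marching with a sign-change test, not the demo's actual shader code): step along each ray and report the first point where the scalar field crosses the target level.

```python
def raymarch(f, origin, direction, level=0.0, step=0.05, n_steps=200):
    """March along a ray in fixed steps and return the first point
    where f crosses `level` (detected as a sign change between steps),
    or None if the ray misses every surface."""
    px, py, pz = origin
    dx, dy, dz = direction
    prev = f(px, py, pz) - level
    for i in range(1, n_steps + 1):
        x = px + dx * step * i
        y = py + dy * step * i
        z = pz + dz * step * i
        cur = f(x, y, z) - level
        if prev * cur <= 0.0:      # crossed the level surface
            return (x, y, z)
        prev = cur
    return None

# a simple test field: marching toward a sphere of radius 1 hits it near x = -1
hit = raymarch(lambda x, y, z: x * x + y * y + z * z,
               origin=(-2.0, 0.0, 0.0), direction=(1.0, 0.0, 0.0), level=1.0)
```

In the demo the same loop runs per pixel in a fragment shader, with f being the neural network, which is why GPU acceleration matters so much.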