Marching neural network

Visualizing level surfaces of a neural network with raymarching

Artificial neural networks have become very popular in recent years, mostly because of their success in image and speech recognition. Research in this area started more than 60 years ago, and while many different network architectures were developed during the first decades of research, the only architecture that became popular in applications is the MLP (multilayer perceptron): a parametrized multilayer function trained (optimized) with variations of gradient descent. Later, application-specific architectures (such as convolutional and recurrent networks) emerged, based on the MLP approach to training.

Visualizing a neural network is a good way to understand its behavior. However, while the dependencies modelled in machine learning, in particular by neural networks, are multidimensional, our visualization abilities are limited to three dimensions.

In the demo below you can play with a very small MLP with three inputs (x, y, z) and observe the resulting functions (just to remind, an MLP is simply a function) to see how flexible it is.

The visualization requires a not-too-old browser and GPU acceleration. It works best on laptops and PCs, but also runs on modern mobile devices.


Weights of the network

[Interactive demo: red is +1, blue is −1; use the mouse wheel to change weights. Controls: "Randomize weights!", "Record a gif!" (takes a while), "Record a video!" (Chromium browsers only).]

In this demonstration you can play with a simple neural network in 3 spatial dimensions and visualize the functions the network produces (those are quite interesting despite the simplicity of the network; just click the 'Randomize weights!' button several times).

The animated surfaces are level surfaces of the neural network. You can stop the animation and choose the level yourself. Note that the demo shows the surfaces of all equidistant levels that differ by an integer:
f(x, y, z) ∈ { ..., level − 1, level, level + 1, ... }
which is why the animation is periodic.
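To make the periodicity concrete: the demo effectively measures the distance from f(x, y, z) to the nearest value in {level + k, k ∈ ℤ}, and shifting the level by exactly 1 selects the same set of surfaces. A minimal numpy sketch of that idea (the function name is made up for illustration):

```python
import numpy as np

def distance_to_nearest_level(f_values, level):
    """Distance from f(x, y, z) to the nearest rendered level
    (level + k for any integer k); a surface is drawn where
    this distance is zero."""
    return np.abs((f_values - level + 0.5) % 1.0 - 0.5)

f = np.array([-1.3, 0.2, 0.5, 2.7])   # sample values of the network
print(distance_to_nearest_level(f, level=0.25))
print(distance_to_nearest_level(f, level=1.25))  # identical: period is 1
```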

Sparks that follow the level surfaces highlight the regions where the surface changes rapidly as the level varies, i.e. where the gradient of the function is small: the surface sweeps through space at a speed inversely proportional to the gradient's magnitude. The sparks' color indicates the level of the surface they follow (red is higher); as the level changes, the color of the sparks changes as well (from blue to red).
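Concretely, if the displayed level increases at unit rate, a point on the level surface moves along the surface normal at speed 1/|∇f|. A small sketch of estimating that speed with central finite differences (illustrative only; the demo computes everything in shaders):

```python
import numpy as np

def gradient_norm(f, p, eps=1e-4):
    """Central finite-difference estimate of |grad f| at point p."""
    g = np.array([(f(p + eps * e) - f(p - eps * e)) / (2 * eps)
                  for e in np.eye(3)])
    return np.linalg.norm(g)

f = lambda p: p @ p                 # example scalar field
p = np.array([0.0, 0.0, -1.0])      # a point on the level-1 surface
print(1.0 / gradient_norm(f, p))    # surface speed: 0.5 here (|grad f| = 2)
```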

Architecture of the network

The network has 4 inputs: 3 spatial inputs (x, y, z) plus an intercept term (a constant 1), 8 hidden neurons in a single hidden layer (h1, ..., h8), and a single output. The weights of the network are the connections between the inputs and the neurons of the hidden layer, and between the hidden neurons and the output. Tanh is used as the activation in the hidden layer.
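For concreteness, here is a minimal numpy sketch of this architecture (the random weight values are placeholders; the demo evaluates the same computation in shaders):

```python
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.uniform(-1, 1, size=(8, 4))  # 8 hidden neurons x 4 inputs (x, y, z, 1)
w_output = rng.uniform(-1, 1, size=8)       # hidden -> output weights

def network(x, y, z):
    """Forward pass f(x, y, z) of the demo's MLP."""
    inputs = np.array([x, y, z, 1.0])       # intercept term appended
    hidden = np.tanh(W_hidden @ inputs)     # h1, ..., h8
    return w_output @ hidden                # single output

print(network(0.5, -0.2, 1.0))
```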

Manipulating the network

Circles in the scheme correspond to the weights (parameters) of the network and describe the strength of the connection between neurons: red is +1, blue is −1. You can change the weights: position the cursor over any circle and use the mouse wheel.

Visualization technique

This demonstration employs a variation of the raymarching technique from computer graphics (also known as volume ray casting).

Good GPU acceleration is highly recommended to enjoy the picture, because in raymarching everything is computed with shaders only (the neural network is evaluated in shaders as well).

Update: I've simplified the technique to ray scanning plus a chord-like method. It is both faster and produces a visually more consistent picture.
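Here is my reading of such a scheme, as a Python sketch (not the demo's actual shader code): step along the ray at fixed intervals until f − level changes sign, then refine the crossing with a few iterations of the chord (false position) method:

```python
import numpy as np

def march_ray(f, origin, direction, level, step=0.05, max_dist=10.0, refinements=8):
    """Find the first point along the ray where f equals the given level.

    Scan with fixed steps until f - level changes sign, then refine the
    crossing with the chord (false position) method. Returns the hit
    point, or None if the ray misses every surface within max_dist.
    """
    direction = direction / np.linalg.norm(direction)
    t_prev, g_prev = 0.0, f(origin) - level
    t = step
    while t <= max_dist:
        g = f(origin + t * direction) - level
        if g_prev * g < 0:                     # sign change: surface crossed
            a, ga, b, gb = t_prev, g_prev, t, g
            for _ in range(refinements):       # chord method: interpolate the root
                m = a - ga * (b - a) / (gb - ga)
                gm = f(origin + m * direction) - level
                if ga * gm < 0:
                    b, gb = m, gm
                else:
                    a, ga = m, gm
            return origin + m * direction
        t_prev, g_prev = t, g
        t += step
    return None

# Example: the level-1 surface of f(x, y, z) = x^2 + y^2 + z^2 is the unit sphere
f = lambda p: p @ p
hit = march_ray(f, origin=np.array([0.0, 0.0, -3.0]),
                direction=np.array([0.0, 0.0, 1.0]), level=1.0)
print(hit)  # approximately (0, 0, -1)
```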

Inspiration

Inspired by the works of Inigo Quilez on raymarching signed distance fields and his Shadertoy demos, and also by Johann Korndorfer's talk at NVScene.

Links

Made by Alex Rogozhnikov (blog). The source of this demo can be found in the repository.
