One of the things that artificial neural networks share with us is that their "brain" can be something of a black box. So in an attempt to better understand the inner workings of an image classification network, Google engineers turned the network upside down, created trippy images that took the internet by storm, and then put the source code on GitHub so that anyone can try their hand at understanding how the code works.

One of the several groups jumping on the opportunity to peek into an artificial neural network was Deepdreamr. Thousands of people submitted their photographs to the group's website and got a beautiful, trippy image back. "We gained a lot of knowledge about how the code itself works while we were working with it," team member James Bateman said.

Meanwhile, the neural net learned more and more from all the user-submitted images, adding every little new thing it saw to its library. "It wouldn't be the brain it is without the community because they fed it all the images. We couldn't possibly put that many images into it ourselves," team member Roz Woolverton said.

So we asked the kind people of Deepdreamr to feed some biological brain images to the machine and see how the artificial neural network tries to make sense of the neurons.

This is the Deep Dream version of one of the first drawings of a neuron, by Otto Deiters, published in 1865:

An artificial neural network simulates actual networks of neurons and how they are connected to each other. The network is trained by processing millions of images, gradually adjusting its network parameters until it classifies images correctly. Deep Dream hasn't been trained to recognize neurons, at least not extensively. Rather, it's been shown a large number of animal images, mainly dogs. So naturally, it tends to see dogs everywhere, like in this drawing of a dog's olfactory bulb, by Camillo Golgi in 1875:
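The "gradually adjusting its parameters" idea can be shown with a deliberately tiny stand-in: a single artificial neuron (logistic regression) nudged by gradient descent until it classifies toy inputs correctly. This is my own minimal sketch of the training principle, not Google's network; the real classifier does the same thing with millions of parameters and images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "images": 200 two-feature points, labeled 1 when the features sum past 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)  # the network "parameters" to be adjusted
b = 0.0

def predict(X, w, b):
    # Sigmoid activation: the neuron's confidence that the label is 1.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for step in range(2000):
    p = predict(X, w, b)
    grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                # gradual parameter adjustment
    b -= 0.5 * grad_b

accuracy = np.mean((predict(X, w, b) > 0.5) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

After a couple of thousand of these small corrections the neuron classifies nearly every point correctly, which is the whole of "training," just in miniature.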

And this is what happened to the drawing of cells in a pigeon's cerebellum, by Santiago Ramón y Cajal, 1899:

Deep Dream started as an experiment with Google's image recognition software. The engineers wanted to understand what the network actually "sees" at the intermediate levels of its processing. So they fed the software noisy pictures and asked it to recognize patterns in the images. Through repetition, the network enhanced any pattern it saw, until the image was completely altered. This is how in Deep Dream even a small worm can be turned into a dragon. The original image for the results below is of a glowing C. elegans, made by neurobiologist Martin Chalfie in the 1990s. The worm glows because scientists inserted a piece of DNA coding for green fluorescent protein into the neurons of the worm, causing the cells to produce the glowing protein and light up under blue light.
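The "enhance whatever it sees, then repeat" loop can be sketched in a few lines. This is a toy stand-in I wrote for illustration, not Google's actual code: the "layer" here is a single hand-made stripe detector, and each iteration nudges a noisy image in the direction that increases the detector's activation (gradient ascent). The real software does exactly this, but backpropagates through a deep network instead.

```python
import numpy as np

rng = np.random.default_rng(42)

# The "feature" our pretend layer responds to: horizontal stripes.
pattern = np.zeros((8, 8))
pattern[::2, :] = 1.0

img = rng.normal(0.0, 0.1, (8, 8))  # start from a noisy picture

def activation(img):
    # How strongly the layer "sees" stripes in the image.
    return float(np.sum(img * pattern))

for step in range(100):
    grad = pattern  # d(activation)/d(img) for this simple linear layer
    # Gradient *ascent*: amplify whatever the layer responds to.
    img += 0.1 * grad / (np.abs(grad).max() + 1e-8)

print(f"final activation: {activation(img):.1f}")
```

Started from near-zero activation, the image ends up dominated by the stripes the detector "saw" in the noise; swap the stripe detector for a dog-recognizing layer and the same loop fills a worm photograph with dragons.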

The following image is the Deep Dream version of a slice of an adult mouse brain, lit up thanks to fluorescent-protein-producing neurons:
(Original image via Frontiers in Neural Circuits)

Neural networks consist of several layers of artificial neurons. The first layers, which take in the raw image, extract its basic features, such as edges and corners. Each layer then sends its results to the next layer, which looks for progressively higher-level features of the image, like shapes and objects. The highest layers put those results together to recognize a complex object such as an animal or a building.
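The edges-to-shapes-to-objects pipeline above amounts to composing simple layers, each transforming the previous layer's output. A minimal sketch with made-up random weights (so the "detectors" are illustrative, not trained):

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, W):
    # One layer: a linear combination of inputs followed by a
    # ReLU nonlinearity (negative responses are clipped to zero).
    return np.maximum(0.0, W @ x)

x = rng.random(16)                     # flattened "pixels"
W_edges  = rng.normal(size=(12, 16))   # low layer: edge/corner detectors
W_shapes = rng.normal(size=(8, 12))    # middle layer: shapes built from edges
W_object = rng.normal(size=(3, 8))     # top layer: object scores from shapes

h1 = layer(x, W_edges)    # "edges and corners"
h2 = layer(h1, W_shapes)  # "shapes"
scores = W_object @ h2    # scores for, say, three object classes
print("predicted class:", int(np.argmax(scores)))
```

Deep Dream's trick is simply choosing which of these intermediate outputs (h1, h2, or the top scores) to amplify: low layers yield swirls and edges, high layers yield dogs.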

The following image is taken from one of the intermediate layers of the neural net. The original image is of neurons in a mouse hippocampus labeled using the Brainbow method:
(Original image by Jeff Lichtman and colleagues via Center for Brain Science)

This is what Golgi's drawing looks like at the same layer:

Recently, scientists figured out how to make brains transparent using a method called CLARITY. The method allows scientists to look at glowing neurons in an animal's whole, three-dimensional brain without having to cut it into slices. This is what the machine saw:
(Original image by Karl Deisseroth and colleagues, Stanford University)

Finally, we had the machine take a look at a beautiful sci-art work by neuroscientist Luis de la Torre-Ubieta of UCLA. The original image consists of 5.3-micrometer thick slices of transparent mouse brain with green-glowing neurons that are color-coded by their depth from red on top to orange, yellow, purple, blue and green at the very bottom.