One of the most popular applications in deep learning is neural style transfer, in which an image is redrawn in a distinctive style. There are many ways to use neural style transfer, but perhaps the most common experiment is to refashion a picture in the style of a famous artist.

This blog post shows how a photograph of Ginger, a yellow Labrador retriever, can be redrawn in the style of Vincent van Gogh’s Wheatfield with Crows. We will see that neural style transfer replicates Van Gogh’s artistic patterns in distinct ways.

Neural Style Transfer

The basic idea of neural style transfer is to take a source picture, called the content image, and redraw it in the design of a style image. The generated image should retain the basic structure of the content image but have the colors and patterns of the style image.

Neural style transfer content image, style image, and generated image.

Here we take a picture of Ginger (top left) and redraw her in the style of Van Gogh’s Wheatfield with Crows (top right). In the generated picture, Ginger is clearly identifiable, but the design of the image has changed dramatically to imitate Van Gogh’s style.

The Convolutional Neural Network

Neural style transfer runs the images through a pre-trained convolutional neural network and modifies the generated image according to predefined criteria. Here, I use the VGG-19 network, with Keras and TensorFlow code derived from Andrew Ng’s convolutional neural networks course.

Another Keras implementation of the VGG-19 model is located here.
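
To make this concrete, here is a minimal sketch of the setup, assuming TensorFlow 2.x and its bundled VGG-19 (whose layers are named 'block1_conv1', 'block2_conv1', and so on, rather than 'conv1_1' as in the course code):

	import tensorflow as tf

	# Load the pre-trained VGG-19 without its classifier head and freeze its weights.
	vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
	vgg.trainable = False

	# Intermediate layers whose activations the style transfer will compare.
	layer_names = ['block1_conv1', 'block2_conv1', 'block3_conv1',
		'block4_conv1', 'block5_conv1', 'block5_conv2']
	outputs = [vgg.get_layer(name).output for name in layer_names]

	# A model that maps an input image to the chosen intermediate activations.
	feature_extractor = tf.keras.Model(inputs=vgg.input, outputs=outputs)

Passing a preprocessed image through feature_extractor returns the list of activations that the cost terms below compare.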

Broad architecture of the VGG-19 convolutional neural network.

To create the generated image, a neural style transfer model evaluates the intermediate outputs of a convolutional neural network such as the VGG-19. The model repeatedly updates the generated image according to a cost function such as this one:

Neural style transfer cost function.
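
Written out, this is the weighted sum from Gatys et al.: J(G) = alpha * J_content(C, G) + beta * J_style(S, G). A minimal sketch in Python, with alpha and beta set to illustrative values only:

	def total_cost(content_cost, style_cost, alpha=10, beta=40):
		# Weighted sum of the content and style terms; alpha and beta control the tradeoff.
		return alpha * content_cost + beta * style_cost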

The content portion of the cost function is relatively straightforward: it pushes the generated image in a direction where the activations at a chosen layer are closer to the activations produced by the content image.
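
A sketch of that content term, assuming a_C and a_G hold the chosen layer’s activations for the content image and the generated image (the 1/(4 * n_H * n_W * n_C) scaling follows the course assignment; other normalizations also work):

	import tensorflow as tf

	def content_cost(a_C, a_G):
		# Mean squared difference between the chosen layer's activations,
		# assuming activations of shape (n_H, n_W, n_C) with no batch dimension.
		n_H, n_W, n_C = a_G.shape
		return tf.reduce_sum(tf.square(a_C - a_G)) / (4 * n_H * n_W * n_C)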

The style part of the objective function relies on correlations between the channels (feature maps) of a convolutional layer. For example, if the fuzzy or indistinct sections of the style image tend to be green, as one would see with a tree or forest, the neural style transfer model might push the generated image in a direction where unclear areas also take on a green tint.
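
Those correlations are usually captured with a Gram matrix: flatten each channel of a layer’s activations into a vector and take every pairwise dot product between channels. A sketch of the per-layer style term, with the scaling again following the course assignment:

	import tensorflow as tf

	def gram_matrix(a):
		# a has shape (n_H, n_W, n_C); each column of `features` is one channel
		# flattened over the spatial dimensions.
		n_C = a.shape[-1]
		features = tf.reshape(a, [-1, n_C])
		return tf.matmul(features, features, transpose_a=True)  # (n_C, n_C)

	def layer_style_cost(a_S, a_G):
		# Compare the channel correlations of the style image and the generated image.
		n_H, n_W, n_C = a_G.shape
		G_S, G_G = gram_matrix(a_S), gram_matrix(a_G)
		return tf.reduce_sum(tf.square(G_S - G_G)) / (4 * n_C**2 * (n_H * n_W)**2)

The per-layer costs are then combined using the layer weights discussed in the next section.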

Variations in Neural Style Transfer

Convolutional neural networks have multiple layers, and the generated image will differ depending on which layers the model focuses on. For instance, the VGG-19 network has 19 weight layers, with its convolutional layers grouped into five blocks, and Python code like this gives each block equal importance in the style cost:

	# Equal weight (0.2 each) on the first conv layer of each of VGG-19's five blocks.
	style_weights = [
		('conv1_1', 0.2),
		('conv2_1', 0.2),
		('conv3_1', 0.2),
		('conv4_1', 0.2),
		('conv5_1', 0.2)]

Putting more attention on the early layers should emphasize the colors and fine patterns of the style source. Placing importance on the later layers will highlight the larger elements of the style image, and sometimes entire objects from the style picture will appear in the generated drawing.
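
For example, a weighting skewed toward the early blocks (the values here are arbitrary but still sum to 1.0) pushes the result toward fine textures and color, while shifting the weight to later blocks favors larger structures:

	# Heavier weight on the early blocks; reverse the values to favor later blocks.
	style_weights = [
		('conv1_1', 0.4),
		('conv2_1', 0.3),
		('conv3_1', 0.15),
		('conv4_1', 0.1),
		('conv5_1', 0.05)]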

Another way to create different results is to change the alpha and beta parameters of the cost function. This will shift the relative importance of the content image versus the style source. More focus on the content cost results in a generated image that looks closer to the original picture. Putting more weight on the style cost produces a drawing that more closely resembles the artistic source.
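
Putting the pieces together, it is the generated image itself, not the network weights, that gets optimized. A sketch of one update step, assuming generated_image is a tf.Variable initialized from the content picture and compute_content_cost / compute_style_cost are hypothetical helpers built on the feature extractor and cost functions sketched above:

	import tensorflow as tf

	optimizer = tf.keras.optimizers.Adam(learning_rate=0.02)

	def train_step(generated_image, alpha=10, beta=40):
		with tf.GradientTape() as tape:
			# compute_content_cost and compute_style_cost are assumed helpers
			# that run the feature extractor and apply the cost terms above.
			J = (alpha * compute_content_cost(generated_image)
				+ beta * compute_style_cost(generated_image))
		grads = tape.gradient(J, generated_image)
		# Update the pixels of the generated image, not the network weights.
		optimizer.apply_gradients([(grads, generated_image)])
		generated_image.assign(tf.clip_by_value(generated_image, 0.0, 1.0))
		return J

Repeating this step many times produces generated images like the ones shown in the next section.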

Results

Ginger painted in the style of Vincent van Gogh.

A closer look at the generated image shows how Ginger’s picture has been redrawn in Van Gogh’s style. Vincent van Gogh often used the impasto approach, in which “paint is thickly laid on a surface, so that brushstrokes or palette knife marks are visible.” (MoMA)

With this technique, it would be difficult to draw straight lines, sharp edges, or uniform colors. We see this by zooming in on a section of the original image showing the corner of a table and metal rods from a fence:

A fence in the style of Vincent van Gogh.

The neural style transfer altered the sharp edges of the table and the straight lines of the fence into wavy, swirling patterns as if an artist had drawn them with thick paint.

Another example of style transfer is shown in Ginger’s face:

Ginger’s face in the style of Vincent van Gogh.

The original picture contains areas where the color is mostly the same. But the transformed drawing changes these areas of even coloring into bumpy, rippled patterns that one would see from a real-life artist working with paint and a paintbrush.

Despite these style changes, Ginger’s posture and expression still come through in the generated image. This is the intent of neural style transfer: to maintain the basic elements of a scene, but to transform the picture in some way that is artistically pleasing or has some other usefulness.

Conclusion

Experimenting with neural style transfer is a fun way to become more familiar with the inner workings of convolutional neural networks. This blog post showed how neural style transfer can simulate the expressive style of Vincent van Gogh in clear ways.

Neural style techniques produce dazzling images, and it is likely that many of the current hobbyist efforts will eventually work their way into advertising, film production, and other media.


References

A Neural Algorithm of Artistic Style
Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
August 26, 2015

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
September 4, 2014

Convolutional Neural Networks
The fourth course of the Deep Learning Specialization created by deeplearning.ai.

Wheatfield with Crows
Vincent van Gogh