On June 17, 2015, Alexander Mordvintsev, Christopher Olah, and Mike Tyka published a post on the Google Research Blog with the intriguing title "Inceptionism: Going Deeper into Neural Networks". The post discussed using neural networks, a machine learning technique, to identify objects in any picture, a process that came to be known as "Deep Dreaming". Since then, the concept of machines dreaming has gone wild!
A neural network is a machine learning model used in fields like speech recognition, character recognition, etc. The model requires supervised learning at first: during this stage, it is trained using known, labeled samples. As the network gets more answers correct, the connections leading to correct answers are strengthened and the connections leading to incorrect answers are weakened. Once the "training session" is over, the network is used on new samples, where the answer is not provided.
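The training loop described above can be sketched with a single artificial neuron. This is a minimal, hypothetical illustration (not Google's code): a perceptron learns the logical OR function from labeled samples, strengthening the weights that lead to correct answers and weakening those that lead to wrong ones.

```python
# Known, labeled samples for the "training session": inputs and answers.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # The neuron fires (outputs 1) if its weighted input is positive.
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

# Training: repeatedly show known samples and nudge the connections.
# A wrong answer weakens or strengthens the weights in the direction
# that would have produced the correct answer.
for epoch in range(20):
    for x, target in samples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

# After training, the neuron answers correctly without being told.
print([predict(x) for x, _ in samples])  # prints [0, 1, 1, 1]
```

Real networks stack thousands of such neurons in layers and train them with backpropagation, but the strengthen-correct / weaken-incorrect idea is the same.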
Google has created a program aptly named "DeepDream". In their own words:
Google has spent the last few years teaching computers how to see, understand, and appreciate our world. It’s an important goal that the search giant hopes will allow programs to classify images just by “looking” at them.
And this is where Google's DeepDream ideas originate. In simple words: you give an AI program a set of images and tell it what those images contain (what objects: dogs, cats, mountains, bicycles, ...), then give it a random image and ask it what objects it can find in that image. The program then starts transforming the image until it finds something similar to what it already knows, and thus you see strange artifacts morphing into the dreamed image (like eyes or human faces morphing into the image of a pyramid).
In other words, we are encouraging the machine to let its imagination run wild with pareidolia, the phenomenon of seeing a familiar shape or form in an unrelated object, like seeing a face on the surface of the moon. The algorithm tries to find objects in a picture, and any resemblance is amplified more and more until it becomes a familiar object. Of course, I wanted to test this out. You can see the deep dreaming effects on my current LinkedIn profile picture.
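That "amplify any resemblance" loop can be shown with a toy example. This is only a sketch under strong simplifications: the "network" here is a single linear feature detector over a hypothetical four-pixel image, and gradient ascent on the detector's response plays the role DeepDream gives to maximizing a real convolutional layer's activations.

```python
feature = [1.0, -1.0, 1.0, -1.0]   # a pattern the detector "knows"
start = [0.2, 0.1, 0.3, 0.2]       # a random-ish starting image
image = list(start)

def response(img):
    # How strongly the detector fires on this image.
    return sum(f * p for f, p in zip(feature, img))

# DeepDream's core loop: gradient ascent on the input image, so that
# whatever the detector faintly sees gets stronger and stronger.
# For a linear detector the gradient w.r.t. each pixel is simply the
# feature pattern itself.
step = 0.1
for _ in range(10):
    image = [p + step * f for p, f in zip(image, feature)]

# The faint resemblance has been amplified into the pattern itself:
# the detector now fires far more strongly than on the original image.
print(response(start), response(image))
```

A real DeepDream run does exactly this, but the gradient comes from backpropagating a deep network's layer activations into the pixels, so the amplified "resemblances" are eyes, dogs, and faces rather than a four-number pattern.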
You can see that my black hair was used to weave in imaginary eyes. Interestingly, my beard became the face of a dog. The algorithm draws on the database of pictures it was trained on (over one million images), which is why most of the dreamed shapes are common animals and objects.
I thought I would then test the algorithm with Card III from the Rorschach inkblot test. This card is commonly seen as two human or ape-like figures interacting with each other.
The algorithm was able to identify two dogs with weird limbs, another dog in the middle, and two containers of some sort.
This program is very entertaining and provides insights into how the machine actually sees, or dreams, as Google likes to put it. Beyond the entertaining and deep philosophical aspects of a machine dreaming, there are practical uses for this.
What if the machine could imagine a picture through the eyes of a famous artist? Using a similar algorithm, we can have a "new" Vincent van Gogh painting of the Eiffel Tower. I have taken the following example from here. You can see that the clouds were enhanced with Van Gogh's strong brush strokes.
More examples can be found here. The following is the Starry Night version of San Francisco's Golden Gate Bridge.
We can apparently go one step further, and imagine what a painting of the Golden Gate Bridge would look like if it were part Vincent van Gogh's Starry Night and part Edvard Munch's The Scream.
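Under the hood, these images come from neural style transfer (Gatys et al.), which minimizes a weighted sum of a content loss and a style loss built from Gram matrices of CNN feature activations. The sketch below is illustrative only: the tiny 2x3 "activation" matrices are made-up stand-ins for real CNN features, and blending two styles is just a weighted sum of two style losses.

```python
def gram(activations):
    # Style is captured by feature correlations: the Gram matrix entry
    # G[i][j] is the dot product of feature maps i and j.
    return [[sum(a * b for a, b in zip(row_i, row_j)) for row_j in activations]
            for row_i in activations]

def mse(a, b):
    # Mean squared error between two matrices of the same shape.
    flat_a = [x for row in a for x in row]
    flat_b = [x for row in b for x in row]
    return sum((x - y) ** 2 for x, y in zip(flat_a, flat_b)) / len(flat_a)

# Stand-in activations for the generated image, the content photo, and
# two style paintings (hypothetical numbers, purely for illustration).
generated = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
content   = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
van_gogh  = [[2.0, 0.0, 2.0], [0.0, 2.0, 0.0]]
munch     = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0]]

# Content loss keeps the scene recognizable; style loss matches the
# Gram matrices. A half-and-half painting is a 50/50 blend of styles.
content_loss = mse(generated, content)
style_loss = 0.5 * mse(gram(generated), gram(van_gogh)) \
           + 0.5 * mse(gram(generated), gram(munch))
total_loss = content_loss + 100.0 * style_loss  # style weight is a knob
```

A real implementation would then adjust the generated image's pixels by gradient descent to drive total_loss down, which is what produces the painted-looking results above.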
I am blown away by this!
Can we go from Stills to Animation?
Apparently, it does not stop here. Mbartoli has created a program that converted a GIF of a scene from 2001: A Space Odyssey into the artistic style of Edvard Munch's The Scream.
My Favorite Example of Machine Learning in a Movie or a TV Show:
I have been enjoying learning Machine Learning. I always keep my eyes open for movies or TV shows that feature some form of machine learning. Recently, I came across "Be Right Back", the first episode of season 2 of the British TV show Black Mirror. If you have not seen the show, I highly recommend it. I will be revealing the main plot of the episode here, so if you have not seen it, you may want to stop reading now.
The premise of the episode is that the main character, Martha, loses her boyfriend, Ash, after he dies tragically in a car accident. Ash was a heavy social media user. The brilliant notion that the episode puts forth is this: can a computer be trained on all the social media left behind by the boyfriend, in the form of tweets, blog posts, videos, etc., to create an AI (Artificial Intelligence) boyfriend? The AI boyfriend looks and sounds like Ash. It even makes jokes as Ash would. Maybe this is the final frontier? Man living on through Artificial Intelligence!
What the future holds – DeepDreamFlix?
In the future, you can turn the DeepDreamFlix app on, and type in “Alfred Hitchcock + Jackie Chan + Western” and the app will create a brand new Western movie directed in the style of Hitchcock starring Jackie Chan. 🙂
This post was written to introduce the cool concept of neural networks, and to encourage the reader to get interested in Machine Learning. There are many free and cost-effective courses online. I highly recommend Udacity and Coursera.
Always keep on learning…