Interactive Projection

Forest Kaleidoscope

Scope
Experience Design/Programming
Timeline
Jan - Mar 20

Summary

Forest Kaleidoscope is a machine-learning-generated animation. The visuals are produced by models trained on thousands of artists' paintings and tree images. By rotating their phones, users can switch between different kaleidoscope styles.

Deconstruction

The Persistence of Memory

Waves

Endless Tunnel

Space Pyramid

Black Hole

Texture 03

Inspiration

“A few years ago I lived on a farm in the middle of nowhere in Texas for a while. There was this old well on the farm that was still functional in the context of modern society. One night, I happened upon the well. Instead of a dark, spooky abyss, I saw the reflection of the starry sky on the surface of the water in the well. As I put my hand into the water and moved it around to create ripples, I found that I was interacting with the intangible stars with my hand through the medium of the water. Thus, I came to the idea of enabling people to play with the stars by constructing such a space well.”

Cooperation

I cooperated with Cass Yao on this project. In this two-person team, I served as the technical lead, in charge of programming the visual effects and designing the interaction of the installation.

This project was shown in the ITP winter show 2018. Link to interview

Experience

Imagine that you are lost in the middle of a forest in Europe, with a magic compass in your hand. If you turn toward Moscow, the forest transforms into the style of Kandinsky (who was born in Moscow); if you turn toward Málaga, it transforms into the style of Picasso. The four artists in the four directions are Monet, Picasso, Roerich, and Kandinsky.

Process

To make this work, I mainly used two different machine learning algorithms. The first is StyleGAN, which generates the forest images. The generated images form a giant 2D plane called the “latent space”. Every image in the latent space is very similar to its neighbors, but slightly different.

I use code to take a random image in the latent space as a starting point, and then extract a second image near the first one. Taking these images successively and playing them one after another over time gives a seamless animation.
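The latent-space walk described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the latent dimension, step size, and frame count are assumptions, and each resulting vector would be fed to a StyleGAN generator (not included here) to render one frame.

```python
import numpy as np

def latent_walk(dim=512, n_frames=120, step=0.05, seed=0):
    """Random walk through latent space: each frame's latent vector is a
    small step away from the previous one, so the decoded images change
    smoothly and the sequence plays back as a seamless animation."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(dim)              # random starting point
    frames = [z.copy()]
    for _ in range(n_frames - 1):
        z += step * rng.standard_normal(dim)  # nudge toward a nearby point
        frames.append(z.copy())
    return np.stack(frames)

# Each row is one latent vector; a generator G(z) would turn it into an image.
path = latent_walk()
```

Because each step is small relative to the spread of the latent space, consecutive frames stay visually close, which is what makes the animation read as continuous motion rather than a slideshow.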

After that, I feed the frames into “Adaptive Style Transfer”, which gives me four completely different visual styles corresponding to the four artists.
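Structurally, this stage is an offline batch job: every animation frame is passed through each artist's style-transfer model, producing four parallel stylized sequences. A sketch of that loop, with placeholder names (the real Adaptive Style Transfer API is not shown here):

```python
def render_styles(frames, models):
    """Apply each artist's style-transfer function to every frame.

    frames: list of images (any representation the models accept)
    models: dict mapping artist name -> stylize(image) callable
    returns: dict mapping artist name -> list of stylized frames
    """
    return {artist: [stylize(f) for f in frames]
            for artist, stylize in models.items()}
```

In practice each `stylize` callable would wrap a trained per-artist network; here it is left abstract so the pipeline shape stays visible.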

Inspiration

The inspiration for this work came from the kaleidoscopes I played with during my childhood. The way to play with a kaleidoscope is simple: turning it changes the image. To make users associate the experience with a kaleidoscope, I made the interaction on the phone a compass. I replaced the four directions of a traditional compass with the four artists. As users turn to a different direction, the visual style changes accordingly.
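The compass interaction amounts to quantizing the phone's heading into four 90° sectors, one per artist. A minimal sketch of that mapping; the particular artist-to-direction assignment below is illustrative (the installation maps each artist to the direction of a city associated with them, e.g. Moscow for Kandinsky):

```python
def style_for_heading(degrees):
    """Quantize a compass heading (0-360°) into one of four 90° sectors,
    each centered on a cardinal direction, and return that sector's artist.
    The artist order here is an assumed example, not the installation's."""
    styles = ["Monet", "Kandinsky", "Picasso", "Roerich"]
    sector = int(((degrees % 360) + 45) // 90) % 4  # shift so sectors center on N/E/S/W
    return styles[sector]
```

Centering each sector on a cardinal direction (rather than starting it there) means a user pointing roughly north, within ±45°, still gets the "north" artist, which keeps the interaction forgiving of imprecise aim.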

Reflection

Machine learning algorithms enable me to disassemble the elements in the artists’ works, so that I can use those elements for re-creation.

Through this project, I want to explore a question: is it the artists’ distinguishable painting styles that make their works invaluable?

Can machine learning algorithms really master the visual language of those artists?