Monday, June 6, 2016

Terence Broad - Convolutional Autoencoder: Reconstructing films with artificial neural networks


The neural algorithm learns to distinguish data from the film Blade Runner from surrounding data that does not belong to the film. It then has to reconstruct the film as faithfully as it can. The result is a wonderfully distorted version of Blade Runner. This is essentially how we perceive as well: we first learn to tell, say, a chair apart from its surroundings (to grasp that the chair and the floor it touches are not one continuous entity), and only then do we reconstruct the chair's "true" appearance.
Gilgame, a character in the novel I am working on, perceives reality in a very similar, distorted way, so I feel a little as if the idea had been stolen from right under my nose.

A guy trained a machine to "watch" Blade Runner. Then things got seriously sci-fi.

Broad decided to use a type of neural network called a convolutional autoencoder. First, he set up what's called a "learned similarity metric" to help the encoder identify Blade Runner data. The metric had the encoder read data from selected frames of the film, as well as "false" data, or data that's not part of the film. By comparing the data from the film to the "outside" data, the encoder "learned" to recognize the similarities among the pieces of data that were actually from Blade Runner. In other words, it now knew what the film "looked" like.
Once it had taught itself to recognize the Blade Runner data, the encoder reduced each frame of the film to a 200-digit representation of itself and reconstructed those 200 digits into a new frame intended to match the original. (Broad chose a small file size, which contributes to the blurriness of the reconstruction in the images and videos I've included in this story.) Finally, Broad had the encoder resequence the reconstructed frames to match the order of the original film.
In addition to Blade Runner, Broad also "taught" his autoencoder to "watch" the rotoscope-animated film A Scanner Darkly. Both films are adaptations of famed Philip K. Dick sci-fi novels, and Broad felt they would be especially fitting for the project (more on that below).
more here:
http://www.vox.com/2016/6/1/11787262/blade-runner-neural-network-encoding
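As a rough illustration of the encode, reconstruct, resequence pipeline described in the excerpt above, here is a minimal sketch in Python/NumPy. The frame sizes, the random weights, and the `encode`/`decode` functions are my own stand-ins; Broad's actual model was a trained convolutional network, not random linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (not Broad's real model): tiny 64x64 RGB
# "frames" and the 200-number code mentioned in the article.
FRAME_DIM = 64 * 64 * 3   # flattened pixel count
CODE_DIM = 200            # the bottleneck representation

# Random weights stand in for a trained encoder and decoder.
W_enc = rng.normal(0, 0.01, (CODE_DIM, FRAME_DIM))
W_dec = rng.normal(0, 0.01, (FRAME_DIM, CODE_DIM))

def encode(frame):
    """Compress a flattened frame down to 200 numbers."""
    return np.tanh(W_enc @ frame)

def decode(code):
    """Expand the 200-number code back into a full-size frame."""
    return W_dec @ code

# A "film" of 10 fake frames; reconstruct each one and keep
# the reconstructions in the original order (resequencing).
film = [rng.random(FRAME_DIM) for _ in range(10)]
reconstructed_film = [decode(encode(frame)) for frame in film]

print(encode(film[0]).shape)    # (200,)
print(len(reconstructed_film))  # 10
```

The point of the sketch is only the shape of the pipeline: every frame, however large, is squeezed through the same 200-number bottleneck before being rebuilt.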

In this blog I detail the work I have been doing over the past year in getting artificial neural networks to reconstruct films — by training them to reconstruct individual frames from films, and then getting them to reconstruct every frame in a given film and resequencing it.
The type of neural network used is an autoencoder. An autoencoder is a type of neural net with a very small bottleneck: it encodes a data sample into a much smaller representation (in this case a 200-digit number), then reconstructs the data sample to the best of its ability. The reconstructions are in no way perfect, but the project was more of a creative exploration of both the capacity and limitations of this approach.
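The idea of learning to reconstruct through a bottleneck can be made concrete with a toy training loop: a linear autoencoder written from scratch in NumPy that learns, by gradient descent on the squared reconstruction error, to push random "frames" through a bottleneck far smaller than the input. The sizes, the linear architecture, and the learning rate are my own simplifications, not Broad's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: 50 "frames" of 16 pixels, squeezed through
# a 4-number bottleneck (Broad's real code had 200 numbers).
N, D, K = 50, 16, 4
X = rng.random((N, D))

W_enc = rng.normal(0, 0.1, (K, D))   # encoder weights
W_dec = rng.normal(0, 0.1, (D, K))   # decoder weights

lr = 0.01
losses = []
for step in range(300):
    Z = X @ W_enc.T          # encode: N x K codes
    X_hat = Z @ W_dec.T      # decode: N x D reconstructions
    err = X_hat - X
    losses.append(np.mean(np.sum(err ** 2, axis=1)))
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = 2 * err.T @ Z / N
    grad_enc = 2 * (err @ W_dec).T @ X / N
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(losses[0] > losses[-1])  # True: reconstruction error falls
```

Because the bottleneck is smaller than the input, the reconstruction can never be exact; the network is forced to keep only what matters most, which is exactly the source of the blur in Broad's reconstructed frames.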
This work was done as the dissertation project for my research masters (MSci) in Creative Computing at Goldsmiths.
more here:

https://medium.com/@Terrybroad/autoencoding-blade-runner-88941213abbe#.zh8886x8w

Autoencoders (UFLDL Tutorial)