Two Minute Papers – Synthesizing Sound From Collisions

Two Minute Papers – Adaptive Cloth Simulations

Two Minute Papers – Creating Photographs Using Deep Learning

Two Minute Papers – Reconstructing Sound From Vibrations

Two Minute Papers – Adaptive Fluid Simulations

Two Minute Papers is back with some adaptive fluid simulation awesomeness!
Two Minute Papers – Manipulating Photorealistic Renderings

Photorealistic rendering (also called global illumination) enables us to see how digital objects would look in real life. It is an amazingly powerful tool with which a professional artist can create breathtaking images or animations. However, for the longest time, artists in the movie industry didn't use it because it did not offer much artistic freedom – after all, it works according to the laws of physics, which are exact. This piece of work makes it possible to apply artistic edits to photorealistic renderings easily and intuitively. I believe this one has the potential to single-handedly change the landscape of photorealistic rendering on a production scale.

Two Minute Papers – Digital Creatures Learn To Walk

In this episode, we are going to talk about computer animation, animating bipeds in particular. If we have the geometry of a creature, we need to specify the bones, the muscle routings and the muscle activations to make it able to walk. Depending on the body proportions and types, it may require quite a bit of trial and error to build muscle layouts so the creature doesn't collapse. Making it walk is even more difficult! This piece of work not only makes that happen for a variety of bipedal creatures, but the results are also robust to a variety of target walking speeds, uneven terrain and other unpleasant difficulties.

Two Minute Papers – Hydrographic 3D Printing

3D printing is a technique to recreate digital objects in real life. This technology is mostly focused on reproducing the digital geometry itself – colored patterns (textures) still remain a challenge, and we only have very rudimentary technology to reproduce them.

Hydrographic printing on 3D surfaces is a really simple technique: you place a film in water, spray it with a chemical activator, and dip the object into the water.

However, since the object stretches the film on contact, the technique is not very accurate, and it is only useful for putting repetitive patterns on objects.

Computational Hydrographic Printing is a technique that simulates the physical forces exerted on the film as your desired object is immersed into the water. It then creates a new image map that takes all of these distortions into account, and this image you can print with your home inkjet printer. The results are remarkably accurate – close to indistinguishable from the digitally designed object.
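The core trick can be sketched in a few lines: once the simulation tells you where each point of the film ends up on the object, you build the printed image by inverse lookup – each film pixel gets the color of the texture point it will eventually land on. Below is a toy illustration of that pre-distortion step; the `mapping` function is a made-up stand-in for the simulated film deformation, not the paper's actual solver.

```python
def prewarp(texture, mapping, size):
    """Pre-distort a texture so it looks correct after the film stretches.

    `texture`       -- the desired look, as a 2D grid of colors
    `mapping(u, v)` -- where film point (u, v) lands on the texture after
                       immersion (a stand-in for the simulated deformation)
    `size`          -- (width, height) of the image to print on the film
    """
    w, h = size
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Ask the (simulated) deformation where this film point ends up,
            # then fetch the desired color from there (nearest neighbor).
            s, t = mapping(x / (w - 1), y / (h - 1))
            tx = min(len(texture[0]) - 1, round(s * (len(texture[0]) - 1)))
            ty = min(len(texture) - 1, round(t * (len(texture) - 1)))
            row.append(texture[ty][tx])
        out.append(row)
    return out

# With an identity mapping (no stretching), the printed film is the texture itself.
tex = [["R", "G"], ["B", "W"]]
print(prewarp(tex, lambda u, v: (u, v), (2, 2)))
```

With a real deformation map, the printed image looks warped on paper but snaps into the intended pattern once the film wraps around the object.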

Two Minute Papers – Deep Neural Network Learns Van Gogh’s Art

Artificial neural networks were inspired by the human brain and simulate how neurons behave when they are shown a sensory input (e.g., images, sounds, etc.). They are known to be excellent tools for image recognition and many other problems beyond that – they also excel at weather prediction, breast cancer cell mitosis detection, brain image segmentation and toxicity prediction, among many others. Deep learning means that we use an artificial neural network with multiple layers, making it even more powerful for more difficult tasks.
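To make the "multiple layers" idea concrete, here is a toy sketch of a feedforward pass, with random weights standing in for anything a real network would learn. Every name here is illustrative; this is not any specific network from the paper.

```python
import math
import random

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per neuron, then a tanh squash."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    """'Deep' simply means the signal passes through several layers in turn."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

random.seed(0)
def rand_layer(n_in, n_out):
    """Random weights as a placeholder for learned ones."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

# A tiny 3-layer network: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
net = [rand_layer(4, 8), rand_layer(8, 8), rand_layer(8, 2)]
print(forward([0.5, -0.2, 0.1, 0.9], net))
```

Training consists of nudging those weights so the outputs match known answers – the depth is what lets the stack build up increasingly abstract features.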

This time, they have been shown to be adept at reproducing the artistic style of many famous painters, such as Vincent van Gogh and Pablo Picasso, among many others. All the user needs to do is provide an input photograph and a target image from which the artistic style will be learned.
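Under the hood, the method summarizes "style" as correlation statistics of the network's feature maps – Gram matrices – and optimizes the output image to match the photo's content features and the painting's Gram matrices. Here is a minimal sketch of that statistic on tiny made-up feature maps (real networks have hundreds of channels with thousands of activations each):

```python
def gram_matrix(features):
    """Gram matrix of feature maps: entry (i, j) is the inner product of
    channel i and channel j, capturing which features fire together.
    These co-occurrence statistics are what the method treats as 'style'."""
    n = len(features)  # channels; each channel is a flat list of activations
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(n)] for i in range(n)]

def style_loss(gram_output, gram_style):
    """Mean squared difference between the Gram matrices of the generated
    image and the style image; optimization drives this toward zero."""
    n = len(gram_output)
    return sum((gram_output[i][j] - gram_style[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)

# Hypothetical 3-channel feature maps with 4 activations each.
photo_feats = [[1.0, 0.0, 0.5, 0.2], [0.0, 1.0, 0.3, 0.1], [0.2, 0.2, 0.2, 0.2]]
style_feats = [[0.9, 0.1, 0.4, 0.3], [0.1, 0.8, 0.2, 0.2], [0.3, 0.1, 0.3, 0.1]]
print(style_loss(gram_matrix(photo_feats), gram_matrix(style_feats)))
```

Because the Gram matrix throws away spatial layout and keeps only which features co-occur, matching it transfers brush strokes and color palettes without copying the painting's actual scene.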

Two Minute Papers – Time Lapse Videos From Community Photos

Building time lapse videos from community photographs is an incredibly difficult and laborious task: the photos were taken at different times of the year and times of day, from different viewpoints, and with different cameras. A good algorithm has to equalize these images and bring them to a common denominator to get rid of the commonly seen flickering effect. Researchers at the University of Washington and Google nailed this regularization in their newest work, which they showcased at SIGGRAPH 2015. Check out the video for the details!
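To see why temporal regularization kills the flicker, consider the simplest possible version of the idea: a per-pixel median over a sliding window of frames, which suppresses one-off outliers (an odd exposure, a passing cloud) while keeping slow changes. This is just an illustration of the principle, not the paper's actual method, which is far more sophisticated.

```python
from statistics import median

def stabilize(frames, radius=1):
    """Per-pixel temporal median over a sliding window of frames.
    `frames` is a list of frames; each frame is a flat list of intensities.
    An outlier frame is voted down by its temporal neighbors."""
    n = len(frames)
    out = []
    for t in range(n):
        window = frames[max(0, t - radius):min(n, t + radius + 1)]
        out.append([median(f[p] for f in window) for p in range(len(frames[0]))])
    return out

# Four 2-pixel "frames" with a one-frame exposure spike in the middle.
frames = [[0.50, 0.40], [0.52, 0.41], [0.95, 0.90], [0.51, 0.42]]
print(stabilize(frames, radius=1))
```

The spike frame gets replaced by values close to its neighbors, so the brightness curve of each pixel becomes smooth over time – exactly the "common denominator" effect described above, in miniature.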