Photorealistic Material Editing Through Direct Image Manipulation – Summary

This work enables artists without photorealistic rendering experience to reuse their image-editing knowledge to create a target image of the desired material. This target is then shown to our learning-based method, which finds the closest matching photorealistic material.

Abstract

Creating photorealistic materials for light transport algorithms requires carefully fine-tuning a set of material properties to achieve a desired artistic effect. This is typically a lengthy process that involves a trained artist with specialized knowledge. In this work, we present a technique that aims to empower novice and intermediate-level users to synthesize high-quality photorealistic materials by requiring only basic image processing knowledge. In the proposed workflow, the user starts with an input image and applies a few intuitive transforms (e.g., colorization, image inpainting) within a 2D image editor of their choice; in the next step, our technique produces a photorealistic result that approximates this target image. Our method combines the advantages of a neural network-augmented optimizer and an encoder neural network to produce high-quality results within 30 seconds. We also demonstrate that it is resilient against poorly edited target images and propose a simple extension to predict image sequences within a strict time budget of 1–2 seconds per image.
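To make the two-stage idea in the abstract concrete, below is a minimal sketch in PyTorch. It is not the paper's implementation: the encoder, the differentiable neural-renderer proxy, the parameter count, and the image resolution are all placeholder assumptions. It only illustrates the structure described above, i.e., an encoder's quick initial guess refined by optimizing material parameters through a learned, differentiable stand-in for the renderer.

import torch
import torch.nn as nn

# Placeholder dimensions (assumptions, not taken from the paper).
PARAM_DIM = 19            # number of shader parameters
IMAGE_PIXELS = 64 * 64 * 3

# Hypothetical stand-ins for the two trained networks referenced in the abstract:
# an encoder that maps an edited target image to an initial parameter guess, and a
# differentiable neural-renderer proxy that maps parameters back to an image.
encoder = nn.Sequential(nn.Linear(IMAGE_PIXELS, 256), nn.ReLU(), nn.Linear(256, PARAM_DIM))
neural_renderer = nn.Sequential(nn.Linear(PARAM_DIM, 256), nn.ReLU(), nn.Linear(256, IMAGE_PIXELS))

def match_material(target_image, steps=200, lr=1e-2):
    """Find material parameters whose proxy-rendered image approximates the edited target."""
    target = target_image.flatten()
    # Stage 1: the encoder provides a cheap initial guess (the ~1-2 s per image regime).
    params = encoder(target).detach().clone().requires_grad_(True)
    # Stage 2: refine the guess by gradient descent through the differentiable proxy
    # (the neural network-augmented optimizer, the ~30 s per image regime).
    optimizer = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.mean((neural_renderer(params) - target) ** 2)
        loss.backward()
        optimizer.step()
    return params.detach()

# Usage with a random stand-in for an edited target image:
recovered_params = match_material(torch.rand(3, 64, 64))

In the actual method, the recovered parameters would be handed to a full light-transport renderer to produce the final photorealistic image; the sketch above only mirrors the optimization structure, not the networks or losses used in the paper.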

Keywords: neural networks, photorealistic rendering, neural rendering, photorealistic material editing

This paper is under the permissive CC-BY license. The source code is under the even more permissive MIT license. Feel free to reuse the materials and hack away at the code! If you have built something on top of this, please drop me a message – I’d love to see where others take these ideas and will leave links to the best ones here.

Resources

Changelog:
2019/09/11 – Published the tech report.

Acknowledgments

We would like to thank Reynante Martinez for providing us with the geometry and some of the materials for the Paradigm (Fig. 1) and Genesis (Fig. 3) scenes, ianofshields for the Liquify scene that served as a basis for Fig. 9, Robin Marin for the material test scene, Andrew Price and Gábor Mészáros for their help with geometry modeling, Felícia Zsolnai-Fehér for her help improving our figures, and Christian Freude, David Ha, Philipp Erler, and Adam Celarek for their useful comments. We also thank NVIDIA for providing the hardware to train our neural networks. This work was partially funded by the Austrian Science Fund (FWF), project number P27974.

Bibtex

@misc{zsolnaifeher2019pme,
    title={Photorealistic Material Editing Through Direct Image Manipulation},
    author={Károly Zsolnai-Fehér and Peter Wonka and Michael Wimmer},
    year={2019},
    eprint={1909.11622},
    archivePrefix={arXiv},
    primaryClass={cs.GR}
}