Sunday, September 12, 2010

Texture Synthesis, Computer Graphics and Computer Vision

Choosing a problem to work on is a difficult task, regardless of your area. Sometimes we choose problems based solely on personal curiosity, other times because of funding commitments. This time I was trying to decide on a small computer graphics project driven purely by curiosity. One insight I had some time back, before starting graduate school, was the importance of image processing in computer graphics, especially when it comes to rendering and synthesizing textures. I thought this problem was interesting for several reasons: it is relatively simple but well defined; it requires knowledge of image processing, graphics, data structures, statistics, visual perception and optimization; there is already a body of knowledge to learn from; and many successful applications and follow-up works have grown out of these methods, among them image inpainting, non-photorealistic rendering and the popular seam carving methods in image processing.

The problem of texture synthesis can be described as follows: say we need to texture a large floor for a video game, but we only have a small texture image. How do we texturize the entire floor? One thing we can do is repeat our small image over and over until the whole surface is covered. But this produces easily noticeable boundaries, and even if we craft special textures that wrap around seamlessly at their borders (tileable textures), we will still notice the same pattern repeating at a fixed rate. The ideal solution would be a method that 'learns' the texture in order to create a larger one with the same local and global appearance as the original.
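For reference, the naive tiling baseline is almost a one-liner. This is just an illustrative sketch of mine (assuming the texture lives in a numpy array of shape height x width x channels), not code from any of the papers I mention below:

import numpy as np

def tile_texture(sample, out_h, out_w):
    """Naive approach: cover an out_h x out_w canvas by repeating
    the sample verbatim. Fast, but the grid of seams is obvious."""
    reps_y = -(-out_h // sample.shape[0])  # ceiling division
    reps_x = -(-out_w // sample.shape[1])
    return np.tile(sample, (reps_y, reps_x, 1))[:out_h, :out_w]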

I looked at several publications on this, and invariably the methods that synthesize textures directly from image samples were the ones with the most profound impact on the area. The idea behind these methods is to take small patches from the image pattern you want to reproduce and place them in the output texture, taking care to merge the boundaries without noticeable artifacts. I can't really survey those methods in a blog post, but I did write a small survey of them in this technical report:
http://www.cs.sunysb.edu/~vordonezroma/texturesynth.pdf
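To make the patch-based idea concrete, here is a rough sketch of the patch selection step, my own illustration in Python/numpy rather than code from any of the surveyed papers. It assumes grayscale float images and only matches the strip that overlaps the previously placed patch on the left; a real implementation also matches the top overlap when filling row by row:

import numpy as np

def pick_patch(sample, block, overlap, left_strip, tol=0.1):
    """Scan every block-sized window in the sample and return one whose
    left overlap strip matches left_strip well (sum of squared
    differences), picked at random among the candidates within tol of
    the best error so the output does not become repetitive."""
    H, W = sample.shape[:2]
    coords, errors = [], []
    for y in range(H - block + 1):
        for x in range(W - block + 1):
            strip = sample[y:y + block, x:x + overlap].astype(float)
            errors.append(np.sum((strip - left_strip) ** 2))
            coords.append((y, x))
    errors = np.asarray(errors)
    cutoff = errors.min() * (1.0 + tol)
    candidates = [c for c, e in zip(coords, errors) if e <= cutoff]
    y, x = candidates[np.random.randint(len(candidates))]
    return sample[y:y + block, x:x + block]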

I chose to implement the method described in the SIGGRAPH 2001 paper 'Image Quilting for Texture Synthesis and Transfer' by Alexei A. Efros (Berkeley) and William T. Freeman (MERL). Now, 9 years after the paper was originally published and 1 year after I decided to take a look and implement this method, Prof. Efros was awarded the Significant New Researcher Award at this year's SIGGRAPH 2010. An excerpt of the award citation reads:

'...Efros has published in a variety of areas in computer graphics and computer vision, but his work can be broadly characterized as employing data-driven approaches to solve problems that are difficult to model using parametric methods. For example his work in texture synthesis by example revolutionized an area in which previous researchers had largely employed parametric approaches with moderate success...'

This also summarizes what I had in mind: the method and the problem are significant, have a proven track record of applications derived from them, are something I can learn from, and are definitely worth looking at. I created a video of my program synthesizing several textures. I include below the video and the slides that I presented in my Computer Graphics class under the kind supervision of Prof. Hong Qin.
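Since I can't paste my whole program here, this is a small sketch of the part that gives image quilting its distinctive look: the minimum error boundary cut, a dynamic-programming seam through the overlap region (again my own simplified version, written for grayscale images):

import numpy as np

def min_cut_path(err):
    """err is an (h, overlap) array of squared differences between the
    incoming patch and what is already on the canvas. Returns one column
    index per row tracing the cheapest top-to-bottom boundary."""
    h, w = err.shape
    cost = err.astype(float)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack from the cheapest endpoint on the last row
    path = [int(np.argmin(cost[-1]))]
    for i in range(h - 2, -1, -1):
        j = path[-1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        path.append(lo + int(np.argmin(cost[i, lo:hi])))
    return path[::-1]

Pixels to the left of this seam are then kept from the existing canvas and pixels to the right come from the new patch, so the boundary meanders through whatever low-error regions it can find instead of being a straight vertical line.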




