From Shading to Local Shape

We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a new mid-level scene descriptor, composed of explicit local shape distributions that are inferred separately at every image patch across multiple scales. Our framework is based on a quadratic representation of local shape that, in the absence of noise, guarantees recovery of accurate local shape and lighting. When noise is present, the inferred local shape distributions provide rich shape information without over-committing to any particular image explanation. These local shape distributions enable efficient and robust reconstruction of scene lighting and object-scale shape by naturally encoding the fact that some smooth diffuse regions are more informative than others. Experimental results show that the proposed shape reconstruction algorithm compares favorably with state-of-the-art methods on both synthetic and real image datasets.
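To make the quadratic local-shape idea concrete, the following is a minimal sketch of a standard formulation consistent with the abstract: depth over a small patch is approximated by a second-order polynomial, and intensity follows a Lambertian (diffuse) model under a single directional light. The symbols $H$, $\mathbf{b}$, $c$, $\rho$, and $\mathbf{l}$ are illustrative placeholders, not the paper's own notation.

% Quadratic (second-order) depth model over a local patch (illustrative notation):
\[
  z(x,y) \;=\; \tfrac{1}{2}\,[x\;\;y]\,H\,[x\;\;y]^{\top} \;+\; \mathbf{b}^{\top}[x\;\;y]^{\top} \;+\; c,
  \qquad H \in \mathbb{R}^{2\times 2}\ \text{symmetric}.
\]
% Surface normal obtained from the depth gradient:
\[
  \mathbf{n}(x,y) \;\propto\; \bigl(-\partial_x z,\; -\partial_y z,\; 1\bigr),
  \qquad \hat{\mathbf{n}} = \mathbf{n}/\lVert\mathbf{n}\rVert.
\]
% Lambertian (diffuse) shading under a single directional light l with albedo rho:
\[
  I(x,y) \;=\; \rho\,\max\!\bigl(0,\; \hat{\mathbf{n}}(x,y)\cdot\mathbf{l}\bigr).
\]

Under such a model, the image intensities in a noise-free patch constrain the quadratic coefficients and the light direction jointly, which is the sense in which local shape and lighting can be recovered; with noise, a distribution over the coefficients retains the ambiguity rather than committing to a single explanation.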