Chen Li
I am a fifth-year Ph.D. candidate advised by Prof. Kun Zhou at the State Key Lab of CAD&CG, Zhejiang University. Before that, I received my B.S. in Computer Science, also from Zhejiang University, in 2011.

I have also been working with Dr. Stephen Lin as a research intern in the Internet Graphics Group of Microsoft Research Asia since February 2012.

Email  /  CV  /  LinkedIn
Research Interests

My current research interests lie in the field of computer vision, especially 3D reconstruction, reflectance modeling, and image processing. With a background in both vision and graphics, I am also interested in image-based rendering and inverse rendering.

Professional Activities

Reviewer, Pacific Graphics 2015
Reviewer, IEEE Transactions on Image Processing (IEEE TIP)
Publications

Bayesian Depth-from-Defocus with Shading Constraints
Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin
IEEE Transactions on Image Processing (TIP), vol. 25, no. 2, pp. 589-600, Feb. 2016
[Preprint] [Bibtex]
Abstract
We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, that can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. The shading estimation can be performed in general scenes with unknown illumination using an approximate estimate of scene lighting. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.
*This paper subsumes our CVPR 2013 paper.

Simulating Makeup through Physics-based Manipulation of Intrinsic Image Layers
Chen Li, Kun Zhou, Stephen Lin
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
[Paper] [Extended Abstract] [Poster] [Bibtex]
Abstract
We present a method for simulating makeup in a face image. To generate realistic results without detailed geometric and reflectance measurements of the user, we propose to separate the image into intrinsic image layers and alter them according to proposed adaptations of physically-based reflectance models. Through this layer manipulation, the measured properties of cosmetic products are applied while preserving the appearance characteristics and lighting conditions of the target face. This approach is demonstrated on various forms of cosmetics including foundation, blush, lipstick, and eye shadow. Experimental results exhibit a close approximation to ground truth images, without artifacts such as transferred personal features and lighting effects that degrade the results of image-based makeup transfer methods.
Continuous Symmetric Stereo with Adaptive Outlier Handling
Chen Li, Lap-Fai Yu, Zhichao Lu, Yasuyuki Matsushita, Kun Zhou, Stephen Lin
International Conference on 3D Vision (3DV), 2015
[Paper] [Supplementary Material] [Poster] [Bibtex]
Abstract
We present a method for symmetric stereo matching in which outliers from occlusions, textureless regions, and repeated patterns are handled in a soft and adaptive manner. Rather than making binary outlier decisions, our model incorporates continuous-valued confidence weights that account for outlier likelihood, to promote robustness in disparity estimation. In contrast to previous outlier labeling techniques that fix the labels at the start of optimization, our method iteratively updates our outlier confidence weights as the matching results are gradually refined. By doing this, errors in an initial labeling can be rectified in the matching process. Our model is optimized in an Expectation-Maximization framework that efficiently produces continuous disparity estimates. This approach provides a good combination of accuracy and speed. Experiments show that our method compares favorably to prior outlier labeling techniques on the Middlebury benchmark, and that it can generate high-quality reconstructions for outdoor images with much more complex occlusions.
Intrinsic Face Image Decomposition with Human Face Priors
Chen Li, Kun Zhou, Stephen Lin
European Conference on Computer Vision (ECCV), 2014
[Paper] [Supplementary Material] [Poster] [Video] [Bibtex]
Abstract
We present a method for decomposing a single face photograph into its intrinsic image components. Intrinsic image decomposition has commonly been used to facilitate image editing operations such as relighting and re-texturing. Although current single-image intrinsic image methods are able to obtain an approximate decomposition, image operations involving the human face require greater accuracy since slight errors can lead to visually disturbing results. To improve decomposition for faces, we propose to utilize human face priors as constraints for intrinsic image estimation. These priors include statistics on skin reflectance and facial geometry. We also make use of a physically-based model of skin translucency to heighten accuracy, as well as to further decompose the reflectance image into a diffuse and a specular component. With the use of priors and a skin reflectance model for human faces, our method is able to achieve appreciable improvements in intrinsic image decomposition over more generic techniques.
Bayesian Depth-from-Defocus with Shading Constraints
Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013
[Paper] [Poster] [Bibtex]
Abstract
We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, that can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.
Removal of Dust Artifacts in Focal Stack Image Sequences
Chen Li, Kun Zhou, Stephen Lin
International Conference on Pattern Recognition (ICPR), 2012
[Paper] [Poster] [Bibtex]
Abstract
We propose a technique for removing the appearance of sensor dust in a focal stack image sequence captured with multiple focus settings. Our method is based on the key observation that sensor dust artifacts shift in image position with respect to focus setting, which allows scene information occluded by dust in one image to be inferred from other images in the focal stack. To deal with complications arising from differences in local defocus blur among the images, we analyze the relative blur among corresponding image regions in detecting and removing dust artifacts. Our results show improvements over the state-of-the-art technique for automatic removal of sensor dust.
Last update: Dec 29, 2015