Hi. I'm Menglei Chai.

I am a fifth-year Ph.D. candidate in the Graphics & Parallel Systems Lab (GAPS) at Zhejiang University.
Before that, I received my B.S. degree in Computer Science from Zhejiang University in 2011.

Research

I do research in Computer Graphics, focusing mainly on image manipulation and physically based animation and modeling.
My advisor is Professor Kun Zhou.

Curriculum Vitae

Click here to see my CV.

Publications

  • AutoHair: Fully Automatic Hair Modeling from A Single Image,
    Menglei Chai, Tianjia Shao, Hongzhi Wu, Yanlin Weng, and Kun Zhou,
    SIGGRAPH 2016, ACM Transactions on Graphics (TOG).
    Abstract BibTeX Paper Video

    We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.
    												
    ...
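
    As a rough, purely illustrative sketch of the data-driven matching idea (not the paper's method, which uses a hierarchical deep network and a far richer matching criterion), one could retrieve the closest 3D hair exemplar by comparing binary hair segmentation masks with an IoU score; the names below (hair_mask_iou, retrieve_exemplar) are my own placeholders.

    import numpy as np

    def hair_mask_iou(mask_a, mask_b):
        """Intersection-over-union between two boolean hair masks."""
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return inter / union if union > 0 else 0.0

    def retrieve_exemplar(query_mask, exemplar_masks):
        """Return the index of the exemplar whose mask best overlaps the query."""
        scores = [hair_mask_iou(query_mask, m) for m in exemplar_masks]
        return int(np.argmax(scores))

    # Toy usage with random masks; a real system would compare against ~50K exemplars.
    rng = np.random.default_rng(0)
    exemplars = [rng.random((64, 64)) > 0.5 for _ in range(3)]
    query = exemplars[1] ^ (rng.random((64, 64)) > 0.95)   # noisy copy of exemplar 1
    print(retrieve_exemplar(query, exemplars))              # expected to print 1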
    												
  • Adaptive Skinning for Interactive Hair-Solid Simulation,
    Menglei Chai, Changxi Zheng, and Kun Zhou,
    IEEE Transactions on Visualization and Computer Graphics (TVCG) 2016.
    Abstract BibTeX Paper Video

    Reduced hair models have proven successful for interactively simulating a full head of hair strands, building upon a fundamental assumption that only a small set of guide hairs are needed for explicit simulation, and the rest of the hair moves coherently and thus can be interpolated using guide hairs. Unfortunately, hair-solid interaction is a pathological case for traditional reduced hair models, as the motion coherence between hair strands can be arbitrarily broken by interacting with solids.
    In this paper, we propose an adaptive hair skinning method for interactive hair simulation with hair-solid collisions. We precompute many eligible sets of guide hairs and the corresponding interpolation relationships that are represented using a compact strand-based hair skinning model. At runtime, we simulate only guide hairs; for interpolating every other hair, we adaptively choose its guide hairs, taking into account motion coherence and potential hair-solid collisions. Further, we introduce a two-way collision correction algorithm to allow sparsely sampled guide hairs to resolve collisions with solids that can have small geometric features. Our method enables interactive simulation of more than 150K hair strands interacting with complex solid objects, using 400 guide hairs. We demonstrate the efficiency and robustness of the method with various hairstyles and user-controlled arbitrary hair-solid interactions.
    												
    ...
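
    The strand-based skinning above amounts to expressing every ordinary strand as a weighted blend of a few guide strands. Below is a minimal sketch of that interpolation step under my own simplifying assumptions (inverse-distance weights computed from root positions, a fixed number of vertices per strand); the paper's adaptive guide selection and two-way collision correction are not reproduced here.

    import numpy as np

    def skin_strand(root, guide_roots, guide_strands, k=3, eps=1e-6):
        """Interpolate one strand (n_vertices x 3) from its k nearest guide strands."""
        d = np.linalg.norm(guide_roots - root, axis=1)   # distance from this root to each guide root
        nearest = np.argsort(d)[:k]
        w = 1.0 / (d[nearest] + eps)                     # inverse-distance weights
        w /= w.sum()
        blended = np.tensordot(w, guide_strands[nearest], axes=1)   # weighted blend of guide vertices
        return blended - blended[0] + root               # re-anchor the blend at this strand's root

    # Toy usage: 4 guide strands with 10 vertices each.
    rng = np.random.default_rng(1)
    guide_roots = rng.random((4, 3))
    guide_strands = guide_roots[:, None, :] + np.cumsum(rng.normal(0, 0.01, (4, 10, 3)), axis=1)
    strand = skin_strand(np.array([0.5, 0.5, 0.5]), guide_roots, guide_strands)
    print(strand.shape)   # (10, 3)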
    												
  • High-Quality Hair Modeling from A Single Portrait Photo,
    Menglei Chai, Linjie Luo, Kalyan Sunkavalli, Nathan Carr, Sunil Hadap, and Kun Zhou,
    SIGGRAPH Asia 2015, ACM Transactions on Graphics (TOG).
    Abstract BibTeX Paper Video Program

    We propose a novel system to reconstruct a high-quality hair depth map from a single portrait photo with minimal user input. We achieve this by combining depth cues such as occlusions, silhouettes, and shading, with a novel 3D helical structural prior for hair reconstruction. We fit a parametric morphable face model to the input photo and construct a base shape in the face, hair and body regions using occlusion and silhouette constraints. We then estimate the normals in the hair region via a Shape-from-Shading based optimization that uses the lighting inferred from the face model and enforces an adaptive albedo prior that models the typical color and occlusion variations of hair. We introduce a 3D helical hair prior that captures the geometric structure of hair, and show that it can be robustly recovered from the input photo in an automatic manner. Our system combines the base shape, the normals estimated by Shape from Shading, and the 3D helical hair prior to reconstruct high-quality 3D hair models. Our single-image reconstruction closely matches the results of a state-of-the-art multi-view stereo method applied to a multi-view dataset. Our technique can reconstruct a wide variety of hairstyles ranging from short to long and from straight to messy, and we demonstrate the use of our 3D hair models for high-quality portrait relighting, novel view synthesis and 3D-printed portrait reliefs.
    												
    ...
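
    The helical prior models a strand locally as a circular helix. As a rough illustration of the geometry the prior assumes (my own parameterization, not the paper's fitting procedure), the sketch below samples points on a helix with a given radius and pitch.

    import numpy as np

    def helix_points(radius, pitch, turns, n=100, center=(0.0, 0.0, 0.0)):
        """Sample n points on a z-axis-aligned helix; pitch is the rise per full turn."""
        t = np.linspace(0.0, 2.0 * np.pi * turns, n)
        pts = np.stack([radius * np.cos(t),
                        radius * np.sin(t),
                        pitch * t / (2.0 * np.pi)], axis=1)
        return pts + np.asarray(center, dtype=float)

    strand = helix_points(radius=0.02, pitch=0.05, turns=3.0)
    print(strand.shape)   # (100, 3)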
    												
  • A Reduced Model for Interactive Hairs,
    Menglei Chai, Changxi Zheng, and Kun Zhou,
    SIGGRAPH 2014, ACM Transactions on Graphics (TOG).
    Abstract BibTeX Paper Video Project Page

    Realistic hair animation is a crucial component in depicting virtual characters in interactive applications. While much progress has been made in high-quality hair simulation, the overwhelming computational cost hinders similar fidelity in real-time simulations. To bridge this gap, we propose a data-driven solution. Building upon precomputed simulation data, our approach constructs a reduced model to optimally represent hair motion characteristics with a small number of guide hairs and the corresponding interpolation relationships. At runtime, utilizing such a reduced model, we only simulate guide hairs that capture the general hair motion and interpolate all the remaining strands. We further propose a hair correction method that corrects the resulting hair motion with a position-based model to resolve hair collisions and thus captures motion details. Our hair simulation method enables simulation of a full head of hair with over 150K strands in real time. We demonstrate the efficacy and robustness of our method with various hairstyles and driven motions (e.g., head movement and wind force), and compare against full simulation results that do not appear in the training data.
    												
    @article{chai2014reduced,
     author = {Chai, Menglei and Zheng, Changxi and Zhou, Kun},
     title = {A Reduced Model for Interactive Hairs},
     journal = {ACM Trans. Graph.},
     issue_date = {July 2014},
     volume = {33},
     number = {4},
     month = jul,
     year = {2014},
     issn = {0730-0301},
     pages = {124:1--124:11},
     articleno = {124},
     numpages = {11},
     url = {http://doi.acm.org/10.1145/2601097.2601211},
     doi = {10.1145/2601097.2601211},
     acmid = {2601211},
     publisher = {ACM},
     address = {New York, NY, USA},
     keywords = {collisions, data-driven animation, hair simulation},
    } 
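
    The runtime correction above follows a position-based idea: push interpolated vertices out of colliders, then restore segment lengths along the strand. The sketch below is my own minimal version against a single sphere collider; it only illustrates the flavor of such a correction, not the paper's solver.

    import numpy as np

    def correct_strand(strand, sphere_c, sphere_r, iters=5):
        """Position-based correction: resolve sphere penetration while keeping segment lengths."""
        p = strand.copy()
        rest_len = np.linalg.norm(np.diff(strand, axis=0), axis=1)   # original segment lengths
        for _ in range(iters):
            # 1) Project penetrating vertices onto the sphere surface.
            d = p - sphere_c
            dist = np.linalg.norm(d, axis=1)
            inside = dist < sphere_r
            p[inside] = sphere_c + d[inside] / (dist[inside, None] + 1e-12) * sphere_r
            # 2) Re-enforce segment lengths from the (fixed) root outward.
            for i in range(1, len(p)):
                seg = p[i] - p[i - 1]
                p[i] = p[i - 1] + seg / (np.linalg.norm(seg) + 1e-12) * rest_len[i - 1]
        return p

    strand = np.stack([np.linspace(0.0, 1.0, 20), np.zeros(20), np.zeros(20)], axis=1)
    corrected = correct_strand(strand, sphere_c=np.array([0.5, 0.0, 0.0]), sphere_r=0.1)
    print(np.round(np.linalg.norm(np.diff(corrected, axis=0), axis=1).sum(), 3))   # total length stays ~1.0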
    												
  • Dynamic Hair Manipulation in Images and Videos,
    Menglei Chai, Lvdi Wang, Yanlin Weng, Xiaogang Jin, and Kun Zhou,
    SIGGRAPH 2013, ACM Transactions on Graphics (TOG).
    Abstract BibTeX Paper Video Project Page

    This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
    												
    @article{chai2013dynamic,
      author = {Chai, Menglei and Wang, Lvdi and Weng, Yanlin and Jin, Xiaogang and Zhou, Kun},
      title = {Dynamic Hair Manipulation in Images and Videos},
      journal = {ACM Trans. Graph.},
      issue_date = {July 2013},
      volume = {32},
      number = {4},
      month = jul,
      year = {2013},
      issn = {0730-0301},
      pages = {75:1--75:8},
      articleno = {75},
      numpages = {8},
      url = {http://doi.acm.org/10.1145/2461912.2461990},
      doi = {10.1145/2461912.2461990},
      acmid = {2461990},
      publisher = {ACM},
      address = {New York, NY, USA},
      keywords = {hair modeling, image manipulation, video editing},
    }
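
    The modeling step above grows strands by integrating them through an unambiguous direction field recovered from the image. The sketch below covers just the tracing part in a simplified 2D setting of my own (nearest-neighbor field lookup, fixed step size), not the paper's iterative hair generation algorithm.

    import numpy as np

    def trace_strand(field, start, step=0.5, max_steps=200):
        """Grow a 2D polyline from `start` by repeatedly stepping along a per-pixel direction field."""
        h, w = field.shape[:2]
        pts = [np.asarray(start, dtype=float)]
        for _ in range(max_steps):
            x, y = pts[-1]
            if not (0 <= x < w and 0 <= y < h):
                break                                       # left the image
            d = field[int(y), int(x)]                       # nearest-neighbor field lookup
            norm = np.linalg.norm(d)
            if norm < 1e-6:
                break                                       # no reliable direction here
            pts.append(pts[-1] + step * d / norm)
        return np.array(pts)

    # Toy field in which every pixel points 45 degrees up-right.
    field = np.tile(np.array([1.0, 1.0]) / np.sqrt(2.0), (64, 64, 1))
    strand = trace_strand(field, start=(5.0, 5.0))
    print(strand.shape)   # (n_points, 2)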
    												
  • Single-View Hair Modeling for Portrait Manipulation,
    Menglei Chai, Lvdi Wang, Yanlin Weng, Yizhou Yu, Baining Guo, and Kun Zhou,
    SIGGRAPH 2012, ACM Transactions on Graphics (TOG).
    Abstract BibTeX Paper Video Project Page

    Human hair is known to be very difficult to model or reconstruct. In this paper, we focus on applications related to portrait manipulation and take an application-driven approach to hair modeling. To enable an average user to achieve interesting portrait manipulation results, we develop a single-view hair modeling technique with modest user interaction to meet the unique requirements set by portrait manipulation. Our method relies on heuristics to generate a plausible high-resolution strand-based 3D hair model. This is made possible by an effective high-precision 2D strand tracing algorithm, which explicitly models uncertainty and local layering during tracing. The depth of the traced strands is solved through an optimization, which simultaneously considers depth constraints, layering constraints as well as regularization terms. Our single-view hair modeling enables a number of interesting applications that were previously challenging, including transferring the hairstyle of one subject to another in a potentially different pose, rendering the original portrait in a novel view and image-space hair editing.
    												
    @article{chai2012single,
      author = {Chai, Menglei and Wang, Lvdi and Weng, Yanlin and Yu, Yizhou and Guo, Baining and Zhou, Kun},
      title = {Single-view Hair Modeling for Portrait Manipulation},
      journal = {ACM Trans. Graph.},
      issue_date = {July 2012},
      volume = {31},
      number = {4},
      month = jul,
      year = {2012},
      issn = {0730-0301},
      pages = {116:1--116:8},
      articleno = {116},
      numpages = {8},
      url = {http://doi.acm.org/10.1145/2185520.2185612},
      doi = {10.1145/2185520.2185612},
      acmid = {2185612},
      publisher = {ACM},
      address = {New York, NY, USA},
      keywords = {hairstyle replacement, portrait pop-ups, strand tracing},
    }
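
    The depth of the traced strands comes from an optimization that balances depth and layering constraints against regularization. The sketch below is a toy one-dimensional analogue of my own: solve for per-vertex depths that stay close to a few soft depth constraints while a Laplacian term keeps the strand smooth, via ordinary least squares.

    import numpy as np

    def solve_depths(n, constraints, smooth_weight=10.0):
        """constraints: list of (vertex index, target depth, weight). Returns n depths."""
        rows, rhs = [], []
        for i, z, w in constraints:                       # soft depth constraints
            r = np.zeros(n)
            r[i] = w
            rows.append(r)
            rhs.append(w * z)
        for i in range(1, n - 1):                         # Laplacian smoothness: z[i-1] - 2 z[i] + z[i+1] ~ 0
            r = np.zeros(n)
            r[[i - 1, i, i + 1]] = smooth_weight * np.array([1.0, -2.0, 1.0])
            rows.append(r)
            rhs.append(0.0)
        A, b = np.array(rows), np.array(rhs)
        return np.linalg.lstsq(A, b, rcond=None)[0]

    depths = solve_depths(50, constraints=[(0, 0.0, 1.0), (25, 0.3, 1.0), (49, 0.1, 1.0)])
    print(depths[[0, 25, 49]].round(2))   # depths near the constrained vertices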
    												
  • Cone Tracing for Furry Object Rendering,
    Hao Qin, Menglei Chai, Qiming Hou, Zhong Ren, and Kun Zhou,
    IEEE Transactions on Visualization and Computer Graphics (TVCG) 2014.
    Abstract BibTeX Paper Video

    We present a cone-based ray tracing algorithm for high-quality rendering of furry objects with reflection, refraction and defocus effects. By aggregating many sampling rays in a pixel as a single cone, we significantly reduce the high supersampling rate required by the thin geometry of fur fibers. To reduce the cost of intersecting fur fibers with cones, we construct a bounding volume hierarchy for the fiber geometry to find the fibers potentially intersecting with cones, and use a set of connected ribbons to approximate the projections of these fibers on the image plane. The computational cost of compositing and filtering transparent samples within each cone is effectively reduced by approximating away in-cone variations of shading, opacity and occlusion. The result is a highly efficient ray tracing algorithm for furry objects which is able to render images of quality comparable to those generated by alternative methods, while significantly reducing the rendering time. We demonstrate the rendering quality and performance of our algorithm using several examples and a user study.
    												
    @article{qin2014cone, 
      author={Qin, H. and Chai, M. and Hou, Q. and Ren, Z. and Zhou, K.}, 
      journal={IEEE Transactions on Visualization and Computer Graphics}, 
      title={Cone Tracing for Furry Object Rendering}, 
      year={2014}, 
      month={}, 
      volume={PP}, 
      number={99}, 
      pages={1-1}, 
      keywords={Geometry; Graphics processing units; Hair; Image segmentation; Lighting; Ray tracing;Rendering (computer graphics); Antialiasing; Raytracing}, 
      doi={10.1109/TVCG.2013.270}, 
      ISSN={1077-2626},
    }
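
    The central geometric query above is whether a thin fiber falls inside a pixel's cone. A crude version of that test, under my own approximation (sample the fiber segment and compare each sample's distance from the cone axis against the cone radius at that depth plus the fiber radius), looks like this; the paper instead uses a BVH and ribbon approximations for efficiency.

    import numpy as np

    def cone_hits_fiber(apex, axis, spread, p0, p1, fiber_r, samples=16):
        """True if segment p0-p1 (radius fiber_r) comes within a cone whose radius grows as spread * depth."""
        axis = axis / np.linalg.norm(axis)
        t = np.linspace(0.0, 1.0, samples)[:, None]
        pts = (1.0 - t) * p0 + t * p1                      # samples along the fiber
        rel = pts - apex
        depth = rel @ axis                                 # distance along the cone axis
        radial = np.linalg.norm(rel - depth[:, None] * axis, axis=1)
        cone_r = spread * depth
        return bool(np.any((depth > 0) & (radial <= cone_r + fiber_r)))

    hit = cone_hits_fiber(apex=np.zeros(3), axis=np.array([0.0, 0.0, 1.0]), spread=0.01,
                          p0=np.array([0.02, 0.0, 5.0]), p1=np.array([0.02, 0.1, 5.0]),
                          fiber_r=0.001)
    print(hit)   # True: at depth 5 the cone radius is 0.05, larger than the 0.02 offset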
    												
  • As-Rigid-As-Possible Distance Field Metamorphosis,
    Yanlin Weng, Menglei Chai, Weiwei Xu, Yiying Tong, and Kun Zhou,
    Pacific Graphics 2013, Computer Graphics Forum (CGF).
    Abstract BibTeX Paper Video

    Widely used for morphing between objects with arbitrary topology, distance field interpolation (DFI) handles topological transition naturally without the need for correspondence or remeshing, unlike surface-based interpolation approaches. However, lack of correspondence in DFI also leads to ineffective control over the morphing process. In particular, unless the user specifies a dense set of landmarks, it is not even possible to measure the distortion of intermediate shapes during interpolation, let alone control it. To remedy such issues, we introduce an approach for establishing correspondence between the interior of two arbitrary objects, formulated as an optimal mass transport problem with a sparse set of landmarks. This correspondence enables us to compute non-rigid warping functions that better align the source and target objects as well as to incorporate local rigidity constraints to perform as-rigid-as-possible DFI. We demonstrate how our approach helps achieve flexible morphing results with a small number of landmarks.
    												
    @inproceedings{weng2013rigid,
      title={As-Rigid-As-Possible Distance Field Metamorphosis},
      author={Weng, Yanlin and Chai, Menglei and Xu, Weiwei and Tong, Yiying and Zhou, Kun},
      booktitle={Computer Graphics Forum},
      volume={32},
      number={7},
      pages={381--389},
      year={2013},
      organization={Wiley Online Library},
    }
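
    For context, the plain distance field interpolation that this paper builds on simply blends the two signed distance fields and takes the zero level set of the blend. A minimal sketch of that baseline on a 2D grid (two circles as input shapes; the paper's landmark correspondence, warping and rigidity constraints are not shown):

    import numpy as np

    def circle_sdf(grid_x, grid_y, cx, cy, r):
        """Signed distance to a circle: negative inside, positive outside."""
        return np.hypot(grid_x - cx, grid_y - cy) - r

    def interpolate_shape(d0, d1, t):
        """Linear DFI: blend the two fields and return the inside mask of the intermediate shape."""
        return ((1.0 - t) * d0 + t * d1) <= 0.0

    ys, xs = np.mgrid[0:128, 0:128]
    d0 = circle_sdf(xs, ys, 40, 64, 20)
    d1 = circle_sdf(xs, ys, 90, 64, 30)
    halfway = interpolate_shape(d0, d1, 0.5)
    print(halfway.sum())   # area (in pixels) of the halfway shape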
    												
  • Hair Interpolation for Portrait Morphing,
    Yanlin Weng, Lvdi Wang, Xiao Li, Menglei Chai, and Kun Zhou,
    Pacific Graphics 2013, Computer Graphics Forum (CGF).
    Abstract BibTeX Paper Video Project Page

    In this paper we study the problem of hair interpolation: given two 3D hair models, we want to generate a sequence of intermediate hair models that transform from one input to another smoothly and in an aesthetically pleasing way. We propose an automatic method that efficiently calculates a many-to-many strand correspondence between two or more given hair models, taking into account the multi-scale clustering structure of hair. Experiments demonstrate that hair interpolation can be used for producing more vivid portrait morphing effects and enabling a novel example-based hair styling methodology, where a user can interactively create new hairstyles by continuously exploring a “style space” spanning multiple input hair models.
    												
    @inproceedings{weng2013hair,
      title={Hair Interpolation for Portrait Morphing},
      author={Weng, Yanlin and Wang, Lvdi and Li, Xiao and Chai, Menglei and Zhou, Kun},
      booktitle={Computer Graphics Forum},
      volume={32},
      number={7},
      pages={79--84},
      year={2013},
      organization={Wiley Online Library},
    }
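
    Once strands have been put into correspondence, the per-strand interpolation is essentially: resample both strands to the same number of vertices, then blend positions. A minimal sketch of that step with my own arc-length resampling (the paper's multi-scale clustering and many-to-many matching are omitted):

    import numpy as np

    def resample(strand, n):
        """Resample a polyline (m x 3) to n vertices, uniformly in arc length."""
        seg = np.linalg.norm(np.diff(strand, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        target = np.linspace(0.0, s[-1], n)
        return np.stack([np.interp(target, s, strand[:, k]) for k in range(3)], axis=1)

    def blend_strands(a, b, t, n=50):
        """Linear blend between two corresponding strands at parameter t in [0, 1]."""
        return (1.0 - t) * resample(a, n) + t * resample(b, n)

    rng = np.random.default_rng(2)
    a = np.cumsum(rng.normal(0.0, 0.02, (30, 3)), axis=0)
    b = np.cumsum(rng.normal(0.0, 0.02, (40, 3)), axis=0)
    print(blend_strands(a, b, 0.5).shape)   # (50, 3)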
    												
  • Surface Mesh Controlled Fast Hair Modeling,
    Menglei Chai, Yanlin Weng, Qiming Hou, and Zhong Ren,
    Journal of Computer-Aided Design & Computer Graphics 2012.
    Abstract Paper (in Chinese)

    We propose a fast hair modeling method based on polygonal surface mesh editing to reduce the complexity of current hair modeling systems. First, coarse surface meshes are created to represent the overall shapes of the target hair models. Then, parameterizations are computed for these meshes to guide the strand directions. Finally, a set of hair streamlines are generated in space to refine the final strands automatically in order to fit the expected shapes while still preserving the particular hairstyle effects. Experimental results show that this method can produce high-quality hair models with relatively simple mesh modeling operations. This greatly simplifies the hair modeling procedure and enhances users' ability to control the final hair shape. Furthermore, this method can be easily integrated with physics-based hair simulation techniques to produce realistic hair animations.
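
    The pipeline above seeds strands on the coarse control mesh and grows them along directions derived from the parameterization. The sketch below shows only that seeding/growing step under my own simplifications (random barycentric roots on a single triangle, straight growth along per-vertex directions interpolated at the root).

    import numpy as np

    def grow_strands(tri_verts, vert_dirs, n_strands=5, n_segments=10, seg_len=0.05, seed=0):
        """Seed roots on a triangle and grow straight strands along interpolated directions."""
        rng = np.random.default_rng(seed)
        strands = []
        for _ in range(n_strands):
            u, v = rng.random(2)                    # uniform barycentric sampling of the triangle
            if u + v > 1.0:
                u, v = 1.0 - u, 1.0 - v
            bary = np.array([1.0 - u - v, u, v])
            root = bary @ tri_verts
            direction = bary @ vert_dirs
            direction /= np.linalg.norm(direction)
            steps = np.arange(n_segments + 1)[:, None] * seg_len
            strands.append(root + steps * direction)
        return strands

    tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    dirs = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.1, 1.0]])
    strands = grow_strands(tri, dirs)
    print(len(strands), strands[0].shape)   # 5 (11, 3)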
    												

Contact

Send me an e-mail at cmlatsim at gmail dot com.
Or find me at Room 405, Information & Control Building, Zijingang Campus, Zhejiang University.