We present a fully automatic approach to real-time facial tracking and animation with a single video camera. Our approach requires no per-user calibration. It learns a generic regressor from public image datasets, which can be applied to any user and arbitrary video cameras to infer accurate 2D facial landmarks as well as the 3D facial shape from 2D video frames. The inferred 2D landmarks are then used to adapt the camera matrix and the user identity to better match the facial expressions of the current user. The regression and adaptation are performed in an alternating manner. As more facial expressions are observed in the video, the whole process quickly converges to accurate facial tracking and animation. In experiments, our approach demonstrates a level of robustness and accuracy on par with state-of-the-art techniques that require a time-consuming calibration step for each individual user, while running at 28 fps on average. We consider our approach an attractive solution for wide deployment in consumer-level applications.
We publish the training data for research purposes. We used 14,460 facial images from three public image datasets. For each image, 74 landmarks were labelled to mark the positions of a set of facial features, e.g., the mouth corners, the nose tip, and the face contour. The landmark files can be downloaded directly from this website; the image files can be obtained from the corresponding dataset websites.
Landmark File Format
74 // Number of landmarks
246.039673 254.651047 // Position of the 1st landmark
245.308151 236.336166 // Position of the 2nd landmark
...
344.440308 251.455673 // Position of the 74th landmark
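As an illustration, a minimal Python sketch for reading a file in this format (the function name and file path are placeholders, not part of the released data):

```python
def read_landmarks(path):
    """Parse a landmark file: the first line holds the landmark count,
    and each subsequent line holds 'x y' for one landmark.
    Trailing '//' comments, if present, are ignored."""
    with open(path) as f:
        # First token of the first line is the number of landmarks.
        count = int(f.readline().split()[0])
        # Take the first two tokens of each line as the (x, y) position.
        points = [tuple(map(float, f.readline().split()[:2]))
                  for _ in range(count)]
    return points

# Hypothetical usage: landmarks = read_landmarks("0001.land")
```

The parser returns a list of `(x, y)` tuples in image coordinates, one per landmark, in the order given in the file.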
Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, Kun Zhou: "FaceWarehouse: A 3D Facial Expression Database for Visual Computing", IEEE Transactions on Visualization and Computer Graphics, 20(3): 413-425, 2014. [Website]