Dictionary-based Face and Person Recognition from Unconstrained Video
P. J. Phillips, Yi-Chen Chen, Vishal M. Patel, Rama Chellappa
The main challenge in recognizing people in unconstrained video is exploiting the identity information in multiple frames and the accompanying dynamic signature. These identity cues include face, body, and motion. Our approach is based on video-dictionaries for face and body. Video-dictionaries are a generalization of sparse representation and dictionaries for still images. We design the video-dictionaries to implicitly encode temporal, pose, and illumination information. In addition, our video-dictionaries are learned for both face and body, which enables the algorithm to encode both identity cues. To increase the ability of our algorithm to learn nonlinearities, we further apply kernel methods when learning the dictionaries. We demonstrate our method on the Multiple Biometric Grand Challenge (MBGC), Face and Ocular Challenge Series (FOCS), Honda/UCSD, and UMD datasets, which consist of unconstrained video sequences. Our experimental results on these four datasets are superior to those published in the literature. We show that fusing face and body identity cues can improve performance over face alone.
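To give a concrete flavor of the dictionary-based recognition idea, the sketch below classifies a probe by the per-identity dictionary that reconstructs it with the smallest residual. This is a deliberately simplified, hypothetical illustration: it uses plain least-squares over random toy dictionaries, whereas the paper's video-dictionaries are learned to encode temporal, pose, and illumination structure and use kernelized sparse coding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-identity "dictionaries": a few atoms (columns) per person.
# (Illustrative stand-in only; real video-dictionaries are learned
# from face/body frames, not sampled at random.)
dictionaries = {
    "alice": rng.normal(size=(16, 4)),
    "bob":   rng.normal(size=(16, 4)),
}

def classify(probe, dictionaries):
    """Assign the probe to the identity whose dictionary reconstructs
    it with the smallest least-squares residual -- a simplified version
    of the residual-based rule used by sparse-representation classifiers."""
    best_name, best_residual = None, np.inf
    for name, D in dictionaries.items():
        coeffs, *_ = np.linalg.lstsq(D, probe, rcond=None)
        residual = np.linalg.norm(probe - D @ coeffs)
        if residual < best_residual:
            best_name, best_residual = name, residual
    return best_name

# A probe lying in Alice's span reconstructs with ~zero residual there.
probe = dictionaries["alice"] @ np.array([0.5, -1.0, 2.0, 0.3])
print(classify(probe, dictionaries))  # → alice
```

In the paper's setting the coefficients would additionally be constrained to be sparse (and computed in a kernel-induced feature space), but the decision rule, picking the identity with the lowest reconstruction error, has the same shape.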
IEEE Transactions on Pattern Analysis and Machine Intelligence
Phillips, P. J., Chen, Y., Patel, V. and Chellappa, R., Dictionary-based Face and Person Recognition from Unconstrained Video, IEEE Transactions on Pattern Analysis and Machine Intelligence, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=913140 (Accessed June 5, 2023)