TY - GEN
T1 - Learning joint gait representation via quintuplet loss minimization
AU - Zhang, Kaihao
AU - Luo, Wenhan
AU - Ma, Lin
AU - Liu, Wei
AU - Li, Hongdong
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - Gait recognition is an important biometric method popularly used in video surveillance, where the task is to identify people at a distance by their walking patterns from video sequences. Most of the current successful approaches for gait recognition either use a pair of gait images to form a cross-gait representation or rely on a single gait image for a unique-gait representation. These two types of representations empirically complement one another. In this paper, we propose a new Joint Unique-gait and Cross-gait Network (JUCNet) to combine the advantages of the unique-gait representation with those of the cross-gait representation, leading to a significantly improved performance. Another key contribution of this paper is a novel quintuplet loss function, which simultaneously increases the inter-class differences by pushing representations extracted from different subjects apart and decreases the intra-class variations by pulling representations extracted from the same subject together. Experiments show that our method achieves state-of-the-art performance on standard benchmark datasets, demonstrating its superiority over existing methods.
AB - Gait recognition is an important biometric method popularly used in video surveillance, where the task is to identify people at a distance by their walking patterns from video sequences. Most of the current successful approaches for gait recognition either use a pair of gait images to form a cross-gait representation or rely on a single gait image for a unique-gait representation. These two types of representations empirically complement one another. In this paper, we propose a new Joint Unique-gait and Cross-gait Network (JUCNet) to combine the advantages of the unique-gait representation with those of the cross-gait representation, leading to a significantly improved performance. Another key contribution of this paper is a novel quintuplet loss function, which simultaneously increases the inter-class differences by pushing representations extracted from different subjects apart and decreases the intra-class variations by pulling representations extracted from the same subject together. Experiments show that our method achieves state-of-the-art performance on standard benchmark datasets, demonstrating its superiority over existing methods.
KW - Biometrics
UR - http://www.scopus.com/inward/record.url?scp=85078783798&partnerID=8YFLogxK
U2 - 10.1109/CVPR.2019.00483
DO - 10.1109/CVPR.2019.00483
M3 - Conference contribution
AN - SCOPUS:85078783798
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 4695
EP - 4704
BT - Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
PB - IEEE Computer Society
T2 - 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2019
Y2 - 16 June 2019 through 20 June 2019
ER -