TY - GEN
T1 - Efficiently learning a distance metric for large margin nearest neighbor classification
AU - Park, Kyoungup
AU - Shen, Chunhua
AU - Hao, Zhihui
AU - Kim, Junae
PY - 2011
Y1 - 2011
N2 - We consider the problem of learning a Mahalanobis distance metric for improving nearest neighbor classification. Our work is built upon the large margin nearest neighbor (LMNN) classification framework. Due to the semidefiniteness constraint in the optimization problem of LMNN, it is not scalable in terms of the dimensionality of the input data. The original LMNN solver partially alleviates this problem by adopting alternating projection methods instead of standard interior-point methods. Still, at each iteration, the computational complexity is at least O(D^3) (D is the dimension of the input data). In this work, we propose a column generation based algorithm to solve the LMNN optimization problem much more efficiently. Our algorithm is much more scalable in that at each iteration, it does not need a full eigen-decomposition. Instead, we only need to find the leading eigenvalue and its corresponding eigenvector, which is of O(D^2) complexity. Experiments show the efficiency and efficacy of our algorithms.
AB - We consider the problem of learning a Mahalanobis distance metric for improving nearest neighbor classification. Our work is built upon the large margin nearest neighbor (LMNN) classification framework. Due to the semidefiniteness constraint in the optimization problem of LMNN, it is not scalable in terms of the dimensionality of the input data. The original LMNN solver partially alleviates this problem by adopting alternating projection methods instead of standard interior-point methods. Still, at each iteration, the computational complexity is at least O(D^3) (D is the dimension of the input data). In this work, we propose a column generation based algorithm to solve the LMNN optimization problem much more efficiently. Our algorithm is much more scalable in that at each iteration, it does not need a full eigen-decomposition. Instead, we only need to find the leading eigenvalue and its corresponding eigenvector, which is of O(D^2) complexity. Experiments show the efficiency and efficacy of our algorithms.
UR - http://www.scopus.com/inward/record.url?scp=80055053689&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9781577355083
T3 - Proceedings of the National Conference on Artificial Intelligence
SP - 453
EP - 458
BT - AAAI-11 / IAAI-11 - Proceedings of the 25th AAAI Conference on Artificial Intelligence and the 23rd Innovative Applications of Artificial Intelligence Conference
T2 - 25th AAAI Conference on Artificial Intelligence and the 23rd Innovative Applications of Artificial Intelligence Conference, AAAI-11 / IAAI-11
Y2 - 7 August 2011 through 11 August 2011
ER -