TY - JOUR
T1 - From known to the unknown
T2 - Transferring knowledge to answer questions about novel visual and semantic concepts
AU - Farazi, Moshiur R.
AU - Khan, Salman H.
AU - Barnes, Nick
N1 - Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2020/11
Y1 - 2020/11
N2 - Current Visual Question Answering (VQA) systems can answer intelligent questions about ‘known’ visual content. However, their performance drops significantly when questions about visually and linguistically ‘unknown’ concepts are presented during inference (the ‘Open-world’ scenario). A practical VQA system should be able to deal with novel concepts in real-world settings. To address this problem, we propose an exemplar-based approach that transfers knowledge from previously ‘known’ concepts to answer questions about the ‘unknown’. We learn a highly discriminative joint embedding (JE) space, where visual and semantic features are fused to give a unified representation. Once novel concepts are presented to the model, it looks for the closest match from an exemplar set in the JE space. This auxiliary information is used alongside the given image–question pair to refine visual attention in a hierarchical fashion. Our novel attention model is based on a dual-attention mechanism that combines the complementary effects of spatial and channel attention. Since handling high-dimensional exemplars on large datasets can be a significant challenge, we introduce an efficient matching scheme that uses a compact feature description for search and retrieval. To evaluate our model, we propose a new VQA dataset that separates unknown visual and semantic concepts from the training set. Our approach shows significant improvements over state-of-the-art VQA models on the proposed Open-World VQA dataset and other standard VQA datasets.
KW - Computer vision
KW - Dataset bias
KW - Deep learning
KW - Natural language processing
KW - Visual Question Answering
UR - http://www.scopus.com/inward/record.url?scp=85089810894&partnerID=8YFLogxK
U2 - 10.1016/j.imavis.2020.103985
DO - 10.1016/j.imavis.2020.103985
M3 - Article
SN - 0262-8856
VL - 103
JO - Image and Vision Computing
JF - Image and Vision Computing
M1 - 103985
ER -