TY - JOUR
T1 - Pose-Based View Synthesis for Vehicles
T2 - A Perspective-Aware Method
AU - Lv, Kai
AU - Sheng, Hao
AU - Xiong, Zhang
AU - Li, Wei
AU - Zheng, Liang
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2020
Y1 - 2020
N2 - In this paper, we focus on the problem of novel view synthesis for vehicles. Some previous works address novel view synthesis in a controlled 3D environment by exploiting additional 3D details (i.e., camera viewpoints and underlying 3D models). However, in real scenarios, such 3D details are difficult to obtain. We find that introducing vehicle pose to represent the views of vehicles is an alternative paradigm that compensates for the lack of 3D details. In novel view synthesis, preserving local details is one of the most challenging problems. To address this problem, we propose a perspective-aware generative model (PAGM). We are motivated by the prior that vehicles are composed of quadrilateral planes, and preserving these rigid planes during image generation ensures that image details are kept. To this end, we leverage a classic image transformation method, i.e., the perspective transformation. In our GAN-based system, the perspective transformation is applied to the encoder feature maps, and the resulting maps are regarded as new conditions for the decoder. This strategy preserves the quadrilateral planes all the way through the network, thus shuttling texture details from the input image to the generated image. In the experiments, we show that PAGM can generate high-quality vehicle images with fine details. Quantitatively, our method outperforms several competing approaches that employ either a GAN or the perspective transformation. Code is available at: https://github.com/ilvkai/view-synthesis-for-vehicles.
AB - In this paper, we focus on the problem of novel view synthesis for vehicles. Some previous works address novel view synthesis in a controlled 3D environment by exploiting additional 3D details (i.e., camera viewpoints and underlying 3D models). However, in real scenarios, such 3D details are difficult to obtain. We find that introducing vehicle pose to represent the views of vehicles is an alternative paradigm that compensates for the lack of 3D details. In novel view synthesis, preserving local details is one of the most challenging problems. To address this problem, we propose a perspective-aware generative model (PAGM). We are motivated by the prior that vehicles are composed of quadrilateral planes, and preserving these rigid planes during image generation ensures that image details are kept. To this end, we leverage a classic image transformation method, i.e., the perspective transformation. In our GAN-based system, the perspective transformation is applied to the encoder feature maps, and the resulting maps are regarded as new conditions for the decoder. This strategy preserves the quadrilateral planes all the way through the network, thus shuttling texture details from the input image to the generated image. In the experiments, we show that PAGM can generate high-quality vehicle images with fine details. Quantitatively, our method outperforms several competing approaches that employ either a GAN or the perspective transformation. Code is available at: https://github.com/ilvkai/view-synthesis-for-vehicles.
KW - Novel view synthesis
KW - generative adversarial nets
KW - generative model
KW - perspective transformation
KW - vehicle pose
UR - http://www.scopus.com/inward/record.url?scp=85082510983&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.2980130
DO - 10.1109/TIP.2020.2980130
M3 - Article
SN - 1057-7149
VL - 29
SP - 5163
EP - 5174
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
M1 - 9042874
ER -
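
The abstract above describes applying a perspective (homography) transformation to encoder feature maps so that the vehicle's rigid quadrilateral planes are carried from the source view to the target view before decoding. Below is a minimal PyTorch sketch of that single operation, not the authors' released code; the function names, tensor shapes, and the DLT-based homography helper are assumptions made for illustration (the official implementation is at the GitHub URL in the record).

import torch
import torch.nn.functional as F

def homography_from_quads(src_quad, dst_quad):
    # Solve for the 3x3 homography H with dst ~ H @ src from four corner
    # correspondences (standard DLT); src_quad and dst_quad are (4, 2) tensors.
    rows = []
    for (x, y), (u, v) in zip(src_quad.tolist(), dst_quad.tolist()):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = torch.tensor(rows, dtype=torch.float32)
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vh = torch.linalg.svd(A)
    H = Vh[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_feature_map(feat, H):
    # Warp a (1, C, h, w) feature map: for every target pixel, sample the source
    # location given by H^-1, using grid_sample with normalized coordinates.
    _, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)
    src = (torch.linalg.inv(H) @ pix.T).T
    src = src[:, :2] / src[:, 2:3]
    src[:, 0] = 2.0 * src[:, 0] / (w - 1) - 1.0   # x to [-1, 1]
    src[:, 1] = 2.0 * src[:, 1] / (h - 1) - 1.0   # y to [-1, 1]
    grid = src.reshape(1, h, w, 2)
    return F.grid_sample(feat, grid, align_corners=True)

# Toy usage: warp a hypothetical encoder feature map so that a quadrilateral plane
# (e.g., a car-door plane) moves from its source-view corners to its target-view
# corners; the warped map would then condition the decoder, as the abstract states.
feat = torch.randn(1, 64, 32, 32)
src_quad = torch.tensor([[4., 4.], [27., 6.], [26., 28.], [5., 25.]])
dst_quad = torch.tensor([[6., 5.], [25., 4.], [27., 27.], [4., 26.]])
warped = warp_feature_map(feat, homography_from_quads(src_quad, dst_quad))
print(warped.shape)  # torch.Size([1, 64, 32, 32])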