TY - GEN
T1 - Neural Fields for Co-Reconstructing 3D Objects from Incidental 2D Data
AU - Campbell, Dylan
AU - Insafutdinov, Eldar
AU - Henriques, João F.
AU - Vedaldi, Andrea
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - We ask whether 3D objects can be reconstructed from real-world data collected for some other purpose, such as autonomous driving or augmented reality, thus inferring objects only incidentally. 3D reconstruction from incidental data is a major challenge because, in addition to significant noise, only a few views of each object are observed, which are insufficient for reconstruction. We approach this problem as a co-reconstruction task, where multiple objects are reconstructed together, learning shape and appearance priors for regularization. To do so, we introduce a neural radiance field that is conditioned via an attention mechanism on the identity of the individual objects. We further disentangle shape from appearance, and diffuse color from specular color, via an asymmetric two-stream network, which factors shared information from instance-specific details. We demonstrate the ability of this method to reconstruct full 3D objects from partial, incidental observations in autonomous driving and other datasets.
AB - We ask whether 3D objects can be reconstructed from real-world data collected for some other purpose, such as autonomous driving or augmented reality, thus inferring objects only incidentally. 3D reconstruction from incidental data is a major challenge because, in addition to significant noise, only a few views of each object are observed, which are insufficient for reconstruction. We approach this problem as a co-reconstruction task, where multiple objects are reconstructed together, learning shape and appearance priors for regularization. To do so, we introduce a neural radiance field that is conditioned via an attention mechanism on the identity of the individual objects. We further disentangle shape from appearance, and diffuse color from specular color, via an asymmetric two-stream network, which factors shared information from instance-specific details. We demonstrate the ability of this method to reconstruct full 3D objects from partial, incidental observations in autonomous driving and other datasets.
KW - 3D reconstruction
KW - novel view synthesis
UR - http://www.scopus.com/inward/record.url?scp=85206432500&partnerID=8YFLogxK
U2 - 10.1109/CVPRW63382.2024.00294
DO - 10.1109/CVPRW63382.2024.00294
M3 - Conference contribution
AN - SCOPUS:85206432500
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 2883
EP - 2893
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024
PB - IEEE Computer Society
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024
Y2 - 16 June 2024 through 22 June 2024
ER -