LoCUS: Learning Multiscale 3D-consistent Features from Posed Images

Dominik A. Kloepfer*, Dylan Campbell, João F. Henriques

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

    Abstract

    An important challenge for autonomous agents such as robots is to maintain a spatially and temporally consistent model of the world. This model must be maintained through occlusions, previously-unseen views, and long time horizons (e.g., loop closure and re-identification). It is still an open question how to train such a versatile neural representation without supervision. We start from the idea that the training objective can be framed as a patch retrieval problem: given an image patch in one view of a scene, we would like to retrieve (with high precision and recall) all patches in other views that map to the same real-world location. One drawback is that this objective does not promote reusability of features: by being unique to a scene (achieving perfect precision/recall), a representation will not be useful in the context of other scenes. We find that it is possible to balance retrieval and reusability by constructing the retrieval set carefully, leaving out patches that map to far-away locations. Similarly, we can easily regulate the scale of the learned features (e.g., points, objects, or rooms) by adjusting the spatial tolerance for considering a retrieval to be positive. We optimize for (smooth) Average Precision (AP) in a single unified ranking-based objective. This objective also doubles as a criterion for choosing landmarks or keypoints, as patches with high AP. We show results for creating sparse, multi-scale, semantic spatial maps composed of highly identifiable landmarks, with applications in landmark retrieval, localization, semantic segmentation and instance segmentation.
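    As a rough illustration of the objective the abstract describes (not the authors' implementation), the sketch below shows one way the retrieval set and a sigmoid-relaxed smooth AP could be computed in PyTorch. The names r_pos, r_max, temperature, and all function names are hypothetical placeholders: r_pos stands for the spatial tolerance that sets the feature scale (points, objects, or rooms), r_max for the cut-off that leaves out far-away patches to preserve reusability, and temperature for the sharpness of the ranking relaxation.

        import torch

        def build_retrieval_set(dists3d, r_pos, r_max):
            # Split candidates by the 3D distance between their back-projected
            # world locations and the query's location. Candidates beyond r_max
            # are left out of the retrieval set entirely; those within the
            # spatial tolerance r_pos count as positive retrievals.
            keep = dists3d <= r_max
            positives = dists3d <= r_pos
            return keep, positives

        def smooth_ap(scores, positives, temperature=0.01):
            # Smooth Average Precision for one query: a sigmoid approximates the
            # step function, making each candidate's rank differentiable.
            diff = scores.unsqueeze(0) - scores.unsqueeze(1)          # [N, N]
            soft_rank = torch.sigmoid(diff / temperature)
            rank_all = 1.0 + soft_rank.sum(dim=1) - soft_rank.diagonal()
            rank_pos = (1.0 + (soft_rank * positives.unsqueeze(0).float()).sum(dim=1)
                        - soft_rank.diagonal() * positives.float())
            # Assumes at least one positive is present; real code would skip
            # queries whose retrieval set contains no positives.
            return (rank_pos[positives] / rank_all[positives]).mean()

        def patch_retrieval_loss(query_feat, cand_feats, dists3d, r_pos=0.5, r_max=5.0):
            # Loss = 1 - smooth AP over the carefully constructed retrieval set.
            keep, positives = build_retrieval_set(dists3d, r_pos, r_max)
            scores = torch.nn.functional.cosine_similarity(
                query_feat.unsqueeze(0), cand_feats[keep], dim=-1)
            return 1.0 - smooth_ap(scores, positives[keep])

        # Example usage with random data (dimensions are illustrative only):
        cand_feats = torch.randn(64, 128)       # candidate patch embeddings
        query_feat = torch.randn(128)           # query patch embedding
        dists3d = torch.rand(64) * 10.0         # 3D distances to the query's location
        loss = patch_retrieval_loss(query_feat, cand_feats, dists3d)

    In this reading, increasing r_pos makes coarser features (objects or rooms) count as correct retrievals, while r_max controls how aggressively scene-specific negatives are excluded so the features stay reusable across scenes.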

    Original language: English
    Title of host publication: Proceedings - 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    Pages: 16588-16598
    Number of pages: 11
    ISBN (Electronic): 9798350307184
    DOIs
    Publication status: Published - 15 Jan 2024
    Event: 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023 - Paris, France
    Duration: 2 Oct 2023 → 6 Oct 2023

    Publication series

    Name: Proceedings of the IEEE International Conference on Computer Vision
    ISSN (Print): 1550-5499

    Conference

    Conference: 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023
    Country/Territory: France
    City: Paris
    Period: 2/10/23 → 6/10/23
