Deep Hierarchical Representation of Point Cloud Videos via Spatio-Temporal Decomposition

Hehe Fan*, Xin Yu, Yi Yang, Mohan Kankanhalli

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)


In point cloud videos, point coordinates are irregular and unordered, but point timestamps exhibit regularity and order. Grid-based networks for conventional video processing therefore cannot be directly used to model raw point cloud videos. In this work, we propose a point-based network that directly handles raw point cloud videos. First, to preserve the spatio-temporal local structure of point cloud videos, we design a point tube covering a local range along the spatial and temporal dimensions. By progressively subsampling frames and points and enlarging the spatial radius as point features are fed into higher-level layers, the point tube captures video structure in a spatio-temporally hierarchical manner. Second, to reduce the impact of spatial irregularity on temporal modeling, we decompose space and time when extracting point tube representations. Specifically, a spatial operation is employed to encode the local structure of each spatial region in a tube, and a temporal operation is used to encode the dynamics of the spatial regions along the tube. Empirically, the proposed network shows strong performance on 3D action recognition, 4D semantic segmentation, and scene flow estimation. Theoretically, we analyse why it is necessary to decompose space and time in point cloud video modeling and why the network outperforms existing methods.
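The spatio-temporal decomposition described in the abstract can be illustrated with a minimal toy sketch: a spatial operation first pools points within a radius of an anchor into a per-frame region descriptor, and a temporal operation then pools those descriptors along the tube. This is a hypothetical numpy illustration of the general idea, not the paper's learned implementation; the anchor, radius, and max-pooling choices are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point cloud video: T frames, N points per frame, xyz coordinates.
T, N = 4, 128
video = rng.uniform(-1.0, 1.0, size=(T, N, 3))

anchor = np.zeros(3)   # spatial center of the tube (assumed fixed across frames)
radius = 0.5           # spatial extent of the tube

def spatial_encode(frame, anchor, radius):
    """Spatial operation: encode the local structure of one spatial region.

    Points within `radius` of the anchor are expressed as relative offsets
    and max-pooled into a single descriptor. Max-pooling stands in for a
    learned, permutation-invariant spatial encoder, since the points within
    a region are unordered.
    """
    offsets = frame - anchor
    mask = np.linalg.norm(offsets, axis=1) < radius
    if not mask.any():
        return np.zeros(3)
    return offsets[mask].max(axis=0)

# Decomposition: space first (independently per frame), then time.
spatial_feats = np.stack([spatial_encode(f, anchor, radius) for f in video])

# Temporal operation: aggregate the ordered per-frame region descriptors
# along the tube (here a simple pool; the paper uses a learned operation).
tube_feat = spatial_feats.max(axis=0)

print(spatial_feats.shape, tube_feat.shape)  # (4, 3) (3,)
```

Because the spatial operation handles the irregular point coordinates before the temporal operation ever sees them, the temporal stage operates on a regular, ordered sequence of region descriptors, which is the motivation for decomposing the two dimensions.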

Original language: English
Pages (from-to): 9918-9930
Number of pages: 13
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 12
Publication status: Published - 1 Dec 2022
Externally published: Yes
