TY - JOUR
T1 - Introduction to the special issue on tensor decomposition for signal processing and machine learning
AU - Chen, Hongyang
AU - Vorobyov, Sergiy A.
AU - So, Hing Cheung
AU - Ahmad, Fauzia
AU - Porikli, Fatih
PY - 2021/4
Y1 - 2021/4
N2 - The papers in this special section focus on tensor decomposition for signal processing and machine learning. Tensor decomposition, also called tensor factorization, is useful for representing and analyzing multi-dimensional data. Tensor decompositions have been applied in signal processing applications (speech, acoustics, communications, radar, biomedicine), in machine learning (clustering, dimensionality reduction, latent factor models, subspace learning), and well beyond. These tools aid in learning a variety of models, including community models, probabilistic context-free grammars, Gaussian mixture models, and two-layer neural networks. Although considerable research has been carried out in this area, many challenges remain to be explored and addressed, such as tensor deflation, massive tensor decompositions, and the high computational cost of algorithms. The multi-dimensional nature of signals and ever-larger data, particularly in next-generation advanced information and communication technology systems, provides good opportunities to exploit tensor-based models and tensor networks, with the aim of meeting strong requirements on system flexibility, convergence, and efficiency.
UR - http://www.scopus.com/inward/record.url?scp=85103964091&partnerID=8YFLogxK
U2 - 10.1109/JSTSP.2021.3065184
DO - 10.1109/JSTSP.2021.3065184
M3 - Editorial
SN - 1932-4553
VL - 15
SP - 433
EP - 437
JO - IEEE Journal on Selected Topics in Signal Processing
JF - IEEE Journal on Selected Topics in Signal Processing
IS - 3
M1 - 9393485
ER -