TY - JOUR
T1 - SIPHON: Silicon Photonic Computing based Chiplet Accelerator for Deep Learning
AU - Xia, Chengpeng
AU - Zhang, Haibo
AU - Chen, Yawen
AU - Barnard, Amanda S.
N1 - © 2011 IEEE.
PY - 2025/10/10
Y1 - 2025/10/10
N2 - With the substantial increase in computing workload for deep learning applications, traditional electronic accelerators are facing growing constraints and approaching their practical limits. Silicon photonics has emerged as a promising technology for both communication and computation in accelerating deep learning workloads. Existing photonic accelerators focus on either designing monolithic photonic computing cores or using photonic interconnects with electronic cores for deep neural network (DNN) acceleration. However, the integration of photonic computing within many-core photonic interconnect architectures has not been extensively studied. In this paper, we propose a novel scalable chiplet-based photonic accelerator named SIPHON that leverages both photonic computing and communication for ultrafast and energy-efficient DNN training and inference. A photonic interconnection with dynamically configurable multiple communication modes is proposed to address the broad wavelength and bandwidth demands of general photonic computing cores. We design a photonic computing unit (PCU) for the multiply-accumulate operations and gradient computations in forward and backward propagations. A dataflow is developed to facilitate efficient data reuse and parallel computing by leveraging multiple communication modes. To validate SIPHON’s photonic computing, we prototype the optical platform using FPGA, RF, and photonic devices. Simulations on five deep learning models show that, compared with the GPU and the state-of-the-art optical-memristor-based backpropagation accelerators, SIPHON achieves up to 11.5× and 2.2× acceleration in time and 55.4× and 6.0× in energy efficiency for DNN training.
AB - With the substantial increase in computing workload for deep learning applications, traditional electronic accelerators are facing growing constraints and approaching their practical limits. Silicon photonics has emerged as a promising technology for both communication and computation in accelerating deep learning workloads. Existing photonic accelerators focus on either designing monolithic photonic computing cores or using photonic interconnects with electronic cores for deep neural network (DNN) acceleration. However, the integration of photonic computing within many-core photonic interconnect architectures has not been extensively studied. In this paper, we propose a novel scalable chiplet-based photonic accelerator named SIPHON that leverages both photonic computing and communication for ultrafast and energy-efficient DNN training and inference. A photonic interconnection with dynamically configurable multiple communication modes is proposed to address the broad wavelength and bandwidth demands of general photonic computing cores. We design a photonic computing unit (PCU) for the multiply-accumulate operations and gradient computations in forward and backward propagations. A dataflow is developed to facilitate efficient data reuse and parallel computing by leveraging multiple communication modes. To validate SIPHON’s photonic computing, we prototype the optical platform using FPGA, RF, and photonic devices. Simulations on five deep learning models show that, compared with the GPU and the state-of-the-art optical-memristor-based backpropagation accelerators, SIPHON achieves up to 11.5× and 2.2× acceleration in time and 55.4× and 6.0× in energy efficiency for DNN training.
KW - Chiplet
KW - Neural networks accelerator
KW - Photonic communication
KW - Photonic computing
KW - Photonic interconnect
UR - https://www.scopus.com/pages/publications/105018713587
U2 - 10.1109/JETCAS.2025.3619942
DO - 10.1109/JETCAS.2025.3619942
M3 - Article
AN - SCOPUS:105018713587
SN - 2156-3357
JO - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
JF - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
ER -