TY - CHAP
T1 - Locally-Connected Interrelated Network
T2 - A Forward Propagation Primitive
AU - Collins, Nicholas
AU - Kurniawati, Hanna
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2021
Y1 - 2021
N2 - End-to-end learning for planning is a promising approach for finding good robot strategies in situations where the state transition, observation, and reward functions are initially unknown. Many neural network architectures for this approach have shown positive results. Across these networks, seemingly small components have been used repeatedly in different architectures, so improving the efficiency of these components has great potential to improve the overall performance of the networks. This paper aims to improve one such component: the forward propagation module. In particular, we propose the Locally-Connected Interrelated Network (LCI-Net), a novel type of locally connected layer with unshared but interrelated weights, to improve the efficiency of information propagation and of learning stochastic transition models for planning. LCI-Net is a small differentiable neural network module that can be plugged into various existing architectures. For evaluation, we apply LCI-Net to QMDP-Net, a neural network for solving POMDP problems whose transition, observation, and reward functions are learned. Simulation tests on benchmark problems involving 2D and 3D navigation and grasping indicate promising results: replacing only the forward propagation module with LCI-Net improves QMDP-Net's generalization capability by a factor of up to 10.
AB - End-to-end learning for planning is a promising approach for finding good robot strategies in situations where the state transition, observation, and reward functions are initially unknown. Many neural network architectures for this approach have shown positive results. Across these networks, seemingly small components have been used repeatedly in different architectures, so improving the efficiency of these components has great potential to improve the overall performance of the networks. This paper aims to improve one such component: the forward propagation module. In particular, we propose the Locally-Connected Interrelated Network (LCI-Net), a novel type of locally connected layer with unshared but interrelated weights, to improve the efficiency of information propagation and of learning stochastic transition models for planning. LCI-Net is a small differentiable neural network module that can be plugged into various existing architectures. For evaluation, we apply LCI-Net to QMDP-Net, a neural network for solving POMDP problems whose transition, observation, and reward functions are learned. Simulation tests on benchmark problems involving 2D and 3D navigation and grasping indicate promising results: replacing only the forward propagation module with LCI-Net improves QMDP-Net's generalization capability by a factor of up to 10.
UR - http://www.scopus.com/inward/record.url?scp=85107075470&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-66723-8_8
DO - 10.1007/978-3-030-66723-8_8
M3 - Chapter
T3 - Springer Proceedings in Advanced Robotics
SP - 124
EP - 142
BT - Springer Proceedings in Advanced Robotics
PB - Springer Science and Business Media B.V.
ER -