TY - GEN
T1 - From UI design image to GUI skeleton
T2 - 40th International Conference on Software Engineering, ICSE 2018
AU - Chen, Chunyang
AU - Su, Ting
AU - Meng, Guozhu
AU - Xing, Zhenchang
AU - Liu, Yang
N1 - Publisher Copyright:
© 2018 ACM.
PY - 2018/5/27
Y1 - 2018/5/27
AB - A GUI skeleton is the starting point for implementing a UI design image. To obtain a GUI skeleton from a UI design image, developers have to visually understand UI elements and their spatial layout in the image, and then translate this understanding into proper GUI components and their compositions. Automating this visual understanding and translation would be beneficial for bootstrapping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of GUI skeletons to generate. Existing tools are rigid as they depend on heuristically designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation for translating a UI design image into a GUI skeleton. Our translator learns to extract visual features in UI images, encode these features' spatial layouts, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. For training our translator, we develop an automated GUI exploration method to automatically collect large-scale UI data from real-world applications. We carry out extensive experiments to evaluate the accuracy, generality and usefulness of our approach.
KW - Deep learning
KW - Reverse engineering
KW - User interface
UR - http://www.scopus.com/inward/record.url?scp=85049411978&partnerID=8YFLogxK
U2 - 10.1145/3180155.3180240
DO - 10.1145/3180155.3180240
M3 - Conference contribution
T3 - Proceedings - International Conference on Software Engineering
SP - 665
EP - 676
BT - Proceedings of the 40th International Conference on Software Engineering, ICSE 2018
PB - IEEE Computer Society
Y2 - 27 May 2018 through 3 June 2018
ER -