TY - GEN
T1 - Object-based touch manipulation for remote guidance of physical tasks
AU - Adcock, Matt
AU - Ranatunga, Dulitha
AU - Smith, Ross
AU - Thomas, Bruce H.
N1 - Publisher Copyright:
Copyright © 2014 ACM.
PY - 2014/10/4
Y1 - 2014/10/4
N2 - This paper presents a spatial multi-touch system for the remote guidance of physical tasks that uses semantic information about the physical properties of the environment. It enables a remote expert to observe a video feed of the local worker's environment and directly specify object movements via a touch display. Visual feedback for the gestures is displayed directly in the local worker's physical environment with Spatial Augmented Reality and observed by the remote expert through the video feed. A virtual representation of the physical environment is captured with a Kinect, which facilitates the context-based interactions. We evaluate two methods of remote expert interaction, object-based and sketch-based, and also investigate the impact of two camera positions, top and side, on task performance. Our results indicate that translation and aggregate tasks could be performed more accurately via the object-based technique when the top-down camera feed was used, whereas with the side-on camera view, sketching was faster and rotations were more accurate. We also found that for object-based interactions the top view was better on all four of our measured criteria, while for sketching no significant difference was found between camera views.
AB - This paper presents a spatial multi-touch system for the remote guidance of physical tasks that uses semantic information about the physical properties of the environment. It enables a remote expert to observe a video feed of the local worker's environment and directly specify object movements via a touch display. Visual feedback for the gestures is displayed directly in the local worker's physical environment with Spatial Augmented Reality and observed by the remote expert through the video feed. A virtual representation of the physical environment is captured with a Kinect, which facilitates the context-based interactions. We evaluate two methods of remote expert interaction, object-based and sketch-based, and also investigate the impact of two camera positions, top and side, on task performance. Our results indicate that translation and aggregate tasks could be performed more accurately via the object-based technique when the top-down camera feed was used, whereas with the side-on camera view, sketching was faster and rotations were more accurate. We also found that for object-based interactions the top view was better on all four of our measured criteria, while for sketching no significant difference was found between camera views.
KW - 3D CHI
KW - Multi-touch interaction
KW - Object manipulation
KW - Remote guidance
KW - Spatially augmented reality
UR - http://www.scopus.com/inward/record.url?scp=84910674448&partnerID=8YFLogxK
U2 - 10.1145/2659766.2659768
DO - 10.1145/2659766.2659768
M3 - Conference contribution
T3 - SUI 2014 - Proceedings of the 2nd ACM Symposium on Spatial User Interaction
SP - 113
EP - 122
BT - SUI 2014 - Proceedings of the 2nd ACM Symposium on Spatial User Interaction
PB - Association for Computing Machinery
T2 - 2nd ACM Symposium on Spatial User Interaction, SUI 2014
Y2 - 4 October 2014 through 5 October 2014
ER -