Abstract: Selection of moving targets is a common yet complex task in human-computer interaction and virtual reality. Predicting user intention can help address the challenges inherent in interaction techniques for moving-target selection. This article extends previous models by integrating relative head-target and hand-target features to predict intended moving targets. The features are computed over a time window ending at roughly two-thirds of the total selection time and evaluated using decision trees. With two candidate targets, the model predicts the intended target with up to ~72% accuracy on general moving-target selection tasks, and up to ~78% when task-related target properties are also included.