Timothy B. Morgan, Diana Jarrell, Judy M. Vance
Keywords: Natural user interaction, Microsoft Kinect, depth camera, virtual reality
The availability of low-cost natural user interaction (NUI) hardware took a significant step forward in 2010 with the introduction of the Microsoft Kinect. Despite substantial work on the available software development kits for the Kinect, tasks beyond simple single-Kinect skeleton tracking remain challenging to implement. This paper introduces a software tool that significantly accelerates the prototyping and implementation of NUI in virtual reality, particularly for developers with limited programming skills. The tool provides a graphical user interface that offers a consistent development environment for defining Kinect settings and voice commands, coupled with a server that transmits skeleton and voice data using the Virtual Reality Peripheral Network (VRPN). Furthermore, the system can combine data from multiple Kinect sensors into a single data stream, abstracting the implementation details so the designer may focus on environment creation and development.
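To illustrate the kind of multi-sensor fusion the abstract describes, the sketch below shows one plausible way to merge a single skeleton joint's estimates from several Kinect sensors into one point via confidence-weighted averaging. All names, the `Joint` structure, and the weighting scheme are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch (not the paper's method): fusing one joint's
# position estimates from multiple Kinect sensors into a single value
# by confidence-weighted averaging.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float
    y: float
    z: float
    confidence: float  # assumed tracking confidence in [0, 1]

def merge_joint(estimates):
    """Combine one joint's estimates from several Kinects into one position.

    Returns an (x, y, z) tuple, or None if no sensor tracked the joint.
    """
    total = sum(j.confidence for j in estimates)
    if total == 0:
        return None  # joint occluded from every sensor
    return (sum(j.x * j.confidence for j in estimates) / total,
            sum(j.y * j.confidence for j in estimates) / total,
            sum(j.z * j.confidence for j in estimates) / total)

# Two Kinects observe the same hand joint from different viewpoints;
# the fused stream reports a single position.
kinect_a = Joint(0.10, 1.20, 2.00, confidence=0.9)
kinect_b = Joint(0.14, 1.18, 2.04, confidence=0.3)
print(merge_joint([kinect_a, kinect_b]))
```

In a full system, a merge step like this would run per joint per frame before the combined skeleton is published over a network transport such as VRPN, which is the role the paper assigns to its server component.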