Sensor Fusion for Learning-based Motion Estimation in VR

Published October 9, 2018, 18:32
Tracking the 3D position of controllers is an important problem for AR and VR devices. The current state of the art in Windows Mixed Reality (MXR) tracks controller pose using a constellation of LEDs on the controllers. The performance of this vision-based system degrades in sunlight and when a controller moves out of the camera's field of view (Out-of-FOV). In this work, we employ sensor fusion within a learning-based framework to track the controller position. Specifically, we use ultrasound sensors on the hand-held controllers and the head-mounted display to obtain ranging information. We then incorporate this information into the feedback loop of an auto-regressive forecasting model built with Recurrent Neural Networks (RNNs). Finally, we fuse the RNN output with the default MXR tracking result via a Kalman Filter across the different positional states (including Out-of-FOV). With the proposed approach, we demonstrate near-isotropic accuracy in estimating controller position, which the default MXR tracking system could not previously achieve.
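The video's description does not include the implementation, but as a rough sketch of the fusion step it describes, the Python snippet below shows how an RNN position forecast might be combined with a vision-based estimate in a Kalman update, falling back to the forecast alone when the controller is Out-of-FOV. The function names, the noise covariances, and the `rnn_forecast` input are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def kalman_fuse(x_pred, P_pred, z, R):
    """Kalman measurement update for a directly observed 3D position."""
    H = np.eye(3)                        # measurement model: observe position itself
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)    # corrected position
    P = (np.eye(3) - K @ H) @ P_pred     # corrected covariance
    return x, P

def track_step(x, P, rnn_forecast, Q, z_vision=None, R_vision=None):
    """One tracking step (illustrative).

    rnn_forecast: position predicted by the auto-regressive RNN, which is
        assumed to have already consumed the ultrasound ranging inputs in
        its feedback loop.
    z_vision: vision-based (LED constellation) position estimate, or None
        when the controller is Out-of-FOV.
    """
    # Predict: take the RNN forecast as the prior, inflating uncertainty by Q.
    x_pred, P_pred = rnn_forecast, P + Q
    if z_vision is None:
        return x_pred, P_pred            # Out-of-FOV: rely on the RNN alone
    return kalman_fuse(x_pred, P_pred, z_vision, R_vision)

# Toy usage with made-up numbers.
x, P = np.zeros(3), np.eye(3) * 0.01
Q = np.eye(3) * 1e-4                     # assumed forecast (process) noise
R = np.eye(3) * 1e-3                     # assumed vision measurement noise
x, P = track_step(x, P,
                  rnn_forecast=np.array([0.10, 0.00, 0.20]),
                  Q=Q,
                  z_vision=np.array([0.11, -0.01, 0.19]),
                  R_vision=R)
print(x)
```

One point the sketch makes concrete: because the RNN forecast still provides a prediction when the vision measurement is missing, the filter degrades gracefully out of FOV instead of losing track outright.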

See more at microsoft.com/en-us/research/v...