Microsoft Research
Published June 6, 2018, 21:27
Although existing inertial motion-capture systems work reasonably well (less than 10 degrees of Euler-angle error), their accuracy degrades when the sensors shift relative to the body segments they are attached to (±60 degrees mean error with 120 degrees standard deviation). We attribute this performance degradation to invalidated calibration values, sensor-movement latency, and displacement offsets. The latter in particular produces incongruent rotation matrices in kinematic algorithms that rely on homogeneous transformations. To overcome these limitations, we propose to employ machine-learning techniques. In particular, we use multi-layer perceptrons to learn sensor-displacement patterns from 3 hours of motion data collected from 12 test subjects in the lab over 215 trials. Furthermore, to compensate for calibration and latency errors, we process sensor data directly with deep neural networks to estimate the joint angles. Based on these approaches, we demonstrate up to a 69% reduction in tracking errors.
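To make the regression idea concrete, here is a minimal sketch of a multi-layer perceptron trained to map simulated sensor readings to joint angles. Everything in it is an assumption for illustration: the input/output dimensions, the hidden-layer size, and the synthetic data are placeholders, not the network architecture or dataset from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 9-dim "sensor" vectors -> 3 "joint angle" targets.
# A fixed nonlinear mapping stands in for real motion-capture recordings.
X = rng.normal(size=(256, 9))
W_true = rng.normal(size=(9, 3))
Y = np.tanh(X @ W_true)

# One hidden layer of 32 tanh units (sizes are arbitrary choices).
W1 = rng.normal(scale=0.1, size=(9, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 3)); b2 = np.zeros(3)

def forward(x):
    """Return hidden activations and predicted joint angles."""
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
losses = []
for _ in range(200):
    h, pred = forward(X)
    err = pred - Y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss.
    g_pred = 2.0 * err / len(X)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"initial MSE: {losses[0]:.4f}, final MSE: {losses[-1]:.4f}")
```

In the actual system the inputs would be calibrated IMU readings and the targets ground-truth joint angles from an optical reference, but the training loop has the same shape.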
See more at microsoft.com/en-us/research/v...