Tongue-Gesture Recognition in Head-Mounted Displays

Published December 22, 2022, 16:21
Head-mounted displays are often used in situations where a user's hands may be occupied, or unusable due to situational constraints or permanent movement impairments. Hands-free interaction is therefore critical for making mixed reality applications on head-mounted displays versatile and accessible. Traditionally, hands-free interaction relies on voice, which is not private and is unusable in loud environments, or on gaze, which is slow and demands attention. In this talk, I will propose a non-intrusive tongue-gesture interface using sensors in head-mounted displays that can be used either independently or in conjunction with gaze tracking to mitigate gaze tracking's limitations. I will then share gesture recognition results and usability metrics from an experimental study and demonstrate a system combining tongue gestures with gaze tracking.

Speaker: Tan Gemicioglu, Georgia Tech
Tan Gemicioglu is a summer intern in the MSR Audio & Acoustics Research Group and an undergraduate student at Georgia Tech. At MSR, they investigated multimodal brain-computer interfaces and gesture interaction, advised by Mike Winters and Yu-Te Wang. At Georgia Tech, they are advised by Thad Starner and Melody Jackson, studying passive haptic learning and movement-based brain-computer interfaces. Their primary research interests are wearable interfaces that assist communication and learning through physiological sensing and haptics.