Learning Models of Human Activities and Interactions using Multi-Modal Wearable Sensors

Published September 6, 2016, 16:40
If computers are to become proactive and assistive, they will need to sense and infer human activities and interactions in unconstrained real-world settings. This presents us with the challenge of building systems that can handle the noisy data and complexities of the real world. Furthermore, deployment of such technology imposes some tight constraints: users' privacy must be protected, the sensors must be lightweight and unobtrusive, and the machine learning algorithms must not require intensive human supervision or guidance. In this talk, I will present the work we have done on recognizing people's behavior under these constraints. First, I will describe our work on activity recognition, including the methods we have developed for feature selection and parameter estimation in conditional random fields (CRFs) and a semi-supervised extension that reduces the need for human labeling. Second, I will present our work on modeling social networks and their dynamics from sensor data: specifically, how we infer multi-person conversations in a privacy-sensitive manner, quantify conversational influence, learn the network topology, and discover novel correlations between people's behavioral signals and their social roles. I will provide experimental validation of our approach on several real-world datasets, demonstrating advantages over existing methods in terms of both privacy and accuracy.
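To make the CRF-based activity-recognition setting concrete, the sketch below shows Viterbi decoding for a linear-chain CRF: given per-timestep label scores (e.g. derived from wearable-sensor features) and label-transition scores, it recovers the highest-scoring activity sequence. The two activity labels, the toy score values, and the function name are illustrative assumptions, not the talk's actual model or learned parameters.

```python
import numpy as np

def viterbi(emission, transition):
    """Most likely label sequence for a linear-chain CRF.

    emission:   (T, K) array of per-timestep label scores
    transition: (K, K) array; transition[i, j] scores moving from label i to j
    Returns the arg-max label-index sequence of length T.
    """
    T, K = emission.shape
    score = np.zeros((T, K))          # best path score ending in each label
    back = np.zeros((T, K), dtype=int)  # backpointers for path recovery
    score[0] = emission[0]
    for t in range(1, T):
        # cand[i, j]: best score ending at t-1 in i, then moving to j at t
        cand = score[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)
    # Backtrack from the best final label.
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example (hypothetical scores): labels 0 = "walking", 1 = "sitting".
# Emission scores favor walking early and sitting late; the transition
# matrix rewards staying in the same activity, smoothing the labeling.
emission = np.array([[2.0, 0.0],
                     [1.5, 0.5],
                     [0.2, 1.8],
                     [0.0, 2.0]])
transition = np.array([[ 0.5, -0.5],
                       [-0.5,  0.5]])
print(viterbi(emission, transition))  # -> [0, 0, 1, 1]
```

A trained CRF would learn these scores from sensor features via the feature-selection and parameter-estimation methods the talk describes; the decoding step above is the same regardless of how the scores were obtained.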