Google Chrome Developers
Published 28 March 2014, 14:14
Web UIs are getting better at detecting and optimising for touch, but it remains a struggle: the web offers much lower-level primitives to work with than native platforms do. Should we aim to abstract all spatial interaction into a 'pointer'? And how can more complex spatial interactions, such as gestures and 3D motion, be handled without extraordinary amounts of effort?