Microsoft Research
Published June 27, 2016, 21:19
Real-time augmented reality (AR) is actively studied as the future user interface and experience (UI/UX) for smart glasses platforms. However, due to the small battery size and limited computing power of current smart glasses, real-time markerless AR has not yet been realized in a glasses-type form factor. In this presentation, I propose a real-time, low-power AR processor for advanced, recognition-based AR applications. For high throughput, the processor adopts task-level pipelined SIMD-PE clusters and a congestion-aware network-on-chip (NoC); together, these exploit data-level parallelism (DLP) and task-level parallelism (TLP) in a pipelined multicore architecture. For low power consumption, it employs a vocabulary forest accelerator and a visual attention algorithm that reduces the overall workload by removing background clutter from the input video frames, cutting unnecessary external memory accesses and core activations. The proposed processor is successfully demonstrated on a battery-powered head-mounted display platform, performing the full chain of AR operations in real time.
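To illustrate the workload-reduction idea behind the visual attention stage, here is a minimal software sketch, not the processor's actual hardware design: the frame is split into tiles, a cheap per-tile saliency score (frame-difference based, an assumption made for this example) gates which tiles are fetched and dispatched to processing, and non-salient background tiles are skipped. All names and the threshold value are illustrative.

```c
/* Conceptual sketch of attention-gated tile processing (illustrative only). */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define FRAME_W  640
#define FRAME_H  480
#define TILE     32                       /* tile edge in pixels              */
#define TILES_X  (FRAME_W / TILE)
#define TILES_Y  (FRAME_H / TILE)
#define SALIENCY_THRESHOLD 12             /* assumed tuning knob              */

/* Crude saliency proxy: mean absolute difference against the previous frame. */
static uint32_t saliency_of(const uint8_t *cur, const uint8_t *prev,
                            int tx, int ty)
{
    uint32_t acc = 0;
    for (int y = 0; y < TILE; ++y)
        for (int x = 0; x < TILE; ++x) {
            int idx = (ty * TILE + y) * FRAME_W + (tx * TILE + x);
            int d   = (int)cur[idx] - (int)prev[idx];
            acc    += (uint32_t)(d < 0 ? -d : d);
        }
    return acc / (TILE * TILE);
}

/* Stand-in for dispatching a salient tile to a SIMD-PE cluster. */
static void process_tile(int tx, int ty) { (void)tx; (void)ty; }

int main(void)
{
    uint8_t *cur  = calloc(FRAME_W * FRAME_H, 1);
    uint8_t *prev = calloc(FRAME_W * FRAME_H, 1);
    if (!cur || !prev) return 1;

    /* Fake some foreground motion in the top rows of the frame. */
    for (int i = 0; i < 100 * FRAME_W; ++i) cur[i] = (uint8_t)(rand() & 0xFF);

    int active = 0;
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx)
            if (saliency_of(cur, prev, tx, ty) > SALIENCY_THRESHOLD) {
                process_tile(tx, ty);      /* only salient tiles cost power   */
                ++active;
            }

    printf("active tiles: %d / %d\n", active, TILES_X * TILES_Y);
    free(cur);
    free(prev);
    return 0;
}
```

In this sketch only the tiles that pass the saliency gate are processed, which is the same principle by which the attention stage described above avoids external memory accesses and core activations for background regions.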