Computational Video: Methods for Video Segmentation and Video Stabilization, and their Applications.
Microsoft Research
Published June 22, 2016
In this talk, I will present two specific methods for Computational Video and their applications.

First, I will describe a method for video stabilization. I will describe a novel algorithm that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer, running live on youtube.com.

Second, I will describe an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a region graph over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video. This system is now available for use via the videosegmentation.com site. I will describe some applications of how this system is used for dynamic scene understanding.

This talk is based on research by Matthias Grundmann, Daniel Castro, and S. Hussain Raza, carried out during their studies at GA Tech. Some parts of the work described above were also done at Google, where Matthias Grundmann, Vivek Kwatra, and Mei Han work, and where Professor Essa is a consultant. For more details, see prof.irfanessa.com
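As a rough illustration of the L1-optimal camera path idea described above (not the authors' actual formulation, which optimizes weighted first, second, and third path derivatives over 2-D camera transforms with crop-window constraints), the following Python sketch smooths a 1-D camera path by minimizing the L1 norm of its first difference, subject to the smoothed path staying within a fixed crop margin of the original. The margin value and the use of scipy.optimize.linprog are illustrative assumptions.

```python
# Toy 1-D L1 path smoothing: minimize sum_t |p_{t+1} - p_t|
# subject to |p_t - c_t| <= margin, solved as a linear program.
# This is only a simplified sketch of the L1-optimal-path idea.
import numpy as np
from scipy.optimize import linprog

def smooth_path_l1(c, margin=20.0):
    """c: original camera path (e.g. cumulative x-translation per frame)."""
    n = len(c)
    # Variables: x = [p_0 .. p_{n-1}, e_0 .. e_{n-2}],
    # where e_t >= |p_{t+1} - p_t| are slack variables.
    cost = np.concatenate([np.zeros(n), np.ones(n - 1)])

    # Inequalities:  p_{t+1} - p_t - e_t <= 0  and  p_t - p_{t+1} - e_t <= 0
    A = np.zeros((2 * (n - 1), 2 * n - 1))
    b = np.zeros(2 * (n - 1))
    for t in range(n - 1):
        A[2 * t, t], A[2 * t, t + 1], A[2 * t, n + t] = -1, 1, -1
        A[2 * t + 1, t], A[2 * t + 1, t + 1], A[2 * t + 1, n + t] = 1, -1, -1

    # Keep the smoothed path within +/- margin of the original (crop constraint).
    bounds = [(c[t] - margin, c[t] + margin) for t in range(n)] + \
             [(0, None)] * (n - 1)

    res = linprog(cost, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:n]

# Example: a shaky path around a slow pan.
if __name__ == "__main__":
    t = np.arange(200)
    shaky = 0.5 * t + 5.0 * np.random.randn(200)
    smooth = smooth_path_l1(shaky, margin=15.0)
    print("max deviation from original:", np.max(np.abs(smooth - shaky)))
```

In the deployed system the path is parameterized by per-frame camera transforms rather than a scalar, and the higher-order derivative terms in the objective yield the piecewise constant, linear, and parabolic motions that make the result look like deliberate camera work; the sketch above only captures the L1/linear-programming flavor of the optimization.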
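In the same spirit, here is a minimal sketch of graph-based grouping on a space-time graph, loosely following the hierarchical segmentation described above. This toy version merges nodes (pixels or supervoxels) in order of increasing edge weight using a union-find structure and a Felzenszwalb–Huttenlocher style merge criterion; the region-graph iteration that would produce the full hierarchy is only indicated in comments, and all thresholds and feature choices are assumptions.

```python
# Toy graph-based grouping over a space-time graph (single level).
# Nodes are space-time points with an appearance value; edges connect
# spatial and temporal neighbours. Repeating this on a region graph of
# the resulting segments (with region-level appearance features) would
# give one additional level of the hierarchy described in the talk.
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

def segment_graph(num_nodes, edges, k=10.0):
    """edges: list of (weight, u, v); larger k yields larger regions."""
    parent = list(range(num_nodes))
    size = [1] * num_nodes
    thresh = [k] * num_nodes          # internal-difference threshold per component
    for w, u, v in sorted(edges):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv and w <= min(thresh[ru], thresh[rv]):
            parent[rv] = ru
            size[ru] += size[rv]
            thresh[ru] = w + k / size[ru]
    return [find(parent, i) for i in range(num_nodes)]

# Example: a tiny 4x4x3 (x, y, t) volume with per-voxel intensity.
if __name__ == "__main__":
    X, Y, T = 4, 4, 3
    vol = np.random.rand(T, Y, X)
    idx = lambda t, y, x: (t * Y + y) * X + x
    edges = []
    for t in range(T):
        for y in range(Y):
            for x in range(X):
                for dt, dy, dx in [(0, 0, 1), (0, 1, 0), (1, 0, 0)]:
                    tt, yy, xx = t + dt, y + dy, x + dx
                    if tt < T and yy < Y and xx < X:
                        w = abs(vol[t, y, x] - vol[tt, yy, xx])
                        edges.append((w, idx(t, y, x), idx(tt, yy, xx)))
    labels = segment_graph(X * Y * T, edges, k=0.5)
    print("number of space-time regions:", len(set(labels)))
```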