Recent Results on Learning Filters and Style Transfer

Published March 23, 2018, 22:24
In the first part of this talk, I will present recent results on learning image filters for low-level vision. We formulate numerous low-level vision problems (e.g., edge-preserving filtering and denoising) as recursive image filtering via a hybrid neural network. The network contains several spatially variant recurrent neural networks (RNNs) that act as a group of distinct recursive filters for each pixel, and a deep convolutional neural network (CNN) that learns the weights of the RNNs. The deep CNN learns to regulate recurrent propagation for various tasks and effectively guides it over the entire image. The proposed model does not need a large number of convolutional channels or large kernels to learn features for low-level vision filters; it is much smaller and faster than a deep CNN-based image filter. Experimental results show that many low-level vision tasks can be learned effectively and carried out in real time by the proposed algorithm. In addition, we show that spatial propagation can be carried out effectively in a two-dimensional manner, with better results.
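To make the idea concrete, below is a minimal sketch in the spirit of the description above: a small CNN predicts a per-pixel recurrence weight p in (0, 1), and a spatially variant one-dimensional recursive pass then propagates information across the image as h_t = (1 - p_t) * x_t + p_t * h_{t-1}. The layer sizes, the single left-to-right pass, and the class name HybridRecursiveFilter are illustrative assumptions, not the exact architecture from the talk.

```python
# A minimal sketch, assuming a single left-to-right recursive pass and a
# 3-layer CNN; the model described in the talk is more elaborate (e.g.,
# multiple propagation directions). All names here are hypothetical.
import torch
import torch.nn as nn

class HybridRecursiveFilter(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # CNN branch: predicts per-pixel recurrence weights p in (0, 1)
        # that control how strongly the recursive filter propagates.
        self.weight_net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (B, 3, H, W) input image
        p = self.weight_net(x)            # spatially variant RNN weights
        prev = x[..., 0]                  # initialize with the first column
        outs = [prev]
        for t in range(1, x.shape[-1]):   # left-to-right recursive filtering:
            # h_t = (1 - p_t) * x_t + p_t * h_{t-1}
            prev = (1 - p[..., t]) * x[..., t] + p[..., t] * prev
            outs.append(prev)
        return torch.stack(outs, dim=-1)  # filtered image, same shape as x

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    print(HybridRecursiveFilter()(img).shape)  # torch.Size([1, 3, 64, 64])
```

Because the heavy CNN only predicts the weights and the recursive filtering itself is a cheap scan over pixels, a model of this form can stay small and run in real time, which is the efficiency point made above.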

In the second part, I will present recent results on style transfer. I will first present an algorithm that learns a single style transfer network from training examples of 1,000 styles. Next, I will present recent results on universal style transfer without prior learning. As most style transfer methods significantly distort the content of the input image, I will also discuss the most recent results on photorealistic style transfer.
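As an illustration of what a per-style-training-free feature transform can look like, the sketch below matches the second-order statistics of content features to those of style features via a whitening-and-coloring transform. This is one common instance of the general idea behind universal style transfer without prior learning, not necessarily the specific method presented in the talk; the function name, feature shapes, and the regularizer eps are assumptions for illustration.

```python
# A minimal sketch of a learning-free feature transform: whiten the content
# features, then color them with the style covariance and mean. In practice
# this would be applied to encoder features (e.g., from a pretrained VGG)
# and then decoded back to an image; those stages are omitted here.
import torch

def whiten_color_transform(fc, fs, eps=1e-5):
    """fc, fs: (C, H, W) content/style feature maps from the same encoder layer."""
    C, H, W = fc.shape
    fc = fc.reshape(C, -1)
    fs = fs.reshape(C, -1)

    # Center the content features and whiten them with their covariance.
    fc_mean = fc.mean(dim=1, keepdim=True)
    fc_c = fc - fc_mean
    cov_c = fc_c @ fc_c.T / (fc_c.shape[1] - 1) + eps * torch.eye(C)
    ec, vc = torch.linalg.eigh(cov_c)
    whitened = vc @ torch.diag(ec.clamp_min(eps).rsqrt()) @ vc.T @ fc_c

    # Color the whitened features with the style covariance, re-add style mean.
    fs_mean = fs.mean(dim=1, keepdim=True)
    fs_c = fs - fs_mean
    cov_s = fs_c @ fs_c.T / (fs_c.shape[1] - 1) + eps * torch.eye(C)
    es, vs = torch.linalg.eigh(cov_s)
    colored = vs @ torch.diag(es.clamp_min(eps).sqrt()) @ vs.T @ whitened + fs_mean

    return colored.reshape(C, H, W)

if __name__ == "__main__":
    content = torch.rand(64, 32, 32)
    style = torch.rand(64, 32, 32)
    print(whiten_color_transform(content, style).shape)  # torch.Size([64, 32, 32])
```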

When time allows, I will also give previews of the most recent results on portraiture rendering from a monocular camera, image/video segmentation, and semi-supervised optical flow.

See more at microsoft.com/en-us/research/v...