Microsoft Research
Published November 7, 2016, 20:58
In machine learning, traditional linear prediction techniques are well understood, and methods for their efficient solution have been developed. Many real-world applications are better modelled using non-linear techniques, which often have high computational requirements. Kernel methods utilise linear methods in a non-linear feature space and combine the advantages of both. Commonly used kernel methods include the support vector machine (SVM), Gaussian processes and regularisation networks. These are batch-based: a global optimisation is conducted over all input exemplars to create a model. In contrast, online methods, such as the kernel recursive least squares (KRLS) algorithm, update the model state in a recursive and incremental fashion upon receiving a new exemplar. Although not as extensively studied as batch methods, online approaches are advantageous when throughput and latency are critical.

In this talk I will describe efforts in the Computer Engineering Laboratory to produce high-performance FPGA-based implementations of online kernel methods. These have included:
(1) a microcoded vector processor optimised for kernel methods;
(2) a fully pipelined implementation of kernel normalised least mean squares (KNLMS) which achieves 160 GFLOPS;
(3) an implementation of the Naive Online regularised Risk Minimisation Algorithm (NORMA) which uses "braiding" to resolve data hazards and reduce latency by an order of magnitude; and
(4) a distributed kernel recursive least squares algorithm which constructs a compact model while enabling massive parallelism.
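As a rough illustration of the kind of per-exemplar update these designs accelerate, below is a minimal Python sketch of a NORMA-style online classifier with the hinge loss and a Gaussian kernel. The class, parameter names, and default values are illustrative assumptions for this sketch, not the implementations described in the talk.

```python
import numpy as np

class NORMA:
    """Sketch of a NORMA-style online kernel classifier (hinge loss).

    The model is f(x) = sum_i alpha_i * k(x_i, x) over stored exemplars.
    All names and defaults here are illustrative assumptions.
    """

    def __init__(self, eta=0.1, lam=0.01, gamma=1.0, budget=200):
        self.eta = eta        # learning rate
        self.lam = lam        # regularisation parameter
        self.gamma = gamma    # Gaussian kernel width
        self.budget = budget  # dictionary size limit (truncation)
        self.X = []           # stored exemplars
        self.alpha = []       # expansion coefficients

    def _kernel(self, x):
        # Gaussian kernel between x and every stored exemplar.
        if not self.X:
            return np.empty(0)
        D = np.asarray(self.X) - np.asarray(x, dtype=float)
        return np.exp(-self.gamma * np.sum(D * D, axis=1))

    def predict(self, x):
        # f(x) = sum_i alpha_i * k(x_i, x); 0.0 when the model is empty.
        return float(np.dot(self.alpha, self._kernel(x)))

    def update(self, x, y):
        # One online step: predict with the current model, shrink all
        # coefficients (gradient of the regulariser), then add the new
        # exemplar only on a margin violation.
        f = self.predict(x)
        self.alpha = [(1.0 - self.eta * self.lam) * a for a in self.alpha]
        if y * f < 1.0:  # hinge-loss margin violated
            self.X.append(np.asarray(x, dtype=float))
            self.alpha.append(self.eta * y)
        # Truncate the oldest term to keep the model compact.
        if len(self.X) > self.budget:
            self.X.pop(0)
            self.alpha.pop(0)
        return f
```

A driver loop would simply call update(x_t, y_t) for each arriving pair. Note the read-after-write dependency inside update: the prediction must be computed with the current coefficients before they are modified, which is exactly the kind of data hazard that the "braiding" technique resolves when the update is fully pipelined in hardware.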
See more on this video at microsoft.com/en-us/research/v...