Architectures for the FPGA Implementation of Online Kernel Methods

Published 7 November 2016, 20:58
In machine learning, traditional linear prediction techniques are well understood and methods for their efficient solution have been developed. Many real-world applications are better modelled using non-linear techniques, which often have high computational requirements. Kernel methods utilise linear methods in a non-linear feature space and combine the advantages of both. Commonly used kernel methods include the support vector machine (SVM), Gaussian processes and regularisation networks. These are batch-based, and a global optimisation is conducted over all input exemplars to create a model. In contrast, online methods, such as the kernel recursive least squares (KRLS) algorithm, update the state in a recursive and incremental fashion upon receiving a new exemplar. Although not as extensively studied as batch methods, online approaches are advantageous when throughput and latency are critical. In this talk I will describe efforts in the Computer Engineering Laboratory to produce high-performance FPGA-based implementations of online kernel methods. These have included: (1) a microcoded vector processor optimised for kernel methods; (2) a fully pipelined implementation of kernel normalised least mean squares which achieves 160 GFLOPS; (3) an implementation of Naive Online regularised Risk Minimization Algorithm (NORMA) which uses "braiding" to resolve data hazards and reduce latency by an order of magnitude; and (4) a distributed kernel recursive least squares algorithm which constructs a compact model while enabling massive parallelism.
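As a rough illustration of the online update pattern the abstract contrasts with batch methods, the sketch below implements a simple kernel least-mean-squares learner in Python. This is a hypothetical minimal example, not any of the FPGA implementations described in the talk; the class and parameter names are assumptions for illustration. Each new exemplar triggers an incremental update — the model grows a dictionary of stored inputs with one weight each, with no global re-optimisation over past data.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two input vectors
    d = x - y
    return np.exp(-gamma * np.dot(d, d))

class OnlineKernelLMS:
    """Minimal online kernel LMS sketch (hypothetical, for illustration):
    the model is a growing dictionary of exemplars, one weight each,
    updated recursively as each new sample arrives."""

    def __init__(self, step=0.5, gamma=1.0):
        self.step = step          # learning rate
        self.gamma = gamma        # kernel width
        self.dictionary = []      # stored input exemplars
        self.weights = []         # one coefficient per exemplar

    def predict(self, x):
        # prediction is a weighted sum of kernels against the dictionary
        return sum(w * gaussian_kernel(x, c, self.gamma)
                   for w, c in zip(self.weights, self.dictionary))

    def update(self, x, y):
        # incremental update on one new exemplar: compute the prediction
        # error, then add the exemplar with a weight proportional to it
        err = y - self.predict(x)
        self.dictionary.append(np.asarray(x, dtype=float))
        self.weights.append(self.step * err)
        return err
```

Note that the dictionary grows with every exemplar here; practical online kernel methods (including the distributed KRLS variant mentioned above) use sparsification or budget rules to keep the model compact, which is also what makes fixed-resource hardware implementations feasible.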

See more on this video at microsoft.com/en-us/research/v...