Provable Algorithms for Learning Neural Networks

Published June 13, 2016, 21:42
We study the learning of fully connected neural networks for binary classification. For the networks of interest, we assume that the L1-norm of the incoming weights of every neuron is bounded by a constant. We further assume that there exists a neural network which separates the positive and negative samples by a constant margin. Under these assumptions, we present an efficient algorithm which learns a neural network with generalization error at most ε > 0. The algorithm's sample complexity and time complexity are polynomial in the input dimension and in 1/ε. We also present a kernel-based improper learning algorithm which achieves the same learnability result without relying on the separability assumption. Experiments on synthetic and real datasets demonstrate that the proposed algorithms are not only sound in theory but also useful in practice.
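The two assumptions above can be made concrete with a small numerical sketch. The code below is our own illustration, not the paper's algorithm: it projects a neuron's incoming weights onto the unit L1 ball (the bounded-L1 assumption) and then measures the classification margin that a one-hidden-layer network with those weights achieves on a toy dataset (the margin-separability assumption). All function names are hypothetical.

```python
import numpy as np

def project_l1(w, radius=1.0):
    """Euclidean projection of w onto the L1 ball of the given radius
    (standard sort-based projection)."""
    if np.abs(w).sum() <= radius:
        return w
    u = np.sort(np.abs(w))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(w) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

def forward(X, W, v):
    """One-hidden-layer network: real-valued score; labels are its sign."""
    return np.tanh(X @ W.T) @ v

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))

# A single neuron with L1-bounded incoming weights, used both to
# generate the labels and to classify them, so a positive margin exists.
true_w = project_l1(rng.normal(size=5))   # enforce ||w||_1 <= 1
y = np.sign(X @ true_w)

W = true_w[None, :]
v = np.array([1.0])
margin = np.min(y * forward(X, W, v))     # min_i y_i * f(x_i)
print(margin)  # positive margin -> the network separates the samples
```

In the paper's setting this margin is assumed to be bounded below by a constant for some network in the class; the L1 constraint on each neuron is what makes the class learnable in time polynomial in the dimension and 1/ε.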