Provable Algorithms for Learning Neural Networks

Published June 22, 2016, 19:07
We study the learning of fully connected neural networks for binary classification. For the networks of interest, we assume that the L1-norm of the incoming weights of every neuron is bounded by a constant. We further assume that there exists a neural network which separates the positive and negative samples by a constant margin. Under these assumptions, we present an efficient algorithm that learns a neural network with any desired generalization error ε > 0. The algorithm's sample complexity and time complexity are polynomial in the input dimension and in 1/ε. We also present a kernel-based improper learning algorithm which achieves the same learnability result without relying on the separability assumption. Experiments on synthetic and real datasets demonstrate that the proposed algorithms are not only theoretically sound but also useful in practice.
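
The assumptions can be stated formally. This is a reconstruction from the abstract's own wording; the symbols B (the constant bounding each neuron's incoming L1-norm) and γ (the separation margin) are assumed names not given in the text:

    \text{Norm bound: } \|w_j\|_1 \le B \text{ for the incoming weight vector } w_j \text{ of every neuron}
    \text{Margin: } \exists f^* \text{ with } y_i f^*(x_i) \ge \gamma \text{ for every sample } (x_i, y_i)
    \text{Goal: output } \hat{f} \text{ with } \Pr[\operatorname{sign}(\hat{f}(x)) \ne y] \le \varepsilon
    \text{ using } \mathrm{poly}(d, 1/\varepsilon) \text{ samples and time, where } d \text{ is the input dimension}

"Improper" learning here means the returned hypothesis need not itself be a neural network. A minimal sketch of that idea, assuming a polynomial-kernel SVM from scikit-learn as a stand-in; the abstract does not specify the paper's actual kernel, and the data-generating network below is purely illustrative:

    # Sketch: improper learning of an L1-bounded network via a kernel method.
    # The SVM's polynomial kernel is a hypothetical stand-in, not the paper's kernel.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic labels from a small one-hidden-layer ReLU network whose incoming
    # weights are normalized to satisfy ||w_j||_1 <= 1 (the abstract's norm bound).
    d, n = 10, 500
    X = rng.normal(size=(n, d))
    W = rng.normal(size=(3, d))
    W /= np.abs(W).sum(axis=1, keepdims=True)  # enforce the L1-norm constraint
    v = np.array([1.0, -1.0, 1.0])
    y = np.sign(np.maximum(X @ W.T, 0.0) @ v)
    y[y == 0] = 1.0

    # Improper learner: the hypothesis is a kernel machine, not a network in the
    # target class, yet it is trained and evaluated in polynomial time.
    clf = SVC(kernel="poly", degree=4, C=1.0).fit(X, y)
    print("training accuracy:", clf.score(X, y))

The point of the sketch is the shape of the guarantee: the target class is L1-bounded networks, but the returned classifier may live in a richer kernel class, which is why, per the abstract, this route does not need the separability assumption.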