Microsoft Research
Published June 13, 2016, 21:42
We study the learning of fully connected neural networks for binary classification. For the networks of interest, we assume that the L1-norm of the incoming weights of every neuron is bounded by a constant. We further assume that there exists a neural network that separates the positive and negative samples with a constant margin. Under these assumptions, we present an efficient algorithm that, for any ε > 0, learns a neural network with generalization error at most ε. The algorithm's sample complexity and time complexity are polynomial in the input dimension and in 1/ε. We also present a kernel-based improper learning algorithm that achieves the same learnability result without relying on the separability assumption. Experiments on synthetic and real datasets demonstrate that the proposed algorithms not only come with theoretical guarantees, but are also useful in practice.
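To make the two assumptions concrete, here is a minimal Python sketch of the network class the abstract describes: a one-hidden-layer fully connected network whose neurons have L1-bounded incoming weights, together with a check of the constant-margin separability assumption. The specific bound B, margin GAMMA, ReLU activation, and all helper names are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

# Hypothetical constants: B bounds the L1-norm of each neuron's incoming
# weights; GAMMA is the assumed separation margin. The abstract only says
# both are constants; these values are placeholders.
B = 1.0
GAMMA = 0.1

def enforce_l1(w, bound=B):
    """Rescale a weight vector into the L1 ball of radius `bound`,
    enforcing the bounded-incoming-weights assumption (simple rescaling,
    not an exact Euclidean projection)."""
    norm = np.abs(w).sum()
    return w if norm <= bound else w * (bound / norm)

def predict(x, W1, w2):
    """One-hidden-layer network: sign(w2 . relu(W1 @ x)).
    Each row of W1 and the output vector w2 are assumed L1-bounded."""
    h = np.maximum(W1 @ x, 0.0)   # ReLU hidden layer (an assumed choice)
    return np.sign(w2 @ h)

def separates_with_margin(X, y, W1, w2, gamma=GAMMA):
    """Check the separability assumption: every sample (rows of X, labels
    y in {-1, +1}) is classified correctly with margin at least gamma."""
    h = np.maximum(X @ W1.T, 0.0)   # hidden activations for all samples
    scores = h @ w2
    return np.all(y * scores >= gamma)
```

Under these assumptions, the learning problem is to recover, from samples alone, some network of this class whose generalization error is at most ε; the improper kernel-based variant is allowed to output a predictor outside this class.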