Training of Binary Classifiers with Quantum Optimization

Published August 12, 2016, 2:51
Modern machine learning theory formulates the training of a classifier as the minimization of an objective function that is the sum of two terms: the empirical risk, which characterizes how well the classifier performs on a training data set, and the regularization, which controls the classifier's complexity. I will discuss the advantages that can be obtained if either of these terms is chosen to be non-convex. A non-convex risk allows the training to cope with a significant amount of label noise while retaining the ability to learn a Bayes-optimal classifier. This reduces the quality requirements for the training data, a major bottleneck for machine learning applications, and thus increases the autonomy of the learner. Non-convex regularization can yield very sparse classifiers, leading to faster execution and to classifiers suitable for power-constrained environments. I will describe our efforts to map these training problems onto quadratic binary optimization, the native input format of the D-Wave quantum optimization processors. The talk will discuss the evidence that the processors behave quantum mechanically and the challenge of performing the mapping so that only a sufficiently small number of ancillary qubits is required.
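The abstract does not spell out the mapping itself, but a well-known instance of this idea is the QBoost-style formulation (Neven et al.), in which training reduces to selecting binary weights over a pool of weak classifiers and the squared training error plus an L0 sparsity penalty expands directly into a quadratic form over binary variables. The sketch below illustrates that reduction; the function names and toy data are illustrative assumptions, and an exhaustive search stands in for the D-Wave hardware, which would receive the same QUBO matrix.

```python
import itertools

import numpy as np


def build_qubo(H, y, lam=0.05):
    """Build a QUBO matrix for selecting weak classifiers with binary weights.

    H   : (S, N) matrix of weak-classifier outputs in {-1, +1}
    y   : (S,) vector of training labels in {-1, +1}
    lam : strength of the L0 (sparsity) penalty on the binary weights

    Minimizing w^T Q w over w in {0, 1}^N is equivalent, up to an additive
    constant, to minimizing
        sum_s ((1/N) * sum_i w_i * H[s, i] - y[s])^2  +  lam * sum_i w_i.
    """
    S, N = H.shape
    Q = (H.T @ H) / N**2                                   # quadratic couplings
    Q[np.diag_indices(N)] += lam - (2.0 / N) * (H.T @ y)   # linear terms (w_i^2 = w_i)
    return Q


def solve_qubo_brute_force(Q):
    """Exhaustive stand-in for the quantum annealer (feasible only for small N)."""
    N = Q.shape[0]
    best_w, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=N):
        w = np.asarray(bits, dtype=float)
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e


# Toy example: 6 weak classifiers on 8 samples. Classifier 0 matches the
# labels exactly and classifier 1 is perfectly anti-correlated with them,
# so the minimization should keep the former and deselect the latter.
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=8)
H = np.column_stack([y, -y] + [rng.choice([-1, 1], size=8) for _ in range(4)])

w, energy = solve_qubo_brute_force(build_qubo(H, y))
print("selected weights:", w, "energy:", energy)
```

Note that the L0 penalty enters only on the diagonal because w_i^2 = w_i for binary variables; this is what makes the non-convex sparsity term, intractable for continuous solvers, a natural fit for binary optimization hardware.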