Learning using Large Datasets

Published September 6, 2016, 16:48
Statistical learning theory suggests choosing large-capacity models that barely avoid over-fitting the training data. In that perspective, all datasets are small. Things become more complicated when one considers the computational cost of processing large datasets. Computationally challenging training sets appear when one wants to emulate intelligence: biological brains learn quite efficiently from the continuous streams of perceptual data generated by our six senses, using limited amounts of sugar as a source of power. Computationally challenging training sets also appear when one wants to analyze the masses of data that describe the life of our computerized society. The more of these data we understand, the greater the competitive advantage we enjoy.

The first part of the presentation clarifies the relation between statistical efficiency, the design of learning algorithms, and their computational cost. The second part explores specific learning algorithms and their implementation in detail, with both simple and complex examples. The third part considers algorithms that learn with a single pass over the data: certain such algorithms have optimal statistical properties but are often too costly in practice, and workarounds are discussed.
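To make the single-pass idea concrete, here is a minimal sketch, not the method presented in the talk: stochastic gradient descent on a streaming L2-regularized logistic regression, where each example is visited exactly once. All names (single_pass_sgd, stream) and hyperparameters (lr0, l2, the step-size schedule) are my own illustrative assumptions.

import numpy as np

def single_pass_sgd(stream, dim, lr0=0.1, l2=1e-4):
    """Single-pass SGD for L2-regularized logistic regression.

    `stream` yields (x, y) pairs with x an ndarray of shape (dim,)
    and y in {-1, +1}; each example is processed exactly once.
    """
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        lr = lr0 / (1.0 + lr0 * l2 * t)          # decreasing step size
        margin = y * w.dot(x)
        # Numerically stable sigmoid(-margin) via tanh.
        sigma = 0.5 * (1.0 + np.tanh(-0.5 * margin))
        # Gradient of log(1 + exp(-margin)) plus the L2 penalty.
        g = -y * x * sigma + l2 * w
        w -= lr * g
    return w

# Usage: train on a synthetic stream of 100,000 examples.
rng = np.random.default_rng(0)
true_w = rng.normal(size=20)

def stream(n=100_000):
    for _ in range(n):
        x = rng.normal(size=20)
        yield x, 1.0 if x.dot(true_w) > 0 else -1.0

w = single_pass_sgd(stream(), dim=20)

The decreasing step size 1/(1 + lr0*l2*t) is one common schedule for strongly convex objectives; other schedules would serve the illustration equally well.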