Model Compression

Published 6 September 2016, 17:32
Accurate models are often complex models. For example, ensembles often contain hundreds or thousands of base-level classifiers. This complexity makes ensembles more difficult to store, more expensive to execute, and harder to interpret. Ultimately, it restricts their use in applications where test sets are very large (e.g. web search and image/video recognition), where storage is at a premium (e.g. cell phones and digital cameras), and where computational power is limited (e.g. hearing aids and Mars rovers). In this talk I'll present Model Compression, a method for compressing large, complex models into smaller, faster models without sacrificing the accuracy of the original model. To help motivate model compression, I'll summarize our prior work on Ensemble Selection, a method for generating very complex ensembles that usually outperform bagging, boosting, random forests, stacking, and Bayesian averaging. With model compression, we can train classifiers that are nearly as accurate as ensemble selection, but more than 1000 times smaller and faster. I'll also present a new algorithm for density estimation that we developed to make model compression more effective in some applications.
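The core idea can be sketched in a few lines: a large ensemble "teacher" labels data, and a much smaller "student" model is trained to mimic those labels rather than the ground truth. The sketch below is a hypothetical illustration, not the talk's actual algorithm; the dataset, model sizes, and split are all assumptions made for the example.

```python
# Minimal sketch of model compression (illustrative assumptions throughout):
# a large ensemble labels held-out "transfer" data, and a single small tree
# is trained to reproduce the ensemble's predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]
X_transfer = X[1000:]  # unlabeled transfer data for the student to learn on

# Large, accurate teacher: an ensemble of 300 trees.
teacher = RandomForestClassifier(n_estimators=300, random_state=0)
teacher.fit(X_train, y_train)

# Small, fast student: one shallow tree, trained on the teacher's
# predicted labels rather than the true labels.
student = DecisionTreeClassifier(max_depth=6, random_state=0)
student.fit(X_transfer, teacher.predict(X_transfer))

# How closely the compressed model mimics the ensemble on the transfer set.
agreement = (student.predict(X_transfer) == teacher.predict(X_transfer)).mean()
```

The student here is a single depth-6 tree standing in for 300 full trees, which is where the storage and execution savings come from; in the talk's setting the teacher is an ensemble-selection model and the student can be far smaller still.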