Efficient Machine Learning at the Edge in Parallel

Published December 12, 2022, 20:34
2022 Data-driven Optimization Workshop: Efficient Machine Learning at the Edge in Parallel

Speaker: Furong Huang, The University of Maryland

Since the beginning of the digital age, the size and quantity of data sets have grown exponentially because of the proliferation of data captured by mobile devices, vehicles, cameras, microphones, and other Internet of Things (IoT) devices. Given this boom in personal data, deep learning has driven major advances in areas such as healthcare, natural language processing, and computer vision. Federated Learning (FL) is an increasingly popular setting for training powerful deep neural networks on data collected from an assortment of devices. It is a framework for use cases in which machine learning models are trained on edge devices without transmitting the collected data to a central server.
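
The abstract describes the federated learning setting only at a high level. Below is a minimal, hedged sketch of that setting, assuming a FedAvg-style protocol (not necessarily the method discussed in the talk): each client runs a few local training steps on its own data and sends back only model parameters, which the server averages; no raw data ever leaves the devices. The function names and the toy linear-regression task are purely illustrative.

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a linear least-squares loss, using only this client's data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(client_data, rounds=20, dim=3):
    """Server loop: broadcast the global model, collect locally trained
    models, and average them weighted by each client's sample count."""
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates, weights = [], []
        for X, y in client_data:              # each edge device trains locally
            updates.append(local_step(w_global.copy(), X, y))
            weights.append(len(y))
        w_global = np.average(updates, axis=0, weights=weights)
    return w_global

# Toy usage: three "devices" holding data drawn from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

print(fed_avg(clients))  # approaches true_w without pooling any raw data
```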

In this talk, I will address some major challenges of efficient machine learning at the edge in parallel, discussing model efficiency, data efficiency, and learning-paradigm efficiency in turn. As highlights, I will introduce our recent progress on model compression via tensor representations, on data efficiency through the lens of generalization analysis, and on a decentralized federated learning framework with wait-free model communication.
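
Of the three directions listed, model compression lends itself to a short illustration. The sketch below is only a hedged stand-in for the low-rank idea behind tensor-based compression, using a truncated SVD of a single weight matrix rather than the specific tensor representation from the talk; the matrix sizes and target rank are made up for the example.

```python
import numpy as np

def compress_layer(W, rank):
    """Factor an m x n weight matrix W into U (m x r) and V (r x n)
    via truncated SVD, so the layer stores r*(m+n) numbers instead of m*n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# Toy weight matrix that is approximately low-rank, as trained layers often are.
rng = np.random.default_rng(1)
W = rng.normal(size=(512, 64)) @ rng.normal(size=(64, 1024)) / 64

U_r, V_r = compress_layer(W, rank=64)
compression = W.size / (U_r.size + V_r.size)
rel_err = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
print(f"compression: {compression:.1f}x, relative error: {rel_err:.2e}")
```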