Efficient Machine Learning at the Edge in Parallel

Published December 12, 2022, 20:34
2022 Data-driven Optimization Workshop: Efficient Machine Learning at the Edge in Parallel

Speaker: Furong Huang, The University of Maryland

Since the beginning of the digital age, the size and quantity of data sets have grown exponentially because of the proliferation of data captured by mobile devices, vehicles, cameras, microphones, and other Internet of Things (IoT) devices. Given this boom in personal data, deep learning has driven major advances in areas such as healthcare, natural language processing, and computer vision. Federated Learning (FL) is an increasingly popular setting for training powerful deep neural networks on data drawn from an assortment of devices. It is a framework for use cases that involve training machine learning models on edge devices without transmitting the collected data to a central server.
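To make the setting concrete, here is a minimal federated-averaging-style sketch of that idea: each simulated device trains on its own local data and only model weights travel to the server, which averages them. The linear model, client data, and hyperparameters are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of federated averaging: clients train locally, the server
# averages weights; raw data never leaves the simulated edge devices.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Each client trains locally; the server averages the returned weights."""
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(local_ws, axis=0)        # only weights are communicated

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):                          # four simulated edge devices
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                         # communication rounds
    w = federated_round(w, clients)
print(w)                                    # approaches true_w
```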

In this talk, I will address some major challenges of efficient machine learning at the edge in parallel, discussing model efficiency, data efficiency, and learning-paradigm efficiency in turn. As highlights, I will introduce our recent progress on model compression via tensor representations, on data efficiency through the lens of generalization analysis, and on a decentralized federated learning framework with wait-free model communication.
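As background for the model-compression highlight, the sketch below shows the simplest instance of factorized weight compression: replacing a dense layer's weight matrix with a truncated low-rank product. The talk's tensor-representation methods are more general (higher-order decompositions of network weights), so the matrix sizes, rank, and approach here are only an illustrative assumption of the basic parameter-reduction idea.

```python
# Illustrative low-rank compression of a layer's weights: W is replaced by
# two thin factors A and B, cutting parameters and multiply cost.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(512, 512))             # dense layer weights (262,144 params)

def low_rank_compress(W, rank):
    """Truncated SVD: W is approximated by A @ B with A (m x r) and B (r x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank, :]
    return A, B

A, B = low_rank_compress(W, rank=32)
x = rng.normal(size=512)
y_full = W @ x
y_comp = A @ (B @ x)                        # two small matmuls instead of one large one

params_full = W.size                        # 262,144
params_comp = A.size + B.size               # 32,768 (about 8x fewer parameters)
# Approximation quality depends on the spectrum of W; trained layers are
# typically far more compressible than this random matrix.
print(params_full, params_comp, np.linalg.norm(y_full - y_comp))
```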