Enhancing AI Segmentation Models for Autonomous Vehicle Safety - NVIDIA DRIVE Labs Ep. 28
Published May 25, 2023, 16:05
Precise environmental perception is critical for #autonomousvehicle (AV) safety, especially when handling unseen conditions. In this episode of DRIVE Labs, we discuss a Vision Transformer model called SegFormer, which generates robust semantic segmentation while maintaining high efficiency. This video introduces the mechanism behind SegFormer that enables its robustness and efficiency.
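The efficiency mechanism mentioned above can be sketched in code. Below is a minimal, illustrative PyTorch sketch (assuming PyTorch is installed) of two SegFormer ideas: overlapping patch embedding and efficient self-attention that spatially reduces keys/values before attending. The layer sizes here are illustrative toy values, not the published SegFormer-B0 configuration, and `TinySegFormer` is a hypothetical name for this sketch.

```python
# Minimal sketch of SegFormer-style components (illustrative, not the official model).
import torch
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    """SegFormer-style attention: keys/values are spatially reduced by a
    strided conv, cutting attention cost roughly by the reduction factor squared."""
    def __init__(self, dim, heads=2, reduction=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.sr = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)

    def forward(self, x, h, w):
        # x: (B, N, C) tokens for an h*w feature map
        b, n, c = x.shape
        kv = x.transpose(1, 2).reshape(b, c, h, w)
        kv = self.sr(kv).flatten(2).transpose(1, 2)   # (B, N / reduction^2, C)
        out, _ = self.attn(x, kv, kv)
        return out

class TinySegFormer(nn.Module):
    """One encoder stage (overlapping patch embed + efficient attention)
    followed by a lightweight per-pixel linear head."""
    def __init__(self, dim=32, num_classes=19):  # 19 = Cityscapes classes
        super().__init__()
        # Overlapping patch embedding: stride < kernel size preserves local continuity
        self.embed = nn.Conv2d(3, dim, kernel_size=7, stride=4, padding=3)
        self.attn = EfficientSelfAttention(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img):
        feat = self.embed(img)                    # (B, C, H/4, W/4)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, N, C)
        tokens = tokens + self.attn(tokens, h, w)
        logits = self.head(tokens)                # (B, N, num_classes)
        return logits.transpose(1, 2).reshape(b, -1, h, w)

model = TinySegFormer()
x = torch.randn(1, 3, 64, 64)
print(model(x).shape)  # (1, 19, 16, 16); upsample to full resolution for masks
```

In the real model, several such stages produce a multi-scale feature pyramid that an all-MLP decoder fuses; see the GitHub repository and paper linked below for the full architecture.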
00:00:00 - Robust Perception with SegFormer
00:00:05 - Why accuracy and robustness are important for developing autonomous vehicles
00:00:15 - What is SegFormer?
00:00:28 - The difference between CNN and Transformer Models
00:01:23 - Testing semantic segmentation results on MB’s Cityscapes Dataset
00:02:09 - The impact of JPEG compression on SegFormer
00:02:27 - How SegFormer understands unseen conditions
00:02:41 - Learn more about segmentation for autonomous vehicle use cases
GitHub: github.com/NVlabs/SegFormer
Read more: arxiv.org/abs/2105.15203
Watch the full series here: nvda.ws/3lHQP7H
Learn more about DRIVE Labs: nvda.ws/36r5c6t
Follow us on social:
Twitter: nvda.ws/3LRdkSs
LinkedIn: nvda.ws/3wI4kue
#NVIDIADRIVE