Microsoft Research
Published September 17, 2021, 0:30
The goal of Computer Vision, as coined by Marr, is to develop algorithms that answer What is Where at When from visual appearance. The speaker, among others, recognizes the importance of studying underlying entities and relations beyond visual appearance, following an Active Perception paradigm. This talk presents the speaker's efforts over the last decade, ranging from 1) reasoning beyond appearance for visual question answering and image/video captioning tasks, and their evaluation, to 2) temporal and self-supervised knowledge distillation with incremental knowledge transfer, and 3) their roles in a robotic visual learning framework via a Robotic Indoor Object Search task. The talk also features the Active Perception Group (APG)'s ongoing projects (NSF RI, NRI and CPS, DARPA KAIROS, and Arizona IAM), which address emerging national challenges in the autonomous driving and AI security domains, at the ASU School of Computing, Informatics, and Decision Systems Engineering (CIDSE).
Speaker: Yezhou Yang, School of Computing, Informatics, and Decision Systems Engineering, Arizona State University
List of major papers covered in the talk:
V&L model robustness:
ECCV 2020: arxiv.org/abs/2002.08325 VQA-LOL: Visual Question Answering under the Lens of Logic
ACL 2021: arxiv.org/abs/2106.01444 SMURF: SeMantic and linguistic UndeRstanding Fusion for Caption Evaluation via Typicality Analysis
EMNLP 2020: arxiv.org/abs/2009.08566 MUTANT: A Training Paradigm for Out-of-Distribution Generalization in Visual Question Answering
arxiv.org/abs/2003.05162 Video2Commonsense: Generating Commonsense Descriptions to Enrich Video Captioning
Robotic object search:
CVPR 2021: arxiv.org/abs/2103.01350 Hierarchical and Partially Observable Goal-driven Policy Learning with Goals Relational Graph
ICRA 2021/RA-L: arxiv.org/abs/2010.08596 Efficient Robotic Object Search via HIEM: Hierarchical Policy Learning with Intrinsic-Extrinsic Modeling
Other teasers:
AI security/GAN attribution
ICLR 2021: arxiv.org/abs/2010.13974 Decentralized Attribution of Generative Models
AAAI 2021: arxiv.org/abs/2012.01806 Attribute-Guided Adversarial Training for Robustness to Natural Perturbations
Microsoft Research Deep Learning team: microsoft.com/en-us/research/g...