A. Prof. Xiaodan Liang from Sun Yat-sen University Invited as a Conference Technical Committee Member of MLAI 2022



A. Prof. Xiaodan Liang, Sun Yat-sen University



Dr. Xiaodan Liang is an Associate Professor and Ph.D. supervisor at Sun Yat-sen University, a Distinguished Young Scholar of Guangdong Province, and an Outstanding Young Scholar of Shenzhen. She was previously a postdoctoral Project Scientist at Carnegie Mellon University (CMU), working with Prof. Eric Xing. Her research focuses on interpretable and cognitive intelligence and its applications in large-scale visual recognition, human image generation, automatic machine learning, and multimodal human-computer dialogue. She has published more than 80 papers in the field's most prestigious journals and conferences, with over 12,000 Google Scholar citations. She serves as an Area Chair of ICCV 2019, CVPR 2020, NeurIPS 2021, and WACV 2021, Tutorial Chair of CVPR 2021, Associate Editor of the journal Neural Networks, Program Chair of ACM SIGAI China, and a member of the Youth Working Committee of the China Society of Image and Graphics (CSIG). Her honors include the ACM China and CCF Outstanding Doctoral Dissertation Awards, the Alibaba DAMO Academy Young Fellow (Qingcheng) Award (top 10 under 35 in China), the CSIG Science and Technology First Prize, the CSIG Shi Qingyun Women Scientist Award, the Wu Wenjun AI Outstanding Youth Award, an ACL 2019 Best Demo Paper nomination, the Nvidia 2017 Pioneer Research Award, and a place on the Forbes China 30 Under 30 list.

As a lead organizer, she has run the Look into Person workshops and challenges at CVPR 2017-2021 and the Self-supervised Learning on Self-driving workshop at ICCV 2021, as well as workshops at ICML 2019, ICLR 2021, and NeurIPS 2021. She and her collaborators have also released the largest human parsing dataset to advance research on human understanding. Her research results have been successfully deployed in Tencent medical products, Alibaba product retrieval, and Huawei autonomous driving. Her current research focuses on self-supervised and life-long learning techniques for large-scale task-driven visual understanding.