Prof. Xuefeng Liang: Learning features for image patch matching

October 23, 10:30, Conference Room 7950, Modern Transportation Engineering Center

Posted by: Wei Yu    Posted on: 2019-10-21

Talk title: Learning features for image patch matching

Speaker: Prof. Xuefeng Liang

Time: October 23, 10:30

Venue: Conference Room 7950, Modern Transportation Engineering Center


Speaker bio:

Xuefeng Liang, who was selected for the Shaanxi Province Thousand Talents Program in 2018, is a Huashang Distinguished Professor in the School of Artificial Intelligence, Xidian University. His research focuses on visual perception and cognition, computer vision, and intelligent algorithms. He has published more than 70 research papers. He received his Ph.D. from the Japan Advanced Institute of Science and Technology in 2006; during the three-year program, he explored computational geometry algorithms for a variety of vision problems. He then moved to the National Institute of Advanced Industrial Science and Technology in Tsukuba, where he worked on robot vision. From 2008, he worked jointly at University College London and Queen Mary University of London on the visual perception of motion. From 2010, he served as an Associate Professor at the Graduate School of Informatics, Kyoto University. In 2018, he joined Xidian University. He serves as the leading guest editor of Signal Processing: Image Communication (Elsevier) and Sensors (MDPI), and on the editorial boards of two international journals. He has chaired or co-chaired seven international conferences, including ICIT (2017, 2018, 2019), DSIT (2019), IReDLiA (2018), ICVIP (2017), and UCC (2017).


Abstract:

Establishing local correspondences between images plays a crucial role in many computer vision tasks. Recently, image patch matching has demonstrated better performance than conventional local feature point matching, but it still faces many challenges in both the single-spectral and cross-spectral domains. This talk addresses the issue from three aspects:

1. Learning a shared feature. We consider that cross-spectral image patches can be matched because a shared semantic feature space exists among them, in which the semantic features from different spectral images are largely independent of the spectral domains. To learn this shared feature space, we propose a progressive comparison of spatially connected feature metric learning with a feature discrimination constraint (SCFDM).

2. Learning the aggregated feature difference. We find that the feature differences at all levels of a CNN provide useful learning information. We therefore aggregate the multi-level feature differences to enhance discrimination, and propose an aggregated feature difference learning network (AFD-Net).

3. Learning features from hard samples. We find that the conventional Siamese and triplet losses of a CNN treat all samples linearly, which makes network training time-consuming. We therefore propose exponential Siamese and triplet losses, which naturally focus more on hard samples and put less emphasis on easy ones, while also speeding up optimization.

Our methods outperform other state-of-the-art approaches in both effectiveness and efficiency on image patch matching and image retrieval tasks.
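The idea of aggregating per-level feature differences can be illustrated with a minimal sketch. Everything here is a toy stand-in, assuming a two-level "feature extractor" and a trivial decision rule; the real AFD-Net architecture, its layers, and its learned weights are not shown in the abstract and are not reproduced here.

```python
def extract_features(patch):
    # Stand-in for a CNN on a 1-D "patch": level 1 keeps the raw values
    # (low-level features), level 2 takes local averages (a crude proxy
    # for higher-level feature maps). Both are hypothetical placeholders.
    level1 = list(patch)
    level2 = [(patch[i] + patch[i + 1]) / 2 for i in range(len(patch) - 1)]
    return [level1, level2]

def aggregated_difference(patch_a, patch_b):
    # The core idea from the abstract: take the absolute feature
    # difference at EVERY level and concatenate them, instead of
    # comparing only the final-level features.
    feats_a = extract_features(patch_a)
    feats_b = extract_features(patch_b)
    diffs = []
    for fa, fb in zip(feats_a, feats_b):
        diffs.extend(abs(x - y) for x, y in zip(fa, fb))
    return diffs

def match_score(patch_a, patch_b):
    # Toy decision layer: the score shrinks as the summed aggregated
    # difference grows (a trained classifier head would replace this).
    return 1.0 / (1.0 + sum(aggregated_difference(patch_a, patch_b)))

same = match_score([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])
diff = match_score([0.1, 0.2, 0.3], [0.9, 0.1, 0.7])
print(same, diff)  # identical patches score higher than dissimilar ones
```

In a learned version, the concatenated multi-level differences would feed a small network that outputs the match probability; the sketch only shows why differences from several levels carry more discriminative information than a single final-level comparison.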
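The contrast between a linear and an exponential triplet loss can also be sketched. The exponential form below is an illustrative guess at the behavior the abstract describes, not the authors' exact formulation: its value (and hence its gradient) grows exponentially with how badly a triplet is violated, so hard samples dominate training while easy ones contribute almost nothing.

```python
import math

def linear_triplet_loss(d_ap, d_an, margin=1.0):
    # Conventional triplet loss: the penalty grows linearly with the
    # violation d_ap - d_an + margin, so every active triplet,
    # easy or hard, receives the same gradient magnitude.
    return max(0.0, d_ap - d_an + margin)

def exponential_triplet_loss(d_ap, d_an):
    # Hypothetical exponential variant: loss and gradient scale with
    # exp(d_ap - d_an), so hard triplets (positive farther than the
    # negative) are weighted far more heavily than easy ones.
    return math.exp(d_ap - d_an)

# An easy triplet (negative already far away) vs. a hard one
# (negative closer to the anchor than the positive):
easy = (0.2, 1.5)   # (d_ap, d_an)
hard = (1.2, 0.4)

for name, (d_ap, d_an) in [("easy", easy), ("hard", hard)]:
    print(name,
          round(linear_triplet_loss(d_ap, d_an), 3),
          round(exponential_triplet_loss(d_ap, d_an), 3))
```

Note that the linear loss gives every violating triplet the same unit gradient with respect to the distance gap, whereas the exponential loss automatically reweights toward hard samples, which is the stated motivation for faster optimization.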