Prof. Jenq-Neng Hwang: Ally Complementary Experts (ACE) and Localizing Unfamiliarity Near Acquaintance (LUNA) for OLTR

Posted by: Zhao Wei    Date: 2021-10-27


Speaker: Prof. Jenq-Neng Hwang

Time: November 8, 2021, 9:30-11:00

Tencent Meeting ID: 892 2625 8886



   Abstract: The predefined, artificially balanced training classes in object recognition have limited capability to model real-world scenarios, where objects follow imbalanced distributions and unknown classes appear. In this talk, I will discuss promising solutions to the Long-Tailed Recognition (LTR) task in both closed-set and open-set scenarios. First, I will present a one-stage long-tailed recognition scheme, Ally Complementary Experts (ACE), in which each expert is the most knowledgeable specialist in the subset that dominates its training and complements the other experts in the less-seen categories, without being disturbed by what it has never seen. We design a distribution-adaptive optimizer that adjusts the learning pace of each expert to avoid over-fitting. Without bells and whistles, the vanilla ACE outperforms the current one-stage SOTA methods by 3-10% on the CIFAR10-LT, CIFAR100-LT, ImageNet-LT, and iNaturalist datasets. It is also the first method shown to break the "seesaw" trade-off by improving the accuracy of the majority and minority categories simultaneously in a single stage. Furthermore, I will discuss a promising solution to the Open-set Long-Tailed Recognition (OLTR) task that utilizes metric learning. First, we propose a distribution-sensitive loss, which places more weight on the tail classes to decrease the intra-class distance in the feature space. Building on these concentrated feature clusters, a local-density-based metric, called Localizing Unfamiliarity Near Acquaintance (LUNA), is introduced to measure the novelty of a test sample. LUNA is flexible with respect to cluster size and remains reliable on cluster boundaries by considering neighbors with different properties. Moreover, in contrast to most existing works, which reduce open-set detection to a simple binary decision, LUNA is a quantitative measurement with an interpretable meaning.
The proposed method exceeds the state-of-the-art algorithms by over 4-6% in closed-set recognition accuracy and by 4% in F-measure under the open-set setting on public benchmark datasets, including our newly introduced fine-grained OLTR dataset of marine species (MS-LT), the first naturally-distributed OLTR dataset revealing the genuine genetic relationships among its classes.
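The abstract describes LUNA only at a high level. As a rough, hypothetical sketch (not the authors' actual formulation), the two ingredients it mentions, an inverse-frequency class weighting in the spirit of the distribution-sensitive loss and a local-density novelty ratio in the spirit of a local outlier factor, might look like:

```python
import math

def class_weights(counts):
    """Illustrative inverse-frequency weights: tail classes (small counts)
    receive larger weights, in the spirit of the distribution-sensitive loss."""
    total = sum(counts)
    return [total / (len(counts) * c) for c in counts]

def _knn_mean(x, feats, k, skip_self=False):
    """Mean distance from x to its k nearest points in feats."""
    ds = sorted(math.dist(x, f) for f in feats)
    if skip_self:
        ds = ds[1:]  # drop the zero self-distance
    return sum(ds[:k]) / k

def novelty_score(x, train_feats, k=5):
    """Local-density novelty ratio: the query's mean k-NN distance divided by
    the average k-NN distance of those neighbors themselves. Close to 1 for
    samples inside a familiar cluster; much larger in sparse, unseen regions."""
    neighbors = sorted(train_feats, key=lambda f: math.dist(x, f))[:k]
    query_density = sum(math.dist(x, f) for f in neighbors) / k
    neighbor_density = sum(_knn_mean(f, train_feats, k, skip_self=True)
                           for f in neighbors) / k
    return query_density / neighbor_density
```

Because the score is a continuous ratio rather than a binary flag, a rejection threshold can be chosen per application, matching the abstract's point that LUNA is a quantitative, interpretable measurement rather than a simple binary decision.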


   Short Bio: Dr. Jenq-Neng Hwang received his BS and MS degrees, both in electrical engineering, from National Taiwan University, Taipei, Taiwan, in 1981 and 1983, respectively. He then received his Ph.D. degree from the University of Southern California. In the summer of 1989, Dr. Hwang joined the Department of Electrical and Computer Engineering (ECE) of the University of Washington in Seattle, where he was promoted to Full Professor in 1999. He is the Director of the Information Processing Lab (IPL), which has won several AI City Challenge and BMTT tracking awards in the past years. Dr. Hwang has served as an associate editor for IEEE T-SP, T-NN, T-CSVT, T-IP, and the Signal Processing Magazine (SPM). He was the General Co-Chair of the 2021 IEEE World AI IoT Congress, as well as a Program Co-Chair of IEEE ICME 2016, ICASSP 1998, and ISCAS 2009. Dr. Hwang has been a Fellow of the IEEE since 2001.