
[Seminar] Assistant Professor Guo-Jun Qi of the University of Central Florida visits the lab on June 16

Posted: 2017-06-16

Title: Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities
Time: 3:30–4:30 PM, June 16, 2017
Venue: Room 446, Institute of Computing Technology


Abstract:

In this talk, I will present a novel Loss-Sensitive GAN (LS-GAN) that learns a loss function to separate generated samples from their real examples. An important property of the LS-GAN is that it allows the generator to focus on improving poor data points that are far from real examples, rather than wasting effort on samples that have already been well generated, and thus can improve the overall quality of generated samples. The theoretical analysis also shows that the LS-GAN can generate samples following the true data density. In particular, we present a regularity condition on the underlying data density that allows us to use a class of Lipschitz losses and generators to model the LS-GAN. It relaxes the assumption that the classic GAN must have infinite modeling capacity to obtain a similar theoretical guarantee. Furthermore, we show the generalization ability of the LS-GAN by bounding the difference between the model's performance over the empirical and real distributions, as well as by deriving a tractable sample complexity for training the LS-GAN in terms of its generalization ability. We also derive a non-parametric solution that characterizes the upper and lower bounds of the losses learned by the LS-GAN, both of which are cone-shaped and have non-vanishing gradient almost everywhere. This shows that there will be sufficient gradient to update the generator of the LS-GAN even if the loss function is over-trained, relieving the vanishing-gradient problem in the classic GAN.
We also extend the unsupervised LS-GAN to a conditional model that generates samples based on given conditions, and show its applications in both supervised and semi-supervised learning problems.
We conduct experiments to compare different models on both generation and classification tasks, and show that the LS-GAN is resilient against vanishing gradients and mode collapse even with an over-trained loss function or mildly changed network architectures.
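The loss-sensitive idea sketched in the abstract — a learned loss that separates real from generated samples by a data-dependent margin, with a hinge that ignores already-well-generated samples — can be illustrated as follows. This is a minimal sketch of a hinge-form margin objective; the function names and the exact form are my own reading of the abstract, not code from the paper.

```python
import numpy as np

def ls_gan_critic_objective(loss_real, loss_fake, delta, lam=1.0):
    """Objective for the learned loss function (hinge form).

    The loss L should assign a real sample x a value smaller than that
    of a generated sample G(z) by at least a margin delta(x, G(z)):
        L(x) <= L(G(z)) - delta.
    Only violations of this margin are penalized, so samples that are
    already well generated (margin satisfied) contribute nothing, while
    poorly generated samples keep producing a training signal.
    """
    hinge = np.maximum(0.0, delta + loss_real - loss_fake)
    return loss_real.mean() + lam * hinge.mean()

def ls_gan_generator_objective(loss_fake):
    # The generator simply minimizes the learned loss of its samples.
    return loss_fake.mean()
```

For example, if a fake sample already incurs a loss larger than the real sample's loss plus the margin, the hinge term vanishes and that pair adds no penalty; otherwise the penalty grows linearly with the margin violation, which is the "non-vanishing gradient" behavior the abstract refers to.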

Speaker Bio:

Dr. Qi is an assistant professor of Computer Science at the University of Central Florida. Prior to joining UCF, he was a Research Staff Member at the IBM T.J. Watson Research Center (Yorktown Heights, NY). He worked with Professor Thomas Huang in the Image Formation and Processing Group at the Beckman Institute of the University of Illinois at Urbana-Champaign, and received his Ph.D. in Electrical and Computer Engineering in December 2013. His main research interests include computer vision, pattern recognition, data mining, and multimedia computing. In particular, he is interested in information and knowledge discovery, analysis, and aggregation from multiple data sources of diverse modalities (e.g., images, audio, sensors, and text). His research also aims at effectively leveraging and aggregating data shared in open, connected environments (e.g., social, sensor, and mobile networks), as well as at developing computational models and theory for general-purpose knowledge and information systems.
Dr. Qi's research has been published in many venues, including CVPR, ICCV, ACM Multimedia, KDD, ICML, IEEE T. PAMI, IEEE T. KDE, and the Proceedings of the IEEE. He has served or will serve as an Area Chair (Senior Program Committee Member) for ICCV, ACM Multimedia, KDD, CIKM, and ICME. He was also a Program Committee Chair for MMM 2016. In addition, he has co-edited two special issues, "Deep Learning for Multimedia Computing" and "Big Media Data: Understanding, Search and Mining," for IEEE T. Multimedia and IEEE T. Big Data, respectively.

