Rui YANG
University of Tokyo
Raphael SHU
Amazon AI
Hideki NAKAYAMA
University of Tokyo
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Citation:
Rui YANG, Raphael SHU, Hideki NAKAYAMA, "Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks" in IEICE TRANSACTIONS on Information and Systems,
vol. E105-D, no. 9, pp. 1537-1545, September 2022, doi: 10.1587/transinf.2022EDP7019.
Abstract: Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. The gradient penalty (GP) was first applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla gradient penalty has since been modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty that promotes local convexity in model optimization. However, we show that this approach imposes a burden on satisfying Lipschitz continuity for the discriminator. Such a conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity, reaching a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to obtain a particular form of noised gradient penalty that achieves local linearity without losing Lipschitz continuity in the discriminator. Experimental results show that our method achieves superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7019/_p
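For orientation, below is a minimal PyTorch-style sketch of the two gradient-penalty variants the abstract contrasts: the WGAN-GP penalty at interpolates between real and generated samples, and the DRAGAN-style penalty at noise-perturbed points around the real data. The function names, the assumption of NCHW image batches, and the default coefficients are illustrative assumptions, not taken from the paper; in particular, this sketch does not implement the paper's proposed synchronized activation function.

import torch

def wgan_gp_penalty(discriminator, real, fake, lambda_gp=10.0):
    # WGAN-GP (Gulrajani et al., 2017): enforce a unit gradient norm
    # at random interpolates between real and generated samples.
    # Assumes NCHW image batches of identical shape.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (alpha * real + (1.0 - alpha) * fake).detach().requires_grad_(True)
    scores = discriminator(x_hat)
    grads = torch.autograd.grad(
        outputs=scores, inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty is differentiable
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

def dragan_penalty(discriminator, real, lambda_gp=10.0, noise_scale=0.5):
    # DRAGAN (Kodali et al., 2017): enforce a unit gradient norm at
    # noised points around the real data manifold. The 0.5 * std noise
    # scale follows common reference implementations of DRAGAN.
    noise = noise_scale * real.std() * torch.rand_like(real)
    x_hat = (real + noise).detach().requires_grad_(True)
    scores = discriminator(x_hat)
    grads = torch.autograd.grad(
        outputs=scores, inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

Either penalty is simply added to the discriminator loss during training. The paper's argument is that the DRAGAN-style noised penalty in this plain form conflicts with the Lipschitz constraint, which its synchronized activation function is designed to resolve.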
BibTeX:
@ARTICLE{e105-d_9_1537,
author={Rui YANG and Raphael SHU and Hideki NAKAYAMA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks},
year={2022},
volume={E105-D},
number={9},
pages={1537-1545},
abstract={Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. The gradient penalty (GP) was first applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla gradient penalty has since been modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty that promotes local convexity in model optimization. However, we show that this approach imposes a burden on satisfying Lipschitz continuity for the discriminator. Such a conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity, reaching a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to obtain a particular form of noised gradient penalty that achieves local linearity without losing Lipschitz continuity in the discriminator. Experimental results show that our method achieves superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.},
doi={10.1587/transinf.2022EDP7019},
ISSN={1745-1361},
month={September},
}
RIS:
TY - JOUR
TI - Improving Noised Gradient Penalty with Synchronized Activation Function for Generative Adversarial Networks
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1537
EP - 1545
AU - Rui YANG
AU - Raphael SHU
AU - Hideki NAKAYAMA
PY - 2022
DO - 10.1587/transinf.2022EDP7019
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E105-D
IS - 9
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - September 2022
AB - Generative Adversarial Networks (GANs) are one of the most successful learning principles for generative models and have been widely applied to many generation tasks. The gradient penalty (GP) was first applied in Wasserstein GAN to enforce Lipschitz continuity on the discriminator. Although the vanilla gradient penalty has since been modified for different purposes, seeking a better equilibrium and higher generation quality in adversarial learning remains challenging. Recently, DRAGAN was proposed to achieve local linearity in a surrounding data manifold by applying a noised gradient penalty that promotes local convexity in model optimization. However, we show that this approach imposes a burden on satisfying Lipschitz continuity for the discriminator. Such a conflict between Lipschitz continuity and local linearity in DRAGAN results in a poor equilibrium, and thus the generation quality is far from ideal. To this end, we propose a novel approach that benefits both local linearity and Lipschitz continuity, reaching a better equilibrium without conflict. In detail, we apply our synchronized activation function in the discriminator to obtain a particular form of noised gradient penalty that achieves local linearity without losing Lipschitz continuity in the discriminator. Experimental results show that our method achieves superior image quality and outperforms WGAN-GP, DiracGAN, and DRAGAN in terms of Inception Score and Fréchet Inception Distance on real-world datasets.
ER -