The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Copyright notice
KaiXu CHEN
Kanazawa University
Satoshi YAMANE
Kanazawa University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
KaiXu CHEN, Satoshi YAMANE, "Enhanced Full Attention Generative Adversarial Networks" in IEICE TRANSACTIONS on Information,
vol. E106-D, no. 5, pp. 813-817, May 2023, doi: 10.1587/transinf.2022DLL0007.
Abstract: In this paper, we propose an improved Generative Adversarial Network with an attention module in the Generator, which can enhance the Generator's effectiveness. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explore the effect of different normalizations (spectral normalization, instance normalization) on the Generator and Discriminator. Moreover, an enhanced loss function, the Wasserstein Divergence distance, can alleviate the difficulty of training the model in practice.
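The spectral normalization mentioned in the abstract constrains a layer's weight matrix to have spectral norm (largest singular value) close to 1, which is what makes it useful for conditioning GAN networks. The following is a minimal NumPy sketch of the standard power-iteration approach, not the authors' implementation; function and variable names are illustrative.

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Divide W by a power-iteration estimate of its largest singular value.

    A minimal sketch of spectral normalization as commonly applied to GAN
    layer weights; in practice one power-iteration step per training step
    is reused across updates rather than iterating to convergence.
    """
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u                          # right singular vector estimate
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v                            # left singular vector estimate
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v                        # estimated spectral norm of W
    return W / sigma

W = np.random.default_rng(1).standard_normal((64, 32))
W_sn = spectral_normalize(W)
# Largest singular value of the normalized matrix is approximately 1.
print(np.linalg.svd(W_sn, compute_uv=False)[0])
```

After normalization, every layer is 1-Lipschitz with respect to the spectral norm, which stabilizes Discriminator training in particular.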
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022DLL0007/_p
@ARTICLE{e106-d_5_813,
author={KaiXu CHEN and Satoshi YAMANE},
journal={IEICE TRANSACTIONS on Information},
title={Enhanced Full Attention Generative Adversarial Networks},
year={2023},
volume={E106-D},
number={5},
pages={813-817},
abstract={In this paper, we propose improved Generative Adversarial Networks with attention module in Generator, which can enhance the effectiveness of Generator. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explored the effect of different normalization (spectral normalization, instance normalization) on Generator and Discriminator. Moreover, an enhanced loss function called Wasserstein Divergence distance, can alleviate the problem of difficult to train module in practice.},
keywords={},
doi={10.1587/transinf.2022DLL0007},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Enhanced Full Attention Generative Adversarial Networks
T2 - IEICE TRANSACTIONS on Information
SP - 813
EP - 817
AU - KaiXu CHEN
AU - Satoshi YAMANE
PY - 2023
DO - 10.1587/transinf.2022DLL0007
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E106-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2023
AB - In this paper, we propose improved Generative Adversarial Networks with attention module in Generator, which can enhance the effectiveness of Generator. Furthermore, recent work has shown that Generator conditioning affects GAN performance. Leveraging this insight, we explored the effect of different normalization (spectral normalization, instance normalization) on Generator and Discriminator. Moreover, an enhanced loss function called Wasserstein Divergence distance, can alleviate the problem of difficult to train module in practice.
ER -