Copyrights notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Naotake KAMIURA, Yasuyuki TANIGUCHI, Yutaka HATA, Nobuyuki MATSUI, "A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks" in IEICE TRANSACTIONS on Information, vol. E84-D, no. 7, pp. 899-905, July 2001.
Abstract: In this paper, we propose a learning algorithm to enhance the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the sigmoid activation function of the neuron. We assume stuck-at-0 and stuck-at-1 faults on the connection links. For the output layer, we employ a function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. Experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles, and learning time to other NNs trained with algorithms employing fault injection, a forcible weight limit, and the calculation of the relevance of each weight to the output error. Moreover, the gradient manipulation incorporated in our algorithm never spoils the generalization ability.
URL: https://global.ieice.org/en_transactions/information/10.1587/e84-d_7_899/_p
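To make the manipulation described in the abstract concrete, here is a minimal sketch in Python. It is not the authors' implementation: it shows a sigmoid whose gain parameter controls the gradient, a forward pass that uses a gentle gradient in the output layer and steepens the hidden-layer gradient only after convergence, and a crude stuck-at fault model on the connection links. The gain values, fault fraction, and all function names are assumptions made for illustration.

```python
import numpy as np

# Assumed gain schedule illustrating the paper's idea: a gentle
# gradient for the output layer, and a hidden-layer gradient that is
# steepened only after training has converged. All three values are
# hypothetical; the paper's actual settings may differ.
GAIN_OUTPUT = 0.5          # "relatively gentle" output-layer gradient
GAIN_HIDDEN_BEFORE = 1.0   # standard gradient while training
GAIN_HIDDEN_AFTER = 4.0    # steepened gradient after convergence

def sigmoid(x, gain=1.0):
    """Sigmoid activation; a larger gain steepens the gradient at x = 0."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def forward(x, w_hidden, w_output, converged=False):
    """Forward pass of a one-hidden-layer feedforward NN with
    layer-specific activation gradients."""
    gain_h = GAIN_HIDDEN_AFTER if converged else GAIN_HIDDEN_BEFORE
    h = sigmoid(w_hidden @ x, gain=gain_h)
    return sigmoid(w_output @ h, gain=GAIN_OUTPUT)

def inject_stuck_at(w, fraction=0.05, stuck_value=0.0, seed=None):
    """Crude stuck-at fault model on connection links: force a random
    fraction of the weights to a fixed stuck value (0.0 for stuck-at-0;
    a saturated weight value could stand in for stuck-at-1)."""
    rng = np.random.default_rng(seed)
    faulty = w.copy()
    faulty[rng.random(w.shape) < fraction] = stuck_value
    return faulty
```

A fault-tolerance evaluation in this spirit would compare the recognition accuracy of the network whose weight matrices pass through inject_stuck_at against its fault-free accuracy.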
@ARTICLE{e84-d_7_899,
author={Naotake KAMIURA and Yasuyuki TANIGUCHI and Yutaka HATA and Nobuyuki MATSUI},
journal={IEICE TRANSACTIONS on Information},
title={A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks},
year={2001},
volume={E84-D},
number={7},
pages={899-905},
abstract={In this paper, we propose a learning algorithm to enhance the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the sigmoid activation function of the neuron. We assume stuck-at-0 and stuck-at-1 faults on the connection links. For the output layer, we employ a function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. Experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles, and learning time to other NNs trained with algorithms employing fault injection, a forcible weight limit, and the calculation of the relevance of each weight to the output error. Moreover, the gradient manipulation incorporated in our algorithm never spoils the generalization ability.},
keywords={},
doi={},
ISSN={},
month={July},}
TY - JOUR
TI - A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks
T2 - IEICE TRANSACTIONS on Information
SP - 899
EP - 905
AU - Naotake KAMIURA
AU - Yasuyuki TANIGUCHI
AU - Yutaka HATA
AU - Nobuyuki MATSUI
PY - 2001
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E84-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2001
AB - In this paper, we propose a learning algorithm to enhance the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the sigmoid activation function of the neuron. We assume stuck-at-0 and stuck-at-1 faults on the connection links. For the output layer, we employ a function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. Experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles, and learning time to other NNs trained with algorithms employing fault injection, a forcible weight limit, and the calculation of the relevance of each weight to the output error. Moreover, the gradient manipulation incorporated in our algorithm never spoils the generalization ability.
ER -