A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about a problem being learnt, sufficiently accurate classification is attained with fewer training data. Recently, such active learning has been receiving an increasing interest. In this paper, we propose a novel active learning strategy. The strategy attempts to produce only useful training data for multilayer perceptrons to achieve accurate classification, and avoids generating redundant training data. Furthermore, the strategy attempts to avoid generating temporarily useful training data that will become redundant in the future. As a result, the strategy can allow multilayer perceptrons to achieve accurate classification with fewer training data. To demonstrate the performance of the strategy in comparison with other active learning strategies, we also propose an empirical active learning algorithm as an implementation of the strategy, which does not require expensive computations. Experimental results show that the proposed algorithm improves the classification accuracy of a multilayer perceptron with fewer training data than that for a conventional random selection algorithm that constructs a training data set without explicit strategies. Moreover, the algorithm outperforms typical active learning algorithms in the experiments. Those results show that the algorithm can construct an appropriate training data set at lower computational cost, because training data generation is usually costly. Accordingly, the algorithm proves the effectiveness of the strategy through the experiments. We also discuss some drawbacks of the algorithm.
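The abstract describes the strategy only at a high level, so the following is a minimal sketch of generic pool-based uncertainty sampling for a multilayer perceptron, not the algorithm proposed in the paper. It assumes scikit-learn's MLPClassifier, a toy two-class problem (inside versus outside the unit circle), and the distance of the predicted class probability from 0.5 as the uncertainty measure; all of these choices are illustrative assumptions.

```python
# A minimal, generic sketch of uncertainty-driven data selection for an MLP.
# This is NOT the algorithm proposed in the paper; it only illustrates the
# idea from the abstract: query training data where the learner is uncertain,
# instead of sampling at random. The toy problem, the use of scikit-learn's
# MLPClassifier, and the |p - 0.5| uncertainty measure are all assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def oracle(x):
    """Teacher that labels inputs: class 1 inside the unit circle, else 0."""
    return (np.linalg.norm(x, axis=1) < 1.0).astype(int)

# Small seed set with both classes guaranteed to be present.
X_train = np.vstack([
    rng.uniform(-0.5, 0.5, size=(5, 2)),   # inside the circle  -> class 1
    rng.uniform(1.2, 1.9, size=(5, 2)),    # outside the circle -> class 0
])
y_train = oracle(X_train)

# Pool of candidate inputs the learner may ask the teacher to label.
pool = rng.uniform(-2.0, 2.0, size=(1000, 2))

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

for _ in range(40):                              # 40 active queries
    mlp.fit(X_train, y_train)
    proba = mlp.predict_proba(pool)[:, 1]        # P(class 1) for each candidate
    q = int(np.argmin(np.abs(proba - 0.5)))      # most uncertain candidate
    X_train = np.vstack([X_train, pool[q:q + 1]])
    y_train = np.append(y_train, oracle(pool[q:q + 1]))
    pool = np.delete(pool, q, axis=0)

mlp.fit(X_train, y_train)                        # final fit on all queried data

# Evaluate on an independent test set.
X_test = rng.uniform(-2.0, 2.0, size=(2000, 2))
accuracy = (mlp.predict(X_test) == oracle(X_test)).mean()
print(f"accuracy with {len(X_train)} labeled points: {accuracy:.3f}")
```

Querying points near the current decision boundary corresponds to resolving the network's present uncertainty; the paper's strategy additionally tries to avoid generating data that are only temporarily useful and would become redundant later, which this sketch does not model.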
Copyrights notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hiroyuki TAKIZAWA, Taira NAKAJIMA, Hiroaki KOBAYASHI, Tadao NAKAMURA, "An Active Learning Algorithm Based on Existing Training Data" in IEICE TRANSACTIONS on Information,
vol. E83-D, no. 1, pp. 90-99, January 2000, doi: .
Abstract: A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about a problem being learnt, sufficiently accurate classification is attained with fewer training data. Recently, such active learning has been receiving an increasing interest. In this paper, we propose a novel active learning strategy. The strategy attempts to produce only useful training data for multilayer perceptrons to achieve accurate classification, and avoids generating redundant training data. Furthermore, the strategy attempts to avoid generating temporarily useful training data that will become redundant in the future. As a result, the strategy can allow multilayer perceptrons to achieve accurate classification with fewer training data. To demonstrate the performance of the strategy in comparison with other active learning strategies, we also propose an empirical active learning algorithm as an implementation of the strategy, which does not require expensive computations. Experimental results show that the proposed algorithm improves the classification accuracy of a multilayer perceptron with fewer training data than that for a conventional random selection algorithm that constructs a training data set without explicit strategies. Moreover, the algorithm outperforms typical active learning algorithms in the experiments. Those results show that the algorithm can construct an appropriate training data set at lower computational cost, because training data generation is usually costly. Accordingly, the algorithm proves the effectiveness of the strategy through the experiments. We also discuss some drawbacks of the algorithm.
URL: https://global.ieice.org/en_transactions/information/10.1587/e83-d_1_90/_p
@ARTICLE{e83-d_1_90,
author={Hiroyuki TAKIZAWA and Taira NAKAJIMA and Hiroaki KOBAYASHI and Tadao NAKAMURA},
journal={IEICE TRANSACTIONS on Information},
title={An Active Learning Algorithm Based on Existing Training Data},
year={2000},
volume={E83-D},
number={1},
pages={90-99},
abstract={A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about a problem being learnt, sufficiently accurate classification is attained with fewer training data. Recently, such active learning has been receiving an increasing interest. In this paper, we propose a novel active learning strategy. The strategy attempts to produce only useful training data for multilayer perceptrons to achieve accurate classification, and avoids generating redundant training data. Furthermore, the strategy attempts to avoid generating temporarily useful training data that will become redundant in the future. As a result, the strategy can allow multilayer perceptrons to achieve accurate classification with fewer training data. To demonstrate the performance of the strategy in comparison with other active learning strategies, we also propose an empirical active learning algorithm as an implementation of the strategy, which does not require expensive computations. Experimental results show that the proposed algorithm improves the classification accuracy of a multilayer perceptron with fewer training data than that for a conventional random selection algorithm that constructs a training data set without explicit strategies. Moreover, the algorithm outperforms typical active learning algorithms in the experiments. Those results show that the algorithm can construct an appropriate training data set at lower computational cost, because training data generation is usually costly. Accordingly, the algorithm proves the effectiveness of the strategy through the experiments. We also discuss some drawbacks of the algorithm.},
keywords={},
doi={},
ISSN={},
month={January},}
TY - JOUR
TI - An Active Learning Algorithm Based on Existing Training Data
T2 - IEICE TRANSACTIONS on Information
SP - 90
EP - 99
AU - Hiroyuki TAKIZAWA
AU - Taira NAKAJIMA
AU - Hiroaki KOBAYASHI
AU - Tadao NAKAMURA
PY - 2000
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E83-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - January 2000
AB - A multilayer perceptron is usually considered a passive learner that only receives given training data. However, if a multilayer perceptron actively gathers training data that resolve its uncertainty about a problem being learnt, sufficiently accurate classification is attained with fewer training data. Recently, such active learning has been receiving an increasing interest. In this paper, we propose a novel active learning strategy. The strategy attempts to produce only useful training data for multilayer perceptrons to achieve accurate classification, and avoids generating redundant training data. Furthermore, the strategy attempts to avoid generating temporarily useful training data that will become redundant in the future. As a result, the strategy can allow multilayer perceptrons to achieve accurate classification with fewer training data. To demonstrate the performance of the strategy in comparison with other active learning strategies, we also propose an empirical active learning algorithm as an implementation of the strategy, which does not require expensive computations. Experimental results show that the proposed algorithm improves the classification accuracy of a multilayer perceptron with fewer training data than that for a conventional random selection algorithm that constructs a training data set without explicit strategies. Moreover, the algorithm outperforms typical active learning algorithms in the experiments. Those results show that the algorithm can construct an appropriate training data set at lower computational cost, because training data generation is usually costly. Accordingly, the algorithm proves the effectiveness of the strategy through the experiments. We also discuss some drawbacks of the algorithm.
ER -