The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals are rendered as "XNUMX").
Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Karla VITTORI, Aluizio F. R. ARAUJO, "Agent-Oriented Routing in Telecommunications Networks" in IEICE TRANSACTIONS on Communications,
vol. E84-B, no. 11, pp. 3006-3013, November 2001, doi: 10.1587/e84-b_11_3006.
Abstract: This paper presents an intelligent routing algorithm, called Q-Agents, which bases its actions only on the agent-environment interaction. This algorithm combines properties of three learning strategies (Q-learning, dual reinforcement learning and learning based on ant colony behavior), adding to them two further mechanisms to improve its adaptability. Hence, the proposed algorithm is composed of a set of agents, moving through the network independently and concurrently, searching for the best routes. The agents share knowledge about the quality of the paths traversed through indirect communication. Information about the network and traffic status is updated by using Q-learning and dual reinforcement updating rules. Q-Agents were applied to a model of an AT&T circuit-switched network. Experiments were carried out on the performance of the algorithm under variations of traffic patterns, load level and topology, and with addition of noise in the information used to route calls. Q-Agents suffered a lower number of lost calls than two algorithms based entirely on ant colony behavior.
URL: https://global.ieice.org/en_transactions/communications/10.1587/e84-b_11_3006/_p
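The abstract describes agents that update routing information with Q-learning and dual reinforcement rules as they move through the network. As an illustrative sketch only (not the paper's exact Q-Agents algorithm, whose details are not given here), a per-node Q-routing table in the style of such algorithms might look like this; the class name, cost units, and learning rate are assumptions for the example:

```python
class QRouter:
    """Illustrative Q-routing table for a single node.

    Q[dest][neighbor] estimates the total delivery cost to `dest`
    when forwarding via `neighbor`. This is a generic Q-routing
    sketch, not the authors' Q-Agents algorithm.
    """

    def __init__(self, neighbors, destinations, alpha=0.5):
        self.alpha = alpha  # learning rate (assumed value)
        self.q = {d: {n: 0.0 for n in neighbors} for d in destinations}

    def best_estimate(self, dest):
        # The node's own best estimate of remaining cost to dest;
        # a neighbor would use this as feedback (and, under dual
        # reinforcement, updates flow in both travel directions).
        return min(self.q[dest].values())

    def update(self, dest, neighbor, hop_cost, neighbor_estimate):
        # Q-learning-style rule: move the estimate toward
        # (cost of this hop + neighbor's best remaining estimate).
        old = self.q[dest][neighbor]
        target = hop_cost + neighbor_estimate
        self.q[dest][neighbor] = old + self.alpha * (target - old)

    def choose(self, dest):
        # Greedy next-hop choice (exploration omitted for brevity).
        return min(self.q[dest], key=self.q[dest].get)
```

Under dual reinforcement, an agent hopping from node x to node y would trigger `update` calls at both endpoints, one forward and one backward, roughly doubling how fast cost estimates propagate.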
@ARTICLE{e84-b_11_3006,
author={Karla VITTORI and Aluizio F. R. ARAUJO},
journal={IEICE TRANSACTIONS on Communications},
title={Agent-Oriented Routing in Telecommunications Networks},
year={2001},
volume={E84-B},
number={11},
pages={3006-3013},
abstract={This paper presents an intelligent routing algorithm, called Q-Agents, which bases its actions only on the agent-environment interaction. This algorithm combines properties of three learning strategies (Q-learning, dual reinforcement learning and learning based on ant colony behavior), adding to them two further mechanisms to improve its adaptability. Hence, the proposed algorithm is composed of a set of agents, moving through the network independently and concurrently, searching for the best routes. The agents share knowledge about the quality of the paths traversed through indirect communication. Information about the network and traffic status is updated by using Q-learning and dual reinforcement updating rules. Q-Agents were applied to a model of an AT&T circuit-switched network. Experiments were carried out on the performance of the algorithm under variations of traffic patterns, load level and topology, and with addition of noise in the information used to route calls. Q-Agents suffered a lower number of lost calls than two algorithms based entirely on ant colony behavior.},
keywords={},
doi={10.1587/e84-b_11_3006},
ISSN={},
month={November},
}
TY - JOUR
TI - Agent-Oriented Routing in Telecommunications Networks
T2 - IEICE TRANSACTIONS on Communications
SP - 3006
EP - 3013
AU - Karla VITTORI
AU - Aluizio F. R. ARAUJO
PY - 2001
DO - 10.1587/e84-b_11_3006
JO - IEICE TRANSACTIONS on Communications
SN -
VL - E84-B
IS - 11
JA - IEICE TRANSACTIONS on Communications
Y1 - 2001/11//
AB - This paper presents an intelligent routing algorithm, called Q-Agents, which bases its actions only on the agent-environment interaction. This algorithm combines properties of three learning strategies (Q-learning, dual reinforcement learning and learning based on ant colony behavior), adding to them two further mechanisms to improve its adaptability. Hence, the proposed algorithm is composed of a set of agents, moving through the network independently and concurrently, searching for the best routes. The agents share knowledge about the quality of the paths traversed through indirect communication. Information about the network and traffic status is updated by using Q-learning and dual reinforcement updating rules. Q-Agents were applied to a model of an AT&T circuit-switched network. Experiments were carried out on the performance of the algorithm under variations of traffic patterns, load level and topology, and with addition of noise in the information used to route calls. Q-Agents suffered a lower number of lost calls than two algorithms based entirely on ant colony behavior.
ER -