The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Copyright notice
Zixiao ZHANG
Kyoto University
Fujun HE
Kyoto University
Eiji OKI
Kyoto University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Zixiao ZHANG, Fujun HE, Eiji OKI, "Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach" in IEICE TRANSACTIONS on Communications,
vol. E106-B, no. 7, pp. 557-570, July 2023, doi: 10.1587/transcom.2022EBP3160.
Abstract: This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. In dynamic scenarios, we define the state, action, and reward to form the learning approach. The learning agents are trained with the asynchronous advantage actor-critic algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem. The worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches by applying them in simulated environments. The existing approaches include three greedy approaches, a simulated annealing approach, and an integer linear programming approach. The numerical results show that the introduced deep reinforcement learning approach improves the performance by 6-27% in our examined cases.
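The static-scenario formulation the abstract mentions is an integer linear program over 0-1 assignment variables. As a rough illustration of the kind of decision such a model encodes, the toy sketch below assigns functions to nodes to minimize the makespan, solved here by brute-force enumeration rather than an ILP solver. The instance data, names, and objective are illustrative assumptions, not the paper's actual formulation.

```python
from itertools import product

# Toy static scheduling instance: assign each function to one node so
# that the makespan (latest node completion time) is minimized. The
# binary choice "function f runs on node n" plays the role of the
# ILP's 0-1 decision variables; here we simply enumerate them all.
proc_time = {               # processing time of function f on node n
    ("f1", "n1"): 2, ("f1", "n2"): 3,
    ("f2", "n1"): 4, ("f2", "n2"): 1,
    ("f3", "n1"): 3, ("f3", "n2"): 3,
}
functions = ["f1", "f2", "f3"]
nodes = ["n1", "n2"]

def makespan(assignment):
    # Functions on the same node run sequentially; nodes run in parallel.
    load = {n: 0 for n in nodes}
    for f, n in zip(functions, assignment):
        load[n] += proc_time[(f, n)]
    return max(load.values())

# Enumerate every assignment vector and keep the one with minimum makespan.
best = min(product(nodes, repeat=len(functions)), key=makespan)
print(best, makespan(best))
```

An ILP solver reaches the same optimum without exhaustive enumeration, which is what makes the ILP approach viable as a baseline for larger static instances.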
URL: https://global.ieice.org/en_transactions/communications/10.1587/transcom.2022EBP3160/_p
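The master/worker structure described in the abstract follows the asynchronous advantage actor-critic (A3C) pattern: several workers interact with copies of the environment in parallel and asynchronously push advantage-weighted updates into shared master parameters. The minimal threading sketch below shows only this update pattern; the toy environment, reward, learning rate, and all names are assumptions for illustration, not the paper's model.

```python
import threading
import random

class MasterAgent:
    """Holds the shared parameters that worker agents update."""
    def __init__(self, n_actions):
        self.theta = [0.0] * n_actions   # one preference value per action
        self.lock = threading.Lock()
        self.updates = 0

    def apply_update(self, grad, lr=0.1):
        # Each worker applies its own update without waiting for the
        # others -- the "asynchronous" part of A3C.
        with self.lock:
            self.theta = [t + lr * g for t, g in zip(self.theta, grad)]
            self.updates += 1

def worker(master, n_steps, seed):
    rng = random.Random(seed)
    for _ in range(n_steps):
        # Toy environment: action i yields reward close to i.
        action = rng.randrange(len(master.theta))
        reward = action + rng.uniform(-0.5, 0.5)
        baseline = sum(master.theta) / len(master.theta)
        advantage = reward - baseline     # actor-critic advantage signal
        grad = [advantage if i == action else 0.0
                for i in range(len(master.theta))]
        master.apply_update(grad)

master = MasterAgent(n_actions=3)
threads = [threading.Thread(target=worker, args=(master, 200, s))
           for s in range(4)]            # four worker agents in parallel
for t in threads:
    t.start()
for t in threads:
    t.join()
print(master.updates)  # prints 800 (4 workers x 200 steps)
```

In the paper's setting each network function virtualization node gets its own master and workers; in a real implementation the workers would compute neural-network gradients rather than this tabular update.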
@ARTICLE{e106-b_7_557,
author={Zixiao ZHANG and Fujun HE and Eiji OKI},
journal={IEICE TRANSACTIONS on Communications},
title={Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach},
year={2023},
volume={E106-B},
number={7},
pages={557-570},
abstract={This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. In dynamic scenarios, we define the state, action, and reward to form the learning approach. The learning agents are trained with the asynchronous advantage actor-critic algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem. The worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches by applying them in simulated environments. The existing approaches include three greedy approaches, a simulated annealing approach, and an integer linear programming approach. The numerical results show that the introduced deep reinforcement learning approach improves the performance by 6-27% in our examined cases.},
keywords={},
doi={10.1587/transcom.2022EBP3160},
ISSN={1745-1345},
month={July},}
TY - JOUR
TI - Dynamic VNF Scheduling: A Deep Reinforcement Learning Approach
T2 - IEICE TRANSACTIONS on Communications
SP - 557
EP - 570
AU - Zixiao ZHANG
AU - Fujun HE
AU - Eiji OKI
PY - 2023
DO - 10.1587/transcom.2022EBP3160
JO - IEICE TRANSACTIONS on Communications
SN - 1745-1345
VL - E106-B
IS - 7
JA - IEICE TRANSACTIONS on Communications
Y1 - July 2023
AB - This paper introduces a deep reinforcement learning approach to solve the virtual network function scheduling problem in dynamic scenarios. We formulate an integer linear programming model for the problem in static scenarios. In dynamic scenarios, we define the state, action, and reward to form the learning approach. The learning agents are trained with the asynchronous advantage actor-critic algorithm. We assign a master agent and several worker agents to each network function virtualization node in the problem. The worker agents work in parallel to help the master agent make decisions. We compare the introduced approach with existing approaches by applying them in simulated environments. The existing approaches include three greedy approaches, a simulated annealing approach, and an integer linear programming approach. The numerical results show that the introduced deep reinforcement learning approach improves the performance by 6-27% in our examined cases.
ER -