The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations. ex. Some numerals are expressed as "XNUMX".
Copyrights notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Eric W. M. WONG, Andy K. M. CHAN, Sammy CHAN, King-Tim KO, "Bandwidth Allocation for Virtual Paths in ATM Networks with Dynamic Routing" in IEICE TRANSACTIONS on Communications,
vol. E83-B, no. 3, pp. 626-637, March 2000, doi: .
Abstract: The Virtual Path (VP) concept in ATM networks simplifies network structure, traffic control and resource management. For VP formulation, a VP can carry traffic of the same type (the separate scheme) or of different types (the unified scheme). For VP adjustment, a certain amount of bandwidth can be dynamically assigned (reserved) to VPs, where the amount (the bandwidth incremental/decremental size) is a predetermined system parameter. In this paper, we study Least Loaded Path-based dynamic routing schemes with various residual bandwidth definitions under different bandwidth allocation (VP formulation and adjustment) schemes. In particular, we evaluate the call blocking probability and VP set-up processing load with varying (bandwidth) incremental sizes. Also, we investigate numerically how the use of VPs trades off the blocking probability against the processing load. It is found that the unified scheme can outperform the separate scheme for certain incremental sizes. Moreover, we propose two ways to reduce the processing load without increasing the blocking probability. Using these methods, the separate scheme always outperforms the unified scheme.
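The Least Loaded Path idea referenced in the abstract can be illustrated with a short sketch: among the candidate paths for a call, choose the one whose bottleneck (minimum-link) residual bandwidth is largest, and block the call if no path can carry it. This is a generic illustration only, not the paper's algorithm; the link representation, the `residual` definition (the paper studies several residual bandwidth definitions), and all names here are assumptions for the example.

```python
# Illustrative sketch of Least-Loaded-Path (LLP) routing, assuming links are
# dicts with "capacity" and "reserved" fields. Not the paper's exact scheme.

def residual(link):
    # Simplest residual bandwidth definition: link capacity minus the
    # bandwidth already reserved (e.g. for VPs) on that link. The paper
    # evaluates several alternative definitions.
    return link["capacity"] - link["reserved"]

def least_loaded_path(paths, demand):
    """Return the candidate path with the largest bottleneck residual
    bandwidth that can fit the demand, or None (call blocked)."""
    best, best_res = None, -1
    for path in paths:
        bottleneck = min(residual(link) for link in path)
        if bottleneck >= demand and bottleneck > best_res:
            best, best_res = path, bottleneck
    return best

if __name__ == "__main__":
    link = lambda cap, res: {"capacity": cap, "reserved": res}
    p1 = [link(100, 60), link(100, 30)]  # bottleneck residual = 40
    p2 = [link(100, 20), link(100, 50)]  # bottleneck residual = 50
    print(least_loaded_path([p1, p2], demand=10) is p2)  # p2 is least loaded
    print(least_loaded_path([p1, p2], demand=60) is None)  # no path fits: blocked
```

Under a VP adjustment scheme, `reserved` would change in steps of the incremental/decremental size whenever a VP's bandwidth is adjusted, which is the trade-off the paper quantifies: larger increments mean fewer VP set-up operations (lower processing load) but coarser reservations (potentially higher blocking).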
URL: https://global.ieice.org/en_transactions/communications/10.1587/e83-b_3_626/_p
@ARTICLE{e83-b_3_626,
author={Eric W. M. WONG and Andy K. M. CHAN and Sammy CHAN and King-Tim KO},
journal={IEICE TRANSACTIONS on Communications},
title={Bandwidth Allocation for Virtual Paths in ATM Networks with Dynamic Routing},
year={2000},
volume={E83-B},
number={3},
pages={626-637},
abstract={The Virtual Path (VP) concept in ATM networks simplifies network structure, traffic control and resource management. For VP formulation, a VP can carry traffic of the same type (the separate scheme) or of different types (the unified scheme). For VP adjustment, a certain amount of bandwidth can be dynamically assigned (reserved) to VPs, where the amount (the bandwidth incremental/decremental size) is a predetermined system parameter. In this paper, we study Least Loaded Path-based dynamic routing schemes with various residual bandwidth definitions under different bandwidth allocation (VP formulation and adjustment) schemes. In particular, we evaluate the call blocking probability and VP set-up processing load with varying (bandwidth) incremental sizes. Also, we investigate numerically how the use of VPs trades off the blocking probability against the processing load. It is found that the unified scheme can outperform the separate scheme for certain incremental sizes. Moreover, we propose two ways to reduce the processing load without increasing the blocking probability. Using these methods, the separate scheme always outperforms the unified scheme.},
keywords={},
doi={},
ISSN={},
month={March},}
TY - JOUR
TI - Bandwidth Allocation for Virtual Paths in ATM Networks with Dynamic Routing
T2 - IEICE TRANSACTIONS on Communications
SP - 626
EP - 637
AU - Eric W. M. WONG
AU - Andy K. M. CHAN
AU - Sammy CHAN
AU - King-Tim KO
PY - 2000
DO -
JO - IEICE TRANSACTIONS on Communications
SN -
VL - E83-B
IS - 3
JA - IEICE TRANSACTIONS on Communications
Y1 - March 2000
AB - The Virtual Path (VP) concept in ATM networks simplifies network structure, traffic control and resource management. For VP formulation, a VP can carry traffic of the same type (the separate scheme) or of different types (the unified scheme). For VP adjustment, a certain amount of bandwidth can be dynamically assigned (reserved) to VPs, where the amount (the bandwidth incremental/decremental size) is a predetermined system parameter. In this paper, we study Least Loaded Path-based dynamic routing schemes with various residual bandwidth definitions under different bandwidth allocation (VP formulation and adjustment) schemes. In particular, we evaluate the call blocking probability and VP set-up processing load with varying (bandwidth) incremental sizes. Also, we investigate numerically how the use of VPs trades off the blocking probability against the processing load. It is found that the unified scheme can outperform the separate scheme for certain incremental sizes. Moreover, we propose two ways to reduce the processing load without increasing the blocking probability. Using these methods, the separate scheme always outperforms the unified scheme.
ER -