The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations. For example, some numerals may appear as "XNUMX".
Copyrights notice
Naoki HAYASHI
Osaka University
Kazuyuki ISHIKAWA
Osaka University
Shigemasa TAKAI
Osaka University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Naoki HAYASHI, Kazuyuki ISHIKAWA, Shigemasa TAKAI, "Distributed Subgradient Method for Constrained Convex Optimization with Quantized and Event-Triggered Communication" in IEICE TRANSACTIONS on Fundamentals,
vol. E103-A, no. 2, pp. 428-434, February 2020, doi: 10.1587/transfun.2019MAP0007.
Abstract: In this paper, we propose a distributed subgradient-based method over quantized and event-triggered communication networks for constrained convex optimization. In the proposed method, each agent sends its quantized state to its neighboring agents only at its trigger times, through a dynamic encoding and decoding scheme. After the quantized and event-triggered information exchange, each agent locally updates its state by a consensus-based subgradient algorithm. We show a sufficient condition for convergence under summability conditions on the diminishing step-size.
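The abstract describes three ingredients: an event trigger that decides when an agent broadcasts, a quantizer applied to the broadcast state, and a consensus-plus-subgradient update with a diminishing step-size. The following is a simplified, hypothetical sketch of that pattern, not the authors' exact algorithm: all parameter choices (the decaying trigger threshold, the shrinking quantizer resolution, the toy objective f_i(x) = |x - a_i| whose sum is minimized at the median of the a_i, and the interval constraint) are illustrative assumptions.

```python
import numpy as np

def run(num_iters=2000):
    # Toy problem: minimize sum_i |x - a_i| over X = [-10, 10].
    # The optimum is the median of the targets, i.e. 2.0 here.
    targets = np.array([1.0, 2.0, 3.0])
    n = len(targets)
    lo, hi = -10.0, 10.0
    x = np.array([-5.0, 0.0, 8.0])   # local states
    xhat = x.copy()                  # last broadcast (quantized) estimates
    W = np.full((n, n), 1.0 / n)     # doubly stochastic weights, complete graph

    for k in range(num_iters):
        alpha = 1.0 / (k + 1)        # diminishing step-size (non-summable)
        thresh = 2.0 * 0.99 ** k     # event-trigger threshold (illustrative decay)
        delta = 0.5 * 0.99 ** k      # quantizer resolution (illustrative decay)

        # Event-triggered, quantized broadcast: agent i transmits only when
        # its state has drifted far enough from its last broadcast value.
        for i in range(n):
            if abs(x[i] - xhat[i]) > thresh:
                xhat[i] = np.round(x[i] / delta) * delta

        # Consensus on the broadcast estimates, a subgradient step on the
        # local objective, then projection onto the constraint set X.
        g = np.sign(x - targets)     # subgradient of |x - a_i|
        x = x + (W @ xhat - xhat) - alpha * g
        x = np.clip(x, lo, hi)
    return x
```

In this sketch each agent mixes only the stale quantized estimates `xhat` of its neighbors, so communication happens exclusively at trigger times; shrinking the threshold and quantizer resolution together with the step-size is what lets the consensus error vanish despite the coarse, intermittent exchanges.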
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2019MAP0007/_p
@ARTICLE{e103-a_2_428,
author={Naoki HAYASHI and Kazuyuki ISHIKAWA and Shigemasa TAKAI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Distributed Subgradient Method for Constrained Convex Optimization with Quantized and Event-Triggered Communication},
year={2020},
volume={E103-A},
number={2},
pages={428-434},
abstract={In this paper, we propose a distributed subgradient-based method over quantized and event-triggered communication networks for constrained convex optimization. In the proposed method, each agent sends the quantized state to the neighbor agents only at its trigger times through the dynamic encoding and decoding scheme. After the quantized and event-triggered information exchanges, each agent locally updates its state by a consensus-based subgradient algorithm. We show a sufficient condition for convergence under summability conditions of a diminishing step-size.},
keywords={},
doi={10.1587/transfun.2019MAP0007},
ISSN={1745-1337},
month={February},}
TY - JOUR
TI - Distributed Subgradient Method for Constrained Convex Optimization with Quantized and Event-Triggered Communication
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 428
EP - 434
AU - Naoki HAYASHI
AU - Kazuyuki ISHIKAWA
AU - Shigemasa TAKAI
PY - 2020
DO - 10.1587/transfun.2019MAP0007
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E103-A
IS - 2
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - February 2020
AB - In this paper, we propose a distributed subgradient-based method over quantized and event-triggered communication networks for constrained convex optimization. In the proposed method, each agent sends the quantized state to the neighbor agents only at its trigger times through the dynamic encoding and decoding scheme. After the quantized and event-triggered information exchanges, each agent locally updates its state by a consensus-based subgradient algorithm. We show a sufficient condition for convergence under summability conditions of a diminishing step-size.
ER -