Yoshiharu YAMAGISHI
Hokkaido University
Tatsuya KANEKO
Hokkaido University
Megumi AKAI-KASAYA
Hokkaido University, Graduate School of Engineering
Tetsuya ASAI
Hokkaido University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yoshiharu YAMAGISHI, Tatsuya KANEKO, Megumi AKAI-KASAYA, Tetsuya ASAI, "Holmes: A Hardware-Oriented Optimizer Using Logarithms" in IEICE TRANSACTIONS on Information and Systems,
vol. E105-D, no. 12, pp. 2040-2047, December 2022, doi: 10.1587/transinf.2022PAP0001.
Abstract: Edge computing, which has been gaining attention in recent years, has many advantages, such as reducing the load on the cloud, not being affected by the communication environment, and providing excellent security. Therefore, many researchers have attempted to implement neural networks, which are representative of machine learning in edge computing. Neural networks can be divided into inference and learning parts; however, there has been little research on implementing the learning component in edge computing in contrast to the inference part. This is because learning requires more memory and computation than inference, easily exceeding the limit of resources available for edge computing. To overcome this problem, this research focuses on the optimizer, which is the heart of learning. In this paper, we introduce our new optimizer, hardware-oriented logarithmic momentum estimation (Holmes), which incorporates new perspectives not found in existing optimizers in terms of characteristics and strengths of hardware. The performance of Holmes was evaluated by comparing it with other optimizers with respect to learning progress and convergence speed. Important aspects of hardware implementation, such as memory and operation requirements are also discussed. The results show that Holmes is a good match for edge computing with relatively low resource requirements and fast learning convergence. Holmes will help create an era in which advanced machine learning can be realized on edge computing.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022PAP0001/_p
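Note: the abstract above names the optimizer (hardware-oriented logarithmic momentum estimation, Holmes) but does not give its update rule; that is defined in the paper itself. The sketch below is purely illustrative and is not the authors' algorithm: it shows, under assumed names and constants, what a momentum-style optimizer made "hardware-oriented" through logarithmic (power-of-two) gradient quantization might look like, so that multiplications reduce to exponent arithmetic (bit shifts in hardware).

# Illustrative sketch only -- NOT the Holmes update rule from the paper.
# All function names, class names, and constants here are assumptions.
import numpy as np

def log2_quantize(x, min_exp=-16):
    """Round |x| to the nearest power of two and keep the sign.
    Magnitudes below 2**min_exp are flushed to zero, mimicking a
    fixed-width exponent register (assumed behavior)."""
    sign = np.sign(x)
    mag = np.abs(x)
    exp = np.round(np.log2(np.maximum(mag, 2.0 ** min_exp)))
    exp = np.clip(exp, min_exp, 0)
    return sign * np.where(mag >= 2.0 ** min_exp, 2.0 ** exp, 0.0)

class LogMomentumSketch:
    """Momentum SGD with log2-quantized gradients (illustrative assumption)."""

    def __init__(self, lr=2.0 ** -6, beta=0.875):
        # Learning rate (1/64) and momentum factor (1 - 1/8) are chosen as
        # power-of-two-friendly constants so scaling can be done with shifts.
        self.lr = lr
        self.beta = beta
        self.velocity = None

    def step(self, params, grads):
        if self.velocity is None:
            self.velocity = np.zeros_like(params)
        g = log2_quantize(grads)  # shift-friendly gradient representation
        self.velocity = self.beta * self.velocity + (1.0 - self.beta) * g
        return params - self.lr * self.velocity

# Minimal usage example on a toy quadratic loss L(w) = ||w||^2 / 2,
# whose gradient is simply w; the parameters should approach zero.
if __name__ == "__main__":
    w = np.array([0.5, -0.25, 0.1])
    opt = LogMomentumSketch()
    for _ in range(100):
        w = opt.step(w, grads=w)
    print(w)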
@ARTICLE{e105-d_12_2040,
author={Yoshiharu YAMAGISHI and Tatsuya KANEKO and Megumi AKAI-KASAYA and Tetsuya ASAI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Holmes: A Hardware-Oriented Optimizer Using Logarithms},
year={2022},
volume={E105-D},
number={12},
pages={2040-2047},
abstract={Edge computing, which has been gaining attention in recent years, has many advantages, such as reducing the load on the cloud, not being affected by the communication environment, and providing excellent security. Therefore, many researchers have attempted to implement neural networks, which are representative of machine learning in edge computing. Neural networks can be divided into inference and learning parts; however, there has been little research on implementing the learning component in edge computing in contrast to the inference part. This is because learning requires more memory and computation than inference, easily exceeding the limit of resources available for edge computing. To overcome this problem, this research focuses on the optimizer, which is the heart of learning. In this paper, we introduce our new optimizer, hardware-oriented logarithmic momentum estimation (Holmes), which incorporates new perspectives not found in existing optimizers in terms of characteristics and strengths of hardware. The performance of Holmes was evaluated by comparing it with other optimizers with respect to learning progress and convergence speed. Important aspects of hardware implementation, such as memory and operation requirements are also discussed. The results show that Holmes is a good match for edge computing with relatively low resource requirements and fast learning convergence. Holmes will help create an era in which advanced machine learning can be realized on edge computing.},
keywords={},
doi={10.1587/transinf.2022PAP0001},
ISSN={1745-1361},
month={December},}
TY - JOUR
TI - Holmes: A Hardware-Oriented Optimizer Using Logarithms
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2040
EP - 2047
AU - Yoshiharu YAMAGISHI
AU - Tatsuya KANEKO
AU - Megumi AKAI-KASAYA
AU - Tetsuya ASAI
PY - 2022
DO - 10.1587/transinf.2022PAP0001
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E105-D
IS - 12
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - December 2022
AB - Edge computing, which has been gaining attention in recent years, has many advantages, such as reducing the load on the cloud, not being affected by the communication environment, and providing excellent security. Therefore, many researchers have attempted to implement neural networks, which are representative of machine learning in edge computing. Neural networks can be divided into inference and learning parts; however, there has been little research on implementing the learning component in edge computing in contrast to the inference part. This is because learning requires more memory and computation than inference, easily exceeding the limit of resources available for edge computing. To overcome this problem, this research focuses on the optimizer, which is the heart of learning. In this paper, we introduce our new optimizer, hardware-oriented logarithmic momentum estimation (Holmes), which incorporates new perspectives not found in existing optimizers in terms of characteristics and strengths of hardware. The performance of Holmes was evaluated by comparing it with other optimizers with respect to learning progress and convergence speed. Important aspects of hardware implementation, such as memory and operation requirements are also discussed. The results show that Holmes is a good match for edge computing with relatively low resource requirements and fast learning convergence. Holmes will help create an era in which advanced machine learning can be realized on edge computing.
ER -