Artificial intelligence (AI), especially deep learning (DL), has become remarkable and is applied across various industries. However, adversarial examples (AE), which add small perturbations to the input data of deep neural networks (DNNs) to cause misclassification, are attracting attention. In this paper, we propose a novel black-box attack that crafts AE using only processing time, which is side-channel information of DNNs, without using training data, the model architecture and parameters, substitute models, or output probabilities. While several existing black-box attacks use output probabilities, our attack exploits the relationship between the number of activated nodes and the processing time of a DNN. In our attack, the perturbations for AE are determined by the differential processing time with respect to the input data. We show experimental results in which the AE of our attack increase the number of activated nodes and effectively cause misclassification to one of the incorrect labels. In addition, the experimental results highlight that our attack can evade gradient masking countermeasures, which mask output probabilities to prevent the crafting of AE by several black-box attacks.
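To make the mechanism concrete, below is a minimal, illustrative sketch of a timing-based black-box attack on a toy model. It is not the authors' exact algorithm: the victim class (TimingLeakyMLP), the greedy search in craft_ae, and all parameter values are hypothetical assumptions chosen only to show the idea that processing time can act as a proxy for the number of activated nodes. The sketch assumes the victim's forward pass does data-dependent work that skips non-activated (zero) ReLU outputs, so inference time grows with activation count, and that the attacker can only submit inputs and measure wall-clock time.

```python
# Illustrative sketch only (assumed names/parameters, not the paper's algorithm).
import time
import numpy as np

rng = np.random.default_rng(0)

class TimingLeakyMLP:
    """Toy 2-layer MLP whose second layer only processes activated hidden nodes,
    so its processing time depends on how many ReLU nodes are activated."""
    def __init__(self, d_in=64, d_hid=128, d_out=10):
        self.W1 = rng.standard_normal((d_hid, d_in)) / np.sqrt(d_in)
        self.W2 = rng.standard_normal((d_out, d_hid)) / np.sqrt(d_hid)

    def predict(self, x):
        h = np.maximum(self.W1 @ x, 0.0)        # ReLU activations
        active = np.flatnonzero(h)              # indices of activated nodes
        logits = np.zeros(self.W2.shape[0])
        # Data-dependent loop: cost scales with the number of activated nodes,
        # which is the timing side channel this sketch exploits.
        for j in active:
            logits += self.W2[:, j] * h[j]
        return int(np.argmax(logits))

def timed_query(model, x, repeats=10):
    """Black-box query: return (label, mean processing time). Only the label and
    the timing are observed; no probabilities, weights, or gradients."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        label = model.predict(x)
    return label, (time.perf_counter() - t0) / repeats

def craft_ae(model, x, eps=0.3, steps=200, probes=8):
    """Greedy sketch: probe random perturbation directions, keep the one that most
    increases processing time (proxy for more activated nodes), and stop if the
    predicted label changes."""
    x_adv = x.copy()
    orig_label, base_t = timed_query(model, x_adv)
    for _ in range(steps):
        best_dir, best_t = None, base_t
        for _ in range(probes):
            d = eps * rng.choice([-1.0, 1.0], size=x.shape)
            label, t = timed_query(model, np.clip(x_adv + d, 0, 1))
            if label != orig_label:
                return np.clip(x_adv + d, 0, 1), label   # misclassification reached
            if t > best_t:
                best_dir, best_t = d, t
        if best_dir is not None:
            x_adv = np.clip(x_adv + best_dir, 0, 1)
            base_t = best_t
    return x_adv, orig_label

model = TimingLeakyMLP()
x = rng.random(64)                  # stand-in for a normalized input sample
x_adv, new_label = craft_ae(model, x)
print("original label:", model.predict(x), "-> label after attack:", new_label)
```

In practice an attacker would need many repeated measurements to average out timing noise, and the approach only applies when the victim implementation actually has activation-dependent processing time (for example, sparsity-aware inference), which is the setting the paper considers.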
Tsunato NAKAI
Mitsubishi Electric Corporation
Daisuke SUZUKI
Mitsubishi Electric Corporation
Fumio OMATSU
Mitsubishi Electric Corporation
Takeshi FUJINO
Ritsumeikan University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Tsunato NAKAI, Daisuke SUZUKI, Fumio OMATSU, Takeshi FUJINO, "Adversarial Black-Box Attacks with Timing Side-Channel Leakage" in IEICE TRANSACTIONS on Fundamentals,
vol. E104-A, no. 1, pp. 143-151, January 2021, doi: 10.1587/transfun.2020CIP0022.
Abstract: Artificial intelligence (AI), especially deep learning (DL), has been remarkable and applied to various industries. However, adversarial examples (AE), which add small perturbations to input data of deep neural networks (DNNs) for misclassification, are attracting attention. In this paper, we propose a novel black-box attack to craft AE using only processing time which is side-channel information of DNNs, without using training data, model architecture and parameters, substitute models or output probability. While, several existing black-box attacks use output probability, our attack exploits a relationship between the number of activated nodes and the processing time of DNNs. The perturbations for AE are decided by the differential processing time according to input data in our attack. We show experimental results in which our attack's AE increase the number of activated nodes and cause misclassification to one of the incorrect labels effectively. In addition, the experimental results highlight that our attack can evade gradient masking countermeasures which mask output probability to prevent crafting AE against several black-box attacks.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2020CIP0022/_p
@ARTICLE{e104-a_1_143,
author={Tsunato NAKAI and Daisuke SUZUKI and Fumio OMATSU and Takeshi FUJINO},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Adversarial Black-Box Attacks with Timing Side-Channel Leakage},
year={2021},
volume={E104-A},
number={1},
pages={143-151},
abstract={Artificial intelligence (AI), especially deep learning (DL), has been remarkable and applied to various industries. However, adversarial examples (AE), which add small perturbations to input data of deep neural networks (DNNs) for misclassification, are attracting attention. In this paper, we propose a novel black-box attack to craft AE using only processing time which is side-channel information of DNNs, without using training data, model architecture and parameters, substitute models or output probability. While, several existing black-box attacks use output probability, our attack exploits a relationship between the number of activated nodes and the processing time of DNNs. The perturbations for AE are decided by the differential processing time according to input data in our attack. We show experimental results in which our attack's AE increase the number of activated nodes and cause misclassification to one of the incorrect labels effectively. In addition, the experimental results highlight that our attack can evade gradient masking countermeasures which mask output probability to prevent crafting AE against several black-box attacks.},
keywords={},
doi={10.1587/transfun.2020CIP0022},
ISSN={1745-1337},
month={January},}
TY - JOUR
TI - Adversarial Black-Box Attacks with Timing Side-Channel Leakage
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 143
EP - 151
AU - Tsunato NAKAI
AU - Daisuke SUZUKI
AU - Fumio OMATSU
AU - Takeshi FUJINO
PY - 2021
DO - 10.1587/transfun.2020CIP0022
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E104-A
IS - 1
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - January 2021
AB - Artificial intelligence (AI), especially deep learning (DL), has been remarkable and applied to various industries. However, adversarial examples (AE), which add small perturbations to input data of deep neural networks (DNNs) for misclassification, are attracting attention. In this paper, we propose a novel black-box attack to craft AE using only processing time which is side-channel information of DNNs, without using training data, model architecture and parameters, substitute models or output probability. While, several existing black-box attacks use output probability, our attack exploits a relationship between the number of activated nodes and the processing time of DNNs. The perturbations for AE are decided by the differential processing time according to input data in our attack. We show experimental results in which our attack's AE increase the number of activated nodes and cause misclassification to one of the incorrect labels effectively. In addition, the experimental results highlight that our attack can evade gradient masking countermeasures which mask output probability to prevent crafting AE against several black-box attacks.
ER -