The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; e.g., some numerals may be rendered as "XNUMX".
Hyun KWON
Korea Advanced Institute of Science and Technology,Korea Military Academy
Hyunsoo YOON
Korea Advanced Institute of Science and Technology
Ki-Woong PARK
Sejong University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hyun KWON, Hyunsoo YOON, Ki-Woong PARK, "Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks" in IEICE TRANSACTIONS on Information,
vol. E103-D, no. 4, pp. 883-887, April 2020, doi: 10.1587/transinf.2019EDL8170.
Abstract: We propose a multi-targeted backdoor that misleads different models to different classes. The method trains multiple models with data that include specific triggers that will be misclassified by different models into different classes. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and Tensorflow as a machine learning library. Experimental results show that the proposed method with a trigger can cause misclassification as different classes by different models with a 100% attack success rate on MNIST and Fashion-MNIST while maintaining the 97.18% and 91.1% accuracy, respectively, on data without a trigger.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDL8170/_p
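The abstract describes training multiple models on data poisoned with a single shared trigger, where each model is taught to map that trigger to its own target class. The paper does not publish its exact trigger pattern or poisoning ratio here, so the sketch below is purely illustrative: a hypothetical square patch is stamped on a fraction of the samples, and each model's copy of the poisoned set relabels those samples to that model's target class (e.g., model A to class 0, model B to class 1).

```python
import numpy as np

def add_trigger(images, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner.
    Illustrative only; the paper's actual trigger pattern may differ."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = trigger_value
    return poisoned

def make_poisoned_set(images, labels, target_class, poison_fraction=0.1, seed=0):
    """Build one model's training set: a fraction of samples receive the
    shared trigger and are relabeled to that model's own target class."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(n * poison_fraction), replace=False)
    x, y = images.copy(), labels.copy()
    x[idx] = add_trigger(images[idx])
    y[idx] = target_class
    return x, y

# Four models share one trigger but are assigned different target classes.
targets = {"A": 0, "B": 1, "C": 2, "D": 3}
images = np.random.rand(100, 28, 28)          # stand-in for MNIST images
labels = np.random.randint(10, size=100)      # stand-in for MNIST labels
poisoned_sets = {m: make_poisoned_set(images, labels, t, seed=i)
                 for i, (m, t) in enumerate(targets.items())}
```

Each model is then trained normally on its own poisoned set; at test time a single triggered input should be misclassified differently by each model, while clean inputs remain largely unaffected.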
@ARTICLE{e103-d_4_883,
author={Hyun KWON and Hyunsoo YOON and Ki-Woong PARK},
journal={IEICE TRANSACTIONS on Information},
title={Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks},
year={2020},
volume={E103-D},
number={4},
pages={883-887},
abstract={We propose a multi-targeted backdoor that misleads different models to different classes. The method trains multiple models with data that include specific triggers that will be misclassified by different models into different classes. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and Tensorflow as a machine learning library. Experimental results show that the proposed method with a trigger can cause misclassification as different classes by different models with a 100% attack success rate on MNIST and Fashion-MNIST while maintaining the 97.18% and 91.1% accuracy, respectively, on data without a trigger.},
keywords={},
doi={10.1587/transinf.2019EDL8170},
ISSN={1745-1361},
month={April},}
TY - JOUR
TI - Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
T2 - IEICE TRANSACTIONS on Information
SP - 883
EP - 887
AU - Hyun KWON
AU - Hyunsoo YOON
AU - Ki-Woong PARK
PY - 2020
DO - 10.1587/transinf.2019EDL8170
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E103-D
IS - 4
JA - IEICE TRANSACTIONS on Information
Y1 - April 2020
AB - We propose a multi-targeted backdoor that misleads different models to different classes. The method trains multiple models with data that include specific triggers that will be misclassified by different models into different classes. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and Tensorflow as a machine learning library. Experimental results show that the proposed method with a trigger can cause misclassification as different classes by different models with a 100% attack success rate on MNIST and Fashion-MNIST while maintaining the 97.18% and 91.1% accuracy, respectively, on data without a trigger.
ER -