The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Saliency quality assessment aims to estimate the objective quality of a saliency map without access to the ground truth. Existing works typically evaluate saliency quality by using information from the saliency map itself to assess its compactness and closedness, while ignoring information from the image content that can be used to assess the consistency and completeness of the foreground. In this letter, we propose a novel multi-information fusion network to capture information from both the saliency map and the image content. The key idea is to introduce a siamese module that collects information from the foreground and background, aiming to assess the consistency and completeness of the foreground and the difference between foreground and background. Experiments demonstrate that incorporating image content information significantly boosts the performance of the proposed method. Furthermore, we validate our method on two applications: saliency detection and segmentation. Our method is used to select the optimal saliency map from a set of candidate saliency maps, and the selected map is fed into a segmentation algorithm to generate a segmentation map. Experimental results verify the effectiveness of our method.
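The abstract does not give architectural details, so the following is only a minimal illustrative sketch of the general idea: a saliency-map branch plus a weight-sharing (siamese) content branch applied to saliency-masked foreground and background views, with the three feature vectors fused and regressed to a scalar quality score. The module names, layer sizes, and masking scheme are assumptions for illustration, not the authors' published network.

```python
import torch
import torch.nn as nn


class SiameseBranch(nn.Module):
    """Shared-weight encoder applied to foreground and background views (illustrative)."""

    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.encoder(x).flatten(1)  # (B, feat_dim)


class MultiInfoFusionSketch(nn.Module):
    """Hypothetical fusion of saliency-map features with siamese foreground/background
    content features, regressed to a scalar quality score in [0, 1]."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.map_branch = SiameseBranch(in_channels=1, feat_dim=feat_dim)
        self.content_branch = SiameseBranch(in_channels=3, feat_dim=feat_dim)  # shared weights
        self.regressor = nn.Sequential(
            nn.Linear(3 * feat_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, image, saliency):
        # Mask the image with the saliency map to obtain foreground/background views.
        fg = image * saliency
        bg = image * (1.0 - saliency)
        f_map = self.map_branch(saliency)
        f_fg = self.content_branch(fg)   # siamese branch: same weights for both views
        f_bg = self.content_branch(bg)
        return self.regressor(torch.cat([f_map, f_fg, f_bg], dim=1))


if __name__ == "__main__":
    net = MultiInfoFusionSketch()
    img = torch.rand(2, 3, 224, 224)   # RGB images
    sal = torch.rand(2, 1, 224, 224)   # candidate saliency maps in [0, 1]
    scores = net(img, sal)             # (2, 1) predicted quality scores
    print(scores.shape)
```

In the applications described in the abstract, such a scorer could be run over a set of candidate saliency maps for one image and the highest-scoring map selected before being passed to a segmentation algorithm.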
Kai TAN
University of Electronic Science and Technology of China
Qingbo WU
University of Electronic Science and Technology of China
Fanman MENG
University of Electronic Science and Technology of China
Linfeng XU
University of Electronic Science and Technology of China
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kai TAN, Qingbo WU, Fanman MENG, Linfeng XU, "Multi Information Fusion Network for Saliency Quality Assessment" in IEICE TRANSACTIONS on Information,
vol. E102-D, no. 5, pp. 1111-1114, May 2019, doi: 10.1587/transinf.2019EDL8002.
Abstract: Saliency quality assessment aims at estimating the objective quality of a saliency map without access to the ground-truth. Existing works typically evaluate saliency quality by utilizing information from saliency maps to assess their compactness and closedness while ignoring the information from image content, which can be used to assess the consistency and completeness of the foreground. In this letter, we propose a novel multi-information fusion network to capture the information from both the saliency map and image content. The key idea is to introduce a siamese module to collect information from foreground and background, aiming to assess the consistency and completeness of the foreground and the difference between foreground and background. Experiments demonstrate that by incorporating image content information, the performance of the proposed method is significantly boosted. Furthermore, we validate our method on two applications: saliency detection and segmentation. Our method is utilized to choose the optimal saliency map from a set of candidate saliency maps, and the selected saliency map is fed into a segmentation algorithm to generate a segmentation map. Experimental results verify the effectiveness of our method.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDL8002/_p
@ARTICLE{e102-d_5_1111,
author={Kai TAN and Qingbo WU and Fanman MENG and Linfeng XU},
journal={IEICE TRANSACTIONS on Information},
title={Multi Information Fusion Network for Saliency Quality Assessment},
year={2019},
volume={E102-D},
number={5},
pages={1111-1114},
abstract={Saliency quality assessment aims at estimating the objective quality of a saliency map without access to the ground-truth. Existing works typically evaluate saliency quality by utilizing information from saliency maps to assess their compactness and closedness while ignoring the information from image content, which can be used to assess the consistency and completeness of the foreground. In this letter, we propose a novel multi-information fusion network to capture the information from both the saliency map and image content. The key idea is to introduce a siamese module to collect information from foreground and background, aiming to assess the consistency and completeness of the foreground and the difference between foreground and background. Experiments demonstrate that by incorporating image content information, the performance of the proposed method is significantly boosted. Furthermore, we validate our method on two applications: saliency detection and segmentation. Our method is utilized to choose the optimal saliency map from a set of candidate saliency maps, and the selected saliency map is fed into a segmentation algorithm to generate a segmentation map. Experimental results verify the effectiveness of our method.},
keywords={},
doi={10.1587/transinf.2019EDL8002},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Multi Information Fusion Network for Saliency Quality Assessment
T2 - IEICE TRANSACTIONS on Information
SP - 1111
EP - 1114
AU - Kai TAN
AU - Qingbo WU
AU - Fanman MENG
AU - Linfeng XU
PY - 2019
DO - 10.1587/transinf.2019EDL8002
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E102-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2019
AB - Saliency quality assessment aims at estimating the objective quality of a saliency map without access to the ground-truth. Existing works typically evaluate saliency quality by utilizing information from saliency maps to assess their compactness and closedness while ignoring the information from image content, which can be used to assess the consistency and completeness of the foreground. In this letter, we propose a novel multi-information fusion network to capture the information from both the saliency map and image content. The key idea is to introduce a siamese module to collect information from foreground and background, aiming to assess the consistency and completeness of the foreground and the difference between foreground and background. Experiments demonstrate that by incorporating image content information, the performance of the proposed method is significantly boosted. Furthermore, we validate our method on two applications: saliency detection and segmentation. Our method is utilized to choose the optimal saliency map from a set of candidate saliency maps, and the selected saliency map is fed into a segmentation algorithm to generate a segmentation map. Experimental results verify the effectiveness of our method.
ER -