Rintaro YANAGI (Hokkaido University)
Ren TOGO (Hokkaido University)
Takahiro OGAWA (Hokkaido University)
Miki HASEYAMA (Hokkaido University)
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Rintaro YANAGI, Ren TOGO, Takahiro OGAWA, Miki HASEYAMA, "Domain Adaptive Cross-Modal Image Retrieval via Modality and Domain Translations" in IEICE TRANSACTIONS on Fundamentals,
vol. E104-A, no. 6, pp. 866-875, June 2021, doi: 10.1587/transfun.2020IMP0011.
Abstract: Various cross-modal retrieval methods that can retrieve images related to a query sentence without text annotations have been proposed. Although a high level of retrieval performance is achieved by these methods, they have been developed for a single domain retrieval setting. When retrieval candidate images come from various domains, the retrieval performance of these methods might be decreased. To deal with this problem, we propose a new domain adaptive cross-modal retrieval method. By translating a modality and domains of a query and candidate images, our method can retrieve desired images accurately in a different domain retrieval setting. Experimental results for clipart and painting datasets showed that the proposed method has better retrieval performance than that of other conventional and state-of-the-art methods.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2020IMP0011/_p
@ARTICLE{e104-a_6_866,
author={Rintaro YANAGI and Ren TOGO and Takahiro OGAWA and Miki HASEYAMA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Domain Adaptive Cross-Modal Image Retrieval via Modality and Domain Translations},
year={2021},
volume={E104-A},
number={6},
pages={866-875},
abstract={Various cross-modal retrieval methods that can retrieve images related to a query sentence without text annotations have been proposed. Although a high level of retrieval performance is achieved by these methods, they have been developed for a single domain retrieval setting. When retrieval candidate images come from various domains, the retrieval performance of these methods might be decreased. To deal with this problem, we propose a new domain adaptive cross-modal retrieval method. By translating a modality and domains of a query and candidate images, our method can retrieve desired images accurately in a different domain retrieval setting. Experimental results for clipart and painting datasets showed that the proposed method has better retrieval performance than that of other conventional and state-of-the-art methods.},
keywords={},
doi={10.1587/transfun.2020IMP0011},
ISSN={1745-1337},
month={June},}
TY - JOUR
TI - Domain Adaptive Cross-Modal Image Retrieval via Modality and Domain Translations
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 866
EP - 875
AU - Rintaro YANAGI
AU - Ren TOGO
AU - Takahiro OGAWA
AU - Miki HASEYAMA
PY - 2021
DO - 10.1587/transfun.2020IMP0011
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E104-A
IS - 6
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2021/06//
AB - Various cross-modal retrieval methods that can retrieve images related to a query sentence without text annotations have been proposed. Although a high level of retrieval performance is achieved by these methods, they have been developed for a single domain retrieval setting. When retrieval candidate images come from various domains, the retrieval performance of these methods might be decreased. To deal with this problem, we propose a new domain adaptive cross-modal retrieval method. By translating a modality and domains of a query and candidate images, our method can retrieve desired images accurately in a different domain retrieval setting. Experimental results for clipart and painting datasets showed that the proposed method has better retrieval performance than that of other conventional and state-of-the-art methods.
ER -