Shiori YAMAGUCHI (Chiba University), Keita HIRAI (Chiba University), Takahiko HORIUCHI (Chiba University)
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Shiori YAMAGUCHI, Keita HIRAI, Takahiko HORIUCHI, "Video Smoke Removal from a Single Image Sequence" in IEICE TRANSACTIONS on Fundamentals,
vol. E104-A, no. 6, pp. 876-886, June 2021, doi: 10.1587/transfun.2020IMP0013.
Abstract: In this study, we present a novel method for removing smoke from videos based on a single image sequence. Smoke is a significant artifact in images or videos because it can reduce the visibility in disaster scenes. Our proposed method for removing smoke involves two main processes: (1) the development of a smoke imaging model and (2) smoke removal using spatio-temporal pixel compensation. First, we model the optical phenomena in natural scenes including smoke, which is called a smoke imaging model. Our smoke imaging model is developed by extending conventional haze imaging models. We then remove the smoke from a video in a frame-by-frame manner based on the smoke imaging model. Next, we refine the appearance of the smoke-free video by spatio-temporal pixel compensation, where we align the smoke-free frames using the corresponding pixels. To obtain the corresponding pixels, we use SIFT and color features with distance constraints. Finally, in order to obtain a clear video, we refine the pixel values based on the spatio-temporal weightings of the corresponding pixels in the smoke-free frames. We used simulated and actual smoke videos in our validation experiments. The experimental results demonstrated that our method can obtain effective smoke removal results from dynamic scenes. We also quantitatively assessed our method based on a temporal coherence measure.
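The conventional haze imaging model that the paper's smoke imaging model extends is commonly written as I(x) = J(x)t(x) + A(1 - t(x)), where I is the observed frame, J the smoke-free radiance, t the transmission, and A the airlight. The sketch below illustrates frame-by-frame removal by inverting this model with a dark-channel-style transmission estimate; the estimator, parameter values, and function names are illustrative assumptions, not the authors' exact formulation.

import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Channel-wise minimum followed by a local minimum filter (erosion).
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_airlight(img, dark, top_fraction=0.001):
    # Average the brightest dark-channel pixels as the airlight A.
    flat = dark.ravel()
    n = max(1, int(flat.size * top_fraction))
    idx = np.argpartition(flat, -n)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def remove_smoke_frame(frame_bgr, omega=0.95, t_min=0.1, patch=15):
    # Invert I = J * t + A * (1 - t) for a single frame.
    img = frame_bgr.astype(np.float64) / 255.0
    A = estimate_airlight(img, dark_channel(img, patch))
    t = 1.0 - omega * dark_channel(img / A, patch)   # transmission estimate
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (img - A) / t + A                            # recovered radiance
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)

The second stage described in the abstract, spatio-temporal pixel compensation, can likewise be sketched as SIFT matching between neighboring smoke-free frames with color and spatial distance constraints, followed by a temporally weighted average of the corresponding pixel values. The thresholds and the Gaussian temporal weighting below are assumptions for illustration only.

def match_corresponding_pixels(frame_a, frame_b, max_disp=30.0, max_color_diff=30.0):
    # SIFT matches filtered by a spatial distance constraint and a color check.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
    kp_b, des_b = sift.detectAndCompute(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
    pairs = []
    for m in matches:
        pa, pb = np.array(kp_a[m.queryIdx].pt), np.array(kp_b[m.trainIdx].pt)
        if np.linalg.norm(pa - pb) > max_disp:        # distance constraint
            continue
        ca = frame_a[int(pa[1]), int(pa[0])].astype(np.float64)
        cb = frame_b[int(pb[1]), int(pb[0])].astype(np.float64)
        if np.linalg.norm(ca - cb) > max_color_diff:  # color-feature constraint
            continue
        pairs.append((pa, pb))
    return pairs

def refine_pixel(values, frame_offsets, sigma_t=1.0):
    # Temporally weighted average of corresponding pixel values across frames.
    w = np.exp(-np.asarray(frame_offsets, float) ** 2 / (2.0 * sigma_t ** 2))
    return (w[:, None] * np.asarray(values, float)).sum(axis=0) / w.sum()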
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2020IMP0013/_p
@ARTICLE{e104-a_6_876,
author={Shiori YAMAGUCHI and Keita HIRAI and Takahiko HORIUCHI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Video Smoke Removal from a Single Image Sequence},
year={2021},
volume={E104-A},
number={6},
pages={876-886},
abstract={In this study, we present a novel method for removing smoke from videos based on a single image sequence. Smoke is a significant artifact in images or videos because it can reduce the visibility in disaster scenes. Our proposed method for removing smoke involves two main processes: (1) the development of a smoke imaging model and (2) smoke removal using spatio-temporal pixel compensation. First, we model the optical phenomena in natural scenes including smoke, which is called a smoke imaging model. Our smoke imaging model is developed by extending conventional haze imaging models. We then remove the smoke from a video in a frame-by-frame manner based on the smoke imaging model. Next, we refine the appearance of the smoke-free video by spatio-temporal pixel compensation, where we align the smoke-free frames using the corresponding pixels. To obtain the corresponding pixels, we use SIFT and color features with distance constraints. Finally, in order to obtain a clear video, we refine the pixel values based on the spatio-temporal weightings of the corresponding pixels in the smoke-free frames. We used simulated and actual smoke videos in our validation experiments. The experimental results demonstrated that our method can obtain effective smoke removal results from dynamic scenes. We also quantitatively assessed our method based on a temporal coherence measure.},
keywords={},
doi={10.1587/transfun.2020IMP0013},
ISSN={1745-1337},
month={June},}
TY - JOUR
TI - Video Smoke Removal from a Single Image Sequence
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 876
EP - 886
AU - Shiori YAMAGUCHI
AU - Keita HIRAI
AU - Takahiko HORIUCHI
PY - 2021
DO - 10.1587/transfun.2020IMP0013
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E104-A
IS - 6
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - June 2021
AB - In this study, we present a novel method for removing smoke from videos based on a single image sequence. Smoke is a significant artifact in images or videos because it can reduce the visibility in disaster scenes. Our proposed method for removing smoke involves two main processes: (1) the development of a smoke imaging model and (2) smoke removal using spatio-temporal pixel compensation. First, we model the optical phenomena in natural scenes including smoke, which is called a smoke imaging model. Our smoke imaging model is developed by extending conventional haze imaging models. We then remove the smoke from a video in a frame-by-frame manner based on the smoke imaging model. Next, we refine the appearance of the smoke-free video by spatio-temporal pixel compensation, where we align the smoke-free frames using the corresponding pixels. To obtain the corresponding pixels, we use SIFT and color features with distance constraints. Finally, in order to obtain a clear video, we refine the pixel values based on the spatio-temporal weightings of the corresponding pixels in the smoke-free frames. We used simulated and actual smoke videos in our validation experiments. The experimental results demonstrated that our method can obtain effective smoke removal results from dynamic scenes. We also quantitatively assessed our method based on a temporal coherence measure.
ER -