Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yuichi TAGUCHI, Keita TAKAHASHI, Takeshi NAEMURA, "Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array" in IEICE TRANSACTIONS on Information, vol. E92-D, no. 7, pp. 1442-1452, July 2009, doi: 10.1587/transinf.E92.D.1442.
Abstract: We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras that are connected to a single PC through a gigabit Ethernet. To render a high-quality novel view, our system estimates a view-dependent per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to efficiently perform capturing and rendering processes as a pipeline by using the CPU and GPU independently. Using QVGA input video resolution, our system renders a free-viewpoint video at up to 30 frames per second, depending on the output video resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.
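The abstract describes estimating a view-dependent per-pixel depth map over discrete depth layers and blending the camera images on the GPU. The sketch below illustrates the general layered (plane-sweep) idea only; it is not the authors' implementation. The function name plane_sweep_depth, the variance-based color-consistency cost, and the use of OpenCV homography warping are illustrative assumptions, and the per-layer homographies are assumed to be supplied from camera calibration.

# A minimal plane-sweep sketch (NumPy/OpenCV), illustrating layered
# view-dependent depth estimation in general terms. NOT the authors' code:
# the cost function and warping model here are generic placeholders.
import numpy as np
import cv2  # used only for homography warping

def plane_sweep_depth(ref_shape, images, homographies_per_layer):
    """
    ref_shape: (H, W) of the virtual (novel) view.
    images: list of input camera images, each as float32 arrays of shape (h, w, 3).
    homographies_per_layer: one entry per depth layer; each entry is a list of
        3x3 homographies warping every camera image onto the virtual view,
        assuming the scene lies on that layer's depth plane.
    Returns a per-pixel best-layer index and a blended color image.
    """
    H, W = ref_shape
    best_cost = np.full((H, W), np.inf, dtype=np.float32)
    best_layer = np.zeros((H, W), dtype=np.int32)
    best_color = np.zeros((H, W, 3), dtype=np.float32)

    for layer, homs in enumerate(homographies_per_layer):
        # Warp every camera image onto the virtual view for this depth plane.
        warped = [cv2.warpPerspective(img, Hm, (W, H)) for img, Hm in zip(images, homs)]
        stack = np.stack(warped, axis=0)          # (N, H, W, 3)
        mean = stack.mean(axis=0)                 # blended color for this layer
        # Color-consistency cost: variance across cameras (lower = more consistent).
        cost = ((stack - mean) ** 2).sum(axis=3).mean(axis=0)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_layer[better] = layer
        best_color[better] = mean[better]

    return best_layer, best_color

In the system described by the paper, this kind of per-layer cost evaluation and blending runs on the GPU, while the CPU handles capture from the network cameras, so the two stages can be pipelined as stated in the abstract.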
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E92.D.1442/_p
@ARTICLE{e92-d_7_1442,
author={Yuichi TAGUCHI and Keita TAKAHASHI and Takeshi NAEMURA},
journal={IEICE TRANSACTIONS on Information},
title={Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array},
year={2009},
volume={E92-D},
number={7},
pages={1442-1452},
abstract={We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras that are connected to a single PC through a gigabit Ethernet. To render a high-quality novel view, our system estimates a view-dependent per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to efficiently perform capturing and rendering processes as a pipeline by using the CPU and GPU independently. Using QVGA input video resolution, our system renders a free-viewpoint video at up to 30 frames per second, depending on the output video resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.},
keywords={},
doi={10.1587/transinf.E92.D.1442},
ISSN={1745-1361},
month={July},}
TY - JOUR
TI - Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array
T2 - IEICE TRANSACTIONS on Information
SP - 1442
EP - 1452
AU - Yuichi TAGUCHI
AU - Keita TAKAHASHI
AU - Takeshi NAEMURA
PY - 2009
DO - 10.1587/transinf.E92.D.1442
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E92-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2009
AB - We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras that are connected to a single PC through a gigabit Ethernet. To render a high-quality novel view, our system estimates a view-dependent per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to efficiently perform capturing and rendering processes as a pipeline by using the CPU and GPU independently. Using QVGA input video resolution, our system renders a free-viewpoint video at up to 30 frames per second, depending on the output video resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.
ER -