The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations; e.g., some numerals may appear as "XNUMX".
Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hyochang NAM, Jong KIM, Sung Je HONG, Sunggu LEE, "Probabilistic Checkpointing" in IEICE TRANSACTIONS on Information,
vol. E85-D, no. 7, pp. 1093-1104, July 2002.
Abstract: For checkpointing to be practical, it has to introduce low overhead for the targeted application. As a means of reducing the overhead of checkpointing, this paper proposes a probabilistic checkpointing method, which uses block encoding to detect the modified memory area between two consecutive checkpoints. Since the proposed technique uses block encoding to detect the modified area, the possibility of aliasing exists in encoded words. However, this paper shows that the aliasing probability is near zero when an 8-byte encoded word is used. The performance of the proposed technique is analyzed and measured by using experiments. An analytic model which predicts the checkpointing overhead is first constructed. By using this model, the block size that produces the best performance for a given target program is estimated. In most cases, medium block sizes, i.e., 128 or 256 bytes, show the best performance. The proposed technique has also been implemented on Unix based systems, and its performance has been measured in real environments. According to the experimental results, the proposed technique reduces the overhead by 11.7% in the best case and increases the overhead by 0.5% in the worst case in comparison with page-based incremental checkpointing.
URL: https://global.ieice.org/en_transactions/information/10.1587/e85-d_7_1093/_p
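The abstract describes the core mechanism (per-block encoding to detect modified memory between checkpoints, with a small aliasing risk) but not the paper's exact encoding function. The sketch below illustrates the idea with a generic 8-byte hash; the hash choice (`blake2b`) and the in-memory snapshot representation are assumptions for illustration, not the authors' implementation. Only blocks whose encoded word changed since the previous checkpoint are saved, and a changed block that happens to produce the same 8-byte word would be missed (aliasing), with probability on the order of 2^-64 per block.

```python
import hashlib


def checkpoint(memory: bytes, block_size: int, prev_hashes: dict) -> tuple[dict, dict]:
    """Probabilistic incremental checkpoint: save only blocks whose
    8-byte encoded word differs from the previous checkpoint's."""
    saved = {}       # offset -> block contents to write to stable storage
    new_hashes = {}  # offset -> 8-byte encoded word for the next round
    for offset in range(0, len(memory), block_size):
        block = memory[offset:offset + block_size]
        # One 8-byte encoded word per block. A modified block aliases
        # (hashes to its old value) with probability ~2^-64, which the
        # paper argues is effectively zero in practice.
        digest = hashlib.blake2b(block, digest_size=8).digest()
        new_hashes[offset] = digest
        if prev_hashes.get(offset) != digest:
            saved[offset] = block
    return saved, new_hashes
```

Unlike page-based incremental checkpointing, which relies on the MMU's dirty bits at page granularity, this scheme can use smaller blocks (the paper finds 128 or 256 bytes usually best), at the cost of encoding every block at each checkpoint.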
@ARTICLE{e85-d_7_1093,
author={Hyochang NAM and Jong KIM and Sung Je HONG and Sunggu LEE},
journal={IEICE TRANSACTIONS on Information},
title={Probabilistic Checkpointing},
year={2002},
volume={E85-D},
number={7},
pages={1093-1104},
abstract={For checkpointing to be practical, it has to introduce low overhead for the targeted application. As a means of reducing the overhead of checkpointing, this paper proposes a probabilistic checkpointing method, which uses block encoding to detect the modified memory area between two consecutive checkpoints. Since the proposed technique uses block encoding to detect the modified area, the possibility of aliasing exists in encoded words. However, this paper shows that the aliasing probability is near zero when an 8-byte encoded word is used. The performance of the proposed technique is analyzed and measured by using experiments. An analytic model which predicts the checkpointing overhead is first constructed. By using this model, the block size that produces the best performance for a given target program is estimated. In most cases, medium block sizes, i.e., 128 or 256 bytes, show the best performance. The proposed technique has also been implemented on Unix based systems, and its performance has been measured in real environments. According to the experimental results, the proposed technique reduces the overhead by 11.7% in the best case and increases the overhead by 0.5% in the worst case in comparison with page-based incremental checkpointing.},
month={July},}
TY - JOUR
TI - Probabilistic Checkpointing
T2 - IEICE TRANSACTIONS on Information
SP - 1093
EP - 1104
AU - Hyochang NAM
AU - Jong KIM
AU - Sung Je HONG
AU - Sunggu LEE
PY - 2002
JO - IEICE TRANSACTIONS on Information
VL - E85-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2002
AB - For checkpointing to be practical, it has to introduce low overhead for the targeted application. As a means of reducing the overhead of checkpointing, this paper proposes a probabilistic checkpointing method, which uses block encoding to detect the modified memory area between two consecutive checkpoints. Since the proposed technique uses block encoding to detect the modified area, the possibility of aliasing exists in encoded words. However, this paper shows that the aliasing probability is near zero when an 8-byte encoded word is used. The performance of the proposed technique is analyzed and measured by using experiments. An analytic model which predicts the checkpointing overhead is first constructed. By using this model, the block size that produces the best performance for a given target program is estimated. In most cases, medium block sizes, i.e., 128 or 256 bytes, show the best performance. The proposed technique has also been implemented on Unix based systems, and its performance has been measured in real environments. According to the experimental results, the proposed technique reduces the overhead by 11.7% in the best case and increases the overhead by 0.5% in the worst case in comparison with page-based incremental checkpointing.
ER -