Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Koji INOUE, Koji KAI, Kazuaki MURAKAMI, "Dynamically Variable Line-Size Cache Architecture for Merged DRAM/Logic LSIs" in IEICE TRANSACTIONS on Information and Systems,
vol. E83-D, no. 5, pp. 1048-1057, May 2000.
Abstract: This paper proposes a novel cache architecture suitable for merged DRAM/logic LSIs, called the "dynamically variable line-size cache" (D-VLS cache). The D-VLS cache can optimize its line size according to the characteristics of programs, and attempts to improve performance by appropriately exploiting the high on-chip memory bandwidth of merged DRAM/logic LSIs. In our evaluation, a direct-mapped D-VLS cache improves average memory-access time by about 20% compared with a conventional direct-mapped cache with fixed 32-byte lines. This improvement is better than that of a conventional direct-mapped cache of twice the size.
URL: https://global.ieice.org/en_transactions/information/10.1587/e83-d_5_1048/_p
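To make the idea in the abstract concrete, the following is a minimal sketch, in Python, of a dynamically variable line-size cache: a direct-mapped cache whose miss-fetch size (32, 64, or 128 bytes) is adapted per frame according to how many 32-byte sublines of the previously cached line were actually referenced. The parameters, class names, and the grow/shrink rule are illustrative assumptions for this sketch, not the authors' exact line-size determination mechanism.

SUBLINE = 32                       # minimum line size in bytes
LINE_SIZES = (32, 64, 128)         # selectable fetch sizes; 128 bytes is the maximum
NUM_SETS = 256                     # direct-mapped: one frame per set

class Frame:
    def __init__(self):
        max_subs = LINE_SIZES[-1] // SUBLINE
        self.tag = None                        # which 128-byte memory sector is cached
        self.valid = [False] * max_subs        # per-subline valid bits
        self.referenced = [False] * max_subs   # per-subline reference bits
        self.size_idx = 0                      # index into LINE_SIZES for this frame

class DVLSCache:
    def __init__(self):
        self.frames = [Frame() for _ in range(NUM_SETS)]
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        sector = addr // LINE_SIZES[-1]              # address space in 128-byte sectors
        index = sector % NUM_SETS
        sub = (addr % LINE_SIZES[-1]) // SUBLINE     # subline within the sector
        f = self.frames[index]
        if f.tag == sector and f.valid[sub]:
            self.hits += 1
            f.referenced[sub] = True
            return
        self.misses += 1
        if f.tag != sector:
            # Adapt the frame's line size from the evicted line's spatial locality:
            # several sublines used -> grow, only one -> shrink (illustrative rule).
            used = sum(f.referenced)
            if used > 1 and f.size_idx < len(LINE_SIZES) - 1:
                f.size_idx += 1
            elif used <= 1 and f.size_idx > 0:
                f.size_idx -= 1
            f.tag = sector
            f.valid = [False] * len(f.valid)
            f.referenced = [False] * len(f.referenced)
        # Fetch LINE_SIZES[f.size_idx] bytes, aligned, around the missed subline.
        line = LINE_SIZES[f.size_idx]
        first = ((addr % LINE_SIZES[-1]) // line) * (line // SUBLINE)
        for i in range(first, first + line // SUBLINE):
            f.valid[i] = True
        f.referenced[sub] = True

if __name__ == "__main__":
    cache = DVLSCache()
    for a in range(0, 64 * 1024, 4):      # sequential scan: high spatial locality
        cache.access(a)
    for a in range(0, 64 * 1024, 256):    # strided scan: low spatial locality
        cache.access(a)
    print("hits:", cache.hits, "misses:", cache.misses)

Running the two access patterns back to back illustrates the intended behavior: the frames grow toward 128-byte fetches during the sequential scan and shrink back toward 32-byte fetches during the strided scan, which is the adaptation the D-VLS cache performs in hardware on a merged DRAM/logic LSI.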
@ARTICLE{e83-d_5_1048,
author={Koji INOUE and Koji KAI and Kazuaki MURAKAMI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Dynamically Variable Line-Size Cache Architecture for Merged DRAM/Logic LSIs},
year={2000},
volume={E83-D},
number={5},
pages={1048-1057},
abstract={This paper proposes a novel cache architecture suitable for merged DRAM/logic LSIs, called the "dynamically variable line-size cache" (D-VLS cache). The D-VLS cache can optimize its line size according to the characteristics of programs, and attempts to improve performance by appropriately exploiting the high on-chip memory bandwidth of merged DRAM/logic LSIs. In our evaluation, a direct-mapped D-VLS cache improves average memory-access time by about 20% compared with a conventional direct-mapped cache with fixed 32-byte lines. This improvement is better than that of a conventional direct-mapped cache of twice the size.},
keywords={},
doi={},
ISSN={},
month={May},
}
TY - JOUR
TI - Dynamically Variable Line-Size Cache Architecture for Merged DRAM/Logic LSIs
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1048
EP - 1057
AU - Koji INOUE
AU - Koji KAI
AU - Kazuaki MURAKAMI
PY - 2000
DO -
JO - IEICE TRANSACTIONS on Information and Systems
SN -
VL - E83-D
IS - 5
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - May 2000
AB - This paper proposes a novel cache architecture suitable for merged DRAM/logic LSIs, called the "dynamically variable line-size cache" (D-VLS cache). The D-VLS cache can optimize its line size according to the characteristics of programs, and attempts to improve performance by appropriately exploiting the high on-chip memory bandwidth of merged DRAM/logic LSIs. In our evaluation, a direct-mapped D-VLS cache improves average memory-access time by about 20% compared with a conventional direct-mapped cache with fixed 32-byte lines. This improvement is better than that of a conventional direct-mapped cache of twice the size.
ER -