Row Buffer Management Policies in PCM Main Memory Systems


Bibliographic Details
Main Authors: Min-Zhong Lu, 呂敏中
Other Authors: Chia-Lin Yang
Format: Others
Language: en_US
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/19824215718227175522
Description
Summary: Master's thesis === National Taiwan University === Graduate Institute of Computer Science and Information Engineering === 100 === As Moore's Law continues to hold, the number of concurrently running applications keeps growing, and the capacity demands on main memory have risen sharply. In recent years, researchers have been studying new memory technologies that promise greater capacity and lower power than conventional DRAM. Among them, phase-change memory (PCM) is considered the strongest candidate to replace DRAM as main memory, owing to its non-volatility, byte-addressability, and superior scalability. Nevertheless, PCM still has several drawbacks that must be addressed before its full benefits can be realized. For example, the write latency of PCM is much longer than that of DRAM, which often becomes a performance bottleneck in PCM-based main memory systems. In this thesis, I extend a previous work that proposed a multiple narrow row buffer organization for PCM, reported to be effective in mitigating PCM's relatively poor performance; I propose two row buffer management policies and two intra-bank parallelism schemes to exploit the full potential of this organization. Simulation shows that the proposed row buffer management policies bring an average of 2.27% and a maximum of 2.51% performance improvement, and that the proposed intra-bank parallelism schemes bring a maximum of an additional 67.5% performance improvement for memory-intensive workloads. Moreover, the intra-bank parallelism schemes are observed to provide a maximum of 12% performance improvement for less memory-intensive workloads.
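To make the idea of a multiple narrow row buffer organization concrete, the following is a minimal toy model of a single PCM bank whose full row is divided into several narrow buffers. The buffer count, buffer width, and the LRU replacement policy here are illustrative assumptions for exposition only; they are not the management policies or parallelism schemes proposed in the thesis.

```python
from collections import OrderedDict

class PCMBank:
    """Toy model of a PCM bank with multiple narrow row buffers.

    Instead of one wide row buffer holding an entire row, the bank keeps
    several narrow buffers, each caching a small contiguous chunk. This
    lets multiple hot chunks stay buffered at once, reducing slow PCM
    array accesses. The LRU eviction used here is an assumption, not
    the thesis's proposed policy.
    """

    def __init__(self, num_buffers=4, buffer_words=8):
        self.num_buffers = num_buffers
        self.buffer_words = buffer_words   # width of one narrow buffer
        self.buffers = OrderedDict()       # chunk tag -> present; LRU order
        self.hits = 0
        self.misses = 0

    def access(self, address):
        # Each narrow buffer caches one aligned chunk of buffer_words words.
        tag = address // self.buffer_words
        if tag in self.buffers:
            self.buffers.move_to_end(tag)  # refresh LRU position
            self.hits += 1
            return "hit"
        self.misses += 1                   # must access the slow PCM array
        if len(self.buffers) >= self.num_buffers:
            self.buffers.popitem(last=False)  # evict least recently used
        self.buffers[tag] = True
        return "miss"

bank = PCMBank(num_buffers=2, buffer_words=4)
results = [bank.access(a) for a in [0, 1, 4, 0, 8, 4]]
```

With two buffers of four words each, the access trace above yields hits whenever a recently buffered chunk is touched again, showing how narrow buffers trade per-buffer coverage for the ability to keep several distinct chunks resident.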