Cache-Adaptive Analysis


Bibliographic Details
Main Authors: Bender, Michael A. (Author), Ebrahimi, Roozbeh (Author), Fineman, Jeremy T. (Author), Johnson, Rob (Author), Lincoln, Andrea (Author), McCauley, Samuel (Author), Demaine, Erik D (Contributor), Lynch, Jayson R. (Contributor)
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2017-07-26T17:46:41Z.
Subjects:
Online Access: Get fulltext
LEADER 03793 am a22003733u 4500
001 110857
042 |a dc 
100 1 0 |a Bender, Michael A.  |e author 
100 1 0 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science  |e contributor 
100 1 0 |a Demaine, Erik D  |e contributor 
100 1 0 |a Lynch, Jayson R.  |e contributor 
700 1 0 |a Ebrahimi, Roozbeh  |e author 
700 1 0 |a Fineman, Jeremy T.  |e author 
700 1 0 |a Johnson, Rob  |e author 
700 1 0 |a Lincoln, Andrea  |e author 
700 1 0 |a McCauley, Samuel  |e author 
700 1 0 |a Demaine, Erik D  |e author 
700 1 0 |a Lynch, Jayson R.  |e author 
245 0 0 |a Cache-Adaptive Analysis 
260 |b Association for Computing Machinery (ACM),   |c 2017-07-26T17:46:41Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/110857 
520 |a Memory efficiency and locality have a substantial impact on the performance of programs, particularly when operating on large data sets. Thus, memory- or I/O-efficient algorithms have received significant attention in both theory and practice. The widespread deployment of multicore machines, however, brings new challenges. Specifically, since memory (RAM) is shared across multiple processes, the effective memory size allocated to each process fluctuates over time. This paper presents techniques for designing and analyzing algorithms in a cache-adaptive setting, where the RAM available to the algorithm changes over time. These techniques make analyzing algorithms in the cache-adaptive model almost as easy as in the external-memory (DAM) model. Our techniques enable us to analyze a wide variety of algorithms: Master-Method-style algorithms, Akra-Bazzi-style algorithms, collections of mutually recursive algorithms, and algorithms, such as FFT, that break problems of size N into subproblems of size Theta(N^c). We demonstrate the effectiveness of these techniques by deriving several results: 1. We give a simple recipe for determining whether common divide-and-conquer cache-oblivious algorithms are optimally cache-adaptive. 2. We show how to bound an algorithm's non-optimality. We give a tight analysis showing that a class of cache-oblivious algorithms is a logarithmic factor worse than optimal. 3. We show the generality of our techniques by analyzing the cache-oblivious FFT algorithm, which is not covered by the above theorems. Nonetheless, the same general techniques show that it is at most O(log log N) away from optimal in the cache-adaptive setting, and that this bound is tight. These general theorems give concrete results about several algorithms that could not be analyzed using earlier techniques. 
For example, our results apply to the Fast Fourier Transform, matrix multiplication, the Jacobi Multipass Filter, and cache-oblivious dynamic-programming algorithms such as Longest Common Subsequence and Edit Distance. Our results also give algorithm designers clear guidelines for creating optimally cache-adaptive algorithms. 
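To make the class of algorithms in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of a divide-and-conquer cache-oblivious algorithm: recursive matrix multiplication, which splits each N x N problem into eight N/2 x N/2 subproblems. Algorithms with this recursive shape are the kind the paper's Master-Method-style theorems analyze in the cache-adaptive model. All names and the power-of-two size assumption are this sketch's, not the paper's.

```python
def co_matmul(A, B, C, n, ai=0, aj=0, bi=0, bj=0, ci=0, cj=0):
    """Cache-oblivious matrix multiply: C += A * B on n x n blocks.

    A, B, C are square lists-of-lists; n is assumed to be a power of two.
    (ai, aj), (bi, bj), (ci, cj) are the top-left corners of the current
    blocks of A, B, and C. The recursion itself never references the
    cache size, which is what makes the algorithm cache-oblivious.
    """
    if n == 1:
        # Base case: a single scalar multiply-accumulate.
        C[ci][cj] += A[ai][aj] * B[bi][bj]
        return
    h = n // 2
    # Split into quadrants: C_ij += sum over k of A_ik * B_kj,
    # giving eight recursive subproblems of size n/2.
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                co_matmul(A, B, C, h,
                          ai + i * h, aj + k * h,
                          bi + k * h, bj + j * h,
                          ci + i * h, cj + j * h)
```

Because the recursion repeatedly halves the problem until it fits in whatever memory happens to be available, its locality adapts automatically; the paper's contribution is the analysis framework that determines whether such an algorithm is optimally cache-adaptive when memory fluctuates.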
520 |a National Science Foundation (U.S.) (NSF grant CCF 1114809) 
520 |a National Science Foundation (U.S.) (NSF grant CCF 1217708) 
520 |a National Science Foundation (U.S.) (NSF grant CCF 1218188) 
520 |a National Science Foundation (U.S.) (NSF grant CCF 1314633) 
520 |a National Science Foundation (U.S.) (NSF grant CCF 1439084) 
520 |a Center for Massive Data Algorithmics (MADALGO) 
520 |a National Science Foundation (U.S.) (NSF grant IIS 1247726) 
520 |a National Science Foundation (U.S.) (NSF grant IIS 1251137) 
520 |a National Science Foundation (U.S.) (NSF grant CNS 1408695) 
546 |a en_US 
655 7 |a Article 
773 |t Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures - SPAA '16