## 1. Learning Outcomes

- Define hit rate, hit time, miss rate, and miss penalty.
- Use the average memory access time (AMAT) formula to compare multi-level cache designs.
Because performance is the major reason for a memory hierarchy, it is important to measure the time to service hits or misses. We therefore define the following terminology in Table 1:
Table 1: Key cache terminology
| Request Outcome | Rate | Time |
|---|---|---|
| Cache hit | Hit rate: fraction of accesses that hit in the cache. | Hit time: time (latency) to access the cache, including the time needed to determine whether the access is a hit or a miss. |
| Cache miss | Miss rate: 1 − hit rate. | Miss penalty: time to replace a line with the corresponding line from a lower level of the memory hierarchy. |
Because the cache is smaller and built using faster memory parts, the hit time will be much smaller than the miss penalty, which includes the time to access the next level in the hierarchy.
## 2. Average Memory Access Time
The time to access data for both hits and misses affects performance. Designers sometimes use average memory access time (AMAT) as a way to compare cache designs. From P&H 5.4:

> Average memory access time is the average time to access memory considering both hits and misses and the frequency of different accesses.

For a single cache level, this gives:

AMAT = Hit time + Miss rate × Miss penalty
We will use the following assumptions in this course:
- On a cache miss, the total time to retrieve data is the sum of the hit time plus the miss penalty.
- The miss rate of a lower-level cache (e.g., L2) is the fraction of misses from a higher-level cache (e.g., L1) that also miss in this lower-level cache.
With representative parameters, an L1 and L2 cache design can be 4 times as fast as an L1-only cache design!
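As a minimal sketch of such a comparison, the AMAT formula can be applied under the assumptions above. All of the parameter values below (hit times, miss rates, and the memory penalty) are illustrative assumptions, not figures from the text:

```python
# Compare AMAT for an L1-only design vs. an L1 + L2 design.
# All parameters are illustrative assumptions, not values from the text.

def amat(hit_time, miss_rate, miss_penalty):
    """AMAT = hit time + miss rate * miss penalty (all times in cycles)."""
    return hit_time + miss_rate * miss_penalty

L1_HIT_TIME = 1      # cycles
L1_MISS_RATE = 0.05  # 5% of accesses miss in L1
L2_HIT_TIME = 5      # cycles
L2_MISS_RATE = 0.05  # local miss rate: 5% of L1 misses also miss in L2
MEM_PENALTY = 100    # cycles to reach main memory

# L1-only: every L1 miss pays the full trip to main memory.
amat_l1_only = amat(L1_HIT_TIME, L1_MISS_RATE, MEM_PENALTY)

# L1 + L2: an L1 miss first pays the L2 access; only L2 misses go to memory,
# so the L1 miss penalty is itself an AMAT computed over the L2 level.
l1_miss_penalty = amat(L2_HIT_TIME, L2_MISS_RATE, MEM_PENALTY)
amat_two_level = amat(L1_HIT_TIME, L1_MISS_RATE, l1_miss_penalty)

print(amat_l1_only)                   # → 6.0 cycles
print(amat_two_level)                 # → 1.5 cycles
print(amat_l1_only / amat_two_level)  # → 4.0x speedup
```

Note how the two-level design's win comes from making L1's miss penalty small: most L1 misses now stop at L2 (10 cycles on average) instead of paying the full 100-cycle memory penalty.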
## 3. Preview: Cache Optimizations
We mentioned that AMAT is used to compare cache designs. The dominant contributor to AMAT is typically the miss rate, which can be measured over multiple program benchmarks, each with different memory access patterns.
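To make "measured over benchmarks with different access patterns" concrete, here is a minimal sketch of miss-rate measurement using a tiny direct-mapped cache simulator. The cache geometry and the two access traces are made-up assumptions for illustration:

```python
# Minimal direct-mapped cache simulator for measuring miss rate over an
# address trace. Geometry and traces below are illustrative assumptions.

NUM_LINES = 4    # number of cache lines
BLOCK_SIZE = 16  # bytes per line

def miss_rate(trace):
    """Return the fraction of addresses in `trace` that miss in the cache."""
    tags = [None] * NUM_LINES  # cold cache: all lines start invalid
    misses = 0
    for addr in trace:
        block = addr // BLOCK_SIZE   # which memory block this byte is in
        index = block % NUM_LINES    # direct-mapped: block picks one line
        tag = block // NUM_LINES     # tag distinguishes blocks sharing a line
        if tags[index] != tag:       # miss: fetch the line from below
            misses += 1
            tags[index] = tag
        # else: hit
    return misses / len(trace)

# Two "benchmarks" with different access patterns (64 accesses each):
sequential = list(range(0, 256, 4))  # streams through memory once
reuse = [0, 4, 8, 12] * 16           # repeatedly re-touches one block

print(miss_rate(sequential))  # → 0.25 (one miss per new 16-byte block)
print(miss_rate(reuse))       # → 0.015625 (only the first access misses)
```

The same cache gives very different miss rates on the two traces, which is why cache designs are evaluated across a suite of benchmarks rather than a single program.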
In this section, we have seen one way to optimize cache performance: introducing multilevel caches to reduce the miss penalty.
In this chapter, we will introduce the key principles of cache design. Then, with these design principles in mind, we revisit basic optimization techniques for improving cache performance.