
1 Learning Outcomes

Because performance is the major reason for a memory hierarchy, it is important to measure the time to service hits or misses. We therefore define the following terminology in Table 1:

Table 1: Key cache terminology

| Request Outcome | Rate | Time |
| --- | --- | --- |
| Cache hit | Hit rate: fraction of accesses that hit in the cache. | Hit time: time (latency) to access the cache, including the time needed to determine whether the access is a hit or a miss. |
| Cache miss | Miss rate: 1 − hit rate. | Miss penalty: time to replace a line with the corresponding line from a lower level of the memory hierarchy. |

Because the cache is smaller and built using faster memory parts, the hit time will be much smaller than the miss penalty, which includes the time to access the next level in the hierarchy.

2 Average Memory Access Time

The time to access data for both hits and misses affects performance. Designers sometimes use average memory access time (AMAT) as a way to compare cache designs. From P&H 5.4:

Average memory access time is the average time to access memory considering both hits and misses and the frequency of different accesses.

$$\text{AMAT} = \text{Hit Time} + \text{Miss Rate} \times \text{Miss Penalty}$$
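As a quick sanity check, the formula can be evaluated directly. The hit time, miss rate, and miss penalty below are hypothetical values chosen for illustration, not figures from this course:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in clock cycles."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical single-level cache: 1-cycle hit time, 5% miss rate,
# 100-cycle penalty to fetch the line from main memory.
print(amat(hit_time=1, miss_rate=0.05, miss_penalty=100))  # 6.0
```

Note that every access pays the hit time (checking the cache is on the path of every access), and only the missing fraction additionally pays the miss penalty.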

We will use the following assumptions in this course:

The L1 and L2 cache design is 4 times as fast as the L1-only cache design!

3 Preview: Cache Optimizations

We mentioned that AMAT is used to compare cache designs. The dominant contributor to AMAT is usually the miss rate, which can be measured over multiple program benchmarks, each with different memory access patterns.

In this section, we have seen one way to optimize cache performance by introducing multilevel caches to reduce miss penalty.
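One way to see why a second cache level helps is to apply the AMAT formula recursively: with an L2 cache, the effective miss penalty of L1 is no longer the full main-memory latency but the AMAT of L2. A minimal sketch, using hypothetical latencies and miss rates (not the course's assumed parameters):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in clock cycles."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical parameters, for illustration only.
l1_hit, l1_miss_rate = 1, 0.05   # cycles; fraction of L1 accesses that miss
l2_hit, l2_miss_rate = 10, 0.50  # cycles; fraction of L1 misses that also miss in L2
mem_latency = 100                # cycles to main memory

# L1-only design: every L1 miss pays the full main-memory latency.
l1_only = amat(l1_hit, l1_miss_rate, mem_latency)

# L1 + L2 design: an L1 miss pays the AMAT of L2 instead.
l2_amat = amat(l2_hit, l2_miss_rate, mem_latency)
two_level = amat(l1_hit, l1_miss_rate, l2_amat)

print(l1_only)    # 6.0
print(two_level)  # 1 + 0.05 * (10 + 0.5 * 100) = 4.0
```

With these particular numbers the two-level design cuts AMAT from 6.0 to 4.0 cycles; the actual improvement depends on the L2 hit time and local miss rate.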

In this chapter, we will introduce the key principles of cache design. Then, with these design principles in mind, we revisit basic optimization techniques for improving cache performance.

Footnotes
  1. Hashemi et al. “Learning Memory Access Patterns.” 2018. arXiv:1803.02329