
1 Learning Outcomes

In an earlier section, we explained why hardware costs make fully associative caches rather uncommon in modern processors. We now introduce the policy at the other end of the spectrum: the direct-mapped cache. With this new cache, we revisit the cache design policies and walk through an example.

2 Placement Policy

3 Identification

Consider our visualization of a 16B direct-mapped cache with 4B blocks in Figure 1.

"Cold direct-mapped cache table with valid and dirty bits and empty data contents."

Figure 1: A cold snapshot of a 16B direct-mapped cache with 4B blocks and a dirty bit for write-back.

On the surface, the direct-mapped cache looks very similar to our fully associative cache. Below, we discuss how the direct-mapped placement policy shortens the tag width and changes the identification procedure used to determine a cache hit.

3.1 Tag, Index, and Offset

Nearly all direct-mapped caches use a simple mapping:

(Block address) modulo (Number of blocks in the cache)

Like before, direct-mapped caches copy in data from memory at the granularity of blocks. We can then translate from byte addresses to block addresses.
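This two-step translation can be sketched in a few lines of Python; the constants below match the 16B cache with 4B blocks from Figure 1 (the names `BLOCK_SIZE` and `NUM_BLOCKS` are our own, not from the text):

```python
BLOCK_SIZE = 4   # bytes per block, as in Figure 1
NUM_BLOCKS = 4   # a 16B cache with 4B blocks holds 16 / 4 = 4 blocks

def block_index(byte_address: int) -> int:
    """Map a byte address to its direct-mapped cache index."""
    block_address = byte_address // BLOCK_SIZE   # byte address -> block address
    return block_address % NUM_BLOCKS            # the modulo placement rule

# Byte address 0x034 is in block 13, and 13 mod 4 = 1, so it maps to index 1.
print(block_index(0x034))
```

Because `NUM_BLOCKS` is a power of two, the modulo reduces to keeping the low bits of the block address, which is exactly what the index field in the next section captures.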

As an example, we can connect the direct-mapped cache in Figure 1 to the 12-bit memory address in Figure 2.

"Direct-mapped address decomposition into fields: tag at bits 11 through 4, index at bits 3 through 2, and block-offset at bits 1 through 0."

Figure 2: For a direct-mapped cache, the memory address is split into three fields: the tag, the index, and the offset. For the cache in Figure 1, a 12-bit memory address is split into an 8-bit tag, a 2-bit index, and a 2-bit offset.
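The field widths in Figure 2 translate directly into shifts and masks. A minimal sketch (the function name `split_address` is our own):

```python
def split_address(addr: int) -> tuple[int, int, int]:
    """Split a 12-bit address into (tag, index, offset) per Figure 2."""
    offset = addr & 0b11          # bits 1..0: byte within the 4B block
    index  = (addr >> 2) & 0b11   # bits 3..2: which of the 4 cache lines
    tag    = addr >> 4            # bits 11..4: the remaining 8 tag bits
    return tag, index, offset

# 0xABC = 0b1010_1011_1100 -> tag 0xAB, index 0b11 = 3, offset 0b00 = 0
print(split_address(0xABC))
```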

4 Replacement Policy

5 Write Policy

6 Walkthrough

The following animation traces through four memory accesses to a 12-bit address space on our 16B direct-mapped cache with 4B blocks. Assume a write-back policy. Assume the cache starts out cold, like in Figure 1.

Figure 3: Warming up a direct-mapped cache.
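The state tracked in the animation can be modeled with a small simulator that keeps only tags and status bits. This is a sketch under our own assumptions; the class name and the access sequence below are hypothetical, not the four accesses from the animation:

```python
BLOCK_SIZE, NUM_BLOCKS = 4, 4  # the 16B cache with 4B blocks from Figure 1

class DirectMappedCache:
    """Minimal write-back, direct-mapped cache model (tags and status bits only)."""

    def __init__(self):
        self.valid = [False] * NUM_BLOCKS
        self.dirty = [False] * NUM_BLOCKS
        self.tag   = [0] * NUM_BLOCKS

    def access(self, addr: int, is_write: bool) -> str:
        index = (addr // BLOCK_SIZE) % NUM_BLOCKS
        tag = addr // (BLOCK_SIZE * NUM_BLOCKS)
        if self.valid[index] and self.tag[index] == tag:
            result = "hit"
        else:
            # Miss: a dirty victim line must be written back before eviction.
            if self.valid[index] and self.dirty[index]:
                result = "miss (write-back)"
            else:
                result = "miss"
            self.valid[index], self.dirty[index], self.tag[index] = True, False, tag
        if is_write:
            self.dirty[index] = True  # write-back: mark the line, defer memory update
        return result

# Hypothetical sequence: 0x004 and 0x044 conflict, since both map to index 1.
cache = DirectMappedCache()
for addr, is_write in [(0x004, False), (0x044, False), (0x004, True), (0x044, False)]:
    print(hex(addr), cache.access(addr, is_write))
```

Note how the final access misses with a write-back: the earlier write to `0x004` left its line dirty, so the conflicting fetch of `0x044` must first flush it to memory.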

Contrast this direct-mapped cache walkthrough with the one for fully associative caches.

7 Direct Mapped: Hardware and Performance

Implementing a direct-mapped cache in hardware is much simpler than implementing a fully associative cache: because each address maps to exactly one line, the cache needs only a single tag comparator rather than one per line.

"Hardware block diagram of a direct-mapped cache. A 32-bit address is broken into tag, index, and offset. Arrows connect the three fields of the address to where they are used in the memory space diagram to depict index selection, tag check, and data output path."

Figure 4: Hardware implementation of a direct-mapped cache.
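The hit path in Figure 4 can be sketched as a single function: the index selects one line, one comparator checks the tag, and the valid bit gates the result. The function name and array-based line representation below are our own assumptions, not from the figure:

```python
def cache_lookup(addr, valid, tags, data):
    """Model the single-comparator hit path of Figure 4 for a 12-bit address."""
    offset = addr & 0b11          # select the byte within the block
    index  = (addr >> 2) & 0b11   # select exactly one cache line
    tag    = addr >> 4
    hit = valid[index] and (tags[index] == tag)  # the one tag comparator
    byte = data[index][offset] if hit else None  # data output path
    return hit, byte

# Line 0 holds tag 0xAB, so address 0xAB2 (tag 0xAB, index 0, offset 2) hits.
valid = [True, False, True, False]
tags  = [0xAB, 0x00, 0x12, 0x00]
data  = [[10, 20, 30, 40]] * 4
print(cache_lookup(0xAB2, valid, tags, data))
```

A fully associative lookup would instead compare the tag against every line in parallel; replacing N comparators and a wide selection mux with one comparator is the source of the hardware savings.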