
1 Learning Outcomes

Ideally one would desire an indefinitely large memory capacity such that any particular ... word would be immediately available. ... We are ... forced to recognize the possibility of constructing a hierarchy of memories, each of which has greater capacity than the preceding but which is less quickly accessible.

Preliminary Discussion of the Logical Design of an Electronic Computing Instrument (1946)

2 Memory Wall

While hardware performance has continued to improve, there is a persistent and increasing gap between the improvements in processor hardware and memory/device interconnects (the Processor-DRAM gap, or simply memory gap).[1] The “memory wall”[2] places a significant limit on performance for many modern workloads, especially in AI.[3]

By designing a memory hierarchy, we can leverage smaller amounts of high-speed hardware without ballooning the cost of our architecture or sacrificing data and storage capacity.
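This tradeoff can be quantified with the standard average memory access time (AMAT) formula. The sketch below uses made-up but plausible latencies (none of these numbers come from this text) to show why a small, fast level in front of a large, slow one pays off:

```python
# Illustrative AMAT calculation; all latency numbers are assumptions.
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical machine: 1 ns cache hit, 100 ns DRAM access, 5% miss rate.
# Even though DRAM is 100x slower, most accesses hit the fast level,
# so the average access costs only a few nanoseconds.
print(amat(hit_time=1, miss_rate=0.05, miss_penalty=100))  # 6.0 ns on average
```

With a 5% miss rate, the average access (6 ns) stays much closer to the cache's 1 ns than to DRAM's 100 ns, which is the whole point of the hierarchy.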

3 The Memory Hierarchy, Revisited

Earlier, we assumed there were only two levels in our memory hierarchy: registers (on the CPU) and main memory (DRAM is close, but on a separate chip). We now continue our earlier discussion of the memory hierarchy.

"Pyramid of the memory hierarchy from CPU core, registers, and L1 through L3 cache at the narrow top, DRAM main memory in the middle, and SSD, flash, magnetic disks, and virtual memory toward the wide base, with notes on speed, cost, and capacity at each tier."

Figure 3: Great Idea 3: The Principle of Locality / Memory Hierarchy

The mismatch between processor and memory speeds (the “careful tango” described earlier) leads us to add a new level: the memory cache, or cache for short. Caches are usually on the same chip as the CPU and fit into the memory hierarchy as follows:

There are additional levels lower than main memory: disk is a huge one (literally).

Data moves differently between different levels of the memory hierarchy:

To summarize, we aim for the illusion of a “very large and fast memory”:

It may be useful to revisit Jim Gray’s analogy for data access times in registers, in the cache, in main memory, and on disk.

"Latency analogy chart mapping memory levels from registers through caches, RAM, disk, and tape to increasing nanosecond delays, paired with human-scale time and distance metaphors such as head versus campus, Sacramento, Pluto, and Andromeda."

Figure 2: Great Idea 3: The Principle of Locality / Memory Hierarchy
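A rough sketch of the spirit of Gray's analogy: stretch time so that one nanosecond of machine time becomes one second of human time, and see how long each level of the hierarchy "feels". The latencies below are assumed orders of magnitude for illustration, not measurements from the figure:

```python
# Assumed, order-of-magnitude latencies (not measurements).
latencies_ns = {
    "register":   1,           # ~1 ns
    "L1 cache":   2,           # a few ns
    "DRAM":       100,         # ~100 ns
    "SSD":        100_000,     # ~100 us
    "disk (HDD)": 10_000_000,  # ~10 ms seek
}

def human_seconds(latency_ns):
    """Map a latency in ns to human-scale seconds (1 ns -> 1 s)."""
    return latency_ns * 1.0

for level, ns in latencies_ns.items():
    days = human_seconds(ns) / 86_400
    print(f"{level:11s} {ns:>12,} ns -> {human_seconds(ns):>12,.0f} s ({days:.4f} days)")
```

At this scale, a register access is a one-second glance, while a single disk seek stretches to over a hundred days: waiting on the wrong level of the hierarchy is catastrophic.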

3.1 Multi-Level Caches

You may have noticed that the memory hierarchy diagram contains multiple caches labeled Level 1, Level 2, and Level 3. A computer can have multiple caches, where each cache holds a copy of a subset of the data from lower levels of the memory hierarchy.

Consider Apple’s A14 Bionic chip, which we introduced earlier:

"Labeled die photograph of a system-on-chip: major regions outlined for GPU, system cache, CPU clusters with L2 caches, neural engine, and DDR memory interfaces along the die edges."

Figure 2: Apple A14 Bionic Chip (sources: Wikipedia, TechInsights)

The System Level Cache labeled in the diagram is likely a Level 3 cache, shared across multiple CPU cores.[5]

  1. L1 cache (L1$[6]): Usually embedded directly on the CPU core, which is why it is not labeled in the diagram above.

  2. L2 cache (L2$): Located on the integrated circuit, often adjacent to the CPU.

    • Size: A few to tens of MiB.

    • Hit time: A few clock cycles.

    • Miss rate: 10-20%.
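With multiple cache levels, the miss penalty of one level is itself an average access time through the next level down. The sketch below chains the AMAT formula through two levels; the cycle counts and miss rates are assumptions loosely in line with the figures quoted above, not measured values:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles: hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed figures: L1 hits in 1 cycle and misses 5% of the time;
# L2 hits in 10 cycles and misses 15% of the time; a full miss
# out to DRAM costs 200 cycles.
l2_amat = amat(hit_time=10, miss_rate=0.15, miss_penalty=200)   # L2 backed by DRAM
l1_amat = amat(hit_time=1, miss_rate=0.05, miss_penalty=l2_amat)  # L1 backed by L2
print(l1_amat)  # average cycles per access seen by the CPU
```

Note how each extra level filters the misses of the level above it, so the CPU-visible average stays close to the L1 hit time even though DRAM is hundreds of cycles away.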

3.2 Demo

To find out the sizes of different components of the memory hierarchy on a Unix-like machine, we can use df and sysctl. The following commands were run on macOS.

To determine disk size, use df. The default display is in 512-byte blocks; use the -h option for IEC prefixes (base-two), and the -H option for base-10 (SI) prefixes.

$ df -h
Filesystem        Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/disk3s1s1   460Gi    17Gi    38Gi    31%    427k  395M    0%   /
devfs            215Ki   215Ki     0Bi   100%     744     0  100%   /dev
...
$ df -H 
Filesystem        Size    Used   Avail Capacity iused ifree %iused  Mounted on
/dev/disk3s1s1    494G     18G     40G    31%    427k  395M    0%   /
devfs             220k    220k      0B   100%     744     0  100%   /dev
...
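The -h and -H outputs disagree (460Gi vs 494G) because they render the same byte count with different prefix bases. A quick check, using the sizes from the df output above:

```python
# One disk, two unit systems: 460 GiB (base-two) vs ~494 GB (base-10).
size_bytes = 460 * 2**30    # 460 GiB, as reported by df -h
print(size_bytes / 10**9)   # ~493.9, which df -H rounds to 494G
```

This is the same reason a "500 GB" drive shows up as roughly 465 GiB: the byte count is identical; only the prefix base differs.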

To determine cache size and memory size, use sysctl. Because this command lists all attributes of the system kernel, we pipe the output through grep to get what we want. The default unit is bytes for memory and caches.

$ sysctl -a | grep hw.memsize
hw.memsize: 25769803776
hw.memsize_usable: 25143640064
$ sysctl -a | grep "hw.l.*size"
hw.l1icachesize: 131072
hw.l1dcachesize: 65536
hw.l2cachesize: 4194304
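The sysctl values above are plain byte counts. A small helper (illustrative, not part of sysctl) converts them to IEC units for readability:

```python
def to_iec(n_bytes):
    """Render a byte count with binary (IEC) prefixes."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n_bytes < 1024:
            return f"{n_bytes:g} {unit}"
        n_bytes /= 1024
    return f"{n_bytes:g} PiB"

print(to_iec(131072))       # hw.l1icachesize -> 128 KiB
print(to_iec(4194304))      # hw.l2cachesize  -> 4 MiB
print(to_iec(25769803776))  # hw.memsize      -> 24 GiB
```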
Solution to Exercise 1

D.

$$
\begin{aligned}
4194304 \text{ B} &= 2^{\log_2 4194304} \text{ B} = 2^{22} \text{ B} \\
&= 4 \cdot 2^{20} \text{ B} = 4 \text{ MiB}
\end{aligned}
$$
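The arithmetic can be checked directly against the L2 cache size reported by sysctl above (4194304 bytes):

```python
import math

# 4194304 is an exact power of two, and 2^22 B = 4 * 2^20 B = 4 MiB.
size = 4_194_304
print(math.log2(size))    # 22.0
print(size == 4 * 2**20)  # True
```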

4 Storage

5 Visuals: Memory Hierarchy

"Memory-hierarchy pyramid ordering storage levels by speed, size, and proximity to the processor. From the bottom layer of the pyramid to the top, the visual shows magnetic disks, solid-state memory (flash), and virtual memory as SSD and HDD drives; then random-access memory (RAM) and physical memory as a DRAM chip; and finally L3 through L1 caches, CPU cache, registers, and CPU core on the processor chip."

Figure 1: The memory hierarchy.

Written version coming soon, but not too soon.

For now, know that DRAM is the technology used for main memory.

Footnotes
  1. Maurice Wilkes. “The memory gap and the future of high performance memories.” ACM SIGARCH 2001. DOI: 10.1145/373574.373576

  2. Wm. A. Wulf, Sally A. McKee. “Hitting the Memory Wall: Implications of the Obvious.” ACM SIGARCH 1995. DOI: 10.1145/216585.216588

  3. Amir Gholami et al. “AI and Memory Wall.” IEEE Micro 2024. Extended version on arXiv. DOI: 10.1109/MM.2024.3373763, arXiv:2403.14123

  4. For now, know that virtual memory is a virtual to physical address mapping assisted by the hardware (translation lookaside buffer, or TLB).

  5. We don’t discuss L3 caches much in this course. See Wikipedia.

  6. The notation $ for cache is a Berkeley innovation. Not me :-)

References
  1. Wulf, Wm. A., & McKee, S. A. (1995). Hitting the memory wall. ACM SIGARCH Computer Architecture News, 23(1), 20–24. 10.1145/216585.216588
  2. Gholami, A., Yao, Z., Kim, S., Hooper, C., Mahoney, M. W., & Keutzer, K. (2024). AI and Memory Wall. IEEE Micro, 44(3), 33–39. 10.1109/mm.2024.3373763