Cache Memory


The Principle of Locality

Memory Hierarchy: Principles of Operation

Memory Hierarchy: Terminology

Hit time = cache RAM access time + time to determine hit/miss

Typical Values

Block size      4 - 128 bytes
Hit time        1 - 4 cycles
Miss penalty    8 - 32 cycles (and increasing)
    access time:    6 - 10 cycles
    transfer time:  2 - 22 cycles
Miss rate       1% - 20%
Cache size      1 KB - 256 KB
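The hit time, miss rate, and miss penalty above combine into the average memory access time (AMAT = hit time + miss rate x miss penalty). A minimal sketch, using illustrative mid-range figures from the table (not measurements from any particular machine):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time, in cycles."""
    return hit_time + miss_rate * miss_penalty

# Illustrative mid-range values from the table above:
# 2 + 0.05 * 20 = 3.0 cycles on average
print(amat(hit_time=2, miss_rate=0.05, miss_penalty=20))
```

Note how a small miss rate still matters: even at 5%, a 20-cycle penalty adds a full cycle to every access on average.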

Five elements in cache design:

  1. Cache Size
  2. Block Size
  3. Mapping Function
  4. Replacement Algorithm
  5. Write Policy

Mapping functions

Direct Mapped

Cache Tag and Cache Index

Example: 1 KB Direct Mapped Cache with 32 Byte Blocks
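For this example the address splits into a 5-bit byte offset (32-byte blocks), a 5-bit index (1 KB / 32 B = 32 blocks), and the remaining high bits as the tag. A sketch of the address breakdown (the function and variable names are illustrative, not from the notes):

```python
CACHE_SIZE = 1024                        # 1 KB cache
BLOCK_SIZE = 32                          # 32-byte blocks
NUM_BLOCKS = CACHE_SIZE // BLOCK_SIZE    # 32 blocks

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # 5 bits of byte offset
INDEX_BITS = NUM_BLOCKS.bit_length() - 1    # 5 bits of cache index

def split_address(addr):
    """Split a byte address into (tag, index, offset) for a direct-mapped cache."""
    offset = addr & (BLOCK_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# 0x1234 = 0b0001_0010_0011_0100 -> tag 4, index 17, offset 20
print(split_address(0x1234))
```

In a direct-mapped cache the index selects exactly one block, and a hit occurs only if the stored tag matches the tag bits of the address.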

Block Size Tradeoff

Fully Associative Cache

Set Associative Cache

Cache Block Replacement Policy

Cache Write Policy

Cache Coherency

Keeping the data in the cache and the data in main memory consistent is called cache coherency. For a single-processor system, either Write Back or Write Through achieves this. For multiprocessor machines with shared memory, things become more difficult: when one processor writes, not only memory but also every other processor's cache must be updated. There are two main ways of achieving this:

Virtual Memory

Paging - the logical address space of a process can be noncontiguous. A page table maps the logical address space to the physical address space.

If the page is not in physical memory, the access causes a page fault, and the operating system fetches the page from disk.
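The translation above can be sketched as follows. This is a toy model, assuming 4 KB pages and a page table represented as a dictionary (both are illustrative choices, not from the notes):

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

# Toy page table: page number -> frame number; None means not resident.
page_table = {0: 5, 1: 2, 2: None}

def translate(logical_addr):
    """Translate a logical address to a physical address via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        # In a real system the OS would handle this fault and fetch the page.
        raise RuntimeError("page fault on page %d" % page)
    return frame * PAGE_SIZE + offset

# Page 1 maps to frame 2: 4096 + 10 -> 2*4096 + 10 = 8202
print(translate(4106))
```

The offset within the page is unchanged by translation; only the page number is replaced by a frame number.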

Implementation of paging

The page table is often large and must be kept in main memory.
This means that every data/instruction access requires two memory accesses: one for the page table entry and one for the data/instruction itself.
The two-memory-access problem can be solved by a page table cache called a translation look-aside buffer (TLB).
The TLB is usually fully associative; otherwise it is designed in a similar manner to any other cache.
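A minimal sketch of a TLB in front of the page table. The sizes, the FIFO replacement policy, and all names here are illustrative assumptions (real TLBs are hardware structures, often with LRU-like replacement):

```python
PAGE_SIZE = 4096   # assumed 4 KB pages
TLB_SIZE = 4       # tiny, fully associative TLB (illustrative)
tlb = {}           # page -> frame; dict insertion order gives us FIFO eviction

def tlb_translate(logical_addr, page_table):
    """Translate via the TLB; on a miss, walk the page table and refill the TLB."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page in tlb:
        frame = tlb[page]            # TLB hit: no extra memory access
    else:
        frame = page_table[page]     # TLB miss: extra access for the page table
        if len(tlb) >= TLB_SIZE:
            tlb.pop(next(iter(tlb)))  # evict the oldest entry (FIFO)
        tlb[page] = frame
    return frame * PAGE_SIZE + offset
```

Because the TLB holds only a handful of translations, it behaves like any other cache: most accesses hit (one memory access), and only misses pay for the page table walk.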