Cache block placement

Direct mapping

In this method, each block of main memory maps to exactly one cache line: successive memory blocks map to successive cache lines, and once every cache line has been assigned, the mapping wraps around to the first line again.

  • The LSBs of the address represent the index of the cache line.
  • The remaining higher-order address bits are stored as a tag, to distinguish between the different memory addresses that share the same LSBs.
    (Figure: cache_direct_mapping.png)
  • One more bit, called the valid bit, is added per line. It is 1 if the line holds valid data and 0 otherwise (right after initialization, a line may contain garbage).
  • The valid bit is set to 1 when data is moved into the cache line.
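The tag/index split described above can be sketched as follows. The sizes here (16 lines, hence 4 index bits) are made-up values for illustration:

```python
# Split an address into tag and index for a direct-mapped cache.
# NUM_LINES is an assumed illustrative size, not from the notes.
NUM_LINES = 16                       # 16 lines -> 4 index bits
INDEX_BITS = NUM_LINES.bit_length() - 1

def split_address(addr):
    index = addr & (NUM_LINES - 1)   # low (LSB) bits select the line
    tag = addr >> INDEX_BITS         # remaining high bits form the tag
    return tag, index

# 0b1011_0110 -> tag 0b1011, index 0b0110
tag, index = split_address(0b1011_0110)
```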

Working

For a cache with 2^k lines, when an m-bit address is received from the CPU:

  • The last k bits are used to index into the cache and find the entry.
  • Then that entry's valid bit is checked. If the entry is valid and its tag bits match the remaining higher m - k bits of the address, it is a hit; otherwise it is a miss.
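The lookup steps above can be sketched as a small simulation. The cache size and the backing `memory` dict are invented for the sketch; each line holds one word:

```python
# Direct-mapped cache lookup with a valid bit and tag check.
# K = 4 is an assumed size: 2**4 = 16 cache lines.
K = 4
lines = [{"valid": False, "tag": 0, "data": None} for _ in range(2 ** K)]

def access_direct(addr, memory):
    index = addr & (2 ** K - 1)       # last k bits index into the cache
    tag = addr >> K                   # higher m - k bits are the tag
    line = lines[index]
    if line["valid"] and line["tag"] == tag:
        return line["data"], True     # valid bit set and tags match: hit
    # Miss: fetch from memory, fill the line, set the valid bit to 1.
    line.update(valid=True, tag=tag, data=memory[addr])
    return line["data"], False
```

Note that two addresses with the same low k bits but different tags will evict each other, which is the main weakness of direct mapping.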

Associative mapping

In this method (also called fully associative mapping), any block from main memory can be stored in any cache line.
The entire address is used as the tag for the cache line.
So every cache line needs its own comparator to check its tag against the incoming address in parallel, which makes this scheme more expensive.
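A minimal sketch of a fully associative lookup, where every stored tag is compared against the incoming address (in hardware, one comparator per line, all working in parallel). The capacity and the FIFO replacement policy are assumptions made just to keep the sketch short:

```python
from collections import deque

# Fully associative cache: the whole address is the tag.
# CAPACITY and FIFO eviction are illustrative choices, not from the notes.
CAPACITY = 8
assoc_cache = {}           # tag (full address) -> data
assoc_order = deque()      # insertion order, for FIFO eviction

def access_assoc(addr, memory):
    if addr in assoc_cache:              # all tags compared "in parallel"
        return assoc_cache[addr], True
    if len(assoc_cache) >= CAPACITY:     # cache full: evict oldest line
        assoc_cache.pop(assoc_order.popleft())
    assoc_cache[addr] = memory[addr]
    assoc_order.append(addr)
    return assoc_cache[addr], False
```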

Set associative mapping

In k-way set associative mapping, the cache lines are grouped into sets of k lines each. The index bits of the address now select a set rather than a single line, and a memory block may be placed in any of the k lines of that set. Because each set has k lines, the tag bits are used to tell which block occupies each line.
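The set lookup can be sketched the same way. The geometry (4 sets of 2 lines, i.e. 2-way) and the FIFO replacement within a set are illustrative assumptions:

```python
# 2-way set-associative cache: index bits pick a set, the block may
# occupy any of the WAYS lines in that set. Sizes are made up.
WAYS = 2
NUM_SETS = 4
SET_BITS = NUM_SETS.bit_length() - 1
sets = [[] for _ in range(NUM_SETS)]   # each set: list of (tag, data)

def access_set(addr, memory):
    set_idx = addr & (NUM_SETS - 1)    # low bits select the set
    tag = addr >> SET_BITS             # remaining bits form the tag
    for t, data in sets[set_idx]:
        if t == tag:                   # tag match within the set: hit
            return data, True
    if len(sets[set_idx]) >= WAYS:     # set full: evict oldest (FIFO)
        sets[set_idx].pop(0)
    sets[set_idx].append((tag, memory[addr]))
    return memory[addr], False
```

With k = 1 this degenerates to direct mapping, and with a single set covering the whole cache it becomes fully associative, so set associativity is a compromise between the two schemes.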