Cache Mapping
Cache mapping determines how main memory blocks are placed into cache lines. Direct, associative, and set-associative mapping offer different trade-offs.
Mapping Techniques

| Technique | Key Concept |
| --- | --- |
| Direct Mapping | Each memory block maps to exactly one cache line. Simple and fast, but prone to conflict misses. |
| Fully Associative | Any block can go to any line. No conflict misses, but requires a comparator per line (expensive). |
| Set-Associative | Each block maps to a set of N lines. The best trade-off: N-way mapping reduces conflicts at reasonable cost. |
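The three policies above differ only in how many candidate lines a block has. A minimal sketch, assuming sets are laid out as contiguous groups of lines (a common convention); the function name `candidate_lines` is illustrative:

```python
def candidate_lines(block, num_lines, ways):
    """Return the cache lines where a given memory block may be placed.

    ways=1         -> direct-mapped (exactly one candidate line)
    ways=num_lines -> fully associative (any line)
    otherwise      -> N-way set-associative (one set of `ways` lines)
    """
    num_sets = num_lines // ways
    s = block % num_sets                      # which set the block maps to
    return list(range(s * ways, (s + 1) * ways))

# 8-line cache, memory block 13:
print(candidate_lines(13, 8, 1))  # direct-mapped: [5]
print(candidate_lines(13, 8, 8))  # fully associative: [0, 1, 2, 3, 4, 5, 6, 7]
print(candidate_lines(13, 8, 2))  # 2-way set-associative: [2, 3]
```

Note how increasing `ways` grows the candidate list from one line to the whole cache, which is exactly the hardware-cost/conflict-miss trade-off described above.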
Address Breakdown

| Field | Bits | Role |
| --- | --- | --- |
| Tag | High-order bits | Identifies which memory block is stored in the cache line |
| Index | Middle bits | Selects which cache line (or set) to check |
| Offset | Low-order bits | Selects the specific byte within the cache block |

An address is laid out as Tag | Index | Offset.
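The field extraction can be sketched with integer arithmetic, assuming `block_size` and `num_lines` are powers of two so that divide/modulo correspond to slicing bit fields (the function name `split_address` is illustrative):

```python
def split_address(addr, block_size, num_lines):
    """Split an address into (tag, index, offset) for a direct-mapped cache."""
    offset = addr % block_size                 # low-order bits: byte within block
    index = (addr // block_size) % num_lines   # middle bits: cache line
    tag = addr // (block_size * num_lines)     # high-order bits: block identity
    return tag, index, offset

# Address 0x1234 with 4-byte blocks and 8 lines:
print(split_address(0x1234, 4, 8))  # -> (145, 5, 0)
```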
Code Example: Cache Simulator
```python
class Cache:
    def __init__(self, lines=8, block_size=4, mapping="direct"):
        self.lines = [{"valid": False, "tag": 0} for _ in range(lines)]
        self.block_size = block_size
        self.mapping = mapping
        self.hits = self.misses = 0
        self.time = 0  # access counter used as an LRU timestamp

    def access(self, addr):
        self.time += 1
        block = addr // self.block_size  # block number (offset bits are irrelevant to hit/miss)
        if self.mapping == "direct":
            idx = block % len(self.lines)
            tag = block // len(self.lines)
            if self.lines[idx]["valid"] and self.lines[idx]["tag"] == tag:
                self.hits += 1
                return True
            self.lines[idx] = {"valid": True, "tag": tag}
            self.misses += 1
            return False
        elif self.mapping == "associative":
            tag = block  # fully associative: no index bits, the whole block number is the tag
            for line in self.lines:
                if line["valid"] and line["tag"] == tag:
                    line["lru"] = self.time  # refresh recency on a hit
                    self.hits += 1
                    return True
            # miss: evict the least recently used line (smallest timestamp)
            lru = min(range(len(self.lines)), key=lambda i: self.lines[i].get("lru", 0))
            self.lines[lru] = {"valid": True, "tag": tag, "lru": self.time}
            self.misses += 1
            return False
```

Interview Questions
Explain direct-mapped cache. What are its pros and cons?
In a direct-mapped cache, each memory block maps to exactly one cache line determined by (block_address % number_of_lines). Pros: simple hardware, fast access (single comparison). Cons: conflict misses; when multiple frequently-used blocks map to the same line, they keep evicting each other, causing poor performance.
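The ping-pong eviction pattern described above can be demonstrated with a few lines. A minimal sketch, assuming an 8-line, 4-byte-block direct-mapped cache; the helper name `direct_mapped_misses` is illustrative:

```python
def direct_mapped_misses(addresses, num_lines=8, block_size=4):
    """Count misses for an address trace in a direct-mapped cache (sketch)."""
    lines = {}  # line index -> stored tag
    misses = 0
    for addr in addresses:
        idx = (addr // block_size) % num_lines
        tag = addr // (block_size * num_lines)
        if lines.get(idx) != tag:       # empty line or different block: miss
            misses += 1
            lines[idx] = tag
    return misses

# Addresses 0 and 32 both map to line 0 (index = (addr // 4) % 8):
trace = [0, 32] * 4
print(direct_mapped_misses(trace))  # 8: every access misses, despite 7 empty lines
```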
Compare fully associative vs set-associative cache.
Fully associative: any block to any line. No conflict misses but requires a comparator per line (expensive for large caches). Set-associative: each block maps to a set of N lines. 2-8 way set-associative gives 90%+ of the miss reduction of fully associative with far less hardware. Most modern L1 caches are 4-8 way set-associative.
How do you calculate tag, index, and offset from a memory address?
For a direct-mapped cache: offset = log2(block_size) bits; index = log2(num_lines) bits; tag = remaining high-order bits. Example: 64KB cache, 64B blocks, 32-bit address: offset = 6 bits, index = 10 bits (1024 lines), tag = 16 bits. The CPU extracts these fields and compares the tag to determine hit/miss.
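The bit-width arithmetic above is easy to check in code; this sketch reproduces the 64 KB / 64 B / 32-bit example (the function name `field_widths` is illustrative):

```python
import math

def field_widths(cache_size, block_size, addr_bits):
    """Compute (tag, index, offset) bit widths for a direct-mapped cache."""
    num_lines = cache_size // block_size
    offset = int(math.log2(block_size))   # bits to pick a byte in the block
    index = int(math.log2(num_lines))     # bits to pick a cache line
    tag = addr_bits - index - offset      # whatever high-order bits remain
    return tag, index, offset

# 64 KB cache, 64 B blocks, 32-bit address (the worked example above):
print(field_widths(64 * 1024, 64, 32))  # -> (16, 10, 6)
```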
What is a conflict miss and how does set-associative mapping reduce it?
A conflict miss occurs when the cache line is occupied by a different block that maps to the same line, even though other lines are empty. Set-associative mapping reduces conflict misses by providing N locations per set. When one block evicts another, there are N-1 alternative locations in the same set. Increasing associativity from 1 to 2 typically halves conflict misses.
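The effect of adding associativity can be measured directly. A minimal sketch of a set-associative miss counter with LRU eviction, assuming an 8-line, 4-byte-block cache; the function name `misses` is illustrative:

```python
from collections import OrderedDict

def misses(addresses, num_lines=8, block_size=4, ways=1):
    """Count misses for an N-way set-associative cache with LRU eviction."""
    num_sets = num_lines // ways
    sets = [OrderedDict() for _ in range(num_sets)]  # insertion order tracks recency
    count = 0
    for addr in addresses:
        block = addr // block_size
        s, tag = block % num_sets, block // num_sets
        if tag in sets[s]:
            sets[s].move_to_end(tag)         # hit: refresh recency
        else:
            count += 1
            if len(sets[s]) == ways:
                sets[s].popitem(last=False)  # evict least recently used
            sets[s][tag] = None
    return count

trace = [0, 32] * 4  # two blocks that conflict in a direct-mapped cache
print(misses(trace, ways=1))  # 8: direct-mapped ping-pong, every access misses
print(misses(trace, ways=2))  # 2: two compulsory misses, then both blocks coexist
```

Going from 1-way to 2-way eliminates all conflict misses in this trace, which is the extreme case of the "roughly halves conflict misses" rule of thumb.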