Chapter 4: Cache Memory

Characteristics of Computer Memory
• Location
• Capacity
• Unit of transfer
• Access method
• Performance
• Physical type
• Physical characteristics
• Organisation

Location
• CPU
• Internal
• External

Capacity
• Word size
— The natural unit of organisation
• Number of words
— or Bytes

Unit of Transfer
• Internal
— Usually governed by data bus width
• External
— Usually a block, which is much larger than a word
• Addressable unit
— Smallest location which can be uniquely addressed
— Word internally
— Cluster on M$ (Microsoft) disks

Access Methods (1)
• Sequential
— Start at the beginning and read through in order
— Access time depends on location of data and previous location
— e.g. tape
• Direct
— Individual blocks have a unique address
— Access is by jumping to the vicinity plus a sequential search
— Access time depends on location and previous location
— e.g. disk

Access Methods (2)
• Random
— Individual addresses identify locations exactly
— Access time is independent of location or previous access
— e.g. RAM
• Associative
— Data is located by a comparison with the contents of a portion of the store
— Access time is independent of location or previous access
— e.g. cache

Memory Hierarchy
• Registers
— In CPU
• Internal or main memory
— May include one or more levels of cache
— "RAM"
• External memory
— Backing store

Memory Hierarchy - Diagram

Performance
• Access time
— Time between presenting the address and getting the valid data
• Memory cycle time
— Time may be required for the memory to "recover" before the next access
— Cycle time = access time + recovery time
• Transfer rate
— Rate at which data can be moved

Physical Types
• Semiconductor
— RAM
• Magnetic
— Disk & tape
• Optical
— CD & DVD
• Others
— Bubble
— Hologram

Physical Characteristics
• Decay
• Volatility
• Erasable
• Power consumption

Organisation
• Physical arrangement of bits into words
• Not always obvious
— e.g. interleaved

The Bottom Line
• How much?
— Capacity
• How fast?
— Time is money
• How expensive?
• Footprint
— How much space will it take?
— Not mentioned in the text because, at the same chip area, a larger footprint simply means more expensive

Hierarchy List
• Registers
• L1 Cache
• L2 Cache
• L3 Cache
• Main memory
• Disk cache
• Disk
• Optical
• Tape
This hierarchy does not apply to all computers, e.g.
• Expanded memory (in IBM PCs running DOS)

So you want it fast?
• It is possible to build a computer which uses only static RAM (SRAM, described later)
• This would be very fast (10 ns access time, compared to about 60 ns for DRAM)
• This would need no cache
— How can you cache cache?
• This would cost a lot and need a huge chip, because SRAM has a larger footprint than DRAM

What makes caching possible: Locality of Reference
During the execution of a program, memory references tend to cluster in relatively small areas of memory
• e.g. loops
Idea: copy those small areas into a smaller but faster memory – the cache!
• Most memory operations will only need to access the cache (fast)
• Transfers between cache and main memory are slow, but they are seldom executed
• The average access time is practically equal to the cache access time!
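To make that last claim concrete, here is a minimal sketch of the average-access-time calculation. The 10 ns / 60 ns figures are the SRAM / DRAM access times quoted above; the hit ratio h = 0.97 is an assumed value, for illustration only.

```c
/* Average access time of a one-level cache:
 *   T_avg = h * T_cache + (1 - h) * T_memory
 * 10 ns / 60 ns are the SRAM / DRAM figures from the slide;
 * h = 0.97 is an assumed hit ratio for demonstration. */
#include <stdio.h>

int main(void) {
    double t_cache = 10.0;              /* ns, SRAM cache            */
    double t_mem   = 60.0;              /* ns, DRAM main memory      */
    double h       = 0.97;              /* assumed hit ratio         */

    /* Simple model: a miss costs one main memory access.           */
    double t_avg = h * t_cache + (1.0 - h) * t_mem;
    printf("T_avg = %.1f ns\n", t_avg); /* 11.5 ns, close to T_cache */
    return 0;
}
```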
Cache
• Small amount of fast memory
• Sits between normal main memory and the CPU
• May be physically located on the CPU chip or right next to it on the motherboard (connected through a dedicated bus)
Source: http://www.read.cs.ucla.edu/111/2007spring/notes/lec9

Cache and Main Memory

Cache/Main Memory Structure

Cache Operation
• CPU requests the contents of a memory location
• Check the cache for this data
• If present, get it from the cache (fast)
• If not present, read the required block from main memory into the cache
• Then deliver it from the cache to the CPU
• The cache includes tags to identify which block of main memory is in each cache slot

Cache Read Operation

Typical Cache Organization

Cache Design Problems
• Addressing
• Size
• Mapping function
• Replacement algorithm
• Write policy
• Block size
• Number of caches

Cache Addressing
• Where does the cache sit in the case of CPUs with virtual memory?
— Between the CPU and the MMU (a virtual/logical cache), or between the MMU and main memory (a physical cache)?
— Virtual vs. physical cache trade-off
• See: Memory management unit - Wikipedia

Cache Sizes (Table 4.3)
Processor | Type | Year of introduction | L1 cache | L2 cache | L3 cache
IBM 360/85 | Mainframe | 1968 | 16 to 32 KB | — | —
PDP-11/70 | Minicomputer | 1975 | 1 KB | — | —
VAX 11/780 | Minicomputer | 1978 | 16 KB | — | —
IBM 3033 | Mainframe | 1978 | 64 KB | — | —
IBM 3090 | Mainframe | 1985 | 128 to 256 KB | — | —
Intel 80486 | PC | 1989 | 8 KB | — | —
Pentium | PC | 1993 | 8 KB/8 KB | 256 to 512 KB | —
PowerPC 601 | PC | 1993 | 32 KB | — | —
PowerPC 620 | PC | 1996 | 32 KB/32 KB | — | —
PowerPC G4 | PC/server | 1999 | 32 KB/32 KB | 256 KB to 1 MB | 2 MB
IBM S/390 G4 | Mainframe | 1997 | 32 KB | 256 KB | 2 MB
IBM S/390 G6 | Mainframe | 1999 | 256 KB | 8 MB | —
Pentium 4 | PC/server | 2000 | 8 KB/8 KB | 256 KB | —
IBM SP | High-end server/supercomputer | 2000 | 64 KB/32 KB | 8 MB | —
CRAY MTAb | Supercomputer | 2000 | 8 KB | 2 MB | —
Itanium | PC/server | 2001 | 16 KB/16 KB | 96 KB | 4 MB
SGI Origin 2001 | High-end server | 2001 | 32 KB/32 KB | 4 MB | —
Itanium 2 | PC/server | 2002 | 32 KB | 256 KB | 6 MB
IBM POWER5 | High-end server | 2003 | 64 KB | 1.9 MB | 36 MB
CRAY XD-1 | Supercomputer | 2004 | 64 KB/64 KB | 1 MB | —

Mapping Function
3 techniques exist:
• Direct
• Associative
• Set-associative
For all 3, the following example will be used:
• Word size is 1 Byte
• Cache is 64 kB
• Cache line is 4 Bytes
— The cache has 16k (2^14) lines
— A cache line can hold an integer
• Main memory is 16 MB
— 24-bit address (2^24 = 16M)

Direct Mapping from Cache to Main Memory
• b = 4 Bytes in our example
• What is m? Calculate!
• How many times is the main memory larger than the cache?

Direct Mapping
• Each block of main memory maps to only one cache line
— i.e. if a block is in the cache, it must be in one specific place
• Each cache line maps to multiple main memory blocks
— 256 in our example!
• What pattern do these 256 blocks have?
— Yes, they are equally spaced, every m blocks!
Source: http://cs.njit.edu/~sohn/cs650/lec8.pdf

Parking lot analogy

Direct Mapping
• Use "modulo" arithmetic: i = j mod m
— i is the cache line number
— j is the main memory block number
— m is the number of lines in the cache
• Try it out on our example! What cache line does memory block 42000 go to? (See the sketch below.)

Direct Mapping Cache Line Table
Cache line | Main memory blocks held
0 | 0, m, 2m, 3m, …, 2^s - m
1 | 1, m+1, 2m+1, …, 2^s - m + 1
… | …
m-1 | m-1, 2m-1, 3m-1, …, 2^s - 1

We have covered pp. 111-126 in the text. Read all explanations in the text and thoroughly understand the modulo operation!
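A short sketch of the modulo mapping, using the example geometry above (m = 2^14 lines, 4-Byte blocks). The field widths it extracts (2-bit word, 14-bit line, 8-bit tag) are derived on the next slides; this illustrates the arithmetic, not any particular hardware.

```c
/* Direct mapping on the lecture's example geometry:
 * 64 kB cache, 4-Byte lines  ->  m = 2^14 = 16384 lines,
 * 16 MB main memory          ->  24-bit addresses.
 * Since m is a power of two, "j mod m" keeps just the low
 * 14 bits of the block number, i.e. the "middle" r bits
 * of the address. */
#include <stdio.h>

#define WORD_BITS 2u                    /* 2^2 = 4 Bytes per line    */
#define LINE_BITS 14u                   /* 2^14 = 16384 cache lines  */
#define M (1u << LINE_BITS)

int main(void) {
    unsigned j = 42000;                 /* main memory block number  */
    unsigned i = j % M;                 /* same as j & (M - 1)       */
    printf("block %u -> cache line %u\n", j, i);      /* line 9232   */

    /* The same split, starting from the block's 24-bit address:    */
    unsigned addr = j << WORD_BITS;     /* first byte of block j     */
    unsigned word = addr & 3u;                     /* low 2 bits     */
    unsigned line = (addr >> WORD_BITS) & (M - 1); /* middle 14 bits */
    unsigned tag  = addr >> (WORD_BITS + LINE_BITS);  /* top 8 bits  */
    printf("addr %06X -> tag %02X, line %u, word %u\n",
           addr, tag, line, word);
    return 0;
}
```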
5-minute quiz
1. Explain in as few words as possible the relation between a block and a line
2. A cache has 2k (= 2^11 = 2048) lines and its main memory has 1 GB. The block size is 8 Bytes.
— How many blocks does the main memory have?
— How many times larger is the main memory than the cache?
— If direct mapping is used, where does block 10,000 go in the cache?

Heads up! Cache-related questions are always present in ETS and GRE exams for CS, CEng and EE!

Direct Mapping Address Structure
For the purpose of addressing the cache, the main memory address is split into 3 parts:
• The least significant w bits identify a unique word within the block/line (2^w = b)
• The most significant s bits identify a unique memory block. They are further split into:
— a cache line field of r bits (2^r = # of lines in the cache)
— a tag field of the remaining s - r bits

[ Tag: t = s - r = 8 bits | Line identifier: r = 14 bits | Word: w = 2 bits ]

Direct Mapping Address Structure
In our example:
• 24-bit address
• w = 2-bit word identifier (2^2 = 4 Bytes in a block)
• s = 22-bit block identifier
— r = 14-bit line identifier
— 8-bit tag (= 22 - 14)
• Remember the 256 blocks of main memory that can map onto a given cache line? 2^8 = 256 (Coincidence?)
• No two blocks that map to the same line have the same tag!
— That's why only the tag needs to be physically attached to each cache line.

Direct Mapping Address Structure
How does the cache controller know if a certain memory word (Byte) is in the cache?
• Good news: in binary, it's very easy to perform "j mod m" when m is a power of 2
• Find the line # using the "middle" r bits of the address
• Check the tag of that line to see if the block there is the correct one among the possible 256 that map to it

Direct Mapping Cache Organization

Direct Mapping Example
What should we call these rectangles? Well, their size is the size of the cache and they're made up of blocks… so how about cache-sized superblocks?
Draw the very next cache-sized superblock (and its binary addresses)

Direct Mapping pros & cons
• Simple
• Inexpensive
• Fixed location for a given block
— If a program repeatedly accesses 2 blocks that map to the same line, cache misses are very high, a.k.a. thrashing

Victim Cache
• Lowers the miss penalty
• Remembers what was discarded
— Already fetched
— Can be used again with little penalty
• Fully associative (see below)
• Very small, 4 to 16 cache lines
• Sits between a direct-mapped L1 cache and the next memory level

Associative Mapping
A main memory block can load into any line of the cache

Parking lot analogy

Associative Mapping
• The main memory address is split between tag and word
— Compared to direct mapping, the tag is now longer, as it includes the old line ID
— The # of lines in the cache is independent of the address format
• The tag uniquely identifies each block of main memory
• Every line's tag is examined for a match
— In parallel, for the sake of speed
— Cache searching requires a lot of hardware → expensive

Fully Associative Cache Organization

Associative Mapping Address Structure
[ Tag: 22 bits | Word: 2 bits ]
• The 22-bit tag is stored with each 32-bit line of data
• Compare the tag field with the tag entries in the cache to check for a hit
• The least significant 2 bits of the address identify which 8-bit word is required from the 32-bit line

Extra-credit question
With the numbers in our example, what is the total number of input lines to the Compare circuit?
• Cache is 64 kB
• Cache line is 4 Bytes
— The cache has 16k (2^14) lines
• Main memory is 16 MB
— 24-bit address
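For intuition, here is a software stand-in for that compare circuit. Real hardware (a content-addressable memory, see the next slide) compares all 16k tags in parallel; the sequential loop below is only a functional model, using the example's 22-bit tags.

```c
/* Fully associative lookup, modeled in software. Hardware
 * compares every line's tag at once (a CAM); this sequential
 * scan is only a functional model of that parallel compare.
 * Geometry from the example: 4-Byte lines and 24-bit
 * addresses, so the tag is the top 22 bits of the address. */
#include <stdint.h>
#include <stdbool.h>

#define NLINES (1u << 14)               /* 16384 lines (64 kB / 4 B) */

struct line {
    bool     valid;
    uint32_t tag;                       /* 22-bit tag                */
    uint8_t  data[4];                   /* 4-Byte block              */
};

static struct line cache[NLINES];

/* Returns the index of the matching line, or -1 on a miss. */
int lookup(uint32_t addr)
{
    uint32_t tag = addr >> 2;           /* drop the 2-bit word field */
    for (unsigned i = 0; i < NLINES; i++)   /* parallel in hardware  */
        if (cache[i].valid && cache[i].tag == tag)
            return (int)i;
    return -1;                          /* miss                      */
}
```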
Is it even possible to build a fully associative cache controller of this size?
Yes, but only by using memories with built-in hardware for parallel comparison, both at the bit and word levels!
See: Content-addressable memory - Wikipedia

Associative Mapping Example

Associative Mapping pros & cons
• Simple to understand
• Very expensive to implement the compare function
• Flexibility to store blocks anywhere
— The miss ratio can be improved using various replacement algorithms (later...)
— The miss ratio is the lowest of all mappings, so we would choose it when the miss penalty is very high (e.g. weapons control systems)

Set Associative Mapping
• The cache is divided into a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given set
Source: http://cs.njit.edu/~sohn/cs650/lec8.pdf

Set Associative Mapping
Parking lot analogy

Our example:
• Cache is 64 kB
• Cache line is 4 Bytes
— The cache has 16k (2^14) lines
• Main memory is 16 MB
— 24-bit address
Assume a two-way set-associative cache:
• The 2^14 lines are grouped in sets of 2 lines → 2^13 sets → 13-bit set number
• Set number = main memory block number modulo 2^13
• e.g. addresses 000000, 008000, 010000, 018000, … (hex) map to the same set

k-Way Set-Associative Cache Organization

Set Associative Mapping Address Structure
[ Tag: 9 bits | Set: 13 bits | Word: 2 bits ]
• Use the set field to determine which cache set to look in
• Compare the tag field to see if we have a hit
• e.g. (addresses shown as tag, then set + word bits):
— Address 1FF 7FFC → Tag 1FF, Data 12345678, Set 1FFF
— Address 001 7FFC → Tag 001, Data 11223344, Set 1FFF

Problem 4.1 / p. 146
• 4-way set-associative cache
• Main memory has 4k blocks
• Each block has 128 words
Show the address format

Lab week 4
• 4.2
• 4.3
• 4.5
• 4.17
• 4.18
• 4.19 (Hint: use the formula on p. 116)

Two-Way Set-Associative Mapping Example

We have covered pp. 126-134 in the text. Read all explanations in the text and thoroughly understand the examples!

Figure 4.16: Varying Associativity over Cache Size
(Figure: hit ratio, 0.0 to 1.0, vs. cache size, 1k to 1M bytes, for direct, 2-way, 4-way, 8-way and 16-way mapping)

5-minute quiz
• How many sets are there in a
— direct-mapped cache?
— fully associative cache?
• A 4-way set-associative cache has 2k (2048) lines. How many sets does it have?
• The main memory associated with the above cache has 2 MB, byte-level addressing and 8 Bytes/block.
— Derive the address structure
— How long are the tags in the cache?
• Example continues: where in the cache is the word with address 1242AB (hex)?

Extra-credit question
From Table 4.3, it seems that cache size hasn't been following Moore's Law. Why not?

Replacement Algorithms (1): Direct Mapping
• No choice, because each block maps to only one line
• Replace that line!

Replacement Algorithms (2): Associative & Set Associative
• Here there is a choice!
— Exactly how many alternatives are there?
• The algorithms are always implemented in hardware, for speed:
— Least recently used (LRU) – e.g. in a 2-way set-associative cache, which of the 2 blocks is LRU?
— First in first out (FIFO) – replace the block that has been in the cache longest
— Least frequently used (LFU) – replace the block which has had the fewest hits
— Random – works surprisingly well! (And it's simple!)
(A sketch of a 2-way set-associative cache with LRU replacement follows below.)
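A minimal functional sketch of a two-way set-associative lookup with LRU replacement, using the example's address split (9-bit tag, 13-bit set, 2-bit word). With only two ways per set, a single bit suffices to mark the LRU way; the block fetch from main memory is elided.

```c
/* Two-way set-associative lookup with LRU replacement, on the
 * lecture example: 2^13 sets, 9-bit tag, 13-bit set field,
 * 2-bit word field. Functional sketch only, not a timing or
 * hardware model. */
#include <stdint.h>
#include <stdbool.h>

#define NSETS (1u << 13)                /* 2^13 sets of 2 lines      */

struct way { bool valid; uint32_t tag; uint8_t data[4]; };

static struct way cache[NSETS][2];
static uint8_t lru[NSETS];              /* index (0/1) of LRU way    */

/* Returns true on a hit; on a miss, refills the LRU way.           */
bool access_cache(uint32_t addr)
{
    uint32_t set = (addr >> 2) & (NSETS - 1);  /* middle 13 bits     */
    uint32_t tag = addr >> 15;                 /* top 9 bits         */

    for (int w = 0; w < 2; w++) {
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            lru[set] = (uint8_t)(1 - w);       /* other way is LRU   */
            return true;                       /* hit                */
        }
    }
    /* Miss: victim is the least recently used way of this set.     */
    int v = lru[set];
    cache[set][v].valid = true;
    cache[set][v].tag   = tag;
    /* ... fetch the 4-Byte block from main memory into data ...    */
    lru[set] = (uint8_t)(1 - v);               /* victim is now MRU  */
    return false;
}
```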
Write Policy
Why is writing to a cache trickier than reading?
• Must not overwrite a cache block unless main memory is up to date
• A single CPU can have multiple caches (L1, L2, etc.)
• Multiple CPUs may each have individual caches
• I/O devices may address main memory directly

Write through
• All writes go to main memory as well as to the cache
• Whenever a word is written to the cache, it is also written to main memory
• Problems:
— Other caches get out of sync (a.k.a. the cache coherency problem)
— Lots of traffic
— Slows down writes

Write back
• Updates are initially made in the cache only
• An update bit (a.k.a. dirty or used bit) for the cache line is set when the update occurs
• If a block is to be replaced, write it to main memory only if the dirty bit is set
• Problems:
— Other caches get out of sync
— I/O must access main memory through the cache
Do we have this problem with write through?

Example 4.3 / p. 137
• Which is "better", write through or write back?
• It depends...

Putting the write problem in perspective
• In usual desktop applications, only 15% of memory references are writes (on average)
— But this can go up to 50% for HPC applications!
• Remember Amdahl's Law!

Today we've covered pp. 136-137 of the text.

Cache coherency
• Is a problem whenever multiple CPUs maintain caches (copies) of data from a shared memory resource
• Critical in parallel computing applications:
— Supercomputers and computer clusters
— Multi-core CPUs
Source: Cache coherence - Wikipedia

Cache coherency approaches
• Bus watching with write through
— Snooping
— Snarfing
• Dedicated hardware to monitor and enforce coherency
— Directory-based
• Avoid the problem altogether by making the shared memory non-cacheable
More details in Section 17.3

Line Size
• Retrieve not only the desired word but a number of adjacent words as well
• An increased block size will increase the hit ratio at first
— the principle of locality
• But at some point the hit ratio will start decreasing, because larger blocks:
— Reduce the total number of blocks that fit in the cache
— Cause data to be overwritten shortly after being fetched
— Make each additional word less local, so less likely to be needed
• There is an optimum, but it depends on the computer and the application:
— 8 to 64 bytes for desktops and servers
— 64 to 128 bytes for HPC

Multilevel Caches
• High logic density enables caches on chip
— Faster than bus access
— Frees the bus for other transfers
• Common to use both on-chip and off-chip cache
— L1 on chip, L2 off chip, both SRAM
— L2 access much faster than DRAM or ROM
— L2 often uses a separate data path
— L2 may now be on chip (Pentium Pro, a.k.a. "686")
— Resulting in L3 cache – bus access, or now on chip…
Remember the back-side bus! Which CPU had it first?

Hit Ratio (L1 & L2) for 8 kB and 16 kB L1

How about L3 cache?
• Initially (ca. 1995) it was off-chip SRAM
— Alpha 21164
• Then it followed the trend, migrating on-chip
• Today's state of the art: quad-core CPUs with dedicated L1 and L2, and shared L3 (all on the same chip)
Source: Athlon II Or Phenom II: Does Your CPU Need L3 Cache?

How about L3 cache?
All implementations of L3 have high associativity (because it impacts all cores!), e.g.
• Intel Core i5 and Core i7:
— 32 KB of 8-way associative L1 data cache and 32 KB of 4-way associative L1 instruction cache
— 256 KB of L2 cache, 8-way set-associative
— 8 MB of L3 cache, 16-way associative
• AMD Phenom II X4:
— 2-way set-associative L1 cache, which offers lower latencies; to compensate for possible misses, it features twice the capacity: 64 KB data and 64 KB instruction cache
— 2 MB of L2 cache, 8-way set-associative
— 6 MB of L3 cache, 48-way associative!
Source: Athlon II Or Phenom II: Does Your CPU Need L3 Cache?
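Here is how the geometry falls out of size, line size and associativity, sketched for the Core i7 L3 numbers above. The 64-Byte line size is an assumption (typical for these CPUs); the slide lists only sizes and associativity.

```c
/* Deriving a cache's geometry from its parameters:
 *   sets = size / (line_size * ways)
 * The 8 MB / 16-way figures are the Core i7 L3 numbers from
 * the slide; the 64-Byte line size is an ASSUMPTION (typical
 * for these CPUs), not stated in the slide. */
#include <stdio.h>

int main(void) {
    unsigned size = 8u << 20;           /* 8 MB shared L3            */
    unsigned line = 64;                 /* assumed line size (Bytes) */
    unsigned ways = 16;                 /* 16-way set-associative    */

    unsigned sets = size / (line * ways);
    printf("sets = %u\n", sets);        /* 8192 = 2^13 sets          */
    /* Address split: 6 word bits (2^6 = 64), 13 set bits, and the
     * remaining physical address bits form the tag.                */
    return 0;
}
```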
Unified vs. Split Caches
• One cache for data and instructions, or two: one for data and one for instructions
• Advantages of a unified cache
— Higher hit rate
– Balances the load between instruction and data fetches
– Only one cache to design & implement
• Advantages of a split cache
— Eliminates cache contention between the instruction fetch/decode unit and the execution unit
– Important in pipelining

CISC case study: Pentium 4 Cache
• 80386 – no on-chip cache
• 80486 – 8k, using 16-Byte lines and a four-way set-associative organization
• Pentium (all versions) – two on-chip L1 caches
— Data & instructions
• Pentium III – L3 cache added off chip
• Pentium 4
— L1 caches
– 8k Bytes
– 64-Byte lines
– four-way set-associative
— L2 cache
– Feeding both L1 caches
– 256k
– 128-Byte lines
– 8-way set-associative
— L3 cache on chip

Intel Cache Evolution
Problem → Solution (processor on which the feature first appears):
• External memory is slower than the system bus. → Add external cache using faster memory technology. (386)
• Increased processor speed results in the external bus becoming a bottleneck for cache access. → Move the external cache on-chip, operating at the same speed as the processor. (486)
• The internal cache is rather small, due to limited space on the chip. → Add external L2 cache using faster technology than main memory. (486)
• Contention occurs when both the Instruction Prefetcher and the Execution Unit simultaneously require access to the cache; the Prefetcher is then stalled while the Execution Unit's data access takes place. → Create separate data and instruction caches. (Pentium)
• Increased processor speed results in the external bus becoming a bottleneck for L2 cache access. → Create a separate back-side bus (BSB) that runs at a higher speed than the main (front-side) external bus, dedicated to the L2 cache. (Pentium Pro) → Move the L2 cache onto the processor chip. (Pentium II)
• Some applications deal with massive databases and must have rapid access to large amounts of data; the on-chip caches are too small. → Add external L3 cache. (Pentium III) → Move the L3 cache on-chip. (Pentium 4)

Pentium 4 Block Diagram
(Figure callouts: "Fetches instructions from L2, decodes them into micro-ops, stores the micro-ops in L1"; "Schedules micro-ops based on data dependence and resources; may execute speculatively, trying to keep the pipeline full")

What is out-of-order execution (OOE)?
This new paradigm alters the von Neumann fetch-execute cycle:
• Instruction fetch
• Instruction dispatch to an instruction queue
• The instruction waits in the queue until its input operands are available
• The instruction is then allowed to leave the queue, possibly before earlier, older instructions
• The instruction is issued to the appropriate functional unit and executed by that unit
• Results are queued
• Only after all older instructions have their results written back to the registers is this result written back to the registers
More at Out-of-order execution - Wikipedia

Why micro-operations?
• Pentium instructions are long, complex, and don't all have the same size. This prevents an efficient hardware implementation of the CPU's Control Unit.
• Idea: convert instructions into RISC-like, simple instructions, called micro-ops
— Micro-ops have a fixed, short length, which makes superscalar pipelining and scheduling possible
— Performance is further improved by separating decoding from scheduling & pipelining
• Micro-ops are stored in an L1 cache dedicated to instructions, a.k.a. the instruction cache (split cache design!)

Pentium 4 Data Caches
• The other half of the L1 cache is dedicated to data, a.k.a. the data cache
• The data cache is write back
— Write backs are scheduled whenever the internal bus is free, thus making L1 a sort of write buffer for L2
— It can be configured to write through
• The L1 cache is controlled by 2 bits (in a CPU register)
— CD = cache disable
— NW = not write through
— The combination CD = 0, NW = 1 is forbidden (WHY?)
• There are also 2 Pentium instructions to invalidate (flush) the cache, and to write back then invalidate
• L2 and L3 are both 8-way set-associative
— Line size: 128 Bytes
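A minimal sketch of the write-back idea just described: a write only sets the line's dirty bit, and main memory is updated at eviction time. This illustrates the policy in general, not the actual Pentium 4 hardware; 4-Byte lines as in the chapter's running example.

```c
/* Write-back in miniature: writes mark the line dirty; main
 * memory is updated only when a dirty line is evicted.
 * Functional illustration of the policy, not a model of any
 * specific CPU. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

struct line {
    bool     valid, dirty;
    uint32_t tag;
    uint8_t  data[4];                   /* 4-Byte block              */
};

static uint8_t main_memory[1u << 24];   /* 16 MB backing store       */

/* Write one byte into a line that is already mapped in cache.      */
void write_byte(struct line *ln, unsigned word, uint8_t value)
{
    ln->data[word] = value;
    ln->dirty = true;                   /* main memory is now stale  */
}

/* Evict a line whose block starts at block_addr in main memory:    */
/* write it back only if it was modified since it was fetched.      */
void evict(struct line *ln, uint32_t block_addr)
{
    if (ln->valid && ln->dirty)
        memcpy(&main_memory[block_addr], ln->data, 4);
    ln->valid = ln->dirty = false;
}
```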
RISC case study: ARM L1 Cache
Core | Cache type | Cache size (kB) | Line size (words) | Associativity | Location | Write buffer size (words)
ARM720T | Unified | 8 | 4 | 4-way | Logical | 8
ARM920T | Split | 16/16 D/I | 8 | 64-way | Logical | 16
ARM926EJ-S | Split | 4-128/4-128 D/I | 8 | 4-way | Logical | 16
ARM1022E | Split | 16/16 D/I | 8 | 64-way | Logical | 16
ARM1026EJ-S | Split | 4-128/4-128 D/I | 8 | 4-way | Logical | 8
Intel StrongARM | Split | 16/16 D/I | 4 | 32-way | Logical | 32
Intel Xscale | Split | 32/32 D/I | 8 | 32-way | Logical | 32
ARM1136-JF-S | Split | 4-64/4-64 D/I | 8 | 4-way | Physical | 32

ARM Cache Organization
• Small (8 to 32 words, per the table above) FIFO write buffer
— Enhances memory write performance
— Sits between the cache and main memory
— Small compared to the cache
— Data is put in the write buffer at processor clock speed
— The processor continues execution
— The external write proceeds in parallel until the buffer is empty
— If the buffer is full, the processor stalls
— Data in the write buffer is not available until written
– So keep the buffer small

ARM Cache and Write Buffer Organization

Does ARM have only L1 cache?
• Up to ARMv6, L2 was optional
• Starting with ARMv7, the L2 cache is part of the architecture
• L3 cache has only partial support; see the ARM Information Center

Homework for Ch. 4 – Due Thu, Oct 7
End-of-chapter problems:
• 4
• 6
• 8
• 12
• 19 (Hint: use the formula from Ex. 4.1, p. 116)
• 25

Lab week 5
In Table 4.5 / p. 143 it is stated that the following combination of the cache control bits is invalid: CD = 0, NW = 1. Read the description of these bits carefully and explain why the combination is invalid.
— Hint: what happens when a "dirty" line needs to be overwritten due to a cache miss?

Lab week 5
Experimenting with the first cache simulator from the text website: http://williamstallings.com/COA/Animation/Links.html

Lab week 5
• 4.14 (Hint: use an additional 2 bits per line to keep track of use)
• 4.15
• 4.17 (Denote by T the cache access time. Calculate the total time needed in both scenarios as a function of T, then take the ratio.)
• 4.19 (Hint: use the formula on p. 116)