William Stallings
Copyright 2009
D.1 Victim Cache
D.2 Selective Victim Cache
    Incoming Blocks from Memory
    Swap Between Direct-Mapped Cache and Victim Cache
Supplement to Computer Organization and Architecture, Eighth Edition William Stallings Prentice Hall 2009 ISBN-10: 0-13-607373-5 http://williamstallings.com/COA/COA8e.html
This appendix looks at two cache strategies mentioned in Chapter 4: victim cache and selective victim cache.
The victim cache is fully associative, so a memory reference from the processor can do a parallel search of all entries in the associative cache to determine if the desired line is present. Figure D.3 suggests the operation of the victim cache. The data is arranged in such a way that the same line is never present in both the L1 cache and the victim cache at the same time. There are two cases to consider for managing the movement of data between the two caches:

Case 1: Processor reference to memory misses in both the L1 cache and the victim cache.

a. The required block is fetched from main memory (or the L2 cache if present) and placed into the L1 cache.

b. The replaced block in the main cache is moved to the victim cache. There is no replacement algorithm: with a direct-mapped cache, the line to be replaced is uniquely determined.

c. The victim cache can be viewed as a FIFO queue or, equivalently, a circular buffer. The item that has been in the victim cache the longest is removed to make room for the incoming line. The replaced line is written back to main memory if it is dirty (has been updated).

Case 2: Processor reference to memory misses in the direct-mapped cache but hits in the victim cache.

a. The block in the victim cache is promoted to the direct-mapped cache.

b. The replaced block in the main cache is swapped into the victim cache.

Note that with the FIFO discipline, the victim cache achieves true LRU (least recently used) behavior. Any reference to the victim cache pulls the referenced block out of the victim cache; thus the LRU block in the victim cache will, by definition, be the oldest one there.

The term victim is used for the following reason. When a new block is brought into the L1 cache, the replacement algorithm chooses a line to be replaced. That line is the "victim" of the replacement algorithm.
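The two cases above can be sketched in a few lines of code. The following is a minimal simulation, not the implementation of any particular processor: the class and method names are hypothetical, lines are identified by block number only (no data or tag bits), and the dirty-line writeback in Case 1c is reduced to a comment.

```python
from collections import OrderedDict

class VictimCacheSystem:
    """Sketch of a direct-mapped L1 cache backed by a small,
    fully associative victim cache managed as a FIFO queue."""

    def __init__(self, l1_lines=8, victim_entries=4):
        self.l1 = [None] * l1_lines      # direct-mapped: one block per index
        self.l1_lines = l1_lines
        self.victim = OrderedDict()      # insertion order gives FIFO discipline
        self.victim_entries = victim_entries

    def access(self, block):
        index = block % self.l1_lines    # direct mapping uniquely fixes the line
        if self.l1[index] == block:
            return "L1 hit"
        if block in self.victim:
            # Case 2: miss in L1, hit in victim cache.
            # 2a: promote the block to the direct-mapped cache;
            # 2b: swap the replaced block into the victim cache.
            del self.victim[block]
            evicted = self.l1[index]
            self.l1[index] = block
            if evicted is not None:
                self.victim[evicted] = True
            return "victim hit"
        # Case 1: miss in both caches.
        # 1a: fetch from main memory (or L2) into the L1 cache;
        # 1b: move the replaced block into the victim cache;
        # 1c: FIFO eviction from the victim cache (write back if dirty).
        evicted = self.l1[index]
        self.l1[index] = block
        if evicted is not None:
            if len(self.victim) >= self.victim_entries:
                self.victim.popitem(last=False)  # oldest entry leaves first
            self.victim[evicted] = True
        return "miss"
```

With 8 L1 lines, blocks 0 and 8 conflict on line 0; alternating between them exercises both cases: the first two accesses miss, after which each reference to the displaced block hits in the victim cache and triggers the swap.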
[Figure D.3: Operation of the victim cache. The processor's direct-mapped L1 cache (address tags and data lines) is backed by a small fully associative victim cache, in which a tag comparator per entry searches all cache lines in parallel; entries are ordered from LRU to MRU. An optional L2 cache and main memory sit below.]
If the prediction algorithm determines that the block hit in the victim cache is likely to be referenced again, an interchange is performed between the two blocks; no such interchange is performed otherwise. In both cases the block in the victim cache is marked as the most recently used. The prediction algorithm used in the selective victim cache scheme is referred to as the dynamic exclusion algorithm, proposed in [MCFA92]. Both [MCFA92] and [STIL94] have good descriptions of the algorithm.

References

JOUP90 Jouppi, N. "Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers." Proceedings, 17th Annual International Symposium on Computer Architecture, May 1990.

MCFA92 McFarling, S. "Cache Replacement with Dynamic Exclusion." Proceedings, 19th Annual International Symposium on Computer Architecture, May 1992.

STIL94 Stiliadis, D., and Varma, A. "Selective Victim Caching: A Method to Improve the Performance of Direct-Mapped Caches." Proceedings, 27th Hawaii International Conference on System Sciences, January 1994.