Computer Architecture
Lecture 16: Virtual Memory
[Figure: probability of reference across the address space — references cluster in a few small regions (locality)]
Review: Levels of the Memory Hierarchy

  Level         Capacity      Access Time     Cost                     Managed by       Staging Xfer Unit
  Registers     100s bytes    <10s ns                                  prog./compiler   instr. operands (1-8 bytes)
  Cache         K bytes       10-100 ns       $.01-.001/bit            cache cntl       blocks (8-128 bytes)
  Main Memory   M bytes       100 ns - 1 us   $.01-.001/bit            OS               pages (512-4K bytes)
  Disk          G bytes       ms              10^-3 - 10^-4 cents/bit  user/operator    files (Mbytes)
  Tape          "infinite"    sec-min         10^-6 cents/bit

  (Upper levels are smaller, faster, and costlier per bit; lower levels are larger and slower.)
Virtual Memory
Protection
Virtual Memory?
Paging Organization
Virtual and physical address spaces are partitioned into blocks of equal size:
  physical memory is divided into page frames;
  virtual memory is divided into pages.
Address Map
V = {0, 1, . . . , n - 1}  virtual address space
M = {0, 1, . . . , m - 1}  physical address space,  n > m
[Figure: an address translation mechanism maps a virtual address a to a physical address a' in main memory, or indicates that the page resides in secondary memory]
Paging Organization
P.A. unit of
0 frame 0 1K 0 page 0 1K mapping
1024 1 1K Addr
1024 1 1K
Trans
MAP also unit of
7168 7 1K transfer from
virtual to
Physical physical
Memory
31744 31 1K memory
Virtual Memory
Address Mapping
VA = { page number , displacement (10 bits for 1K pages) }
The page number, added to the Page Table Base Register, indexes an entry of the page table, which is itself located in physical memory. Each entry holds a valid bit V, the access rights, and the frame's physical address PA. PA "+" displacement (actually concatenation is more likely, since frames are page-aligned) forms the physical memory address.
Address Mapping Algorithm
If V = 1
then the page is in main memory, at the frame address stored in the table
else the page is located in secondary memory
Access Rights
R = Read-only, R/W = read/write, X = execute only
Page Fault: page not resident in physical memory; causes a trap,
usually accompanied by a context switch: the current process is
suspended while the page is fetched from secondary storage
Virtual Address and a Cache
[Figure: the CPU's VA goes through translation to a PA, which indexes the cache; a hit returns data to the CPU, a miss goes to main memory]
Translating before every cache access makes cache access very expensive, and this is the "innermost loop" that you want to go as fast as possible.
TLBs
A way to speed up translation is to use a special cache of recently
used page table entries -- this has many names, but the most
frequently used is Translation Lookaside Buffer or TLB
TLB access time is comparable to (though shorter than) cache access time,
and still much less than main memory access time.
Translation Look-Aside Buffers
Just like any other cache, the TLB can be organized as fully associative,
set associative, or direct mapped
TLBs are usually small, typically not more than 128 - 256 entries even on
high end machines. This permits fully associative
lookup on these machines. Most mid-range machines use small
n-way set associative organizations.
[Figure: translation with a TLB — the CPU's VA goes to the TLB lookup; on a TLB hit the PA goes straight to the cache, on a TLB miss the full translation runs first; a cache hit returns data, a cache miss goes to main memory]
[Figure: relative access times — TLB about 1/2 t, cache about t, main memory about 20 t]
Overlapping works because the high-order bits of the VA are used to look in the TLB
while the low-order bits are used as the index into the cache.
Overlapped Cache & TLB Access
[Figure: the VA's 10-bit cache index and 2-bit "00" byte offset go straight to a 1K direct-mapped cache with 4-byte blocks, while the high-order bits do an associative lookup in a 32-entry TLB]
IF cache hit AND (cache tag = PA) THEN deliver data to CPU
ELSE IF [cache miss OR (cache tag ≠ PA)] AND TLB hit THEN
access memory with the PA from the TLB
ELSE do standard VA translation
This usually limits things to small caches, large page sizes, or high
n-way set associative caches if you want a large cache
Fragmentation & Relocation
Fragmentation is when areas of memory space become unavailable for
some reason
Relocation: move program or data to a new region of the address
space (possibly fixing all the pointers)
Internal Fragmentation:
program is not an integral # of pages, part of the last page frame is
"wasted" (obviously less of an issue as physical memories get
larger)
[Figure: page frames 0, 1, ..., k-1; the last occupied frame is only partly used]
-- the gap between processor speed and disk speed grows wider
2-level page table
[Figure: two-level translation — a 256-entry segment table (Seg 0 ... Seg 255); each valid entry holds the PA of a 1K-entry page table (P0 ...); each valid page table entry holds the PA of a 4K page (D0 ... D1023)]
Page Replacement (Continued)
Not Recently Used:
Associated with each page is a reference flag:
    ref flag = 1 if the page has been referenced in the recent past
             = 0 otherwise
An alternative is prefetching:
anticipate future references and load such pages before their
actual use
(One way to obtain the effect of prefetching is to increase the page size.)
Summary