
Memory Management – Part 1
CPT212 – Design & Analysis of Algorithms
Learning Outcomes
• In this topic, we will learn about
–Memory management
–Sequential-fit methods



Memory Management



Computer Memory Basics
• Computer memory is organized into a
sequence of words
–Each word consists of 4, 8, or 16 bytes (machine-
dependent)
• Memory words are numbered from 0 to (N − 1), where N is the number of memory words available
• The number associated with each word is known as its memory address



Computer Memory Basics
• Memory can be viewed as a giant array of
memory words

• Memory management involves:


–Maintenance of free memory blocks
–Assigning memory blocks to programs
–Reclaiming memory from unneeded blocks



The Memory Heap
• The heap is a region of the main memory
–Portions of memory are dynamically allocated
• Dynamic allocation
–In some languages, operators such as new and
delete interact with the heap
• The heap is used when the amount of memory required cannot be determined before the program runs



The Memory Heap
• The memory manager places processes in
free spaces in the memory
–These processes are removed if the space is
needed or the process has completed
• Multiple memory allocations and
deallocations will occur throughout a
program
–These cause the heap to be divided into small pieces of available memory sandwiched between chunks of memory in use



The Memory Heap
• Example:
[Diagram: a heap in which chunks of used memory are interleaved with multiple pieces of available memory]



The Memory Heap
• This phenomenon is known as external fragmentation
• What is the problem with fragmentation?
–If there is a request to allocate 𝑛 bytes of memory
but there is not enough contiguous memory in
the heap, the request cannot be fulfilled
–Even though the amount of free memory space
could be larger than 𝑛 bytes



The Memory Heap
• Another type of fragmentation: internal
fragmentation
–Occurs when allocated memory chunks are
larger than requested
–Leads to unused memory within the allocated
segments

[Diagram: an allocated memory chunk containing unused memory]



Memory Organization
• Memory organization:
–Utilize a linked list of all memory blocks
–List is updated after each block is requested or
returned
–Can be organized based on block size or block
address
• Doubly linked lists are used for efficiency
–Each block contains:
– Links for the previous and next block
– Size field
– Status field (available or reserved)
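
The block structure just described can be sketched as a small data structure. A minimal Python illustration (field and function names are my own, not from the slides):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    """One memory block on the doubly linked list of all blocks."""
    size: int                        # size field
    free: bool = True                # status field (available or reserved)
    prev: Optional["Block"] = None   # link to the previous block
    next: Optional["Block"] = None   # link to the next block

def link(a: Block, b: Block) -> None:
    """Place block b immediately after block a in the list."""
    a.next, b.prev = b, a

# Build the list of blocks used in the example that follows: 6, 3, 10, 15, 14, 4.
blocks = [Block(s) for s in (6, 3, 10, 15, 14, 4)]
for left, right in zip(blocks, blocks[1:]):
    link(left, right)
```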



Memory Organization
• Example

[Diagram: a doubly linked list of six memory blocks with sizes 6, 3, 10, 15, 14, and 4, each block holding links to its previous and next block]



Memory Allocation
• When there is a memory request
–Decide which block to allocate
–The decision will have an impact on
fragmentation
• Methods
–Sequential-fit
–Nonsequential-fit



Sequential-fit Methods



The Sequential-fit Methods
• As their name suggests, these methods search for a suitable block sequentially
• In sequential-fit methods, all available memory blocks are linked together
• Steps
–Find a block whose size is larger than or the same as the requested size
–Coalesce (combine into one) returned blocks with neighbouring free blocks



The Sequential-fit Methods
• Methods: Allocate
the next
Allocates available
the a block block that
closest in is large
First-fit size Worst-fit enough

Allocates
the first Best-fit Next-fit
memory
block large
enough to Finds the largest block on the list
meet the so that the remaining parts are
request large enough to be used later on
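
To make the four strategies concrete, here is a rough Python sketch that only decides which free block to use, given a list of free block sizes; the index-based representation is a simplification of my own, not the slides' linked-list implementation:

```python
def first_fit(free, req):
    """Index of the first block large enough for req, or None."""
    return next((i for i, s in enumerate(free) if s >= req), None)

def best_fit(free, req):
    """Index of the smallest block that still fits req (closest in size)."""
    fits = [(s, i) for i, s in enumerate(free) if s >= req]
    return min(fits)[1] if fits else None

def worst_fit(free, req):
    """Index of the largest block, provided it fits req."""
    fits = [(s, i) for i, s in enumerate(free) if s >= req]
    return max(fits)[1] if fits else None

def next_fit(free, req, start):
    """Like first-fit, but resume scanning where the previous search stopped."""
    n = len(free)
    for k in range(n):
        i = (start + k) % n
        if free[i] >= req:
            return i
    return None

free = [6, 3, 10, 15, 14, 4]          # sizes of the available blocks (e.g. in KB)
print(first_fit(free, 8))             # -> 2  (the 10 KB block is reached first)
print(best_fit(free, 8))              # -> 2  (10 KB is the closest fit)
print(worst_fit(free, 8))             # -> 3  (15 KB is the largest block)
print(next_fit(free, 8, start=4))     # -> 4  (the scan resumes at index 4: 14 KB)
```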



The Sequential-fit Methods
• Example

The pointer is currently pointing at the block starting from the 34th KB.
If a memory request is 8KB, identify the blocks that
will be allocated if the first-fit, best-fit, next-fit and
worst-fit methods are used.



The Sequential-fit Methods
• Comparison of Methods
–First-fit: most efficient; does not allocate space optimally
–Best-fit: optimal memory utilization (leaves larger holes for bigger processes); causes extensive fragmentation because it searches for the closest match
–Next-fit: similar efficiency to first-fit; causes extensive fragmentation because it reaches the end of the list fast
–Worst-fit: tries to prevent fragmentation by preventing the creation of small blocks; inefficient (it has to search the whole list)



Exercise
• Allocate memory for the following requests
based on the corresponding sequential
methods
[Memory map: 3 12 25 29 34 47, with the current location marked]

3KB – Next-fit 8KB – First-fit 2KB – Worst-fit 2KB – Best-fit



Exercise
• Allocate memory for the following requests
based on the corresponding sequential
methods
[Memory map: 01 3 12 15 23 25 29 34 47 49, with the current location marked]

3KB – Next-fit 8KB – First-fit 2KB – Worst-fit 2KB – Best-fit



Problem
What is the problem with sequential fit
methods?

Inefficient for large memory



End of Part 1

Teh Je Sen (2018)


Memory Management – Part 2
CPT212 – Design & Analysis of Algorithms
Learning Outcomes
• In this topic, we will learn about
–Nonsequential-fit methods
– Buddy Systems
–Garbage collection
–Memory hierarchies and caching
–External searching and sorting



Nonsequential-fit Methods



The Nonsequential-fit Methods
• Used for large memory
• General strategy
–Divide the memory into an arbitrary number of
lists
–Each list holds blocks of the same size
–Large blocks are split into smaller blocks to
satisfy requests, thus creating new lists



The Nonsequential-fit Methods
• Adaptive exact-fit method
–Based on the observation that the number of sizes requested is limited (some sizes are more popular than others)
–Lists can be kept short if it can be determined which sizes are the most popular
–The technique dynamically creates and adjusts storage block lists that fit the requests exactly



The Nonsequential-fit Methods
• Adaptive exact-fit method
How it works:
–Maintain a size list: a list of block lists, one list per block size
– The size list covers the block sizes requested during the last T allocations
–After a program has completed running, it returns a block b
–This block is added to the block list corresponding to its size
–If another request comes for a block of b's size, it is taken from this list
–If the size does not exist in the size list, perform a sequential-fit search
Example – Adaptive Exact-fit
• A request for a block of size 7 can be served immediately (the size list already holds a block list for size 7)



Example – Adaptive Exact-fit
• Program A requests a block b with a size of 2KB
• Because no block list for 2KB exists, perform sequential-fit (worst fit)
[Diagram: the list of block lists (Size, Lastref, Blocks) is still empty]



Example – Adaptive Exact-fit
• Program B requests a block of 4KB
• Because no block list for 4KB exists, perform sequential-fit (worst fit)
[Diagram: the list of block lists is still empty; memory now holds blocks 1 and 2]



Example – Adaptive Exact-fit
• Program C requests 2KB
• Program D requests 4KB
[Diagram: the list of block lists is still empty; memory now holds blocks 1, 2, 3 and 4]



Example – Adaptive Exact-fit
• Programs A, B and C return their blocks after completion, in that order
[Diagram: the list of block lists gains entries for sizes 2KB and 4KB; memory blocks 1–4 shown]



Example – Adaptive Exact-fit
• Update the block list
[Diagram: size list with entries 2KB (Lastref 1) and 4KB; memory blocks 1–4 shown]



Example – Adaptive Exact-fit
• Update the block list
[Diagram: size list with entries 2KB (Lastref 3) and 4KB (Lastref 2); memory blocks 1–4 shown]



Example – Adaptive Exact-fit
• Program D returns its memory block
[Diagram: size list with entries 2KB (Lastref 3) and 4KB (Lastref 2); memory blocks 1–4 shown]



Example – Adaptive Exact-fit
• Update the block list
[Diagram: size list with entries 2KB (Lastref 3) and 4KB (Lastref 4); memory blocks 1–4 shown]



Example – Adaptive Exact-fit
• Program E requests 2KB
• Allocation is immediate because the size list has an entry for 2KB
[Diagram: size list with entries 2KB (Lastref 3) and 4KB (Lastref 4); memory blocks 1–4 shown]



Example – Adaptive Exact-fit
• Update the block list
[Diagram: the 2KB entry's Lastref is updated; memory now shows blocks 1, 2, 5 and 4]



The Nonsequential-fit Methods
• Exact-fit method
–If no request comes for a block in a particular block list within T allocations, the whole block list is disposed of
–Lists of infrequently used block sizes are not maintained
–Thus, the list of block lists is kept small, to speed up the sequential search of this list



Pseudocode
t = 0;
allocate(reqSize)
    t++;
    if a block list b1 with reqSize blocks is on sizeList       // adaptive exact-fit
        lastref(b1) = t;
        b = head of blocks(b1);
        if b was the only block accessible from b1
            detach b1 from sizeList;
    else b = searchMem(reqSize);                                 // fall back to sequential fit
    dispose of all block lists on sizeList for which t − lastref(b1) ≥ T;   // exact-fit disposal rule
    return b;
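
A rough Python sketch of this allocate routine, keeping the size list as a dictionary from block size to a lastref stamp and a list of free blocks; search_mem below is only a stand-in for the sequential-fit fallback (it fabricates block labels), not the slides' routine:

```python
T = 100            # block lists untouched for T allocations are disposed of
t = 0              # allocation counter
size_list = {}     # size -> {"lastref": int, "blocks": [block, ...]}

def search_mem(req_size):
    """Stand-in for the sequential-fit fallback: hand out a fresh block label."""
    search_mem.counter += 1
    return f"block{search_mem.counter}({req_size}KB)"
search_mem.counter = 0

def allocate(req_size):
    """Adaptive exact-fit allocation, mirroring the pseudocode above."""
    global t
    t += 1
    entry = size_list.get(req_size)
    if entry and entry["blocks"]:                # a block list for this size exists
        entry["lastref"] = t
        block = entry["blocks"].pop()
        if not entry["blocks"]:                  # it held the only block
            del size_list[req_size]
    else:
        block = search_mem(req_size)             # fall back to sequential fit
    # exact-fit disposal rule: drop lists not referenced in the last T allocations
    for size in [s for s, e in size_list.items() if t - e["lastref"] >= T]:
        del size_list[size]
    return block

def free_block(req_size, block):
    """A returned block is added to the list matching its size."""
    entry = size_list.setdefault(req_size, {"lastref": t, "blocks": []})
    entry["blocks"].append(block)

b = allocate(2)      # no 2KB list yet -> sequential-fit fallback
free_block(2, b)     # a 2KB block list is created and holds b
print(allocate(2))   # served immediately from the 2KB list
```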



Problem
What is the problem with these nonsequential-fit methods?
Causes fragmentation

Possible Solutions
–Compact memory after a certain number of allocations/deallocations
–Liquidate and renew the size list after a certain period



Buddy Systems
A type of nonsequential method



Buddy Systems
• Buddy systems are nonsequential memory
management methods
• Basic idea:
–Divide the memory into partitions (buddies) to
find the best fit
–Keep dividing until the best fit is found
–After deallocation, buddies that are free will be
coalesced (combined)



Binary Buddy System
• Assume that storage consists of 2^m locations
• Block lengths are all powers of 2
• To find the best fit, the memory is split into
two halves (buddies)
• Each half is continuously divided (into
more buddies) until the best fit is found
• Any two buddies that are free after
deallocation will be coalesced



Simple Example
• Assume that the memory locations are in bytes (maximum size is 2^8 = 256 bytes)
[Memory diagram with marks at 0, 64, and 256]
• A program requests 15 bytes
• 256 is too big for 15 bytes, so divide it into two halves (buddies)



Simple Example
• The right half is stored in a list, avail[], for future use
[Memory diagram with marks at 0, 16, 32, 64, and 256]
• 64 is still too big for 15 bytes, divide into two halves
• 32 is still too big for 15 bytes, divide into two halves
• 16 bytes are allocated for the program



Binary Buddy System
Implementation
• An array avail[] is used to store the
head of doubly linked lists of available
blocks of the same size
• E.g.
–avail[1] stores the head of a list of available blocks with the size of 2^1 = 2
–avail[2] stores the head of a list of available blocks with the size of 2^2 = 4



Binary Buddy System
Implementation
• Each free block in a buddy system has four
fields:
–Status (0/1 for free/reserved)
–Size
–Predecessor (in the linked list)
–Successor (in the linked list)
[In the diagram, the first block is the head of the list and thus has no predecessor]



Binary Buddy System
Implementation
• An example of a reserved block is shown below
–A size field of 3 means the overall size of the block is 2^3 = 8
–A status of 1 indicates that the block is reserved



Binary Buddy System
Implementation
Setup Phase
size of memory = 2^m for some m;
for i = 0 to m − 1
    avail[i] = -1;
avail[m] = first address in memory;

Example (m = 7)



Binary Buddy System
Implementation
Reservation Phase
// Round up using the ceiling function so that the block returned is large enough (the optimal power-of-2 size)
reserve(reqSize)
    roundedSize = ⌈log2(reqSize)⌉;
    availSize = min(roundedSize, …, m) where avail[availSize] > -1;
    if no such availSize exists
        return failure;
    block = avail[availSize];
    detach block from list avail[availSize];
    while (roundedSize < availSize)        // keep halving to find the best fit
        availSize--;
        block = left half of block;
        insert buddy of block in list avail[availSize];
    return block;
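
A rough Python sketch of this reservation phase, keeping avail[k] as a plain list of start addresses of free blocks of size 2^k (a simplification of the doubly linked lists described earlier):

```python
import math

m = 7                                  # memory of 2**7 = 128 locations
avail = [[] for _ in range(m + 1)]     # avail[k]: start addresses of free 2**k blocks
avail[m].append(0)                     # initially one free block covers the whole memory

def reserve(req_size):
    """Binary buddy reservation: split blocks until the best-fitting power of 2 remains."""
    rounded = math.ceil(math.log2(req_size))   # round up so the block is large enough
    # smallest k >= rounded for which a free block is available
    k = next((k for k in range(rounded, m + 1) if avail[k]), None)
    if k is None:
        return None                            # failure: nothing big enough is free
    block = avail[k].pop()
    while rounded < k:                         # keep halving to approach the best fit
        k -= 1
        avail[k].append(block + 2 ** k)        # the right half becomes a free buddy
    return block                               # start address of the reserved 2**rounded block

print(reserve(18))   # -> 0   (a 32-location block at address 0, as in the example below)
print(reserve(14))   # -> 32  (a 16-location block split off the free buddy at 32)
```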



Example
• Let m = 7 (total size = 2^7 = 128)
• Setup
size of memory = 2^7;
for i = 0 to 6
    avail[i] = -1;
avail[7] = first address in memory;



Example
• 18 locations are requested
reserve(18)
    roundedSize = ⌈log2(18)⌉ = 5;
    availSize = min(5, …, 7) where avail[availSize] > -1;
        ∴ availSize = 7 because avail[5] = avail[6] = -1
    if no such availSize exists
        return failure;
    block = avail[7];
    detach block from list avail[7];
    while (roundedSize = 5 < availSize = 7)
        availSize--;    ∴ availSize = 6
        block = left half of block;
        insert buddy of block in list avail[6];
[In the diagram, the light grey area indicates the required memory locations]



Example
• 18 locations are requested
reserve(18)
    roundedSize = ⌈log2(18)⌉ = 5;
    availSize = min(5, …, 7) where avail[availSize] > -1;
        ∴ availSize = 7 because avail[5] = avail[6] = -1
    block = avail[7];
    detach block from list avail[7];
    while (roundedSize = 5 < availSize = 6)
        availSize--;    ∴ availSize = 5
        block = left half of block;
        insert buddy of block in list avail[5];



Example
• 18 locations are requested
reserve(18)
    roundedSize = ⌈log2(18)⌉ = 5;
    availSize = min(5, …, 7) where avail[availSize] > -1;
        ∴ availSize = 7 because avail[5] = avail[6] = -1
    block = avail[7];
    detach block from list avail[7];
    while (roundedSize = 5 < availSize = 5)    // the condition is now false, so the loop ends
    return block;



Example
• 14 locations are requested
reserve(14)
    roundedSize = ⌈log2(14)⌉ = 4;
    availSize = min(4, …, 7) where avail[availSize] > -1;
        ∴ availSize = 5 because avail[4] = -1
    if no such availSize exists
        return failure;
    block = avail[5];
    detach block from list avail[5];
    while (roundedSize = 4 < availSize = 5)
        availSize--;    ∴ availSize = 4
        block = left half of block;
        insert buddy of block in list avail[4];
    return block;



Task
• Modify the diagram for reserve(17)
reserve(reqSize)
    roundedSize = ⌈log2(reqSize)⌉;
    availSize = min(roundedSize, …, m) where avail[availSize] > -1;
    if no such availSize exists
        return failure;
    block = avail[availSize];
    detach block from list avail[availSize];
    while (roundedSize < availSize)
        availSize--;
        block = left half of block;
        insert buddy of block in list avail[availSize];
    return block;



Solution

reserve(17)
    roundedSize = ⌈log2(17)⌉ = 5;
    availSize = min(5, …, 7) where avail[availSize] > -1;
        ∴ availSize = 6
    if no such availSize exists
        return failure;
    block = avail[6];
    detach block from list avail[6];
    while (5 < 6)
        availSize--;
        block = left half of block;
        insert buddy of block in list avail[5];
    return block;



Coalescing Blocks
• For any returned block
–If its buddy is free, combine the two into a larger block
–Check whether the larger block's buddy is free; if so, combine them into an even larger block
–Repeat until no more free buddies are available or the entire memory has been combined (see the sketch below)
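
One convenient property of the binary buddy system is that a block's buddy can be located by arithmetic: a free block of size 2^k starting at (relative) address a has its buddy at a XOR 2^k. A rough Python sketch of the coalescing loop, continuing the avail[] list representation assumed in the earlier sketch:

```python
def release(avail, addr, k, m):
    """Return a 2**k block at relative address addr, coalescing with free buddies."""
    while k < m:
        buddy = addr ^ (2 ** k)        # buddy address: flip bit k
        if buddy not in avail[k]:      # buddy is not free, stop coalescing
            break
        avail[k].remove(buddy)         # buddy is free: merge the two halves
        addr = min(addr, buddy)        # the combined block starts at the lower address
        k += 1                         # and is one size class larger
    avail[k].append(addr)              # record the (possibly merged) free block

# Example: freeing 2**4 blocks at 48 and then at 32 merges them into one 2**5 block at 32.
m = 7
avail = [[] for _ in range(m + 1)]
release(avail, 48, 4, m)
release(avail, 32, 4, m)
print(avail[5])   # -> [32]
```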



Example
Buddy is free, coalesce the blocks



Example
Buddy is free, coalesce the blocks

Buddy is free again, coalesce one more time



Example
• Final



Summary of Buddy Systems

Advantage: Disadvantage:
Causes internal fragmentation
due to the rounding process to
the nearest larger power of 2

Efficient in speed Can have external fragmentation


problems whereby there is
enough space to meet the
request, but because they belong
to different blocks, it cannot be
allocated.



Garbage Collection



Re-cap
• Dynamically created objects (using the new operator in C++ or Java) are allocated memory in the heap
• An Out-of-Memory error can occur if it is not possible to allocate an object on the heap



Garbage Collection
• It is an algorithm that automatically
releases the heap memory when
–The program is idle
–Objects are no longer referenced by the program
–Memory resource is exhausted
• It collects unused cells and releases them
• Two phases
–Marking phase: Identify used cells
–Reclamation phase: Return unmarked cells to
memory pool. Can also perform heap
compaction



Mark-and-Sweep
• A classical method for collecting garbage
• Mark
–Memory cells that are in use are marked by
traversing each linked structure
• Sweep
–Memory is swept to glean unused
cells and put them together in a
memory pool



Marking
• Similar to a graph or tree traversal
–Start from a root node
–If the node is unmarked, mark it as true
–Move on to the nodes referenced by the current
node
–Repeat until all reachable nodes are visited

mark(𝑟𝑜𝑜𝑡)
if markedBit(𝑟𝑜𝑜𝑡) = 𝑓𝑎𝑙𝑠𝑒 then
markedBit(𝑟𝑜𝑜𝑡) = 𝑡𝑟𝑢𝑒
for each 𝑣 referenced by 𝑟𝑜𝑜𝑡
mark(𝑣)



Sweeping
• Clears the heap memory for unreachable
objects
• Steps
–Objects marked as false are cleared from heap
–Reachable objects are marked as false for future
runs
sweep()
for each object 𝑝 in ℎ𝑒𝑎𝑝
if markedBit(𝑝) = true then
markedBit(𝑝) = 𝑓𝑎𝑙𝑠𝑒
else
heap.release(𝑝)
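
A rough Python sketch of both phases over a toy heap of objects, each carrying a marked bit and a list of references (the object representation is my own, not tied to any particular language runtime):

```python
class Obj:
    def __init__(self, name):
        self.name = name
        self.marked = False
        self.refs = []                 # objects this object references

def mark(root):
    """Marking phase: depth-first traversal from a root, as in the pseudocode."""
    if not root.marked:
        root.marked = True
        for v in root.refs:
            mark(v)

def sweep(heap):
    """Sweeping phase: release unmarked objects, reset marks for the next run."""
    survivors = []
    for p in heap:
        if p.marked:
            p.marked = False           # reachable: unmark for future runs
            survivors.append(p)
        # else: unreachable, dropped (released back to the memory pool)
    return survivors

a, b, c, d = Obj("a"), Obj("b"), Obj("c"), Obj("d")
a.refs = [b]
b.refs = [c]                           # d is unreachable garbage
heap = [a, b, c, d]
mark(a)                                # a is the root
heap = sweep(heap)
print([p.name for p in heap])          # -> ['a', 'b', 'c']
```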



Mark and Sweep
• After marking and releasing unused cells

• What problem can you see?


–Fragmentation exists
• How to solve this problem?
–Compaction – Move marked objects to the
beginning of the memory region



Compaction
compact()
    lo = the bottom of the heap;       // lo points at an object scanned from the bottom
    hi = the top of the heap;
    while (lo < hi)                    // scan the entire heap
        while markedBit(*lo) = true
            lo++;
        while markedBit(*hi) = false
            hi--;
        markedBit(*hi) = false;
        *lo = *hi;

• Downsides
–Increased garbage collection duration
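
A rough Python sketch of the two-pointer compaction idea on a list of heap slots, where a slot holds either a live object or None; unlike the pseudocode above, it explicitly copies the object before freeing its old slot:

```python
def compact(heap):
    """Slide live objects toward the bottom of the heap, leaving free space on top."""
    lo, hi = 0, len(heap) - 1
    while lo < hi:
        while lo < hi and heap[lo] is not None:   # skip live objects already at the bottom
            lo += 1
        while lo < hi and heap[hi] is None:       # skip free slots at the top
            hi -= 1
        if lo < hi:
            heap[lo] = heap[hi]                   # move the live object down
            heap[hi] = None                       # its old slot is now free

heap = ["A", "B", None, "C", None, None]          # None marks a freed (unmarked) slot
compact(heap)
print(heap)   # -> ['A', 'B', 'C', None, None, None]
```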



Example
• 𝑙𝑜 = the bottom of the ℎ𝑒𝑎𝑝;
• ℎ𝑖 = the top of the ℎ𝑒𝑎𝑝;

𝑙𝑜 ℎ𝑖

T T F T F F



Example
• while (𝑙𝑜 < ℎ𝑖)
–while markedBit(∗ 𝑙𝑜) = true
– 𝑙𝑜++;
–while markedBit(∗ ℎ𝑖) = false
– ℎ𝑖--;

𝑙𝑜 ℎ𝑖

T T F T F F



Example
• markedBit(*hi) = false
–*lo = *hi;
(the marked object at hi is copied into the slot at lo, which becomes marked, and hi's old slot is freed)

lo          hi

T T T F F F



Mark-Sweep-Compact Results



Memory Hierarchy and
Caching



Memory Hierarchy
• Computers have a hierarchy of different
kinds of memory
–Different size
–Different distance from CPU



Memory Hierarchy
[Diagram: the memory hierarchy, from main memory (core memory), to disk drives and CD/DVD drives, to the cloud]



Memory Hierarchy
• Primary level – problems that can fit in main memory deal with:
–Cache memory
–Internal memory (10-100 times slower)
• Secondary level – problems that cannot fit in main memory deal with:
–Internal memory
–External memory (100,000-1,000,000 times slower)



Caching and Blocking
• Operating system designers have
developed mechanisms for fast memory
access based on the following properties:
Temporal Locality: If a program accesses a particular memory location, there is an increased likelihood that this location will be accessed again in the future.
Spatial Locality: If a program accesses a particular memory location, there is an increased likelihood that it will access other locations near this one.



Caching and Blocking
These strategies give rise to two
fundamental design choices: caching and
blocking
Temporal Locality: If a program accesses a particular memory location, there is an increased likelihood that this location will be accessed again in the future.
Caching: Transfer data from the secondary level to the primary level when it is addressed, with the hope that it will be addressed again in the future.



Caching and Blocking
These strategies give rise to two
fundamental design choices: caching and
blocking
Spatial Locality: If a program accesses a particular memory location, there is an increased likelihood that it will access other locations near this one.
Blocking: If a particular piece of data stored in location l is addressed, bring a block of contiguous locations (which includes l) from the secondary-level memory to the primary-level memory. (A small sketch of this idea follows below.)
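
A rough Python sketch of how blocking exploits spatial locality: when one location is addressed, the whole surrounding block is brought into a small primary-level cache. The block size and cache layout here are illustrative assumptions:

```python
BLOCK = 8                          # locations transferred per block

secondary = list(range(1000))      # slow secondary-level memory (just a list here)
cache = {}                         # primary level: block number -> list of values
transfers = 0                      # block transfers paid for so far

def read(addr):
    """Return secondary[addr], transferring its whole block on a cache miss."""
    global transfers
    block_no = addr // BLOCK
    if block_no not in cache:                        # miss: bring the whole block in
        start = block_no * BLOCK
        cache[block_no] = secondary[start:start + BLOCK]
        transfers += 1
    return cache[block_no][addr % BLOCK]             # served from the primary level

for addr in range(64):             # a sequential scan touches 64 locations...
    read(addr)
print(transfers)                   # -> 8  (...but costs only 64 / 8 = 8 block transfers)
```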



Virtual Memory
• When using caching and blocking, the
secondary-level memory acts as a virtual
memory space.
• Caching and blocking make it seem like the secondary-level memory is faster than it really is.



Example: Web Browsers
• Caching in Web Browsers
–Copies of web pages are stored in a cache memory so that the pages can be retrieved more quickly
[Flowchart: the browser requests webpage p → is p unchanged and in the cache? → Yes: load p from the cache; No: request p over the internet and store it in the cache]



Example: Web Browsers
• How to choose which pages to evict from
cache when it is full:
–First in, first out (FIFO) – Evict page that has
been in the cache the longest
–Least recently used (LRU) – Evict page whose
last request occurred the furthest in the past
–Random
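
As an illustration of eviction, here is a rough Python sketch of a tiny browser-style page cache with LRU eviction, built on an OrderedDict; the capacity and fetch function are illustrative, and dropping the move_to_end call would turn the policy into FIFO:

```python
from collections import OrderedDict

class PageCache:
    """Fixed-capacity page cache with least-recently-used (LRU) eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # url -> page, least recently used first

    def get(self, url, fetch):
        if url in self.pages:
            self.pages.move_to_end(url)     # hit: now the most recently used page
            return self.pages[url]
        page = fetch(url)                   # miss: request the page over the internet
        self.pages[url] = page
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict the least recently used page
        return page

cache = PageCache(capacity=2)
fetch = lambda url: f"<html for {url}>"
cache.get("a.com", fetch)
cache.get("b.com", fetch)
cache.get("a.com", fetch)                   # a.com becomes most recently used
cache.get("c.com", fetch)                   # cache full: b.com is evicted
print(list(cache.pages))                    # -> ['a.com', 'c.com']
```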



External Memory Searching
and Sorting



External Memory Operations
• A large collection of items, such as a database, cannot fit in the main memory
–It is stored in the external memory in disk blocks
• Transferring a block from external memory
to the main memory is known as a disk
transfer
• Need to minimize the number of disk
transfers for optimal efficiency



External Memory Operations

• External Memory Searching: To efficiently search for a key in the external memory, a map can be represented as a multiway search tree such as a B-tree.
• External Memory Sorting: To efficiently sort data in the external memory, a multiway merge sort method can be used. (A small sketch follows below.)
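
A rough Python sketch of the multiway-merge idea for external sorting: data too large for memory is first sorted in memory-sized runs, and the sorted runs are then merged in a single pass with heapq.merge. The runs here are in-memory lists standing in for sorted files on disk:

```python
import heapq

def external_sort(data, run_size):
    """Sort data larger than 'memory': build sorted runs, then k-way merge them."""
    runs = [sorted(data[i:i + run_size])        # each run fits in memory
            for i in range(0, len(data), run_size)]
    return list(heapq.merge(*runs))             # k-way merge reads each run sequentially

data = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
print(external_sort(data, run_size=4))          # -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```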



End of Part 2

Teh Je Sen (2018)
