
Primitive Operations

Examples:
- Evaluating an expression
- Assigning a value to a variable
- Indexing into an array
- Calling a method
- Returning from a method

Assumed to take a constant amount of time in the RAM model

Stacks

When a method is called, the JVM pushes on the stack a frame containing
- Local variables and return value
- Program counter, keeping track of the statement being executed

When a method ends, its frame is popped from the stack and control is passed to the method on top of the stack

Applications:
- Page-visited history in a Web browser
- Undo sequence in a text editor


- Matching tags in an HTML document (similar to parentheses matching)
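A minimal Java sketch (not from the notes) of stack-based matching, using java.util.ArrayDeque as the stack; the class name and the bracket set are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BracketMatcher {
    // Returns true if every opening bracket is closed by the matching
    // bracket in the correct (LIFO) order.
    static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                       // remember the opener
            } else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty()) return false;   // closer with no opener
                char open = stack.pop();
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[') ||
                    (c == '}' && open != '{')) return false;
            }
        }
        return stack.isEmpty();                      // no unmatched openers left
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("a(b[c]{d})"));  // true
        System.out.println(isBalanced("(]"));          // false
    }
}
```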

Algorithm LinearFibonacci(k):
  if k = 1 then
    return (k, 0)
  else
    (i, j) ← LinearFibonacci(k − 1)
    return (i + j, i)
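A hedged Java translation of the pseudocode above; the pair (F(k), F(k−1)) is represented as a two-element long array, and the class name is an assumption.

```java
public class LinearFibonacci {
    // Returns the pair {F(k), F(k-1)}; a direct translation of the pseudocode above.
    static long[] linearFibonacci(int k) {
        if (k == 1) {
            return new long[] {1, 0};          // (F(1), F(0)) = (1, 0)
        }
        long[] prev = linearFibonacci(k - 1);  // (F(k-1), F(k-2))
        return new long[] {prev[0] + prev[1], prev[0]};
    }

    public static void main(String[] args) {
        long[] p = linearFibonacci(10);
        System.out.println(p[0]);              // F(10) = 55
    }
}
```

Only one recursive call is made per level, so the running time is O(k), unlike the exponential binary recursion.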


**See also: puzzle recursion (pg. 19) and the ruler tick-mark recursion program

Queues
Applications:
- Waiting lists, bureaucracy
- Access to shared resources (e.g., printer)
- Round robin scheduler
- Memory allocation

Array-based queue (circular array Q with front index f, rear index r, capacity N):

Algorithm size():
  return (N − f + r) mod N

Algorithm isEmpty():
  return (f = r)

Algorithm enqueue(o):
  if size() = N − 1 then
    throw FullQueueException
  else
    Q[r] ← o
    r ← (r + 1) mod N

Algorithm dequeue():
  if isEmpty() then
    throw EmptyQueueException
  else
    o ← Q[f]
    f ← (f + 1) mod N
    return o
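A minimal Java sketch of the circular-array queue above; the class name, Object element type, and exception choices are assumptions, not from the notes.

```java
// Circular-array queue: at most N - 1 elements are stored so that
// f = r unambiguously means "empty".
public class ArrayQueue {
    private final Object[] Q;
    private int f = 0;   // index of the front element
    private int r = 0;   // index of the next free cell (rear)
    private final int N; // array capacity

    public ArrayQueue(int capacity) {
        N = capacity;
        Q = new Object[N];
    }

    public int size()        { return (N - f + r) % N; }
    public boolean isEmpty() { return f == r; }

    public void enqueue(Object o) {
        if (size() == N - 1) throw new IllegalStateException("queue is full");
        Q[r] = o;
        r = (r + 1) % N;
    }

    public Object dequeue() {
        if (isEmpty()) throw new IllegalStateException("queue is empty");
        Object o = Q[f];
        f = (f + 1) % N;
        return o;
    }
}
```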

Tree

Depth of a node: number of ancestors

Traversals:
- Preorder: print a structured document
- Postorder: compute space used by files in a directory and its subdirectories

Binary tree associated with an arithmetic expression:
- internal nodes: operators
- external nodes: operands

Binary tree associated with a decision process:
- internal nodes: questions with yes/no answer
- external nodes: decisions

Euler tour traversal: walk around the tree and visit each node three times: on the left (preorder), from below (inorder), on the right (postorder)

Heap-Order Property: key(v) ≥ key(parent(v))
- insert and removeMin take O(log n) time
- Insertion: upheap
- Deletion: downheap
- Bottom-up heap construction runs in O(n) time
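A short Java illustration (not from the notes) of these operations using java.util.PriorityQueue, which is a binary min-heap; offer (insert) and poll (removeMin) run in O(log n) time.

```java
import java.util.PriorityQueue;

public class HeapDemo {
    public static void main(String[] args) {
        // java.util.PriorityQueue is a binary min-heap: the smallest key
        // is always at the root.
        PriorityQueue<Integer> heap = new PriorityQueue<>();
        heap.offer(5);   // insert -> upheap, O(log n)
        heap.offer(2);
        heap.offer(9);
        heap.offer(1);
        System.out.println(heap.poll()); // removeMin -> downheap, prints 1
        System.out.println(heap.poll()); // prints 2
        System.out.println(heap.peek()); // min without removing, prints 5
    }
}
```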

Sorting
**See priority queues for insertion sort / selection sort

Merge-Sort

Algorithm mergeSort(S, C):
  Input: sequence S with n elements, comparator C
  Output: sequence S sorted according to C
  if S.size() > 1 then
    (S1, S2) ← partition(S, n/2)
    mergeSort(S1, C)
    mergeSort(S2, C)
    S ← merge(S1, S2)

The height h of the merge-sort tree is O(log n)

The overall amount of work done at the nodes of depth i is O(n)


At each recursive call we divide the sequence in half; at depth i we partition and merge 2^i sequences of size n/2^i and make 2^(i+1) recursive calls

Thus, the total running time of merge-sort is O(n log n)
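A minimal Java sketch of merge-sort on an int array, mirroring the divide / recur / merge structure of the pseudocode above; the names and the array-based interface are illustrative.

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Recursive merge-sort on a[lo..hi) (half-open range).
    static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo <= 1) return;              // sequences of size <= 1 are sorted
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);                 // sort left half
        mergeSort(a, mid, hi);                 // sort right half
        merge(a, lo, mid, hi);                 // merge the two sorted halves
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 5, 6};
        mergeSort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 5, 6, 9]
    }
}
```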

Quick-Sort
The worst case for quick-sort occurs when the pivot is the unique minimum or maximum element. One of L and G has size n − 1 and the other has size 0. The running time is proportional to the sum n + (n − 1) + … + 2 + 1. Thus, the worst-case running time of quick-sort is O(n^2)

Consider a recursive call of quick-sort on a sequence of size s


- Good call: the sizes of L and G are each less than 3s/4
- Bad call: one of L and G has size greater than 3s/4
- A call is good with probability 1/2, since 1/2 of the possible pivots cause good calls

For a node of depth i, we expect


- i/2 ancestors are good calls
- The size of the input sequence for the current call is at most (3/4)^(i/2) · n
- For a node of depth 2 log_(4/3) n, the expected input size is one
- The expected height of the quick-sort tree is O(log n)

Therefore, we have

The amount of work done at the nodes of the same depth is O(n). Thus, the expected running time of quick-sort is O(n log n).
Algorithm inPlaceQuickSort(S, l, r):
  Input: sequence S, ranks l and r
  Output: sequence S with the elements of rank between l and r rearranged in increasing order
  if l ≥ r then
    return
  i ← a random integer between l and r
  x ← S.elemAtRank(i)
  (h, k) ← inPlacePartition(x)
  inPlaceQuickSort(S, l, h − 1)
  inPlaceQuickSort(S, k + 1, r)
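A hedged Java sketch of randomized in-place quick-sort; it uses a simple two-index (Lomuto-style) partition rather than the notes' three-way inPlacePartition, so it illustrates the idea rather than reproducing the exact algorithm.

```java
import java.util.Arrays;
import java.util.Random;

public class InPlaceQuickSortDemo {
    private static final Random RAND = new Random();

    // Sorts a[l..r] in place with a randomly chosen pivot.
    static void quickSort(int[] a, int l, int r) {
        if (l >= r) return;
        int pivotIndex = l + RAND.nextInt(r - l + 1);   // random pivot
        swap(a, pivotIndex, r);                          // move pivot to the end
        int pivot = a[r];
        int h = l;                                       // boundary of the "< pivot" region
        for (int i = l; i < r; i++) {
            if (a[i] < pivot) swap(a, i, h++);
        }
        swap(a, h, r);                                   // put pivot in its final place
        quickSort(a, l, h - 1);                          // sort elements smaller than pivot
        quickSort(a, h + 1, r);                          // sort elements larger than pivot
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {3, 8, 2, 5, 1, 4, 7, 6};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```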

Bucket-Sort
Analysis:

- Phase 1: Empty sequence S by moving each entry (k, o) into its bucket B[k]
- Phase 2: For i = 0, …, N − 1, move the entries of bucket B[i] to the end of sequence S
- Phase 1 takes O(n) time; Phase 2 takes O(n + N) time

Bucket-sort takes O(n + N) time
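A minimal Java sketch of the two bucket-sort phases for integer keys in [0, N − 1]; it sorts bare keys rather than (key, value) entries, and the names are assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BucketSortDemo {
    static void bucketSort(int[] keys, int N) {
        List<List<Integer>> buckets = new ArrayList<>();
        for (int i = 0; i < N; i++) buckets.add(new ArrayList<>());
        // Phase 1: move each key into its bucket B[k] (O(n))
        for (int k : keys) buckets.get(k).add(k);
        // Phase 2: concatenate buckets B[0..N-1] back into the array (O(n + N))
        int idx = 0;
        for (List<Integer> bucket : buckets)
            for (int k : bucket) keys[idx++] = k;
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 0, 7, 1, 3};
        bucketSort(a, 10);
        System.out.println(Arrays.toString(a)); // [0, 1, 3, 3, 7, 7]
    }
}
```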

Lexicographic-Sort
(x1, x2, …, xd) < (y1, y2, …, yd) ⇔ x1 < y1 ∨ (x1 = y1 ∧ (x2, …, xd) < (y2, …, yd))
Runs in O(d·T(n)) time, where T(n) is the running time of stableSort

Radix-Sort

Specialization of lexicographic-sort that uses bucket-sort as the stable sorting algorithm in each dimension. Applicable to tuples where the keys in each dimension i are integers in the range [0, N − 1]. Runs in O(d(n + N)) time.

Algorithm radixSort(S, N):
  Input: sequence S of d-tuples such that (0, …, 0) ≤ (x1, …, xd) and (x1, …, xd) ≤ (N − 1, …, N − 1) for each tuple (x1, …, xd) in S
  Output: sequence S sorted in lexicographic order
  for i ← d downto 1 do
    bucketSort(S, N)
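A hedged Java sketch of radix-sort on non-negative integers, treating each base-N digit as one dimension and running a stable bucket-sort pass per digit, least significant first; the names, base, and digit count are illustrative.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RadixSortDemo {
    // d passes of stable bucket-sort, one per base-N digit.
    static void radixSort(int[] a, int N, int d) {
        for (int pass = 0, div = 1; pass < d; pass++, div *= N) {
            List<List<Integer>> buckets = new ArrayList<>();
            for (int i = 0; i < N; i++) buckets.add(new ArrayList<>());
            for (int x : a) buckets.get((x / div) % N).add(x);  // stable: arrival order kept
            int idx = 0;
            for (List<Integer> b : buckets)
                for (int x : b) a[idx++] = x;
        }
    }

    public static void main(String[] args) {
        int[] a = {329, 457, 657, 839, 436, 720, 355};
        radixSort(a, 10, 3);                     // 3 decimal digits
        System.out.println(Arrays.toString(a));  // [329, 355, 436, 457, 657, 720, 839]
    }
}
```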

BST removal: when the key k to be removed is stored at a node v whose children are both internal, we find the internal node w that follows v in an inorder traversal, copy the entry of w into v, and then remove w (which has an external child) with the simple removal procedure.

AVL Trees:
n(h): the minimum number of internal nodes of an AVL tree of height h. n(1) = 1 and n(2) = 2

For h > 2, an AVL tree of height h contains the root node, one AVL subtree of height h − 1 and another of height h − 2, so n(h) = 1 + n(h−1) + n(h−2)
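A minimal sketch (the standard argument, not spelled out in these notes) of why this recurrence implies logarithmic height: since n(h−1) > n(h−2),

```latex
n(h) = 1 + n(h-1) + n(h-2) > 2\,n(h-2)
\quad\Rightarrow\quad n(h) > 2^{h/2 - 1}
\quad\Rightarrow\quad h < 2\log_2 n(h) + 2
```

so the height of an AVL tree storing n internal nodes is O(log n), which is what makes searches, insertions and removals O(log n).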

The first step of insertion/deletion is the same as in a BST. Where to rebalance after a removal: the first unbalanced node encountered while travelling up the tree from the parent of the deleted node.

(2,4) Trees:

- Node-Size Property: every internal node has at most 4 children
- Depth Property: all the external nodes have the same depth

For deletion from a non-leaf node: replace the entry with its inorder successor (or, equivalently, with its inorder predecessor) and delete the latter entry. In case of underflow: fusion or transfer. After a fusion, underflow may propagate to the parent; after a transfer, no underflow occurs.

RB Trees: a BST satisfying the following properties:
- Root Property: the root is black
- External Property: every leaf is black
- Internal Property: the children of a red node are black
- Depth Property: all the leaves have the same black depth

Insertion:
  while doubleRed(z)
    if isBlack(sibling(parent(z)))
      z ← restructure(z)
      return
    else  { sibling(parent(z)) is red }
      z ← recolor(z)

Hash Tables
HASH CODES:

Memory address:
- Reinterpret the memory address of the key object as an integer (default hash code of all Java objects)
- Good in general, except for numeric and string keys

Integer cast:
- Reinterpret the bits of the key as an integer
- Suitable for keys of length less than or equal to the number of bits of the integer type (e.g., byte, short, int and float in Java)

Component sum:

- Partition the bits of the key into components of fixed length (e.g., 16 or 32 bits) and sum the components, ignoring overflows
- Suitable for numeric keys of fixed length greater than or equal to the number of bits of the integer type (e.g., long and double in Java)
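A tiny Java sketch (an assumption, not from the notes) of a component-sum hash code for a 64-bit key: split it into two 32-bit halves and add them, ignoring overflow. For comparison, Java's own Long.hashCode combines the two halves with XOR rather than addition.

```java
public class ComponentSumHash {
    // Component-sum hash code for a 64-bit key.
    static int hash(long key) {
        int high = (int) (key >>> 32);   // upper 32 bits
        int low  = (int) key;            // lower 32 bits
        return high + low;               // sum, overflow ignored
    }

    public static void main(String[] args) {
        System.out.println(hash(123456789012345L));
    }
}
```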

Polynomial accumulation:

- Partition the bits of the key into a sequence of components of fixed length (e.g., 8, 16 or 32 bits): a_0, a_1, …, a_(n−1)
- Evaluate the polynomial p(z) = a_0 + a_1 z + a_2 z^2 + … + a_(n−1) z^(n−1) at a fixed value z (e.g., 33), ignoring overflows
- Polynomial p(z) can be evaluated in O(n) time using Horner's rule: the following polynomials are successively computed, each from the previous one in O(1) time:
  p_0(z) = a_(n−1)
  p_i(z) = a_(n−i−1) + z·p_(i−1)(z)   for i = 1, 2, …, n − 1
- We have p(z) = p_(n−1)(z)
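A short Java sketch of polynomial accumulation with Horner's rule; the method name and the choice z = 33 follow the example above. Java's String.hashCode uses the same scheme with z = 31.

```java
public class PolynomialHash {
    // Polynomial hash code of a string evaluated with Horner's rule,
    // ignoring overflow: each step computes h = h * z + next component.
    static int hash(String s, int z) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = h * z + s.charAt(i);
        }
        return h;
    }

    public static void main(String[] args) {
        System.out.println(hash("hash", 33));
    }
}
```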


COMPRESSION FUNCTIONS:

Division:
- h2(y) = y mod N
- The size N of the hash table is usually chosen to be a prime

Multiply, Add and Divide (MAD):
- h2(y) = (ay + b) mod N
- a and b are nonnegative integers such that a mod N ≠ 0

Double hashing uses a secondary hash function d(k) and handles collisions by placing an item in the first available cell of the series (i + j·d(k)) mod N, for j = 0, 1, …, N − 1.
Common choice: d(k) = q − (k mod q), where q < N and q is a prime.
Worst case: searches, insertions and removals take O(n) time (when all the keys inserted into the map collide).
Load factor: α = n/N. Assuming random hash values, the expected number of probes for an insertion with open addressing is 1 / (1 − α).
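A small Java sketch that just prints the double-hashing probe sequence described above; the table size N = 13, the prime q = 11, the key, and the primary hash are illustrative choices.

```java
public class DoubleHashingDemo {
    static final int N = 13;                  // table size (prime)
    static final int Q = 11;                  // prime q < N for the secondary hash

    static int h(int k) { return k % N; }     // primary hash (compression by division)
    static int d(int k) { return Q - k % Q; } // secondary hash, always in 1..Q, never 0

    public static void main(String[] args) {
        int k = 41;
        int i = h(k);
        System.out.print("probe sequence for key " + k + ":");
        // Cells (i + j*d(k)) mod N for j = 0, 1, ..., N-1; since N is prime,
        // the sequence visits every cell exactly once.
        for (int j = 0; j < N; j++) {
            System.out.print(" " + (i + j * d(k)) % N);
        }
        System.out.println();
    }
}
```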

Expected running time of all the dictionary ADT operations in a hash table is O(1)

Applications of hash tables: small databases, compilers, browser caches

Skip List
- Each list Si contains the special keys +∞ and −∞
- Each list is a subsequence of the previous one, i.e., S0 ⊇ S1 ⊇ … ⊇ Sh
- List Sh contains only the two special keys

SEARCH key x:

- Start at the first position of the top list
- At the current position p, let y ← key(next(p))
  - x = y: return element(next(p))
  - x > y: scan forward
  - x < y: drop down

A simple cycle is a cycle formed from three or more distinct vertices in which no vertex is visited more than once along the path (except for the starting and ending vertex). Two different vertices are connected if there is a path between them. A subset of vertices S is said to be a connected component of G if there is a path from each vertex vi to any other distinct vertex vj of S. If S is the largest such subset, then it is called a maximal connected component. Degree of a vertex is the number of edges connected to it.

DFS and BFS run in O(n + m) time provided the graph is represented by the adjacency list structure

Floor: ⌊x⌋ = the largest integer ≤ x

Binary search for key k in a sorted array A:
  do
    mid ← (low + high) / 2
    if A[mid] = k then return mid
    else if A[mid] > k then high ← mid − 1
    else low ← mid + 1
  while low ≤ high

Ceiling: ⌈x⌉ = the smallest integer ≥ x
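A hedged Java variant of the binary search above that returns the index of the floor (the largest element ≤ k) instead of only an exact match; the class and method names are illustrative.

```java
public class BinarySearchFloor {
    // Returns the index of the largest element <= k in the sorted array A,
    // or -1 if every element is greater than k.
    static int floorIndex(int[] A, int k) {
        int low = 0, high = A.length - 1, ans = -1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (A[mid] <= k) {           // A[mid] is a candidate floor
                ans = mid;
                low = mid + 1;           // look for a larger candidate on the right
            } else {
                high = mid - 1;          // everything from mid upward is too big
            }
        }
        return ans;
    }

    public static void main(String[] args) {
        int[] A = {2, 4, 4, 7, 9};
        System.out.println(floorIndex(A, 5));  // 2 (A[2] = 4)
        System.out.println(floorIndex(A, 1));  // -1 (no element <= 1)
    }
}
```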
