Stack
push(object)
pop()
top()
size()
isEmpty()
Array
Use an int variable t to keep track of the index of the top element.
push(object) stores the input at array[t+1] and increments t (1)
pop() removes and returns the object at array[t], decrementing t (1)
top() returns the object at array[t] (1)
size() returns t+1 (1)
isEmpty() returns true if t is -1 (1)
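A minimal Python sketch of the array-based stack above (class name and fixed capacity are illustrative):

```python
class ArrayStack:
    """Array-based stack; t is the index of the top element (-1 when empty)."""

    def __init__(self, capacity=16):
        self._data = [None] * capacity
        self._t = -1  # index of the top element

    def push(self, obj):
        if self._t + 1 == len(self._data):
            raise IndexError("stack is full")
        self._t += 1
        self._data[self._t] = obj

    def pop(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        obj = self._data[self._t]
        self._data[self._t] = None  # help garbage collection
        self._t -= 1
        return obj

    def top(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        return self._data[self._t]

    def size(self):
        return self._t + 1

    def is_empty(self):
        return self._t == -1
```

All five operations run in O(1).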
Linked List
Use a node variable top to point to our last node.
push(object) wraps the input in a new node, makes that node point to the current top node, then
makes the top variable point to the new node. (1)
pop() saves the top node's object in a temporary variable, makes the top variable point to the top
node's next node, then returns the saved object. (1)
top() returns the top node's object (1)
size() returns a size counter that is incremented on push and decremented on pop (1)
isEmpty() returns true if top is null (1)
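The same operations with a linked list, sketched in Python (names are illustrative; a counter keeps size() at O(1)):

```python
class _Node:
    def __init__(self, element, nxt):
        self.element = element
        self.next = nxt

class LinkedStack:
    """Linked-list stack; top points to the most recently pushed node."""

    def __init__(self):
        self._top = None
        self._size = 0  # counter so size() stays O(1)

    def push(self, obj):
        # new node points to the current top, then becomes the top
        self._top = _Node(obj, self._top)
        self._size += 1

    def pop(self):
        if self._top is None:
            raise IndexError("stack is empty")
        obj = self._top.element
        self._top = self._top.next
        self._size -= 1
        return obj

    def top(self):
        if self._top is None:
            raise IndexError("stack is empty")
        return self._top.element

    def size(self):
        return self._size

    def is_empty(self):
        return self._top is None
```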
Queue
enqueue(object)
dequeue()
front()
size()
isEmpty()
Array
Use two int variables f and r to track the front and rear. The rear index is the next empty entry.
enqueue(object) inserts the input into a[r] then r = (r+1)%n (1)
dequeue() removes the object at a[f], sets f = (f+1)%n, and returns the object (1)
front() returns the object at a[f] (1)
size() returns (r-f+n)%n (1)
isEmpty() returns true if f == r (1)
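A circular-array sketch in Python (one slot is kept unused so that a full queue is distinguishable from an empty one, since f == r would otherwise mean both):

```python
class ArrayQueue:
    """Circular array queue; f is the front index, r the next empty slot."""

    def __init__(self, capacity=8):
        self._data = [None] * capacity
        self._f = 0
        self._r = 0

    def enqueue(self, obj):
        n = len(self._data)
        if (self._r + 1) % n == self._f:
            raise IndexError("queue is full")
        self._data[self._r] = obj
        self._r = (self._r + 1) % n

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        obj = self._data[self._f]
        self._data[self._f] = None
        self._f = (self._f + 1) % len(self._data)
        return obj

    def front(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        return self._data[self._f]

    def size(self):
        n = len(self._data)
        return (self._r - self._f + n) % n

    def is_empty(self):
        return self._f == self._r
```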
Linked List
Use two node variables front and rear to store the entry and exit positions, and also int variables f
and r for the size calculations.
enqueue(object) inserts the object at rear and increments r by 1 (1)
dequeue() removes the object at front, returns it, and increments f by 1 (1)
front() returns object at front (1)
size() returns (r-f+n)%n (1)
isEmpty() returns true if f == r (1)
Double-Ended Queues
Supports insertion and deletion from both the front and rear.
addFirst(e) (1)
removeFirst() (1)
getFirst() (1)
addLast(e) (1)
removeLast() (1)
getLast() (1)
size() (1)
isEmpty() (1)
Array
get(i) (1)
set(i,e) (1)
add(i,e) (n)
remove(i) (n)
size() (1)
isEmpty() (1)
If the array is full when add(i,e) is called, you can replace the array with a bigger one.
Incremental strategy increases size by a constant c
Doubling strategy doubles the size
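The difference between the two growth strategies shows up when counting how many elements get copied in total over n appends; a rough Python sketch (function name and starting capacity are illustrative):

```python
def grow_cost(n, strategy, c=4):
    """Total elements copied while appending n items into an array that
    starts at capacity c and grows by the given strategy when full."""
    capacity, size, copied = c, 0, 0
    for _ in range(n):
        if size == capacity:
            copied += size  # every current element is copied to the new array
            capacity = capacity + c if strategy == "incremental" else capacity * 2
        size += 1
    return copied
```

With the incremental strategy the total copying work is O(n²); with doubling it is O(n), i.e. O(1) amortized per append.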
Linked List
set(p,e) (1)
addFirst(e) (1)
addLast(e) (1)
addBefore(p,e) (1)
addAfter(p,e) (1)
getFirst() (1)
getLast() (1)
remove(p) (1)
size() (n)
isEmpty() (1)
first()
last()
prev(p)
next(p)
set(p,e)
addBefore(p,e)
addAfter(p,e)
addFirst(e)
addLast(e)
remove(p)
indexOf(p) returns index of element at position p
size()
isEmpty()
Array
get(i) (1)
set(i,e) (1)
add(i,e) (n)
remove(i) (n)
atIndex(i) (1)
first() (1)
last() (1)
prev(p) (1)
next(p) (1)
set(p,e) (1)
addBefore(p,e) (n)
addAfter(p,e) (n)
addFirst(e) (n)
addLast(e) (1)
remove(p) (n)
indexOf(p) (1)
size() (1)
isEmpty() (1)
Linked List
get(i) (n)
set(i,e) (n)
add(i,e) (n)
remove(i) (n)
atIndex(i) (n)
first() (1)
last() (1)
prev(p) (1)
next(p) (1)
set(p,e) (1)
addBefore(p,e) (1)
addAfter(p,e) (1)
addFirst(e) (1)
addLast(e) (1)
remove(p) (1)
indexOf(p) (n)
size() (1)
isEmpty() (1)
Favorites List
access(e) returns the element and increments its access count
remove(e) removes the element
top(k) returns the k top elements in terms of access count
8 - Trees
General Tree
Terminology
Root: no parent
Internal node: at least one child
External node (leaf): no children
Ancestor: all parents, grandparents, ...
depth: number of ancestors
height: maximum depth
descendant: child, grandchild, ...
path: sequence of nodes between two nodes without cycle
subtree: tree consisting of one node and descendants
Ordered Tree
A tree is ordered if there is a linear ordering defined among the children of each node (i.e. first child, second child, ...)
ADT
element() returns object at position (1)
size() returns number of nodes (int) (1)
isEmpty() returns size()==0 (boolean) (1)
iterator() returns an iterator of all elements (n)
Preorder
Algorithm preOrder(v)
    visit(v)
    for each child w of v
        preOrder(w)
Postorder
Algorithm postOrder(v)
    for each child w of v
        postOrder(w)
    visit(v)
Binary Tree
Binary trees have at most two children, a left and a right child.
In a proper binary tree, every node has either two children or none.
ADT methods
left(p) returns position of left child
right(p) returns position of right child
hasLeft(p) returns left(p)!=null
hasRight(p) return right(p)!=null
Inorder
Algorithm inOrder(v)
    if hasLeft(v)
        inOrder(left(v))
    visit(v)
    if hasRight(v)
        inOrder(right(v))
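The three traversals can be sketched in Python for a binary tree (a null child is represented by None; names are illustrative):

```python
class Node:
    def __init__(self, element, left=None, right=None):
        self.element = element
        self.left = left
        self.right = right

def pre_order(v, visit):
    """Visit the node, then its left subtree, then its right subtree."""
    if v is None:
        return
    visit(v.element)
    pre_order(v.left, visit)
    pre_order(v.right, visit)

def in_order(v, visit):
    """Visit the left subtree, then the node, then the right subtree."""
    if v is None:
        return
    in_order(v.left, visit)
    visit(v.element)
    in_order(v.right, visit)

def post_order(v, visit):
    """Visit both subtrees, then the node."""
    if v is None:
        return
    post_order(v.left, visit)
    post_order(v.right, visit)
    visit(v.element)
```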
Linked List
Node contains element variable, parent node, child(ren) node(s)
size (1)
isEmpty (1)
iterator (n)
positions (n)
replace (1)
root (1)
parent (1)
left (1)
right (1)
sibling (1)
children (1)
hasLeft (1)
hasRight (1)
isInternal (1)
isExternal (1)
isRoot (1)
remove (1)
insertLeft (1)
insertRight (1)
attach (1)
Array
Arrays contain elements in an index based order:
index 1 is root
index 2*p is left child
index 2*p+1 is right child
size (1)
isEmpty (1)
iterator (n)
positions (n)
replace (1)
root (1)
parent (1)
left (1)
right (1)
sibling (1)
children (1)
hasLeft (1)
hasRight (1)
isInternal (1)
isExternal (1)
isRoot (1)
Priority Queue
insert(k,v)
removeMin()
min()
size()
isEmpty()
The Priority Queue contains an entry ADT with a key and a value and the following methods
getKey()
getValue()
compare(a,b) which returns an integer i such that i<0 if a<b, i=0 if a=b and i>0 if a>b
Sorting
We can sort using a priority queue by inserting all the elements of a sequence into a priority queue
and then reinserting those elements in order in a sequence with a number of removeMin() methods.
Doing so depends on our implementation; two different sorts may occur.
Selection Sort (if the sequence is kept unsorted, the sorting work happens during removeMin())
Algorithm SelectSort(A)
    for i=0 to n-1 do
        minInd = i
        for j=i+1 to n-1 do
            if A[j] <= A[minInd]
                minInd = j
            end if
        end for
        temp = A[i]
        A[i] = A[minInd]
        A[minInd] = temp
    end for
Best case = Worst case = O(n²)
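The pseudocode above, made runnable in Python:

```python
def selection_sort(a):
    """In-place selection sort: repeatedly select the minimum of the
    unsorted suffix and swap it into position i."""
    n = len(a)
    for i in range(n - 1):
        min_ind = i
        for j in range(i + 1, n):
            if a[j] < a[min_ind]:
                min_ind = j
        a[i], a[min_ind] = a[min_ind], a[i]
    return a
```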
Unsorted List
size(), isEmpty() (1)
insert(e) (1)
min() (n)
removeMin() (n)
remove(e) (1)
replaceKey(e,k) (1)
replaceValue(e,v) (1)
Sorted List
size(), isEmpty() (1)
insert(e) (n)
min() (1)
removeMin() (1)
remove(e) (1)
replaceKey(e,k) (n)
replaceValue(e,v) (1)
Heap (when you need better time than O(n²) for sorting algorithms)
Insertions and Removals are done in logarithmic time.
Insertion
You insert to the right of your last node. Then you perform the upheap to restore the heap order.
Upheap compares the node with its parent and swaps them if the parent key is bigger than the
node key.
Algorithm Upheap(n)
    while n.parent != null && n.parent.key > n.key
        swap n and n.parent
    end while
Best Case O(1), Worst Case O(log₂(n))
Removal
Removal from a heap (removeMin()) returns the root and replaces it by the last node. It then performs Downheap to restore heap-order.
Algorithm Downheap(n)
    while n has a child whose key is smaller than n.key
        swap n and its smallest child
    end while
Best Case O(1), Worst Case O(log₂(n))
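Insertion with upheap and removal with downheap can be sketched as an array-backed min-heap in Python (here the root sits at index 0, so the children of i are 2i+1 and 2i+2; names are illustrative):

```python
class MinHeap:
    """Array-backed binary min-heap."""

    def __init__(self):
        self._a = []

    def insert(self, key):
        self._a.append(key)          # add as the new last node
        self._upheap(len(self._a) - 1)

    def min(self):
        return self._a[0]

    def remove_min(self):
        a = self._a
        a[0], a[-1] = a[-1], a[0]    # replace the root by the last node
        smallest = a.pop()
        self._downheap(0)
        return smallest

    def _upheap(self, i):
        a = self._a
        while i > 0 and a[(i - 1) // 2] > a[i]:
            a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
            i = (i - 1) // 2

    def _downheap(self, i):
        a, n = self._a, len(self._a)
        while 2 * i + 1 < n:
            c = 2 * i + 1                       # left child
            if c + 1 < n and a[c + 1] < a[c]:   # right child is smaller
                c += 1
            if a[i] <= a[c]:
                break
            a[i], a[c] = a[c], a[i]
            i = c
```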
Heap-Sort
It is performed by doing the operations of a priority queue sort but with a heap, reducing the worst
case to O(n log₂(n))
Array
It is the same as array implementation of a binary tree.
Heap
size(), isEmpty() (1)
insert(e) (log n)
min() (1)
removeMin() (log n)
remove(e) (log n)
replaceKey(e,k) (log n)
replaceValue(e,v) (1)
10 - Map
Map
get(k) returns value of entry with key k or null (n)
put(k,v) enters value v at entry with key k return old value (n)
remove(k) remove element with key k from map and return value (n)
entrySet() return iterable collection of all key-value entries in map (n)
keySet() return iterable collection of all keys in map (n)
values() return iterable collection of all values in map (n)
size() return number of entries in map (1)
isEmpty() return size()==0 (1)
Ordered Map
firstEntry() smallest key or null (1)
lastEntry() largest key or null (1)
floorEntry(k) largest key <= k or null (log n)
ceilingEntry(k) smallest key >= k or null (log n)
Binary Search
to perform get, floorEntry and ceilingEntry we use the binary search algorithm (log n)
Array
get, floorEntry and ceilingEntry take O(log n) with binary search
put and remove take O(n) because they require us to shift (at worst) n/2 entries
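A sketch of floorEntry/ceilingEntry on a sorted key array, using Python's stdlib binary search (`bisect`); function names are illustrative:

```python
import bisect

def floor_entry(keys, k):
    """Largest key <= k in a sorted array, or None -- O(log n)."""
    i = bisect.bisect_right(keys, k)   # first index with keys[i] > k
    return keys[i - 1] if i > 0 else None

def ceiling_entry(keys, k):
    """Smallest key >= k in a sorted array, or None -- O(log n)."""
    i = bisect.bisect_left(keys, k)    # first index with keys[i] >= k
    return keys[i] if i < len(keys) else None
```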
Hash function
A typical hash function converts the key to a positive integer called the hash code and then
compresses it to fit the range of the hash table with a modulo N.
If your key is a string, take the Unicode integer of each character, multiply by a factor, and sum
them. To do that, start with hash = 0, then for each character: hash = g * hash + s.charAt(i)
If your key is a polynomial, use a fixed value of x.
Compression functions
One of them is MAD (multiply-add-and-divide):
(ay+b) mod N, or better [(ay+b) mod p] mod N where p is a prime number
Prime numbers make good values of N.
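Both steps sketched in Python (the constants g, a, b, p below are illustrative choices, not prescribed values):

```python
def hash_code(s, g=31):
    """Polynomial hash code of a string via Horner's rule:
    hash = g*hash + code(char) for each character."""
    h = 0
    for ch in s:
        h = g * h + ord(ch)
    return h

def mad_compress(y, N, a=3, b=7, p=109345121):
    """MAD compression [(a*y + b) mod p] mod N, with p a prime > N."""
    return ((a * y + b) % p) % N

def hash_index(s, N):
    """Full pipeline: hash code, then compression into [0, N)."""
    return mad_compress(hash_code(s), N)
```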
Collisions
You can either place a value in another location or make it so that one index can hold more than
one entry.
Linear Probing
If the entry at hashTable[k] is already taken, try k+1, k+2 and so on. We need a special AVAILABLE
marker to flag entries where an item was removed, so that probe chains are not broken.
Quadratic Probing
If the entry at hashTable[k] is already taken, try k + 1², k + 2² and so on. As with linear probing, we need an AVAILABLE marker.
Double Hashing
If the first hashing results in a table entry that is already taken, use a second (different) hashing
function to compute the increment. Thus we try h1(k), then h1(k) + h2(k), then h1(k) + 2·h2(k), and so on.
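Linear probing with the AVAILABLE marker can be sketched as a small hash set in Python (names, table size, and the use of Python's built-in hash() are illustrative):

```python
AVAILABLE = object()  # marker for a slot whose item was removed

class LinearProbingSet:
    """Hash set with linear probing: on a collision at index i,
    try i+1, i+2, ... (mod N)."""

    def __init__(self, N=11):
        self._slots = [None] * N

    def _probe(self, key):
        N = len(self._slots)
        i = hash(key) % N
        for step in range(N):
            yield (i + step) % N

    def add(self, key):
        first_free = None  # first AVAILABLE slot seen, reusable for insertion
        for j in self._probe(key):
            slot = self._slots[j]
            if slot is None:
                self._slots[first_free if first_free is not None else j] = key
                return
            if slot is AVAILABLE:
                if first_free is None:
                    first_free = j
            elif slot == key:
                return  # already present
        if first_free is not None:
            self._slots[first_free] = key
        else:
            raise IndexError("table is full")

    def contains(self, key):
        for j in self._probe(key):
            slot = self._slots[j]
            if slot is None:
                return False  # an empty slot ends the probe chain
            if slot is not AVAILABLE and slot == key:
                return True
        return False

    def remove(self, key):
        for j in self._probe(key):
            slot = self._slots[j]
            if slot is None:
                return
            if slot is not AVAILABLE and slot == key:
                self._slots[j] = AVAILABLE  # keep the probe chain intact
                return
```

Marking removed slots AVAILABLE rather than None is what keeps lookups correct: a later key that probed past the removed slot can still be found.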
12 - Sorts
Other Sort
selection-sort (slow, in-place, <1K) O(n²)
insertion-sort (slow, in-place, <1K) O(n²)
heap-sort (fast, in-place, 1K-1M) O(n log n)
Merge Sort (fast, sequential data access for huge data sets (> 1M data))
Merge sort divides, conquers, and combines. It recursively splits the array into two subarrays until
only one or two elements are left, sorts those, and then merges the sorted subarrays back together
until the original array is sorted. O(n log n)
Bucket Sort
We place the items into an array of buckets in linear time O(n). We then collect them back into a
regular array in key order, O(n + N). Thus the running time is O(n + N).
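A sketch assuming the keys are integers in [0, N):

```python
def bucket_sort(values, N):
    """Sort integer keys in [0, N) -- O(n + N)."""
    buckets = [[] for _ in range(N)]  # one bucket per possible key
    for v in values:                  # distribute: O(n)
        buckets[v].append(v)
    out = []
    for b in buckets:                 # collect in key order: O(n + N)
        out.extend(b)
    return out
```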
Radix Sort
We perform the bucket sort once per component of the d-tuples (if we sort by d variables):
O(d(n + N))
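A least-significant-digit sketch: one stable bucket pass per tuple component, starting from the last component (assumes all components lie in [0, N)):

```python
def radix_sort(tuples, N):
    """LSD radix sort on d-tuples with components in [0, N) -- O(d(n + N))."""
    if not tuples:
        return tuples
    d = len(tuples[0])
    for pos in range(d - 1, -1, -1):     # d bucket-sort passes
        buckets = [[] for _ in range(N)]
        for t in tuples:
            buckets[t[pos]].append(t)    # stable distribution by component pos
        tuples = [t for b in buckets for t in b]
    return tuples
```

Stability of each pass is what makes the overall order correct: ties in a later pass preserve the order established by earlier passes.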
14 - Graphs
Graph - a set of vertices and edges
Directed - edges have a direction
In-degree: number of edges going to the vertex
Out-degree: number of edges going out the vertex
Strongly Connected: there is a directed path between every pair of vertices in the graph
Weakly Connected: there is an undirected path between every pair of vertices in the graph
Complete: there is an edge between every pair of vertices in the graph
Undirected - edges represent both directions
Degree of a vertex: number of edges containing that vertex
Connected: if for every pair of vertices there is a path from one to the other
Complete: if for every pair of vertices there is an edge from one to the other
Self-edge: an edge that links a vertex to itself
Path: a set of vertices such that every adjacent pair of vertices are connected with an edge in the
graph
Simple Path: repeats no vertices, except possibly when closing a cycle
Cycle: a path where the first vertex is the same as the last vertex
Simple Cycle: both a cycle and a simple path
Path Length: number of edges in a path
Path Cost: sum of weights in a path
Trees: an undirected, acyclic and connected graph
Direct Acyclic Graph (DAG): directed graph with no directed cycles
If the number of edges is O(V²) the graph is dense
If the number of edges is O(V) the graph is sparse
Graphs Traversal
Depth-First
Mark vertices as visited in a preorder fashion.
Breadth-First
You start with the first, then its neighbors, then their neighbors, etc. (Level-order)
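Both traversals sketched in Python over an adjacency-dict graph (the dict representation is illustrative):

```python
from collections import deque

def dfs(graph, start, visited=None):
    """Preorder depth-first traversal: mark each vertex, then recurse
    into its unvisited neighbors."""
    if visited is None:
        visited = []
    visited.append(start)
    for w in graph[start]:
        if w not in visited:
            dfs(graph, w, visited)
    return visited

def bfs(graph, start):
    """Level-order breadth-first traversal: the start, then its
    neighbors, then their neighbors, and so on."""
    visited, queue = [start], deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in visited:
                visited.append(w)
                queue.append(w)
    return visited
```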
Transitive Closure
Strong Connectivity: each vertex can reach all other vertices
Transitive Closure of a graph: a graph with all the vertices and edges of the original, plus extra
edges: if there is a directed path from u to v, then there is an edge from u to v.
Topological Sort
A list of the nodes of a DAG in which, if there is a path from u to v, then u comes before v. To
achieve this we use the postorder variant of depth-first search: recursively visit all outgoing edges
until there are none left, append the vertex, and finally reverse the resulting list. If we have not
visited all the vertices, select an unvisited vertex and run DFS again. This differs from preorder DFS
because a vertex's successors may be reached through lower-level links.
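The postorder-then-reverse idea, sketched in Python (assumes the input really is a DAG; no cycle detection):

```python
def topological_sort(graph):
    """Topological order of a DAG via postorder DFS, reversed."""
    order, visited = [], set()

    def visit(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                visit(w)
        order.append(v)  # v is appended only after all its successors

    for v in graph:      # restart DFS from any still-unvisited vertex
        if v not in visited:
            visit(v)
    order.reverse()
    return order
```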
Shortest Path