Big-O Cheat Sheet


Know Thy Complexities!
Hi there! This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science. When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them. Over the last few years, I've interviewed at several Silicon Valley startups, and also some bigger companies, like Yahoo, eBay, LinkedIn, and Google, and each time that I prepared for an interview, I thought to myself "Why oh why hasn't someone created a nice Big-O cheat sheet?". So, to save all of you fine folks a ton of time, I went ahead and created one. Enjoy!

Searching

  Depth First Search (DFS)
      Data structure:  Graph of |V| vertices and |E| edges
      Time:            average -, worst O(|E| + |V|)
      Space (worst):   O(|V|)

  Breadth First Search (BFS)
      Data structure:  Graph of |V| vertices and |E| edges
      Time:            average -, worst O(|E| + |V|)
      Space (worst):   O(|V|)

  Binary search
      Data structure:  Sorted array of n elements
      Time:            average O(log(n)), worst O(log(n))
      Space (worst):   O(1)

  Linear (Brute Force)
      Data structure:  Array
      Time:            average O(n), worst O(n)
      Space (worst):   O(1)

  Shortest path by Dijkstra, using a Min-heap as priority queue
      Data structure:  Graph with |V| vertices and |E| edges
      Time:            average O((|V| + |E|) log |V|), worst O((|V| + |E|) log |V|)
      Space (worst):   O(|V|)

  Shortest path by Dijkstra, using an unsorted array as priority queue
      Data structure:  Graph with |V| vertices and |E| edges
      Time:            average O(|V|^2), worst O(|V|^2)
      Space (worst):   O(|V|)

  Shortest path by Bellman-Ford
      Data structure:  Graph with |V| vertices and |E| edges
      Time:            average O(|V||E|), worst O(|V||E|)
      Space (worst):   O(|V|)
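The O(log(n)) bound for binary search comes from halving the search interval on every comparison. A minimal sketch (illustrative code, not part of the original tables):

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each iteration halves the interval, so time is O(log(n));
    only two index variables are kept, so space is O(1).
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2              # inspect the middle element
        if sorted_items[mid] == target:
            return mid                    # found: return its index
        if sorted_items[mid] < target:
            lo = mid + 1                  # discard the left half
        else:
            hi = mid - 1                  # discard the right half
    return -1                             # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # → 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # → -1
```

Note that the O(log(n)) bound holds only because the input array is already sorted, as the table's "Data structure" column indicates.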

Sorting
  Quicksort
      Data structure:  Array
      Time:            best O(n log(n)), average O(n log(n)), worst O(n^2)
      Worst-case auxiliary space:  O(n)

  Mergesort
      Data structure:  Array
      Time:            best O(n log(n)), average O(n log(n)), worst O(n log(n))
      Worst-case auxiliary space:  O(n)

  Heapsort
      Data structure:  Array
      Time:            best O(n log(n)), average O(n log(n)), worst O(n log(n))
      Worst-case auxiliary space:  O(1)

  Bubble Sort
      Data structure:  Array
      Time:            best O(n), average O(n^2), worst O(n^2)
      Worst-case auxiliary space:  O(1)

  Insertion Sort
      Data structure:  Array
      Time:            best O(n), average O(n^2), worst O(n^2)
      Worst-case auxiliary space:  O(1)

  Select Sort
      Data structure:  Array
      Time:            best O(n^2), average O(n^2), worst O(n^2)
      Worst-case auxiliary space:  O(1)

  Bucket Sort
      Data structure:  Array
      Time:            best O(n+k), average O(n+k), worst O(n^2)
      Worst-case auxiliary space:  O(nk)

  Radix Sort
      Data structure:  Array
      Time:            best O(nk), average O(nk), worst O(nk)
      Worst-case auxiliary space:  O(n+k)

Data Structures

  Entries are given as average / worst; space complexity is worst case.

  Basic Array:         Indexing O(1) / O(1);  Search O(n) / O(n);  Insertion -;  Deletion -;  Space O(n)
  Dynamic Array:       Indexing O(1) / O(1);  Search O(n) / O(n);  Insertion O(n) / O(n);  Deletion O(n) / O(n);  Space O(n)
  Singly-Linked List:  Indexing O(n) / O(n);  Search O(n) / O(n);  Insertion O(1) / O(1);  Deletion O(1) / O(1);  Space O(n)
  Doubly-Linked List:  Indexing O(n) / O(n);  Search O(n) / O(n);  Insertion O(1) / O(1);  Deletion O(1) / O(1);  Space O(n)
  Skip List:           Indexing O(log(n)) / O(n);  Search O(log(n)) / O(n);  Insertion O(log(n)) / O(n);  Deletion O(log(n)) / O(n);  Space O(n log(n))
  Hash Table:          Indexing -;  Search O(1) / O(n);  Insertion O(1) / O(n);  Deletion O(1) / O(n);  Space O(n)
  Binary Search Tree:  Indexing O(log(n)) / O(n);  Search O(log(n)) / O(n);  Insertion O(log(n)) / O(n);  Deletion O(log(n)) / O(n);  Space O(n)
  Cartesian Tree:      Indexing -;  Search O(log(n)) / O(n);  Insertion O(log(n)) / O(n);  Deletion O(log(n)) / O(n);  Space O(n)
  B-Tree:              Indexing O(log(n)) / O(log(n));  Search O(log(n)) / O(log(n));  Insertion O(log(n)) / O(log(n));  Deletion O(log(n)) / O(log(n));  Space O(n)
  Red-Black Tree:      Indexing O(log(n)) / O(log(n));  Search O(log(n)) / O(log(n));  Insertion O(log(n)) / O(log(n));  Deletion O(log(n)) / O(log(n));  Space O(n)
  Splay Tree:          Indexing -;  Search O(log(n)) / O(log(n));  Insertion O(log(n)) / O(log(n));  Deletion O(log(n)) / O(log(n));  Space O(n)
  AVL Tree:            Indexing O(log(n)) / O(log(n));  Search O(log(n)) / O(log(n));  Insertion O(log(n)) / O(log(n));  Deletion O(log(n)) / O(log(n));  Space O(n)
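To make the Hash Table row concrete: Python's built-in dict is one hash-table implementation, so its operations follow the average/worst bounds listed above. A small illustrative sketch (the variable names are my own):

```python
# A dict is a hash table: search, insertion, and deletion are O(1) on
# average, degrading to O(n) in the worst case (e.g. heavy key
# collisions), matching the Hash Table row above.
ages = {}
ages["ada"] = 36          # O(1) average insertion
ages["alan"] = 41         # O(1) average insertion
print("ada" in ages)      # O(1) average search → True
del ages["alan"]          # O(1) average deletion
print(ages)               # → {'ada': 36}
```

The trade-off against a sorted array is visible in the table: the hash table wins on search/insert/delete but gives up ordered indexing entirely (the "-" in its Indexing column).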

Heaps

  Time complexities; entries marked * are amortized.

  Linked List (sorted):    Heapify -;  Find Max O(1);  Extract Max O(1);  Increase Key O(n);  Insert O(n);  Delete O(1);  Merge O(m+n)
  Linked List (unsorted):  Heapify -;  Find Max O(n);  Extract Max O(n);  Increase Key O(1);  Insert O(1);  Delete O(1);  Merge O(1)
  Binary Heap:             Heapify O(n);  Find Max O(1);  Extract Max O(log(n));  Increase Key O(log(n));  Insert O(log(n));  Delete O(log(n));  Merge O(m+n)
  Binomial Heap:           Heapify -;  Find Max O(log(n));  Extract Max O(log(n));  Increase Key O(log(n));  Insert O(log(n));  Delete O(log(n));  Merge O(log(n))
  Fibonacci Heap:          Heapify -;  Find Max O(1);  Extract Max O(log(n))*;  Increase Key O(1)*;  Insert O(1);  Delete O(log(n))*;  Merge O(1)

Graphs

  Node / Edge Management

  Adjacency list:    Storage O(|V|+|E|);  Add Vertex O(1);  Add Edge O(1);  Remove Vertex O(|V|+|E|);  Remove Edge O(|E|);  Query O(|V|)
  Incidence list:    Storage O(|V|+|E|);  Add Vertex O(1);  Add Edge O(1);  Remove Vertex O(|E|);  Remove Edge O(|E|);  Query O(|E|)
  Adjacency matrix:  Storage O(|V|^2);  Add Vertex O(|V|^2);  Add Edge O(1);  Remove Vertex O(|V|^2);  Remove Edge O(1);  Query O(1)
  Incidence matrix:  Storage O(|V| ⋅ |E|);  Add Vertex O(|V| ⋅ |E|);  Add Edge O(|V| ⋅ |E|);  Remove Vertex O(|V| ⋅ |E|);  Remove Edge O(|V| ⋅ |E|);  Query O(|E|)

Notation for asymptotic growth

  Θ (theta):        upper and lower bound, tight [1];  growth: equal [2]
  O (big-oh):       upper bound, tightness unknown;    growth: less than or equal [3]
  o (small-oh):     upper bound, not tight;            growth: less than
  Ω (big omega):    lower bound, tightness unknown;    growth: greater than or equal
  ω (small omega):  lower bound, not tight;            growth: greater than

  [1] Big O is the upper bound, while Omega is the lower bound. Theta requires both Big O and Omega, which is why it's referred to as a tight bound (it must be both the upper and lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time but has no upper limit. An algorithm taking Theta(n log n) is far preferable, since it takes at least n log n (Omega n log n) and no more than n log n (Big O n log n). [SO]
  [2] f(x) = Θ(g(n)) means f (the running time of the algorithm) grows exactly like g when n (the input size) gets larger. In other words, the growth rate of f(x) is asymptotically proportional to g(n).
  [3] Same thing, but here the growth rate is no faster than g(n). Big-oh is the most useful notation because it represents the worst-case behavior.
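The Binary Heap row can be demonstrated with Python's standard-library heapq, which implements a binary min-heap (the table is phrased for a max-heap, but the bounds are symmetric). An illustrative sketch:

```python
import heapq

# heapq maintains a binary min-heap inside a plain list: push and pop
# are O(log(n)), and peeking at the minimum (heap[0]) is O(1),
# matching the Binary Heap row above.
heap = []
for value in [7, 2, 9, 4]:
    heapq.heappush(heap, value)   # O(log(n)) insert

smallest = heapq.heappop(heap)    # O(log(n)) extract-min
print(smallest)                   # → 2
print(heap[0])                    # O(1) find-min of the rest → 4
```

Building the heap in one shot with `heapq.heapify(list)` is O(n), which is why the table lists Heapify as O(n) rather than the naive O(n log(n)) of n repeated inserts.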

Big-O Complexity Chart

This interactive chart, created by our friends over at MeteorCharts, shows the number of operations (y axis) required to obtain a result as the number of elements (x axis) increases. O(n!) is the worst complexity, requiring 720 operations for just 6 elements, while O(1) is the best complexity, requiring only a constant number of operations for any number of elements.

In short, if an algorithm is __ then its performance is __:

  o(n):  < n
  O(n):  ≤ n
  Θ(n):  = n
  Ω(n):  ≥ n
  ω(n):  > n
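The chart's growth rates can be tabulated directly; for example, 6! = 720 matches the "720 operations for just 6 elements" figure above. A small illustrative script (the choice of sample sizes and column layout is my own):

```python
import math

# Rough operation counts for common growth rates at a few input sizes.
# Note how O(n!) explodes: 6 elements already imply 720 operations,
# while the O(1) column stays flat.
print("n    O(1)  O(log n)  O(n)  O(n^2)  O(n!)")
for n in (2, 6, 10):
    print(n, 1, round(math.log2(n), 2), n, n * n, math.factorial(n))
```

Running it makes the chart's ordering concrete: by n = 10 the factorial column is already in the millions while the logarithmic column has barely moved.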

Contributors

Edit these tables!

  1. Eric Rowell
  2. Quentin Pleple
  3. Nick Dizazzo
  4. Michael Abed
  5. Adam Forsyth

Comments

  Oleksandr
  Big O (Omicron) is the worst-case scenario: it is the upper bound for the algorithm; it will always take, at most, n time. Omega is the lower bound, and Theta is the upper and lower bound together. The upper bound is the most beneficial piece of information to have about an algorithm, but unfortunately it is usually very hard to find. By analogy: you would rather have Big O than Omega, because Omega is like saying "it will take more than five dollars to get to N.Y." while Big O says "it will always take, at most, 135 dollars to get to New York." The first bit of information is essentially useless; the second gives you the constraint. You can usually find the average of an algorithm's efficiency by testing it in average cases and worst cases together; simply put, this is a computational exercise to extract the empirical data. For instance, in a linear search algorithm, the worst case is when the list is completely out of order, e.g. sorted but backwards. There is another problem I do not like: the color scheme is sometimes wrong. O(n) is better than O(log(n))? In what way? 1024 vs 10 increments that a sort algorithm has to perform, for instance? All in all this is good information, but in its current state it honestly needs to be taken with a grain of salt and fact-checked against a good algorithm book. This is IMHO, so if I'm off base or incorrect then feel free to flame me :)

  Luis (in reply to Oleksandr) • 2 months ago
  @Oleksandr You are confused. Big O and related concepts are used to bound the order (linear, exponential, etc.) of a function that describes how an algorithm grows (in space, time, etc.) with problem size. Take your linear search: as the size of the problem grows (the array to be searched grows in size), the best case still has an upper time bound of O(1) (it takes constant time to find an element in index 0, or another fixed position), while the worst case (the object is in the last index where we look) has an upper time bound of O(n) (it takes a number of steps of order equal to the problem size, n, until we find the object in the last index where we look). With this in mind, you can understand how big O can be used for both, say, the best and the worst case. Your example about the dollars states specific amounts (e.g. "at most 135 dollars"); to be more appropriate, it should say something like "it takes at most 2$ per mile" (linear).

  Oleksandr1 (in reply to Luis) • a month ago
  You make a very poor assumption: that because a specific value is given, it must be a linear function. It is in fact any polynomial function of my choice, given its parameters and any number of Lagrange constants, that will produce a value of 135, or any such number I specify in the example. The point is that Big O is the upper bound of a function; in fact there are an infinite number of Big O's for any elementary function. $135 was given as an upper bound, 135 being the Omicron value, and $5 was the Omega value. I'm not sure why you don't understand a very clear analogy, but for you I'll change the situation and values: on the linear search algorithm, it will run more than five iterations (Omega), BUT it will never run more than 135 iterations. Big O cannot be used for the best-case scenario; that is a complete misunderstanding of Omega vs Omicron. These are orthogonal terms and do not have any relation with each other. You should read up on this, because it is very important. (Forgive me, I meant to say linear sort algorithm, which has its worst-case scenario when the list is fed to the algorithm in order, but backwards.)

  Yavuz Yetim (in reply to Oleksandr1) • a month ago
  @Oleksandr @Luis IMHO, there are three different statements in this argument, and the wording is what led to the eventual misunderstanding. The main confusion is between the terms "case" and "bound": you can have, for example, a lower bound for the average case, or an upper bound for the best case (in total 9 different, correct combinations, each useful for a different use case). I agree with what was said about the linear search algorithm, and I agree with Luis that the table is correct and not useless, but I also agree with Oleksandr that it's not complete (though I disagree that it is incomplete because of a mismatch between best/average case and big-O). Statement i) "The table is wrong in using Big-O notation for all columns." This statement is false, because the table is correct. Big-O notation is only a representation for a function and does not have anything to do with the worst case. Let's say the best-case run time of an algorithm for a given input of size n is exactly (3*n + 1); one correct representation of this function is O(n). Therefore, writing O(n) for a best-case entry is correct.

  ericdrowell (Mod, in reply to tempire) • 7 months ago
  I'll try to clarify that.

  Antoine Grondin • 7 months ago
  I think DFS and BFS, under Search, would be more appropriately listed as Graph instead of Tree. Thanks!
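The linear-search point argued in this thread, that Big-O can bound any case (best, average, or worst), not just the worst one, can be sketched concretely (an illustrative example, not part of the original discussion):

```python
def linear_search(items, target):
    """Scan items left to right; return the first matching index or -1.

    Worst case: the target is last or absent, so all n items are
    inspected, O(n). Best case: the target is at index 0, so one
    comparison suffices, which is bounded by O(1). Both statements
    use Big-O, each bounding a different case.
    """
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

print(linear_search([4, 8, 15, 16], 4))    # best case, found at index 0 → 0
print(linear_search([4, 8, 15, 16], 42))   # worst case, scans all n items → -1
```

This is exactly the distinction in the thread: "case" picks which input family you are talking about, and "bound" (O, Ω, Θ) says how you are bounding the resulting running-time function.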