
Data Structures

1
Algorithms
• Efficiency
• Complexity

2
Algorithms

• Algorithms are stepwise solutions to problems.
• There may be more than one algorithm for a particular problem.

3
Efficiency
• Given several algorithms to solve the same problem,
which algorithm is “best”?
• Given an algorithm, is it feasible to use it at all? In
other words, is it efficient enough to be usable in
practice?
• How much time does the algorithm require?
• How much space (memory) does the algorithm require?
• In general, both time and space requirements depend on
the algorithm’s input (typically the “size” of the input).

4
Efficiency: measuring time

• Measure time in seconds?
  + useful in practice
  – depends on language, compiler, and processor.
• Count algorithm steps?
  + does not depend on compiler or processor
  – depends on granularity of steps.
• Count characteristic operations? (e.g., arithmetic ops in math algorithms, comparisons in searching algorithms)
  + depends only on the algorithm itself
  + measures the algorithm’s intrinsic efficiency.
5
Complexity

• For many interesting algorithms, the exact number of operations is too difficult to analyse mathematically.
• To simplify the analysis:
– identify the fastest-growing term
– neglect slower-growing terms
– neglect the constant factor in the fastest-growing term.
• The resulting formula is the algorithm’s time complexity.
It focuses on the growth rate of the algorithm’s time
requirement.
• Similarly for space complexity.

6
Example: analysis of power algorithms (1)

• Analysis of simple power algorithm (counting multiplications):
  No. of multiplications = n
  Time taken is approximately proportional to n.
  Time complexity is of order n. This is written O(n).

7
O-notation (1)
• We have seen that an O(log n) algorithm is
inherently better than an O(n) algorithm for large
values of n.
O(log n) signifies a slower growth rate than O(n).
• Complexity O(X) means “of order X”,
i.e., growing proportionally to X.
Here X signifies the growth rate,
neglecting slower-growing terms and constant
factors.

8
O-notation (2)
• Common time complexities:
O(1) constant time (feasible)
O(log n) logarithmic time (feasible)
O(n) linear time (feasible)
O(n log n) log linear time (feasible)
O(n²) quadratic time (sometimes feasible)
O(n³) cubic time (sometimes feasible)
O(2ⁿ) exponential time (rarely feasible)
9
Growth rates (1)
• Comparison of growth rates:

  n      log n   n      n log n
  5      3       5      15
  10     4       10     40
  100    7       100    700
  1000   10      1000   10,000

10
Growth rates (2)
• Graphically: [graph of 2ⁿ, n², n log n, n, and log n against n for 0 ≤ n ≤ 50; the faster-growing functions quickly dominate]
11
Efficiency and Complexity:
Summary
 Efficiency
 How much time or space is required
 Measured in terms of common basic operations

 Complexity
 How efficiency varies with the size of the task
 Expressed in terms of standard functions of n
 E.g. O(n), O(n²), O(log n), O(n log n)
12
Performance Analysis
• Two criteria are used to judge algorithms:
(i) time complexity (ii) space complexity.
• Space Complexity of an algorithm is the
amount of memory it needs to run to
completion.
• Time Complexity of an algorithm is the
amount of CPU time it needs to run to
completion.

13
Space Complexity
• Memory space S(P) needed by a program P,
consists of two components:
– A fixed part: needed for instruction space, simple variable space, constant space, etc. → c
– A variable part: dependent on a particular instance of input and output data → Sp(instance)
• S(P) = c + Sp(instance)
14
Space Complexity: Example 1
1. Algorithm abc (a, b, c)
2. {
3. return a+b+b*c+(a+b-c)/(a+b)+4.0;
4. }
For every instance, 3 computer words are required to store the variables a, b, and c.
Therefore Sp() = 3, and S(P) = 3.

15
Space Complexity: Example 2
1. Algorithm Sum(a[], n)
2. {
3. s:= 0.0;
4. for i = 1 to n do
5. s := s + a[i];
6. return s;
7. }

16
Space Complexity: Example 2.
• Every instance needs to store array a[] & n.
– Space needed to store n = 1 word.
– Space needed to store a[] = n floating point
words (or at least n words)
– Space needed to store i and s = 2 words
• Sp(n) = (n + 3). Hence S(P) = (n + 3).

17
Time Complexity
• Time required T(P) to run a program P also
consists of two components:
– A fixed part: compile time, which is independent of the problem instance → c
– A variable part: run time, which depends on the problem instance → tp(instance)
• T(P) = c + tp(instance)

18
Time Complexity
• How to measure T(P)?
– Measure experimentally, using a “stop watch” → T(P) obtained in secs or msecs.
– Count program steps → T(P) obtained as a step count.
• Fixed part is usually ignored; only the
variable part tp() is measured.

19
Time Complexity
• What is a program step?
– a+b+b*c+(a+b)/(a-b) → one step;
– comments → zero steps;
– while (<expr>) do → step count equal to the number of times <expr> is executed;
– for i=<expr> to <expr1> do → step count equal to the number of times <expr1> is checked.

20
Methods to compute the step count

• Introduce variable count into programs


• Tabular method
– Determine the total number of steps contributed
by each statement
steps per execution × frequency
– add up the contribution of all statements

21
Time Complexity: Example 1
Statements S/E Freq. Total

1 Algorithm Sum(a[],n) 0 - 0
2 { 0 - 0
3 S = 0.0; 1 1 1
4 for i=1 to n do 1 n+1 n+1
5 s = s+a[i]; 1 n n
6 return s; 1 1 1
7 } 0 - 0
2n+3
22
Time Complexity: Example 2
Statements                   S/E   Freq.    Total

1 Algorithm Sum(a[],n,m)      0     -        0
2 {                           0     -        0
3   for i=1 to n do           1     n+1      n+1
4     for j=1 to m do         1     n(m+1)   n(m+1)
5       s = s+a[i][j];        1     nm       nm
6   return s;                 1     1        1
7 }                           0     -        0
                                             2nm+2n+2
23
Tabular Method
Step count table (s/e = steps per execution)

Statement s/e Frequency Total steps


float sum(float list[ ], int n) 0 0 0
{ 0 0 0
float tempsum = 0; 1 1 1
int i; 0 0 0
for(i=0; i <n; i++) 1 n+1 n+1
tempsum += list[i]; 1 n n
return tempsum; 1 1 1
} 0 0 0
Total 2n+3

24
Matrix Addition

p2: Step count table for matrix addition

Statement s/e Frequency Total steps

void add (int a[ ][MAX_SIZE]‧‧‧) 0 0 0


{ 0 0 0
int i, j; 0 0 0
for (i = 0; i < rows; i++) 1 rows+1 rows+1
for (j=0; j< cols; j++) 1 rows‧(cols+1) rows‧cols+rows
c[i][j] = a[i][j] + b[i][j]; 1 rows‧cols rows‧cols
} 0 0 0

Total 2rows‧cols+2rows+1

25
Exercise 1

• Program: Printing out a matrix

void print_matrix(int matrix[ ][MAX_SIZE], int rows, int cols)
{
    int i, j;
    for (i = 0; i < rows; i++) {
        for (j = 0; j < cols; j++)
            printf("%d ", matrix[i][j]);
        printf("\n");
    }
}
26
Exercise 2
• Program: Matrix multiplication function

void mult(int a[ ][MAX_SIZE], int b[ ][MAX_SIZE], int c[ ][MAX_SIZE])
{
    int i, j, k;
    for (i = 0; i < MAX_SIZE; i++)
        for (j = 0; j < MAX_SIZE; j++) {
            c[i][j] = 0;
            for (k = 0; k < MAX_SIZE; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}
27
Exercise 3

• Program: Matrix product function

void prod(int a[ ][MAX_SIZE], int b[ ][MAX_SIZE], int c[ ][MAX_SIZE],
          int rowsa, int colsb, int colsa)
{
    int i, j, k;
    for (i = 0; i < rowsa; i++)
        for (j = 0; j < colsb; j++) {
            c[i][j] = 0;
            for (k = 0; k < colsa; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}

28
Exercise 4
• Program: Matrix transposition function

void transpose(int a[ ][MAX_SIZE])
{
    int i, j, temp;
    for (i = 0; i < MAX_SIZE-1; i++)
        for (j = i+1; j < MAX_SIZE; j++)
            SWAP(a[i][j], a[j][i], temp);
}
29
Performance Measurement
• Which is better?
– T(P1) = (n+1) or T(P2) = (n² + 5).
– T(P1) = log(n² + 1)/n! or T(P2) = nⁿ(n log n)/n².
• Complex step count functions are difficult
to compare.
• For comparing, ‘rate of growth’ of time and
space complexity functions is easy and
sufficient.
30
log n   n    n log n   n²      n³       2ⁿ
0       1    0         1       1        2
1       2    2         4       8        4
2       4    8         16      64       16
3       8    24        64      512      256
4       16   64        256     4096     65536
5       32   160       1024    32768    4294967296

31
• Some math …
  – properties of logarithms:
    log_b(xy) = log_b x + log_b y
    log_b(x/y) = log_b x − log_b y
    log_b(x^a) = a·log_b x
    log_b a = log_x a / log_x b
  – properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b−c)
    b = a^(log_a b)
    b^c = a^(c·log_a b)

32
Important Series
• Arithmetic series:
  S(N) = 1 + 2 + … + N = Σ (i=1 to N) i = N(N+1)/2
• Sum of squares:
  Σ (i=1 to N) i² = N(N+1)(2N+1)/6 ≈ N³/3 for large N
• Sum of exponents:
  Σ (i=1 to N) i^k ≈ N^(k+1)/|k+1| for large N and k ≠ −1
• Geometric series:
  Σ (i=0 to N) Aⁱ = (A^(N+1) − 1)/(A − 1)
  – Special case when A = 2:
    2⁰ + 2¹ + 2² + … + 2ᴺ = 2^(N+1) − 1
34
Big O Notation
• Big O of a function gives us ‘rate of growth’ of the
step count function f(n), in terms of a simple
function g(n), which is easy to compare.
• Definition [Big O]: f(n) = O(g(n)) (read “big oh of g of n”) iff there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀. See the graph on the next slide.
• Example: 3n+2 = O(n), because 3n+2 ≤ 4n for all n ≥ 2 (c = 4, n₀ = 2).
35
Big O Notation

[Graph: f(n) lies below c·g(n) for all n ≥ n₀]

36
Big O Notation
• Example: 10n² + 4n + 2 = O(n²), because 10n² + 4n + 2 ≤ 11n² for all n ≥ 5.
• Example: 6·2ⁿ + n² = O(2ⁿ), because 6·2ⁿ + n² ≤ 7·2ⁿ for all n ≥ 4.
• Algorithms can be: O(1) → constant; O(log n) → logarithmic; O(n log n); O(n) → linear; O(n²) → quadratic; O(n³) → cubic; O(2ⁿ) → exponential.

37
Big O Notation
• Now it is easy to compare time or space
complexities of algorithms. Which
algorithm complexity is better?
– T(P1) = O(n) or T(P2) = O(n²)
– T(P1) = O(1) or T(P2) = O(log n)
– T(P1) = O(2ⁿ) or T(P2) = O(n¹⁰)

38
