
WHAT IS A LOGARITHM?

A logarithm is the power to which a number must be raised in order to get some other number (see Section 3 of this Math Review for more about exponents). For example, the base-ten logarithm of 100 is 2, because ten raised to the power of two is 100:
log 100 = 2


because
10^2 = 100
This is an example of a base-ten logarithm. We call it a base ten logarithm
because ten is the number that is raised to a power. The base unit is the
number being raised to a power. There are logarithms using different base
units. If you wanted, you could use two as a base unit. For instance, the base
two logarithm of eight is three, because two raised to the power of three
equals eight:
log_2 8 = 3
because
2^3 = 8
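If you want to check these values yourself, here's a minimal Python sketch using the standard math module:

```python
import math

# Base-ten logarithm: to what power must 10 be raised to get 100?
print(math.log10(100))  # 2.0

# Base-two logarithm: to what power must 2 be raised to get 8?
print(math.log2(8))     # 3.0
```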
In general, you write log followed by the base number as a subscript. The
most common logarithms are base 10 logarithms and natural logarithms; they
have special notations. A base ten log is written
log
and a base ten logarithmic equation is usually written in the form:
log a = r
A natural logarithm is written
ln

and a natural logarithmic equation is usually written in the form:


ln a = r
So, when you see log by itself, it means base ten log. When you see ln, it
means natural logarithm (we'll define natural logarithms below). In this course
only base ten and natural logarithms will be used.
Base Ten Logarithms
We saw above that base ten logarithms are expressions in which the number
being raised to a power is ten. The base ten log of 1000 is three:
log 1000 = 3
because
10^3 = 1000
So far, we've worked with expressions that have whole numbers as solutions.
Here's one that does not. What is the log of 4?
log 4 = x
log 4 ≈ 0.602
because
10^0.602 ≈ 4

Natural Logarithms
Logarithms with a base of 'e' are called natural logarithms. What is 'e'?
'e' is a very special number approximately equal to 2.718. 'e' is a little bit like pi in that it is the result of an equation and its decimal expansion goes on forever. For those of you who have had calculus, you might remember that e^x is special because its derivative is itself. If you want to know more about 'e', check any precalculus text, such as page 234 of Ruud, W.L. and T.L. Shell. Prelude to Calculus, 2nd ed. 1993. Boston: PWS Publishing Company.
Most scientific calculators have an 'e' button and an 'ln' button, so you don't
need to memorize the value of 'e'.
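As a quick illustration, here's a short Python sketch using the standard math module's built-in values for 'e' and ln:

```python
import math

print(math.e)        # 2.718281828459045 -- the value of 'e'
print(math.log(6))   # 1.7917594... -- the natural log, i.e. ln 6
print(math.exp(math.log(6)))  # 6.0 -- e raised to ln 6 gives 6 back
```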

Logarithmic Rules
Just as exponents have some basic rules that make them easier to
manipulate (see Section 3: Exponents), so do logarithms. These rules apply to
all logarithms, including base 10 logarithms and natural logarithms. For
simplicity's sake, base ten logs are used in most of these rules:
1. b^r = a is equivalent to log_b a = r. (This is the definition of a logarithm.)
2. log 0 is undefined.
3. log 1 = 0
4. log (P*Q) = log P + log Q
5. log (P/Q) = log P - log Q
6. log (P^t) = t * log P
7. 10^(log a) = a (in the case of natural logarithms, e^(ln a) = a)
8. log (10^r) = r (in the case of natural logarithms, ln (e^r) = r)
9. log (1/a) = -log a
Let's take a closer look at each of these rules:

1. b^r = a is the equivalent of log_b a = r. We've already looked at how this works, but here's another example:
log 14 ≈ 1.146
is the equivalent of
10^1.146 ≈ 14

2. log 0 is undefined. It's not a real number, because you can never get zero
by raising anything to the power of anything else. You can never reach zero,
you can only approach it using an infinitely large and negative power.

3. log 1 = 0 means that the logarithm of 1 is always zero, no matter what the base of the logarithm is. This is because any number raised to the power of 0 equals 1.
Therefore, ln 1 = 0 also.
All the rest of the logarithmic rules are useful for solving complex equations, or
equations with unknowns.
4. log (P*Q) = log P + log Q means that if you take the logarithm of a product of two factors, it is the same as taking the logarithm of each factor and adding them together. For example:
log 6 =
log (2 * 3) =
log 2 + log 3
0.301 + 0.477 = 0.778
If you were using natural logarithms, it would look like this:
ln 6 =
ln (2 * 3) =
ln 2 + ln 3
0.693 + 1.099 = 1.792
(Note that the numerical value of the natural logarithm is different from that of
the base ten logarithm. That's because, in the second example, 1.792 is the
power to which 'e' must be raised to get 6, whereas, in the first example,
0.778 is the power to which 10 must be raised to get 6.)
If you have a variable as one of your factors, it would look like this:
log 2y = log 2 + log y
Let's say log 2y = 36 and solve for y:
log 2y = 36

log 2 + log y = 36
log y = 36 - log 2
log y = 36 - 0.301
log y = 35.699
y = 10^35.699
which is a really big number.
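Here's what that calculation might look like in Python (a sketch; the exact digits depend on rounding):

```python
import math

# Solve log(2y) = 36 for y using rule 4:
# log 2 + log y = 36  =>  log y = 36 - log 2
log_y = 36 - math.log10(2)  # 35.69897...
y = 10 ** log_y             # about 5.0 * 10**35
print(log_y, y)
```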

5. log (P/Q) = log P - log Q means that if you take a logarithm of one
number divided by another, it is the same as taking each logarithm separately,
and then subtracting the logarithm of the denominator from the logarithm of
the numerator.
For example:
log (3 / 2) =
log 3 - log 2
0.477 - 0.301 ≈ 0.176
If you were using natural logarithms, it would look like this:
ln (3 / 2) =
ln 3 - ln 2
1.099 - 0.693 ≈ 0.406
If you have a variable as your numerator, it would look like this:
log (y/2) = log y - log 2
Let's say log (y/2) = 36 and solve for y:
log (y/2) = 36

log y - log 2 = 36
log y = 36 + log 2
log y = 36 + 0.301
log y = 36.301
y = 10^36.301
which is an even bigger number.

6. log (P^t) = t * log P means that the logarithm of a number raised to some power is the same as multiplying the logarithm of that number by the value of the power.
For example:
log (3^2) = 2 * log 3
2 * 0.477 = 0.954
It looks the same when you use natural logarithms; however, as in the earlier examples, the numerical value will be different.
ln (3^2) = 2 * ln 3
2 * 1.099 = 2.198
7. 10^(log a) = a (or, in the case of natural logarithms, e^(ln a) = a). Logarithms and exponents reverse each other.
For example:
10^(log 3) = 3
10^(log 8) = 8
e^(ln 3) = 3
e^(ln 8) = 8
If you raise a base to the power of a logarithm in that same base, you get back the number inside the logarithm.
8. log (10^r) = r (in the case of natural logarithms, ln (e^r) = r). Because logarithms and exponents reverse each other, this rule is similar to rule number seven.
For example:
log (10^2) = 2
log (10^3) = 3
ln (e^2) = 2
ln (e^4) = 4
Any logarithm of its base number raised to some exponent is equal to that exponent.

9. log (1/a) = -log a means that the logarithm of 1 divided by some number is equal to the negative logarithm of that number. (This mirrors the rule governing exponents, where a number raised to a negative power is equal to 1 divided by that number raised to the positive power.)
For example:
log (1/2) = - log 2 = -0.301
log (1/3) = - log 3 = -0.477
ln (1/2) = -ln 2 = -0.693
ln (1/3) = -ln 3 = -1.099
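To tie these together, here's a small Python sketch that numerically spot-checks rules 4 through 9 (base-ten versions, using the standard math module):

```python
import math

P, Q, t, a = 3.0, 2.0, 2.0, 7.0

assert math.isclose(math.log10(P * Q), math.log10(P) + math.log10(Q))  # rule 4
assert math.isclose(math.log10(P / Q), math.log10(P) - math.log10(Q))  # rule 5
assert math.isclose(math.log10(P ** t), t * math.log10(P))             # rule 6
assert math.isclose(10 ** math.log10(a), a)                            # rule 7
assert math.isclose(math.log10(10 ** 2.5), 2.5)                        # rule 8
assert math.isclose(math.log10(1 / a), -math.log10(a))                 # rule 9
print("All logarithm rules check out numerically.")
```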

O(log n)
The most common attributes of logarithmic running-time functions are that:

the choice of the next element on which to perform some action is one of several
possibilities, and
only one will need to be chosen.

or

the elements on which the action is performed are digits of n

This is why, for example, looking up people in a phone book is O(log n). You don't need to check every person in the phone book to find the right one; instead, you can simply divide and conquer, and you only need to explore a tiny fraction of the entire space before you eventually find someone's phone number.
Of course, a bigger phone book will still take you longer, but the lookup time won't grow in proportion to the increase in size.

We can expand the phone book example to compare other kinds of operations
and their running time. We will assume our phone book has businesses (the "Yellow
Pages") which have unique names and people (the "White Pages") which may not have
unique names. A phone number is assigned to at most one person or business. We will also
assume that it takes constant time to flip to a specific page.
Here are the running times of some operations we might perform on the phone book, from
best to worst:

O(1) (worst case): Given the page that a business's name is on and the business
name, find the phone number.
O(1) (average case): Given the page that a person's name is on and their name,
find the phone number.
O(log n): Given a person's name, find the phone number by picking a random point
about halfway through the part of the book you haven't searched yet, then checking to
see whether the person's name is at that point. Then repeat the process about halfway
through the part of the book where the person's name lies. (This is a binary search for a
person's name.)
O(n): Find all people whose phone numbers contain the digit "5".
O(n): Given a phone number, find the person or business with that number.

O(n log n): There was a mix-up at the printer's office, and our phone book had all its
pages inserted in a random order. Fix the ordering so that it's correct by looking at the
first name on each page and then putting that page in the appropriate spot in a new,
empty phone book.
For the below examples, we're now at the printer's office. Phone books are waiting to be
mailed to each resident or business, and there's a sticker on each phone book identifying
where it should be mailed to. Every person or business gets one phone book.

O(n log n): We want to personalize the phone book, so we're going to find each
person or business's name in their designated copy, then circle their name in the book
and write a short thank-you note for their patronage.
O(n^2): A mistake occurred at the office, and every entry in each of the phone books
has an extra "0" at the end of the phone number. Take some white-out and remove
each zero.
O(n · n!): We're ready to load the phonebooks onto the shipping dock. Unfortunately,
the robot that was supposed to load the books has gone haywire: it's putting the books
onto the truck in a random order! Even worse, it loads all the books onto the truck, then
checks to see if they're in the right order, and if not, it unloads them and starts over.
(This is the dreaded bogo sort.)
O(n^n): You fix the robot so that it's loading things correctly. The next day, one of your
co-workers plays a prank on you and wires the loading dock robot to the automated
printing systems. Every time the robot goes to load an original book, the factory printer
makes a duplicate run of all the phonebooks! Fortunately, the robot's bug-detection
systems are sophisticated enough that the robot doesn't try printing even more copies
when it encounters a duplicate book for loading, but it still has to load every original and
duplicate book that's been printed.

Big-O complexity can be visualized as a graph of operations against input size (the graph is not reproduced here).

The simplest definition I can give for Big-O notation is this:


Big-O notation is a relative representation of the complexity of an algorithm.
There are some important and deliberately chosen words in that sentence:

relative: you can only compare apples to apples. You can't compare an algorithm to
do arithmetic multiplication to an algorithm that sorts a list of integers. But a comparison
of two algorithms to do arithmetic operations (one multiplication, one addition) will tell
you something meaningful;
representation: Big-O (in its simplest form) reduces the comparison between
algorithms to a single variable. That variable is chosen based on observations or
assumptions. For example, sorting algorithms are typically compared based on
comparison operations (comparing two nodes to determine their relative ordering). This
assumes that comparison is expensive. But what if comparison is cheap but swapping
is expensive? It changes the comparison; and
complexity: if it takes me one second to sort 10,000 elements, how long will it take me to sort one million? Complexity in this instance is a measure relative to something else.

Come back and reread the above when you've read the rest.
The best example of Big-O I can think of is doing arithmetic. Take two numbers (123456 and
789012). The basic arithmetic operations we learnt in school were:

addition;

subtraction;

multiplication; and

division.

Each of these is an operation or a problem. A method of solving these is called an algorithm.
Addition is the simplest. You line the numbers up (to the right) and add the digits in a column, writing the last digit of that sum in the result. The 'tens' part of that number is carried over to the next column.
Let's assume that adding a single pair of digits is the most expensive operation in this algorithm. It stands to reason that to add these two numbers together we have to add
together 6 digits (and possibly carry a 7th). If we add two 100 digit numbers together we
have to do 100 additions. If we add two 10,000 digit numbers we have to do 10,000
additions.
See the pattern? The complexity (being the number of operations) is directly proportional
to the number of digits n in the larger number. We call this O(n) or linear complexity.
Subtraction is similar (except you may need to borrow instead of carry).
Multiplication is different. You line the numbers up, take the first digit in the bottom number
and multiply it in turn against each digit in the top number and so on through each digit. So
to multiply our two 6 digit numbers we must do 36 multiplications. We may need to do as
many as 10 or 11 column adds to get the end result too.
If we have two 100-digit numbers we need to do 10,000 multiplications and 200 adds. For two one-million-digit numbers we need to do one trillion (10^12) multiplications and two million adds.
As the algorithm scales with n-squared, this is O(n^2) or quadratic complexity. This is a good time to introduce another important concept:
We only care about the most significant portion of complexity.
The astute may have realized that we could express the number of operations as: n^2 + 2n. But as you saw from our example with two numbers of a million digits apiece, the second term (2n) becomes insignificant (accounting for 0.0002% of the total operations by that stage).
One may notice that we've assumed the worst-case scenario here. While multiplying 6-digit numbers, if one of them has 4 digits and the other has 6 digits, then we only have 24 multiplications. Still, we calculate the worst-case scenario for that 'n', i.e. when both are 6-digit numbers. Hence Big-O notation is about the worst-case scenario of an algorithm.
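As a rough illustration, here's a Python sketch that counts digit operations under the simplified model above (one operation per digit or digit pair, ignoring carries):

```python
def addition_ops(n_digits):
    # Grade-school addition: one column add per digit -> O(n)
    return n_digits

def multiplication_ops(n_digits):
    # Grade-school multiplication: every digit of one number
    # against every digit of the other -> O(n^2)
    return n_digits * n_digits

for n in (6, 100, 1_000_000):
    print(n, addition_ops(n), multiplication_ops(n))
# 6 digits: 6 adds, 36 multiplications
# 100 digits: 100 adds, 10,000 multiplications
# 1,000,000 digits: a million adds, a trillion multiplications
```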

The Telephone Book


The next best example I can think of is the telephone book, normally called the White Pages
or similar but it'll vary from country to country. But I'm talking about the one that lists people
by surname and then initials or first name, possibly address and then telephone numbers.
Now if you were instructing a computer to look up the phone number for "John Smith" in a
telephone book that contains 1,000,000 names, what would you do? Ignoring the fact that
you could guess how far in the S's started (let's assume you can't), what would you do?
A typical implementation might be to open up to the middle, take the 500,000th name and compare it to "Smith". If it happens to be "Smith, John", we just got real lucky. Far more likely is that
"John Smith" will be before or after that name. If it's after we then divide the last half of the
phone book in half and repeat. If it's before then we divide the first half of the phone book in
half and repeat. And so on.
This is called a binary search and is used every day in programming whether you realize it
or not.
So if you want to find a name in a phone book of a million names you can actually find any
name by doing this at most 20 times. In comparing search algorithms we decide that this
comparison is our 'n'.

For a phone book of 3 names it takes 2 comparisons (at most).

For 7 it takes at most 3.

For 15 it takes 4.

For 1,000,000 it takes 20.

That is staggeringly good, isn't it?
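If you want to see those comparison counts for yourself, here's a minimal Python sketch of the binary search; the list of generated names is an illustrative stand-in for a real phone book:

```python
def binary_search(sorted_names, target):
    lo, hi = 0, len(sorted_names) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_names[mid] == target:
            return mid, comparisons
        elif sorted_names[mid] < target:
            lo = mid + 1  # target is in the later half
        else:
            hi = mid - 1  # target is in the earlier half
    return -1, comparisons

# A million "names": even the worst case takes about 20 comparisons.
book = [f"Name{i:07d}" for i in range(1_000_000)]
print(binary_search(book, "Name0999999"))  # (999999, ~20)
```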

In Big-O terms this is O(log n) or logarithmic complexity. Now the logarithm in question could be ln (base e), log_10, log_2 or some other base. It doesn't matter; it's still O(log n), just like O(2n^2) and O(100n^2) are still both O(n^2).
It's worthwhile at this point to explain that Big O can be used to determine three cases with
an algorithm:

Best Case: In the telephone book search, the best case is that we find the name in
one comparison. This is O(1) or constant complexity;
Expected Case: As discussed above this is O(log n); and
Worst Case: This is also O(log n).

Normally we don't care about the best case. We're interested in the expected and worst
case. Sometimes one or the other of these will be more important.
Back to the telephone book.
What if you have a phone number and want to find a name? The police have a reverse
phone book but such look-ups are denied to the general public. Or are they? Technically you
can reverse look-up a number in an ordinary phone book. How?
You start at the first name and compare the number. If it's a match, great, if not, you move
on to the next. You have to do it this way because the phone book is unordered (by phone
number anyway).
So to find a name given the phone number (reverse lookup):

Best Case: O(1);


Expected Case: O(n) (for 500,000); and
Worst Case: O(n) (for 1,000,000).
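A quick sketch of that reverse lookup in Python; the handful of entries is a made-up example:

```python
def reverse_lookup(entries, phone_number):
    # The book is sorted by name, not number, so we must scan: O(n)
    for name, number in entries:
        if number == phone_number:
            return name
    return None

entries = [("Adams, A", "555-0001"), ("Baker, B", "555-0002"), ("Smith, J", "555-0003")]
print(reverse_lookup(entries, "555-0003"))  # "Smith, J", after scanning all 3 entries
```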

The Travelling Salesman


This is quite a famous problem in computer science and deserves a mention. In this
problem you have N towns. Each of those towns is linked to 1 or more other towns by a road
of a certain distance. The Travelling Salesman problem is to find the shortest tour that visits
every town.
Sounds simple? Think again.
If you have 3 towns A, B and C with roads between all pairs then you could go:

A → B → C

A → C → B

B → C → A

B → A → C

C → A → B

C → B → A

Well, actually there are fewer than that because some of these are equivalent (A → B → C and C → B → A are equivalent, for example, because they use the same roads, just in reverse).
In actuality, there are 3 possibilities.

Take this to 4 towns and you have (iirc) 12 possibilities.

With 5 it's 60.

6 becomes 360.

This is a function of a mathematical operation called a factorial. Basically:

5! = 5 × 4 × 3 × 2 × 1 = 120

6! = 6 × 5 × 4 × 3 × 2 × 1 = 720

7! = 7 × 6 × 5 × 4 × 3 × 2 × 1 = 5040

25! = 25 × 24 × … × 2 × 1 = 15,511,210,043,330,985,984,000,000

50! = 50 × 49 × … × 2 × 1 = 3.04140932 × 10^64

So the Big-O of the Travelling Salesman problem is O(n!) or factorial or combinatorial complexity.
By the time you get to 200 towns there isn't enough time left in the universe to solve
the problem with traditional computers.
Something to think about.
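To make the factorial blow-up concrete, here's a brute-force sketch in Python; the towns and distances are made up, and itertools.permutations enumerates all n! orderings:

```python
from itertools import permutations

# Symmetric distances between 4 towns (illustrative numbers).
dist = {
    ("A", "B"): 5, ("A", "C"): 9, ("A", "D"): 4,
    ("B", "C"): 3, ("B", "D"): 7, ("C", "D"): 6,
}

def d(x, y):
    # Roads run both ways, so look the pair up in either order.
    return dist.get((x, y)) or dist[(y, x)]

def tour_length(tour):
    return sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

towns = ["A", "B", "C", "D"]
# Try every ordering: n! candidate tours, hence O(n!).
best = min(permutations(towns), key=tour_length)
print(best, tour_length(best))
```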

Polynomial Time
Another point I wanted to make quick mention of is that any algorithm that has a complexity of O(n^a) is said to have polynomial complexity or is solvable in polynomial time.
O(n), O(n^2) etc. are all polynomial time. Some problems cannot be solved in polynomial time.
Certain things in the real world rely on this. Public Key Cryptography is a prime example. It is computationally hard to find the two prime factors of a very large number. If it wasn't, we couldn't use the public key systems we use.
Anyway, that's it for my (hopefully plain English) explanation of Big O (revised).
Big O shows how an algorithm scales.
O(n^2): known as Quadratic complexity

1 item: 1 second

10 items: 100 seconds

100 items: 10000 seconds

Notice that the number of items increases by a factor of 10, but the time increases by a factor of 10^2. Basically, n=10 and so O(n^2) gives us the scaling factor n^2, which is 10^2.
O(n): known as Linear complexity

1 item: 1 second

10 items: 10 seconds

100 items: 100 seconds

This time the number of items increases by a factor of 10, and so does the time. n=10 and
so O(n)'s scaling factor is 10.
O(1): known as Constant complexity

1 item: 1 second

10 items: 1 second

100 items: 1 second

The number of items is still increasing by a factor of 10, but the scaling factor of O(1) is
always 1.
O(log n): known as Logarithmic complexity

1 item: 1 second

10 items: 2 seconds

100 items: 3 seconds

1000 items: 4 seconds

10000 items: 5 seconds

The number of computations only increases by the log of the input value. So in this case, assuming each computation takes 1 second, the time required grows as the log of the input n, hence log n.
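Here's a tiny Python sketch that prints these growth patterns side by side, assuming one second per operation as above:

```python
import math

print(f"{'items':>8} {'O(1)':>6} {'O(log n)':>9} {'O(n)':>9} {'O(n^2)':>14}")
for n in (1, 10, 100, 1000, 10000):
    log_n = math.log10(n) + 1  # matches the 1, 2, 3, ... second pattern above
    print(f"{n:>8} {1:>6} {log_n:>9.0f} {n:>9} {n * n:>14}")
```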
That's the gist of it. They reduce the maths down, so it might not be exactly n^2 or whatever they say it is, but that'll be the dominating factor in the scaling.
