
Section 2

Limits
The fundamental concept in calculus is that
of a limit.

Limits at $\infty$. What is the long term
behaviour of the function $f$?

Limits at a point. What is the local
behaviour of $f$ near some value $a \in \mathbb{R}$?

The formal definitions are similar, but the
technical calculations are simpler for the first
type of limit.

Limits at $\infty$: some examples.

Rough definition. We say that $f(x)$ has
limit $L$ as $x$ goes to $\infty$ if $f(x)$ gets closer and
closer to $L$ as $x$ gets bigger and bigger.
Example. Find the limit of $\dfrac{3 - 2x^2}{1 + x^2}$ as $x$ goes to $\infty$.

Solution. Multiply top and bottom by $\frac{1}{x^2}$:
$$\frac{3 - 2x^2}{1 + x^2} = \frac{(3 - 2x^2)/x^2}{(1 + x^2)/x^2} \tag{1}$$
$$= \frac{3/x^2 - 2}{1/x^2 + 1} \tag{2}$$
$$\to \frac{0 - 2}{0 + 1} = -2 \tag{3}$$
as $x$ gets very big.
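For the curious, here is a quick numerical sanity check (a Python sketch, not part of the notes) that the quotient really does settle down at $-2$:

```python
# Evaluate f(x) = (3 - 2x^2)/(1 + x^2) at increasingly large x
# and watch it approach the limit -2.

def f(x):
    return (3 - 2 * x**2) / (1 + x**2)

for x in [10.0, 100.0, 1000.0]:
    print(x, f(x))

# The error |f(x) - (-2)| equals 5/(1 + x^2), which shrinks like 1/x^2.
assert abs(f(1000.0) - (-2)) < 1e-5
```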
Line (3) requires that it is OK to take the
limits of numerator and denominator
separately and then divide.
That is OK here, but not, for example, in a
limit like
$$\lim_{x \to \infty} \frac{\ln(1/x)}{\ln(1/x^2)}.$$
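This caution is easy to check numerically. The ratio below is identically $\frac{1}{2}$ for $x > 0$, $x \ne 1$, even though numerator and denominator each diverge to $-\infty$, so "limit of the top over limit of the bottom" makes no sense here. (A Python sketch, not part of the notes.)

```python
import math

# ln(1/x)/ln(1/x^2) simplifies to (-ln x)/(-2 ln x) = 1/2,
# even though numerator and denominator separately diverge.
def ratio(x):
    return math.log(1 / x) / math.log(1 / x**2)

for x in [10.0, 1e6, 1e12]:
    print(x, ratio(x))   # always 0.5 (up to rounding)
```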

We need rules to tell us what manipulations
are OK, and to prove these we need a better
definition.

Rough definition. We say that $f(x)$ has
limit $L$ as $x$ goes to $\infty$ if $f(x)$ gets closer and
closer to $L$ as $x$ gets bigger and bigger.

Problem. "Closer and closer" and "bigger and
bigger" are a bit hard to work with
mathematically.
We solve this problem by being more specific
about how close and how big.
Closeness is measured by the error term
$$\mathrm{error}(x) = |f(x) - L|.$$

Precise definition.
We say that $f(x)$ has limit $L$ as $x$ goes to $\infty$
if, given any error tolerance $\epsilon > 0$ you care to
specify, I can give you a number $M$ (which
will depend on $\epsilon$!) so that for every $x > M$,
$$\mathrm{error}(x) < \epsilon.$$

We'll write: $f(x) \to L$ as $x \to \infty$,
or $\lim_{x \to \infty} f(x) = L$.

To show that $\lim_{x \to \infty} f(x) = L$ using the
definition you need to give a recipe for finding
an $M$ that works for each $\epsilon$.
Typical mysterious proof:
Show that $\lim_{x \to \infty} \dfrac{x + \sin x}{x} = 1$.

Solution. Choose any error tolerance $\epsilon > 0$.
For this error take $M = 1/\epsilon$.
[Where did this come from???]
If $x > M = 1/\epsilon$ then
$$\mathrm{error}(x) = \left|\frac{x + \sin x}{x} - 1\right|
= \left|\frac{\sin x}{x}\right|
= \frac{|\sin x|}{x}
\le \frac{1}{x}
< \frac{1}{M}
= \epsilon.$$
Thus, this choice of $M$ works and so
$$\lim_{x \to \infty} \frac{x + \sin x}{x} = 1.$$
The choice of $M = 1/\epsilon$ came from me having
a careful look at $\mathrm{error}(x)$ before I got started
with the proof!
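If you want to see the recipe in action, here is a small Python sketch (not part of the notes) that samples points beyond $M = 1/\epsilon$ and confirms the error stays below $\epsilon$:

```python
import math

# For x > M = 1/eps, the error |(x + sin x)/x - 1| = |sin x|/x <= 1/x < eps.
def error(x):
    return abs((x + math.sin(x)) / x - 1)

for eps in [0.1, 0.01, 0.001]:
    M = 1 / eps
    # sample a few points beyond M
    assert all(error(M + k) < eps for k in [1.0, 10.0, 1000.0])
print("the recipe M = 1/eps works at all sampled points")
```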
More helpful example.
Show that $\lim_{x \to \infty} \dfrac{2x}{x + \sqrt{x}} = 2$.

Pre-proof: Here $L = 2$ and so
$$\mathrm{error}(x) = \left|\frac{2x}{x + \sqrt{x}} - 2\right|
= \left|\frac{2x - 2x - 2\sqrt{x}}{x + \sqrt{x}}\right|
= \frac{2\sqrt{x}}{x + \sqrt{x}}
= \frac{2}{\sqrt{x} + 1}.$$
Now the hard bit. Given a number $\epsilon > 0$, how
big does $x$ need to be to make sure that
$\mathrm{error}(x)$ is less than $\epsilon$?
You could try to solve this exactly.
Exercise: check that for $x > 0$:
$$\frac{2}{\sqrt{x} + 1} < \epsilon \iff x > \left(\frac{2}{\epsilon} - 1\right)^2 \ (= M).$$
Often this isn't even possible, and when it is,
it usually gives a really messy formula for $M$.
Better is to replace $\mathrm{error}(x)$ with a slightly
bigger but simpler expression, e.g. let
$$\mathrm{errorNew}(x) = \frac{2}{\sqrt{x}}.$$
For all $x > 0$, $\mathrm{error}(x) < \mathrm{errorNew}(x)$, so if we can
ensure that $\mathrm{errorNew}(x) < \epsilon$, then we also get
that $\mathrm{error}(x) < \epsilon$. But here the calculation is
easy. For $x > 0$,
$$\frac{2}{\sqrt{x}} < \epsilon \iff \sqrt{x} > \frac{2}{\epsilon} \iff x > \left(\frac{2}{\epsilon}\right)^2. \tag{4}$$
Choosing a good $\mathrm{errorNew}(x)$ is a bit of an
artform. In an exam it wouldn't be any harder
than this example!
The proof would now look like:
Choose any error tolerance $\epsilon > 0$. From our
calculations above we know to choose
$M = (2/\epsilon)^2$. If $x > M$, then by (4),
$$\mathrm{error}(x) < \mathrm{errorNew}(x) = \frac{2}{\sqrt{x}} < \epsilon$$
and hence $\lim_{x \to \infty} \dfrac{2x}{x + \sqrt{x}} = 2$.
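Again, a quick numerical check of the recipe $M = (2/\epsilon)^2$ (a Python sketch, not part of the notes):

```python
import math

# For x > M = (2/eps)^2:
#   |2x/(x + sqrt(x)) - 2| = 2/(sqrt(x) + 1) < 2/sqrt(x) < eps.
def error(x):
    return abs(2 * x / (x + math.sqrt(x)) - 2)

for eps in [0.5, 0.05, 0.005]:
    M = (2 / eps) ** 2
    assert all(error(M + k) < eps for k in [1.0, 100.0])
print("the recipe M = (2/eps)^2 works at all sampled points")
```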
Important notes:

- The definition is useless for finding the
  limit $L$!

- The point behind this is not to use the
  definition to do concrete
  limit problems (except in emergencies!).

- We want the definition to prove theorems
  so that we can justify our methods of
  finding limits.

- Writing $\mathrm{error}(x)$ gets confusing if we have
  more than one function, so we'll just
  write $|f(x) - L|$ from now on.

- Indeed, mean lecturers will just write the
  incomprehensible
  $$f(x) \to L \text{ as } x \to \infty \text{ if }
  \forall \epsilon > 0\ \exists M\ (x > M) \implies |f(x) - L| < \epsilon.$$

- Never use the definition to do a limit in
  a test/exam unless you are told to!
Theorem. Suppose that $f(x) \to L$ as $x \to \infty$
and that $g(x) \to \ell$ as $x \to \infty$. Then
$(f + g)(x) \to L + \ell$ as $x \to \infty$.

First question: How does the error we want
to control, $|(f + g)(x) - (L + \ell)|$,
compare to the ones we know we can control,
$|f(x) - L|$ and $|g(x) - \ell|$?

That is, if
$$|f(x) - L| < \frac{\epsilon}{2} \quad \text{and} \quad |g(x) - \ell| < \frac{\epsilon}{2}$$
then is
$$|(f + g)(x) - (L + \ell)| < \epsilon?$$
Proof. Choose any error tolerance $\epsilon > 0$.
As $f(x) \to L$ as $x \to \infty$ we can find a
number $M_f$ so that the $f$-error satisfies
$$|f(x) - L| < \frac{\epsilon}{2}$$
whenever $x > M_f$.
Similarly, as $g(x) \to \ell$ as $x \to \infty$, we can find a
number $M_g$ so that the $g$-error satisfies
$$|g(x) - \ell| < \frac{\epsilon}{2}$$
whenever $x > M_g$.
We want to make sure that both these hold
at the same time, so choose
$M = \max(M_f, M_g)$.

If $x > M$, then $x > M_f$ and $x > M_g$. Thus, for
such $x$, the error for $(f + g)$ satisfies
$$|(f + g)(x) - (L + \ell)| = |(f(x) - L) + (g(x) - \ell)|
\le |f(x) - L| + |g(x) - \ell|
< \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$
This $M$ works for this tolerance $\epsilon$!
Thus $(f + g)(x) \to L + \ell$ as $x \to \infty$.
Note: 1. The proof for $f - g$ is almost
identical.
2. Those for $fg$ and $f/g$ are similar, but
rather harder.
For $f/g$, you have that if $\lim_{x \to \infty} f(x) = L$ and
$\lim_{x \to \infty} g(x) = \ell \ne 0$, then
$$\lim_{x \to \infty} \frac{f(x)}{g(x)} = \frac{L}{\ell}.$$

Example. Let $g(x) = \dfrac{\sin(\sqrt{x}) + \cos(x^2)}{1 + x + x^2 + x^4}$.
Find $\lim_{x \to \infty} g(x)$ (if it exists).

Solution. Clearly
$$-2 \le \sin(\sqrt{x}) + \cos(x^2) \le 2.$$
For $x > 0$,
$$1 + x + x^2 + x^4 > x$$
and so
$$-\frac{2}{x} \le g(x) \le \frac{2}{x}.$$
It is easy to show that $\lim_{x \to \infty} \dfrac{-2}{x} = 0$ and that
$\lim_{x \to \infty} \dfrac{2}{x} = 0$, so surely $\lim_{x \to \infty} g(x) = 0$ too.

This is justified by
The Pinching Theorem. Suppose that for
all (sufficiently large) $x$,
$$f(x) \le g(x) \le h(x).$$
If $\lim_{x \to \infty} f(x) = \lim_{x \to \infty} h(x) = L$, then $\lim_{x \to \infty} g(x)$
also exists and equals $L$.

Homework: read the proof of this in section
2.4 of the purple notes.
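A numerical sketch of the pinching bound (Python, not part of the notes): the sandwiched function really does sit between $-2/x$ and $2/x$ and shrink to $0$.

```python
import math

# For x > 0:  -2/x <= g(x) <= 2/x,  so g(x) -> 0 as x -> infinity.
def g(x):
    return (math.sin(math.sqrt(x)) + math.cos(x**2)) / (1 + x + x**2 + x**4)

for x in [10.0, 100.0, 1000.0]:
    assert -2 / x <= g(x) <= 2 / x
    print(x, g(x))
```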
Examples
Limits at a point
Let $f(x) = \dfrac{1 - \cos x}{x^2}$, $x \ne 0$.
What is the behaviour of the function as
you get near to $x = 0$?
Sketch.

It looks like $f(x)$ gets closer and closer to $\frac{1}{2}$
as $x$ gets closer and closer to $0$.
Formal definition. Again the main idea is to
be more specific about the phrase "closer and
closer".

Suppose that $f(x)$ is defined for all $x$ near $a$,
although not necessarily at $a$.

We say that $\lim_{x \to a} f(x) = L$ if no matter how
small an error tolerance $\epsilon > 0$ you specify, I
can specify a little interval around $a$, say
$I = (a - \delta, a + \delta)$, so that
$$\mathrm{error}(x) = |f(x) - L| < \epsilon$$
whenever $x \in I$ and $x \ne a$.
Notes.

1. Note the $x \ne a$! Even if $f(a)$ is defined,
   we don't care what its value is.

2. Although the idea is the same, the
   inequalities involved in using this
   $\epsilon$-$\delta$ definition to show something like
   $$\lim_{x \to 2} \frac{x^2 + 3}{x^4 - 2} = \frac{1}{2}$$
   are APPALLING!

3. Again, you only use the definition for
   - exceptionally simple limits like
     $\lim_{x \to a} x = a$,
   - proving limit theorems (so you can do
     most harder limits),
   - exceptionally hard limits where the
     limit theorems don't work.
Exercises for the very keen:

1. Adapt the previous proof to show that
   $$\lim_{x \to a} (f + g)(x) = \lim_{x \to a} f(x) + \lim_{x \to a} g(x)$$
   if the RHS limits both exist.

2. Let $f : \mathbb{R} \to \mathbb{R}$,
   $$f(x) = \begin{cases} \dfrac{1}{q^2}, & \text{if } x = \dfrac{p}{q} \in \mathbb{Q} \text{ in simplest terms,} \\ 0, & \text{if } x \notin \mathbb{Q}. \end{cases}$$
   Show that $\lim_{x \to 1} f(x) = 0$.

3. Show that $\lim_{x \to 0} \sin(1/x)$ doesn't exist, by
   showing that, no matter what $L$ you
   thought might be the limit, even for
   a big value of $\epsilon$ like $\frac{1}{2}$, you could never
   find a value of $\delta$ small enough so that
   $$|\sin(1/x) - L| < \frac{1}{2}$$
   whenever $|x - 0| < \delta$ and $x \ne 0$.
Exercises for Everyone:

1. Learn the limit theorems for the point
   limit situation, such as:
   (a) $\lim_{x \to a} (fg)(x) = (\lim_{x \to a} f(x))(\lim_{x \to a} g(x))$
       (if the RHS makes sense!)
   (b) the Pinching Theorem.

2. Have a go at using the definition to show
   that
   (a) $\lim_{x \to 2} 1 = 1$. (Easy!)
   (b) $\lim_{x \to 2} x = 2$. (Easy!)
   (c) $\lim_{x \to 2} x^2 = 4$. (Harder!)
   (See our earlier limit calculation!)

3. Think about how to use the limit laws to
   prove that if $p$ is a polynomial then for
   every $x_0$,
   $$\lim_{x \to x_0} p(x) = p(x_0).$$
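For exercise 2(c), one standard choice of $\delta$ (an assumption on my part; the notes don't spell it out) is $\delta = \min(1, \epsilon/5)$: if $|x - 2| < 1$ then $|x + 2| < 5$, so $|x^2 - 4| = |x - 2|\,|x + 2| < 5|x - 2| < \epsilon$. A quick Python check of this recipe:

```python
# delta = min(1, eps/5) works for lim_{x->2} x^2 = 4:
# |x - 2| < 1  implies  |x + 2| < 5,  so  |x^2 - 4| < 5|x - 2|.
def delta(eps):
    return min(1.0, eps / 5)

for eps in [1.0, 0.1, 0.001]:
    d = delta(eps)
    # sample points with 0 < |x - 2| < delta
    for x in [2 - 0.9 * d, 2 + 0.9 * d, 2 + 0.5 * d]:
        assert abs(x**2 - 4) < eps
print("delta = min(1, eps/5) works at all sampled points")
```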
You should be comfortable with things like
$$\lim_{x \to 1} \frac{x^3 - 1}{x^2 - 1} = \lim_{x \to 1} \frac{(x - 1)(x^2 + x + 1)}{(x - 1)(x + 1)}
= \lim_{x \to 1} \frac{x^2 + x + 1}{x + 1}
= \frac{\lim_{x \to 1} (x^2 + x + 1)}{\lim_{x \to 1} (x + 1)}
= \frac{3}{2}.$$
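A quick numerical confirmation of this value (a Python sketch, not part of the notes):

```python
# (x^3 - 1)/(x^2 - 1) -> 3/2 as x -> 1, approaching from either side.
def f(x):
    return (x**3 - 1) / (x**2 - 1)

for x in [1.1, 1.01, 0.999]:
    print(x, f(x))

assert abs(f(1.001) - 1.5) < 0.01
```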
What about $\lim_{x \to 1} \dfrac{|x^3 - 1|}{x^2 - 1}$?

Let $f(x) = \dfrac{|x^3 - 1|}{x^2 - 1}$.

If $x > 1$ then
$$f(x) = \frac{x^3 - 1}{x^2 - 1} = \frac{x^2 + x + 1}{x + 1} \approx \frac{3}{2}$$
for $x$ near $1$.

If $x < 1$ then $x^3 - 1$ is negative so
$$f(x) = \frac{-(x^3 - 1)}{x^2 - 1} = -\frac{x^2 + x + 1}{x + 1} \approx -\frac{3}{2}$$
for $x$ near $1$.

In this case $\lim_{x \to 1} f(x)$ doesn't exist. The value
of $f(x)$ doesn't get closer and closer to a
single number as $x$ gets closer and closer to $1$.

On the other hand, if you only sneak up on $1$
from the right, $f(x)$ gets closer and closer to
$\frac{3}{2}$. We say that $f$ has a right hand limit at $1$
and write
$$\lim_{x \to 1^+} f(x) = \frac{3}{2}.$$
Similarly, this $f$ also has a left hand limit at
$1$:
$$\lim_{x \to 1^-} f(x) = -\frac{3}{2}.$$
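Numerically the two different one-sided limits are easy to see (a Python sketch, not part of the notes):

```python
# f(x) = |x^3 - 1|/(x^2 - 1) has different one-sided limits at 1:
# from the right f(x) -> 3/2, from the left f(x) -> -3/2.
def f(x):
    return abs(x**3 - 1) / (x**2 - 1)

right = f(1 + 1e-6)
left = f(1 - 1e-6)
print(right, left)
assert abs(right - 1.5) < 1e-3
assert abs(left + 1.5) < 1e-3
```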
Of course, for some functions, one of these
limits may exist but not the other!

Theorem. Let $f$ be defined on an open
interval containing $a$. Then $\lim_{x \to a} f(x)$ exists if
and only if
$$\lim_{x \to a^+} f(x) \quad \text{and} \quad \lim_{x \to a^-} f(x)$$
both exist, and are equal.

Exercise: Write an $\epsilon$-$\delta$ definition of a
one-sided limit.
Continuity
Definition: Suppose that $f : (a, b) \to \mathbb{R}$ and
that $x_0 \in (a, b)$. We say that $f$ is continuous
at $x_0$ if $\lim_{x \to x_0} f(x) = f(x_0)$.

We saw earlier that if $p$ is a polynomial and
$x_0 \in \mathbb{R}$, then $\lim_{x \to x_0} p(x) = p(x_0)$.
Thus, every polynomial is continuous at every
point.

Note: We try very hard to avoid doing the
limit in this definition using $\epsilon$s and $\delta$s!

Theorem: Suppose that $p$ and $q$ are
polynomials and that $f(x) = \dfrac{p(x)}{q(x)}$ when
$q(x) \ne 0$. If $x_0 \in \mathbb{R}$ and $q(x_0) \ne 0$, then $f$ is
continuous at $x_0$.

Proof.
$$\lim_{x \to x_0} f(x) = \frac{\lim_{x \to x_0} p(x)}{\lim_{x \to x_0} q(x)}
= \frac{p(x_0)}{q(x_0)}
= f(x_0)$$
and hence $f$ is continuous at $x_0$.

Thus, rational functions are continuous
where they are defined.
We'll be interested in the continuity of
functions on an interval, rather than just at a
point.

Definition. Suppose that $f : [a, b] \to \mathbb{R}$. We
say that $f$ is continuous on $[a, b]$ if

1. $f$ is continuous at each point $x_0 \in (a, b)$,

2. $\lim_{x \to a^+} f(x)$ exists and equals $f(a)$, and

3. $\lim_{x \to b^-} f(x)$ exists and equals $f(b)$.

Example. Let $f : [0, 1] \to \mathbb{R}$, $f(x) = \sqrt{x}$.
Then $f$ is continuous on $[0, 1]$. Note that
here $\lim_{x \to 0} f(x)$ isn't even defined, so you really
do need to use the one-sided limits at the
endpoints.

Hold on! How do we know that
$$\lim_{x \to x_0} \sqrt{x} = \sqrt{x_0}?$$
The story so far . . .

$\lim_{x \to \infty} f(x)$ and $\lim_{x \to a} f(x)$ have precise formal
definitions in terms of the size of the error
$|f(x) - L|$.

The formal definitions are needed to
prove theorems that enable us to find
limits more easily.

The formal definitions are only rarely used
to do concrete limit problems.

Only use the formal definition in a test or
exam if it says something like
"Use the $\epsilon$-$M$ definition to prove . . ."

These limit concepts are hard, so don't
panic if you are finding them difficult.

Don't confuse
- The conceptual difficulty of
  understanding the $\epsilon$-$M$ picture,
  $$\forall \epsilon > 0\ \exists M\ \forall x > M,\ |f(x) - L| < \epsilon.$$
- The computational difficulty of finding
  an $M$ for each $\epsilon > 0$.
