The Power Rule says
$$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C,$$
provided that $n \ne -1$. The formula does not apply to
$$\int x^{-1}\,dx = \int \frac{dx}{x}.$$
An antiderivative of $\dfrac{1}{x}$ would have to satisfy
$$F'(x) = \frac{1}{x}.$$
But the Fundamental Theorem implies that if $F(x) = \displaystyle\int_1^x \frac{dt}{t}$, then
$$F'(x) = \frac{1}{x}.$$
Thus, $\displaystyle\int_1^x \frac{dt}{t}$ plays the role of an antiderivative of $\dfrac{1}{x}$.

Define the *natural log* function $\ln$ by
$$\ln x = \int_1^x \frac{dt}{t} \quad\hbox{for}\quad x > 0.$$
By construction, if $x > 0$,
$$\frac{d}{dx} \ln x = \frac{1}{x}.$$
For $x > 1$, $\ln x$ represents the area under $y = \dfrac{1}{t}$ from $1$ to $x$.
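Because $\ln x$ is defined as an integral, you can approximate it numerically and compare with a library logarithm. Here is a minimal sketch (the midpoint rule and the step count are my own choices):

```python
import math

def ln_via_integral(x, n=100_000):
    """Approximate ln(x) = integral of dt/t from 1 to x, by the midpoint rule."""
    h = (x - 1) / n
    return sum(h / (1 + (i + 0.5) * h) for i in range(n))

print(ln_via_integral(2.0))  # close to math.log(2.0) = 0.693147...
```

The approximation agrees with `math.log` to many decimal places, which is a good sanity check that the area definition really computes a logarithm.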

But why is this called a *logarithm*?

You've probably seen *logarithms* used like this:
$$\log_{10} 100 = 2, \quad\hbox{because}\quad 10^2 = 100.$$
It's not clear what $\displaystyle\int_1^x \frac{dt}{t}$ has to do with raising numbers to powers.

Well, for one thing, $\ln$ has many properties you'd expect a logarithm to have. For example,
$$\ln 1 = \int_1^1 \frac{dt}{t} = 0.$$

You'd expect the log of a product to equal the sum of the logs. If $a$ and $b$ are positive numbers, then
$$\ln ab = \int_1^{ab} \frac{dt}{t} = \int_1^a \frac{dt}{t} + \int_a^{ab} \frac{dt}{t}.$$
In the second integral, let $t = au$, so $dt = a\,du$, and $\dfrac{dt}{t} = \dfrac{a\,du}{au} = \dfrac{du}{u}$. When $t = a$, $u = 1$; when $t = ab$, $u = b$. So
$$\int_a^{ab} \frac{dt}{t} = \int_1^b \frac{du}{u} = \ln b.$$
In other words,
$$\ln ab = \ln a + \ln b.$$
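The product rule for logs is easy to spot-check numerically (a quick sketch using Python's `math` library; the sample pairs are arbitrary):

```python
import math

# Spot-check ln(ab) = ln a + ln b for a few positive pairs.
for a, b in [(2.0, 3.0), (0.5, 7.0), (10.0, 10.0)]:
    assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
print("ln(ab) = ln a + ln b holds for the sampled pairs")
```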

In similar fashion, you can verify that
$$\ln \frac{a}{b} = \ln a - \ln b \quad\hbox{and}\quad \ln a^p = p \ln a.$$
Thus, there is some justification in calling $\ln$ a logarithm, because it has the same properties you'd expect logs to have.

It turns out that the whole story is backwards! When you discuss logs
as the opposite of powers, you are actually being a little sloppy. To
define the familiar logs (and exponentials) with mathematical
precision, what you *actually* do is to define $\ln x$
and its inverse $e^x$ *first*, as I've done above.
*Then* you define the other logs and powers using
$\ln x$ and $e^x$.

For example, if $a$ and $b$ are positive numbers and $a \ne 1$, define
$$\log_a b = \frac{\ln b}{\ln a}.$$
It's possible to check that logs base $a$, as I've just defined them, behave the way you'd expect logs to behave.
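For instance, you can compare this definition with familiar values like $\log_{10} 100 = 2$ (a sketch; the helper name and test values are my own):

```python
import math

def log_base(a, b):
    """log_a b, defined as ln b / ln a (requires a > 0, a != 1, b > 0)."""
    return math.log(b) / math.log(a)

print(log_base(10, 100))  # approximately 2
print(log_base(2, 8))     # approximately 3
```

The ratio definition also inherits the product rule automatically, since it is built out of $\ln$.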

Here are some additional properties of $\ln x$.

First,
$$\frac{d}{dx} \ln x = \frac{1}{x} > 0 \quad\hbox{for}\quad x > 0.$$
Therefore, the graph of $y = \ln x$ is increasing for $x > 0$.

Moreover,
$$\frac{d^2}{dx^2} \ln x = -\frac{1}{x^2} < 0 \quad\hbox{for}\quad x > 0.$$
Therefore, the graph of $y = \ln x$ is concave down for $x > 0$.

Next, consider the region under $y = \dfrac{1}{x}$ from $x = 1$ to $x = 4$, together with three inscribed rectangles of width 1 and heights $\dfrac{1}{2}$, $\dfrac{1}{3}$, and $\dfrac{1}{4}$.

The area under the curve from 1 to 4 is $\ln 4$. It is greater than the sum of the areas of the three rectangles, so
$$\ln 4 > \frac{1}{2} + \frac{1}{3} + \frac{1}{4} = \frac{13}{12} > 1.$$
If $n$ is a positive integer, then
$$\ln 4^n = n \ln 4 > n.$$
So if $x > 4^n$, then
$$\ln x > \ln 4^n > n.$$
Since $n$ is an arbitrary positive integer, I can make $\ln x$ arbitrarily large by making $x$ sufficiently large. This proves that
$$\lim_{x \to \infty} \ln x = +\infty.$$
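You can watch this unbounded (but very slow) growth numerically; since $\ln 4 \approx 1.386 > 1$, the value $\ln 4^n$ always exceeds $n$ (a quick sketch):

```python
import math

# ln 4 > 1, so ln(4**n) = n * ln 4 exceeds n for every positive integer n.
for n in range(1, 6):
    print(n, math.log(4.0 ** n))
```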

The graph of $y = \ln x$ passes through $(1, 0)$, is increasing and concave down, and rises without bound as $x \to \infty$.

*Example.* The differentiation formula for $\ln x$
works together with the other differentiation rules
in the usual ways.
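As one illustration (my own example): combining $\dfrac{d}{dx} \ln x = \dfrac{1}{x}$ with the Chain Rule gives $\dfrac{d}{dx} \ln (x^2 + 1) = \dfrac{2x}{x^2 + 1}$, and a finite-difference check confirms it:

```python
import math

def f(x):
    return math.log(x * x + 1)

def fprime(x):
    # Chain Rule: derivative of ln(x^2 + 1) is 2x / (x^2 + 1).
    return 2 * x / (x * x + 1)

h = 1e-6
for x in [0.5, 1.0, 3.0]:
    numeric = (f(x + h) - f(x - h)) / (2 * h)  # central difference
    assert abs(numeric - fprime(x)) < 1e-6
print("chain rule check passed")
```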

If I say that "$\displaystyle\int \frac{dx}{x} = \ln x + C$" --- the derivative
of $\ln x$ is $\dfrac{1}{x}$ --- then $\ln x$ should be defined
wherever $\dfrac{1}{x}$ is defined. Therefore, it is not really correct to
say *without qualification* that "$\displaystyle\int \frac{dx}{x} = \ln x + C$". For $\dfrac{1}{x}$ is defined
for $x \ne 0$, whereas $\ln x$ is only defined for $x > 0$.

It turns out that the correct statement is:
$$\frac{d}{dx} \ln |x| = \frac{1}{x} \quad\hbox{for}\quad x \ne 0.$$
For $x > 0$, this is the same as the old formula. For $x < 0$, $|x| = -x$, so
$$\frac{d}{dx} \ln |x| = \frac{d}{dx} \ln (-x) = \frac{1}{-x} \cdot (-1) = \frac{1}{x}.$$
So the updated antiderivative formula is
$$\int \frac{dx}{x} = \ln |x| + C.$$
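A finite-difference check at negative inputs shows that $\ln |x|$ really does have derivative $\dfrac{1}{x}$ there (a quick numerical sketch):

```python
import math

def ln_abs(x):
    return math.log(abs(x))

h = 1e-6
for x in [-3.0, -0.5, 2.0]:
    numeric = (ln_abs(x + h) - ln_abs(x - h)) / (2 * h)  # central difference
    assert abs(numeric - 1 / x) < 1e-5
print("d/dx ln|x| = 1/x checked at positive and negative inputs")
```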

You can omit the absolute value signs if the quantity inside is never negative. For example, it turns out that
$$\int \frac{x\,dx}{x^2 + 1} = \frac{1}{2} \ln (x^2 + 1) + C.$$

(Deriving this formula requires an integration technique called *substitution*. However, you can check that it's
correct by differentiating $\dfrac{1}{2} \ln (x^2 + 1)$ to get $\dfrac{x}{x^2 + 1}$.) I can omit the absolute values around the
"$x^2 + 1$", because $x^2 + 1$ is always positive.

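Such differentiation checks are easy to automate. A sketch, using $F(x) = \tfrac{1}{2} \ln (x^2 + 1)$ as the candidate antiderivative of $\dfrac{x}{x^2 + 1}$:

```python
import math

def F(x):
    # Candidate antiderivative: (1/2) ln(x^2 + 1); defined for all x.
    return 0.5 * math.log(x * x + 1)

h = 1e-6
for x in [-2.0, 0.0, 1.5]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(numeric - x / (x * x + 1)) < 1e-6
print("F'(x) = x/(x^2 + 1) confirmed numerically")
```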
*Example.*

*Example.* You can use *logarithmic differentiation* to compute derivatives which are
difficult to compute in other ways. For example, suppose you want to
differentiate $y = x^x$.

You can't use the Power Rule, because the exponent is $x$, not a *number*.

You can't use the rule $\dfrac{d}{dx} a^x = a^x \ln a$ (which you'll see
later), because the base is $x$, not a *number*.

Instead, take logs of both sides:
$$\ln y = \ln x^x = x \ln x.$$
Differentiate implicitly:
$$\frac{1}{y} \cdot \frac{dy}{dx} = x \cdot \frac{1}{x} + (\ln x)(1) = 1 + \ln x.$$
Hence,
$$\frac{dy}{dx} = y (1 + \ln x) = x^x (1 + \ln x).$$
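The formula $\dfrac{dy}{dx} = x^x (1 + \ln x)$ can itself be checked by finite differences (a quick sketch):

```python
import math

def y(x):
    return x ** x

def yprime(x):
    # Logarithmic differentiation result: d/dx x^x = x^x (1 + ln x).
    return x ** x * (1 + math.log(x))

h = 1e-6
for x in [0.5, 1.0, 2.0, 3.0]:
    numeric = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    assert abs(numeric - yprime(x)) < 1e-4
print("logarithmic differentiation of x^x checked")
```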

By a similar procedure, you can differentiate complicated products and quotients. For example, to differentiate , take logs of both sides:

To simplify, I used (in order) the following properties of logs:

Differentiate implicitly:

Hence,

*Example.* Compute
.

Let . Taking logs and bringing the power down, I get

Differentiate both sides, using the Chain Rule on the left and the Product Rule (and Chain Rule) on the right:

Multiply both sides by y to clear the fraction on the left, then substitute :

Copyright 2006 by Bruce Ikenaga