Calculus is over three hundred years old, but the modern approach via limits only dates to the early 1800's with Cauchy. Historically, the use of differentials antedates the "rigorous approach" we now take to the derivative.
For example, Carl Boyer notes:
Increments and decrements, rather than rates of change, were the fundamental elements in the work leading to that of Leibniz, and played a larger part in the calculus of Newton than is usually recognized. The differential became the primary notion, and it was not effectively displaced as such until Cauchy, in the nineteenth century, made the derivative the basic concept.
A differential was regarded loosely as an infinitely small nonzero quantity. For example, here is a computation of the derivative of y = x^2 via differentials. Increment x by an infinitely small amount dx, which produces an infinitely small change dy in y. This change is

   dy = (x + dx)^2 - x^2 = 2x dx + (dx)^2.

Divide through by dx:

   dy/dx = 2x + dx.

Since dx is infinitely small, I may neglect it:

   dy/dx = 2x.
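The "neglect the small term" step can be sketched numerically (taking y = x^2 as the example function, an assumption on my part): the quotient dy/dx works out to 2x + dx, and the leftover dx term shrinks as dx does.

```python
# Numerical check of the differential quotient for y = x^2 (the example
# function is an assumption). For y = x^2,
#   (y(x + dx) - y(x)) / dx = 2x + dx,
# so the quotient approaches 2x as dx shrinks.

def y(x):
    return x ** 2

x = 3.0
for dx in (0.1, 0.01, 0.001):
    quotient = (y(x + dx) - y(x)) / dx
    print(dx, quotient)  # 2x + dx = 6 + dx, up to rounding
```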
This approach came to be regarded as imprecise. What does it mean for something to be "infinitely small"? How can something be "infinitely small", but not 0? What can you "neglect"?
Eventually, infinitely small quantities --- infinitesimals --- as well as infinitely large quantities were rehabilitated in the work of the American logician Abraham Robinson. Though calculus can be done using infinitesimals, it is standard practice to use limits and difference quotients instead.
The approach developed by Cauchy, which is more or less the approach that I'll use, makes no reference to "infinitely small" quantities. The definition

   dy/dx = lim as h -> 0 of [f(x + h) - f(x)]/h

does not give separate meanings to dy and dx, so dy/dx isn't a quotient. Nevertheless, people have tried to define dy and dx in sensible ways in order to make the quotient equal the derivative.
Here is one way to do this. Regard dx as an independent variable, and define dy = f'(x) dx. Then formally I have

   dy/dx = f'(x).

This has the following interpretation.
The tangent line at a point (x, f(x)) on the graph of y = f(x) has slope f'(x). If you move from x to x + dx, the tangent line rises by dy = f'(x) dx. On the other hand, the actual change in y is

   Δy = f(x + dx) - f(x).
If dx is small, Δy ≈ dy. Hence,

   f(x + dx) ≈ f(x) + f'(x) dx.

This formula can be used to approximate f(x + dx) from f(x). I'll refer to this procedure as approximation by differentials, or the tangent line approximation.
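As a concrete illustration of the tangent line approximation (the function f(x) = x^3 and the base point x = 2 are my own choices, not from the text), compare the actual change with the differential:

```python
# Tangent line approximation: dy = f'(x) dx approximates the actual
# change Delta_y = f(x + dx) - f(x) when dx is small.
# The cubic and the base point are illustrative choices.

def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

x, dx = 2.0, 0.01
delta_y = f(x + dx) - f(x)   # actual change, approximately 0.120601
dy = fprime(x) * dx          # differential, 0.12
print(delta_y, dy)
```

Shrinking dx (try 0.001) makes the two values agree to more decimal places, which is the content of Δy ≈ dy.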
Example. For a given function f, find: (a) The exact change in f if x goes from 1 to 1.01; (b) The approximate change in f if x goes from 1 to 1.01.
The exact change in f is f(1.01) - f(1).
The approximate change in f is dy = f'(1) dx, where dx = 1.01 - 1 = 0.01.
Example. Suppose the values f(5) and f'(5) are given. Find: (a) The approximate change in f as x changes from 5 to 4.9; (b) The approximate value of f(4.9).

The approximate change in f is dy = f'(5) dx, where dx = 4.9 - 5 = -0.1.

The approximate value of f(4.9) is given by

   f(4.9) ≈ f(5) + dy = f(5) + f'(5)(-0.1).
Example. A differentiable function f has derivative f'. Approximate the change in f that results when x changes from 1 to 1.02.

Here dx = 1.02 - 1 = 0.02. Hence, the approximate change in f is

   dy = f'(1) dx = f'(1)(0.02).
Example. A function is defined implicitly by the equation
Approximate the change in y at the given point, as x changes from 1 to 1.01.
First, compute dy/dx by implicit differentiation:
Set x = 1 and set y to its value at the given point:
The change in x is dx = 1.01 - 1 = 0.01. Therefore, the change in y is approximately

   dy = (dy/dx)(0.01).
Example. Use differentials to approximate √1.01.
Let y = √x. I know that √1 = 1; I'll use differentials to approximate the change in y in going from 1 to 1.01. First,

   dy = (1/(2√x)) dx.

The change in x is dx = 1.01 - 1 = 0.01, and at x = 1,

   dy = (1/(2√1))(0.01) = 0.005.

Hence, √1.01 ≈ 1 + 0.005 = 1.005.
The actual value is 1.004988.
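The arithmetic of this example can be checked directly; here is a sketch of the same computation:

```python
import math

# Differential approximation of sqrt(1.01):
# with y = sqrt(x), dy = dx / (2 * sqrt(x)).
x, dx = 1.0, 0.01
dy = dx / (2 * math.sqrt(x))    # 0.005
approx = math.sqrt(x) + dy      # 1.005
actual = math.sqrt(x + dx)      # 1.004987...
print(approx, actual)
```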
Example. If , then . This is a differential approximation for . For if , then . Use the tangent at : . Then
Since x is an independent variable, I may replace it with u, with the same understanding. This gives the corresponding approximation with u in place of x.
Notice that this implies that for .
Example. The side of a square is measured to be 10 light-years, with an error of 0.2 light-years. Use differentials to approximate the error in the area.
If s is the length of the side, the area is A = s^2. Then dA = 2s ds, so with s = 10 and ds = 0.2, the error in the area is approximated by

   dA = 2(10)(0.2) = 4 square light-years.
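As a sketch in code, the same error estimate (s = 10 and ds = 0.2 are the measured side and its error from the example):

```python
# Error propagation via differentials: A = s^2 gives dA = 2 s ds.
s, ds = 10.0, 0.2
dA = 2 * s * ds
print(dA)  # approximately 4 square light-years
```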
 Carl B. Boyer, The History of the Calculus and Its Conceptual Development. New York: Dover Publications, 1949. [ISBN 0-486-60509-4]
Copyright 2005 by Bruce Ikenaga