# Calculus’s Infinitesimal Problem (not so small after all)

When Newton and Leibniz worked through Calculus, they allowed terms approaching zero to drop from the equation. Newton, for instance, contended that the terms “x” and “x+d” (where d was approaching zero) could be accepted as “equal.” This sort of thinking irritated the philosopher Berkeley very much– so much so that he even wrote a treatise against it (The Analyst: Discourse Addressed To An Infidel Mathematician). And many other philosophers and mathematicians were made at least uncomfortable by the idea of treating “almost zero” as something equivalent to zero. With this sort of fuzzy math, they argued, surely the solutions of Newton and Leibniz could not be truly correct! Or… even if one grants that their solutions give the correct answers, then we are still in need of a logical underpinning and justification as to WHY the answers are correct. Calculus, in other words, at the time of Leibniz and Newton, was “non-rigorous”– the boys’ logic was far from airtight. Yes, the math worked, but no one– including the two masterminds of Calculus themselves– could explain why.

Reading of this situation in Carl B. Boyer’s classic work, The History Of The Calculus And Its Conceptual Development, I began to wonder if we humans have EVER really been able to grasp this mathematical phenomenon we’ve been bumping up against for a couple of thousand years now… the infinite series.

In the case of Calculus, we deal with infinite series in which the components of the series grow smaller and smaller, with their summation tending closer and closer to some particular value. This value is known as “the Limit” of that infinite series.
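To see what this looks like in practice, here’s a little illustration of my own (not from Boyer), using the series 1/2 + 1/4 + 1/8 + … Each term is smaller than the one before, and the running total creeps toward its Limit of 1 without ever overshooting it:

```python
# Partial sums of the series 1/2 + 1/4 + 1/8 + ...
# Each new term is half the previous one, and the running
# total edges ever closer to the Limit, 1, from below.
total = 0.0
for n in range(1, 21):
    total += 1 / 2**n
    if n in (1, 2, 5, 10, 20):
        print(n, total)
```

After twenty terms the sum sits within a millionth of 1– close, yet still (as the philosophers would be quick to point out) not actually AT 1.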

Physicists, mathematicians, and others use the Limit routinely when dealing with dynamic situations in which there is a velocity or other rate of change in one value which is dependent upon some other changing value.

But, even today, do we “get” the Infinite Series? The two main ideas of Calculus– finding slopes of curves (using the Derivative) and determining areas contained beneath curves (using the Integral)– both make use of the Infinite Series, and both are, according to Boyer, “extrapolations beyond the thinkable.”

The human mind has always been thrown for a loop by the infinite (would this then be an “infinite loop”?). Addressing the consternation of philosophers when faced with the idea of the infinite straight line, Aristotle basically said it was nothing to get worked-up over. Geometers, he said, “do not need the infinite, and they do not use it. They postulate only that the finite straight line may be produced as far as they wish.”

A similarly perplexing event occurs with the so-called Instantaneous Velocity (a.k.a. Instantaneous Rate Of Change), which is one of the things that a Derivative can tell us. This is a rate of change which occurs at an “instant”– a snapshot of frozen time and space.

For example, for a projectile problem, the Instantaneous Velocity would tell us the velocity of the projectile at a single point… but at a single point, there is no elapsed time, nor is there any movement distance-wise. Since velocity is the distance-covered-per-time– how can we have a velocity with no distance and no time?
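To put a number on this, here’s a sketch of my own (with a made-up projectile whose height is h(t) = -4.9t² + 20t meters): we can’t divide zero distance by zero time, but we CAN compute average velocities over smaller and smaller time intervals around the instant t = 1 second, and watch them home in on one value:

```python
# Height (meters) of a hypothetical projectile after t seconds,
# launched upward at 20 m/s under gravity g = 9.8 m/s^2.
def height(t):
    return -4.9 * t**2 + 20 * t

# Average velocity over the shrinking interval [1, 1 + dt].
# Calculus says the instantaneous velocity at t = 1 is
# h'(1) = -9.8 * 1 + 20 = 10.2 m/s; the averages approach it.
for dt in (1.0, 0.1, 0.01, 0.001):
    average = (height(1 + dt) - height(1)) / dt
    print(dt, average)
```

No single interval here is an “instant”– the Instantaneous Velocity of 10.2 m/s is the Limit those shrinking averages approach.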

Boyer calls Instantaneous Velocity “an intellectual– not an empirical– concept,” adding that it is “a purely numerical notion.”

Here’s another perplexing situation… We accept as the final sum of an Infinite Series the Limit it approaches. But how can the sum of an Infinite Series be said to EQUAL its Limit if said sum is thought to never actually REACH its Limit? We are left with the cognitively dissonating situation in which the sum of the Infinite Series both does and does NOT equal its Limit.

Boyer credits Gregory Of St. Vincent as being the first person to explicitly state that an Infinite Series “defines in itself a magnitude which may be called the Limit.” However, even Greg seemed to be saying two things at once. At one point, he calls the Limit “the terminus of the progression.” But elsewhere he states that the series “does NOT attain” its end, “even if continued to infinity”– instead, the sum of the series will approach that terminus “more closely than any given interval.”

Consider the Integral, which gives us the area beneath a curve by dividing the area into an infinite series of rectangles and taking their Limit (or the value that the sum of the areas of those rectangles approaches). In this way, the area beneath a curve can be said to match the sum of the infinite series of rectangles contained beneath it. But to the man known to mathematical scholars simply as The Calculator (Richard Suiseth), this was a logical fallacy… One cannot compare a finite value (the area) to an infinity (the series of rectangles).
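Here’s a quick sketch of my own (not from Boyer) of what the Integral is doing, using the curve y = x² between 0 and 1. The exact area beneath it is 1/3, and the rectangle sums close in on that value as the rectangles get thinner:

```python
# Approximate the area under y = x^2 on [0, 1] using n rectangles
# of width 1/n, each sampled at its left edge. The exact area,
# given by the Integral, is 1/3.
def rectangle_area(n):
    width = 1 / n
    return sum((i * width) ** 2 * width for i in range(n))

for n in (10, 100, 1000, 10000):
    print(n, rectangle_area(n))
```

Every finite batch of rectangles falls short of 1/3; only the Limit of the infinite series of them “equals” the area– which is precisely the comparison The Calculator refused to allow.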

As for Newton and Leibniz, the men who brought these matters to a head (infinities have been perplexing men since at least the time of Zeno’s Paradoxes thousands of years ago)– they shrugged their shoulders at objections that their method zeroed-out infinities; they kept right on calculating and producing wonderfully precise, real-world answers. Johann (anglicized as “John”) Bernoulli justified dropping infinitesimal values by declaring that any quantity diminished or increased by an infinitely small quantity is neither diminished nor increased.

And yet, to my understanding, when performing the operations of Calculus, you have to be careful as to WHEN you allow these practically-zeroes to actually become zero… For example, if you’re dealing with the Derivative, the change-in-y per change-in-x… if you allow the change-in-x to equal zero right off the bat, then you’ll be stuck with the disallowed fraction 0/0.
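A concrete sketch of the trap (my example, not one from Boyer): take y = x². Its difference quotient is ((x+h)² − x²)/h. Substitute h = 0 right away and you get 0/0; simplify FIRST to 2x + h, and only then let h shrink, and out pops the clean answer 2x:

```python
# The difference quotient of y = x^2 at a point x, with step h.
# Algebraically it simplifies to 2x + h, so as h shrinks the
# quotient approaches 2x -- but setting h = 0 up front
# produces 0/0, which Python rejects just as algebra does.
def difference_quotient(x, h):
    return ((x + h)**2 - x**2) / h

for h in (1.0, 0.1, 0.001):
    print(h, difference_quotient(3, h))  # approaches 2 * 3 = 6

try:
    difference_quotient(3, 0)
except ZeroDivisionError:
    print("h = 0 right off the bat: division by zero")
```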

Leibniz did attempt once or twice to talk himself out of the infinite corner. He once stated that if infinitesimals bother people so much, they should just think of them– not as representing the infinitely small– but as standing for merely “incomparable” values. Furthermore, they can rest assured in the knowledge that any error produced by dropping this incomparable value would “be of no consequence” for it would be “less than any given magnitude.” As someone once said, any undetectable error is no error.

This problem of infinity has led modern mathematicians to a bizarre (to my mind) solution. Unable to find any logical way out of the situation, mathematicians simply redefined their terms. Specifically, they have redefined what a “number” is.

To start with, as to this annoying problem of whether the sum approached by an Infinite Series of diminishing values (the Limit) does or does not ever truly EQUAL its Limit, mathematicians simply decided that it would be better if it did. And to make certain of it, they re-defined “number” such that the infinite sequence IS the number.

Boyer gives us the example of the infinitely repeating number 0.99999… Under the new definition of number, the question, Does it ever reach one?, “is without logical meaning.” The infinite series 0.99999… IS “one.”
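You can watch the collision between the old intuition and the new definition by adding up the digits yourself (a sketch of mine, not Boyer’s):

```python
# Partial sums of 9/10 + 9/100 + 9/1000 + ...
# Every partial sum falls short of 1, yet the shortfall
# shrinks below any given magnitude -- which is what the
# modern definition means by flatly declaring 0.999... IS 1.
total = 0.0
for n in range(1, 16):
    total += 9 / 10**n

print(total)      # within a hair of 1
print(1 - total)  # the ever-vanishing shortfall
```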

Basically, when mathematicians were faced with the choice of maintaining the internal logic of their system or of jettisoning reality, they chose to jettison reality. Number became divorced from geometrical quantity. “It is not magnitude which is basic, but order,” writes Boyer. Today, “mathematics is neither a description of nature nor an explanation of its operation.”

My oh my… Poor Galileo was sooo wrong! Instead of describing nature, says Boyer, mathematics has become “merely the symbolic logic of possible relations” determining “what conclusions will follow logically from given premises.”

As Boyer describes it, it sounds as if the old-fashioned concept of numbers (with clearly defined “edges”), has been replaced by a fuzzy series.

Considering the number “2,” Boyer states that its magnitude is no longer its “essential characteristic.” The essence of the number 2 is “its place in the ordered aggregate of real numbers.”

[You may want to stop reading here… From here down, I just bitch about not being able to follow Boyer’s explanations of what a “number” is under the modern definition]…

Notice here that he speaks of the ordered “aggregate.” This confuses me. Sounds as if he’s saying that the number “2” carries an infinite baggage train behind it, one made up of all the numbers behind it.

Does he mean, for example, that if we are solving “3 minus 2” we are technically subtracting “2, 1, 0, -1, etc., and all numbers between them” from “3, 2, 1, 0, -1, etc., and all numbers between them,” leaving only the segment from 2 to 3 as the answer, and calling it “one”?

Does he mean that if the number “3” were signified on a number-line, then all the line to the left of, and including, the mark “3” would be shaded?

In another place, Boyer describes the number “square root of two” as “the ordered aggregate of all rational numbers whose squares are less than two.” Here, it sounds as if “square roots” form a FAMILY of numbers unto themselves. Additionally, it doesn’t help me much that, in his definition of the “number” square-root-of-two, he uses the word “number.” It’s a very poor definition which uses the supposedly defined word as part of its own definition.
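For what it’s worth, here is my own attempt (a sketch only, and I may be misreading Boyer) to make that “aggregate” concrete: collect every rational number q with q² less than 2. No single member of the collection equals the square root of two, but the collection has members squeezed up against it as tightly as you please– and the modern claim is that the collection ITSELF is the number:

```python
from fractions import Fraction

# Membership test for the "aggregate" said to define sqrt(2):
# all rational numbers q whose squares are less than two.
def in_aggregate(q):
    return q * q < 2

# March up toward sqrt(2) from below in steps of 1/10^k.
# Every stopping point is a member of the aggregate; none
# of them IS sqrt(2), but they crowd arbitrarily close to it.
for k in range(1, 5):
    step = Fraction(1, 10**k)
    q = Fraction(0)
    while in_aggregate(q + step):
        q += step
    print(q, float(q))
```

The printed values (1.4, 1.41, 1.414, 1.4142) are the familiar decimal approximations– each a rational member of the aggregate, none of them the irrational number itself.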

Boyer states that Bertrand Russell believed that an Irrational Number (endless decimal) is a segment of a Rational Number, and that (quoting Boyer): “according to this view, there is no need to create the Irrational Numbers.”

I confess, this explanation leaves me as ignorant of the modern technical definition of “number” as I was before Boyer’s book.
