Mathematical Proof and the Principles of Mathematics/History/After Euclid

For 2000 years after Euclid, while mathematics as a whole advanced a great deal, progress in terms of the axiomatic method was very slow. Greek culture began to decline and was eventually subsumed into the Roman Empire. The Romans weren't as interested in geometry as their predecessors, and were even less interested in its logical structure. When Rome fell, most of Europe entered a dark age, but progress in mathematics continued in the Arab world and on the Indian subcontinent. (It should be mentioned that mathematics developed independently in other parts of the world, but lack of communication meant that this knowledge wasn't to play an important part in our story.)

Perhaps the most important advance in practical mathematics during this period was the invention of decimal arithmetic. This entered Europe as scientific knowledge began to trickle in from the Arabs, but only fully displaced the existing Roman numeral system when the printing press came into use.

Conception of numbers

While knowledge of geometry expanded during this period, a bigger change came in how numbers were understood. It's difficult to imagine a world where the concept of number is different from the one we're familiar with today. But if you look at the numerical parts of The Elements, it's easy to tell from both its structure and phrasing that Euclid lived in such a world.

First, the Greeks had a very different notion of ratio than we have. To them, a ratio only existed between two like quantities, for example two lengths. The idea of defining speed as the ratio of distance to time would have made no sense to them. The statement that the Greeks knew about irrational numbers is only partially true. They knew about incommensurable ratios, which correspond to irrational numbers in the modern way of thinking, but they didn't consider these ratios to be numbers. They did have division in the sense of the opposite of multiplication, but this was a different concept to them than evaluating a ratio.

More subtly, they thought of numbers as multiples of a given unit. This unit might be divided into a number of smaller units, from which you might get fractional numbers of the original unit. But numbers themselves were always whole when expressed in terms of a sufficiently small division of the original unit. In modern terminology, numbers to them were always rational. Since any two numbers were quantities of the same type, Euclid did consider it possible to have a ratio between them, but this wasn't really the same as dividing one number by another.

They didn't have concepts for zero or negative numbers. In fact it's even doubtful whether they considered one a true number since they thought of number as synonymous with multitude.

Symbolism

Another innovation during this period was the invention of symbols to represent mathematical concepts. Most of the notation in use today dates from the Renaissance. For example:

  • The equals sign (=) was invented by Robert Recorde in 1557.
  • The plus and minus signs (+ and −) are first found in 1489 in a book by Johannes Widmann.

New types of numbers

Negative numbers and zero were imported into Europe from India via the Arabs. Despite qualms about whether or not they represented any real concept, and questions about what to do if the answer to a problem came out negative, they were accepted because they made algebra much simpler and more convenient. Not that they didn't misbehave on occasion. For example, zero was fine as long as you stuck to addition, subtraction and multiplication, but what about an expression like (in modern notation) a/0? Did it represent the infinite?

As for negative numbers, they were fine in terms of the operations of arithmetic, once people figured out the rather strange rule that a negative times a negative is positive. But trouble came when you tried to compare them in size. It was a stretch, but it was possible to accept a negative as meaning something less than nothing. Yet if -1 < 0 < 1, then dividing 1 by the smaller quantity -1 ought to give a larger result than dividing it by 1, so 1/-1 must be greater than 1. But 1/-1 is just -1, so -1 = 1/-1 > 1 > -1, which is impossible.

It was no good appealing to intuition for help with these issues, since zero and negatives were not intuitive to start with. And Euclid was no help, since he dealt only with positive numbers and ratios. So the usual practice was to close one's eyes, think of England, and carry on until you got the answer. Even with the apparent problems, the answer you got was probably correct, which introduced the grandest paradox of all: how can incorrect methods produce correct results?

The new types of numbers kept coming though. By combining negatives with square roots you got expressions like √-1, which may be less familiar now than the so-called "real" numbers, but at the time admitting them was not much worse than admitting negatives in the first place. Again there was some misbehavior, for example -1 = (√-1)×(√-1) = √((-1)×(-1)) = √1 = 1. But again the convenience, and the fact that the results you got were still correct, meant that such issues were ignored. In this case the convenience came from their use in solving cubic equations. The method of solving them involved extracting square roots, and in many cases the quantity under the radical was negative. Despite this, when the calculations were carried through and a solution was found, it checked with the original equation.
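To see how this worked in practice, here is a sketch of the classic illustration, usually credited to Bombelli (it isn't described above, but it is representative of the difficulty). Applying Cardano's formula to the cubic x³ = 15x + 4 forces a square root of a negative number to appear, even though the equation has the perfectly ordinary root x = 4:

\[
x^3 = 15x + 4, \qquad x = \sqrt[3]{2 + \sqrt{-121}} + \sqrt[3]{2 - \sqrt{-121}}.
\]

Calculating boldly with √-1, one finds (2 + √-1)³ = 2 + 11√-1 = 2 + √-121, and likewise (2 - √-1)³ = 2 - √-121, so

\[
x = \left(2 + \sqrt{-1}\right) + \left(2 - \sqrt{-1}\right) = 4,
\]

which does indeed check in the original equation. The impossible quantities appear only in the intermediate steps and cancel at the end.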

With the new methods of computation, especially the introduction of decimal fractions, the distinction between a ratio and a quotient was disappearing. The concept of a quantity as the combination of a number and a unit of measurement was approaching its modern form. But this meant that the geometric interpretation of irrational numbers was no longer valid, and therefore their logical basis, so carefully constructed by the Greeks, no longer held. The invention of logarithms introduced irrationals with no geometric or algebraic basis, which were nevertheless extremely useful.

The Analyst

The invention of calculus used yet another new kind of number: infinitesimals in Leibniz's formulation and fluxions in Newton's. Fluxions and infinitesimals are related but different concepts, and both are used to pull off a kind of sleight of hand. To find an instantaneous rate of change or the slope of a tangent to a curve, you compute the quotient of two very small quantities. One way to view what Newton and Leibniz did is that they simplified the result of this division using the assumption that the two quantities were, in fact, zero. So in one part of a calculation the quantities were not zero, but in another part of the calculation they were zero. For infinitesimals, Leibniz imagined the quantities were infinitely small but somehow not zero. That way you could take them to be nonzero when you did the division, but then take them to be too small to make any difference in the final result later on. Fluxions did the same kind of thing in a different way.

To compute areas, infinitesimals were used in an even more suspicious way. A region was divided into an infinite number of infinitely thin strips, and the areas of the strips were added up to compute the total area. Despite the philosophical difficulties, this approach produced correct answers and was much easier than earlier methods. As with the other innovations, the usefulness of calculus led most to overlook its rather questionable foundation.
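To make the sleight of hand concrete, here is the tangent calculation for the curve y = x², written as a sketch in modern notation rather than in Newton's or Leibniz's own terms. The increment dx is treated as nonzero when the division is performed and as negligible when the answer is read off:

\[
\frac{(x + dx)^2 - x^2}{dx} = \frac{2x\,dx + (dx)^2}{dx} = 2x + dx,
\]

after which dx is discarded to give the slope 2x. It is exactly this double treatment of dx (nonzero in one step, negligible in the next) that critics found objectionable.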

But for many this was the last straw, and open criticism of the way calculus was formulated began to appear. Perhaps the most prominent critic was George Berkeley who, in 1734, published a pamphlet called The Analyst. Berkeley seems to have been motivated to write this by a mathematician who described his arguments for the existence of God as being less than rigorous. Berkeley's response was in the form of a tu quoque argument: mathematics itself, and calculus in particular, did not live up to the standard of rigor the mathematician required of Berkeley.

It was becoming clear that the infrastructure of mathematics was in serious need of repair. But it was not clear how the problems could be fixed, and the issue remained unresolved for over a century.