Précis of epistemology/Logical principles
A theory can be identified with the set of all its principles (axioms and definitions) or with the set of all its theorems, because its theorems are the logical consequences of its principles.
Logical principles precisely determine the logical consequence relation. They thus provide the means to formulate all theories. Logic can even be considered a theory of all theories. It is the most fundamental tool for all theorists. But it is not enough to make good theories, because it only teaches how to reason correctly, and one can reason correctly from bad principles. Logic shows how to make all theories, but by itself it does not teach us to recognize the good ones.
A piece of reasoning is logical when all its assertions, except its premises, are obvious logical consequences of the assertions that precede them. In this way a logical reasoning proves that its conclusion is a logical consequence of its premises. Logical principles are fundamental rules that determine all obvious relations of logical consequence, and from there all relations of logical consequence.
Logical consequence and logical possibility
The relation of logical consequence can be defined from logical possibility:
C is a logical consequence of the premises P if and only if there is no logically possible world such that C is false and the P are true.
A logical consequence cannot be false if the premises are true. The relation of logical consequence necessarily leads from truth to truth.
To define a logically possible world, we give ourselves fundamental properties and relations and a set of individuals to which we can attribute these properties and relations. A statement is atomic when it affirms a fundamental property of an individual or a fundamental relation between several individuals. An atomic statement cannot be decomposed into smaller statements. Any set of atomic statements determines a logically possible world for which they are all true and the only true atomic statements (Keisler 1977). A set of atomic statements is never contradictory, because atomic statements do not contain negation.
The definition of the relation of logical consequence from the concept of logically possible world makes it possible to justify rationally all the logical principles. The definition of a logically possible world by a set of atomic statements is therefore the foundation of all logic.
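The definition of a possible world by a set of atomic statements can be sketched in a few lines of Python. The representation of atoms as tuples, and the example predicates, are illustrative choices, not part of the text:

```python
# A logically possible world is determined by the set of its true atomic
# statements. An atomic statement is represented here as a tuple:
# a predicate name followed by the individuals it is applied to.
world = {
    ("man", "Socrates"),
    ("man", "Plato"),
    ("teacher_of", "Socrates", "Plato"),
}

def true_in(world, atom):
    """An atomic statement is true in a world exactly when it belongs
    to the set of atomic statements that defines that world."""
    return atom in world

print(true_in(world, ("man", "Socrates")))                  # True
print(true_in(world, ("teacher_of", "Plato", "Socrates")))  # False
```

Note that no contradiction can arise in such a world: a set of atoms either contains a given atomic statement or it does not.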
The truth of compound statements
Statements about a logically possible world are composed of atomic statements with logical connectors. The main logical connectors are the negation not, the disjunction or, the conjunction and, the conditional if then, the universal quantifier for all x, or every x is such that, and the existential quantifier there exists an x such that.
When a statement is composed from atomic statements with logical connectors, its truth depends only on the logically possible world considered, because the truth of a compound statement depends only on the truth of the statements from which it is composed.
The truth of statements composed with negation, disjunction, conjunction, and the conditional is determined with truth tables:
p | not p
true | false
false | true

p | q | p or q
true | true | true
true | false | true
false | true | true
false | false | false

p | q | p and q
true | true | true
true | false | false
false | true | false
false | false | false

p | q | if p then q
true | true | true
true | false | false
false | true | true
false | false | true
The expression if then is commonly understood with the implicit meaning of a necessary consequence. If p then q means that for one reason or another, q is a necessary consequence of p. The truth table of the conditional gives it a much broader meaning: never p without q. For example If the Earth is motionless then 2 + 2 = 5 is a true statement, according to this truth table. It means the Earth is never motionless without 2 + 2 = 5. Since the Earth is never motionless this statement is always true.
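The reading never p without q can be checked directly against the truth table. A minimal Python sketch, using the standard equivalence of if p then q with not p or q:

```python
def conditional(p, q):
    # Material conditional: "if p then q" is false only when p is true
    # and q is false, i.e. it means "never p without q".
    return (not p) or q

# Print the truth table of the conditional:
for p in (True, False):
    for q in (True, False):
        print(p, q, conditional(p, q))

# "If the Earth is motionless then 2 + 2 = 5": a false antecedent
# makes the conditional true, whatever the consequent.
print(conditional(False, 2 + 2 == 5))  # True
```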
The truth of statements composed with the universal and existential quantifiers is determined by the following two rules:
For all x, p(x) is true when all the statements p(i) obtained from p(x) by substituting a name of an individual i for all occurrences of x in p(x) are true, and false otherwise.
There exists an x such that p(x) is true when at least one statement p(i) obtained from p(x) by substituting a name of an individual i for all occurrences of x in p(x) is true, and false otherwise.
For these two rules to be applied, the domain of individuals with which we form atomic statements must be determined. This is a problem for set theories, because we cannot determine the domain of all sets.
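On a finite domain, the two rules correspond exactly to a conjunction and a disjunction over all individuals. A sketch in Python, with a small hypothetical domain:

```python
domain = [0, 1, 2, 3, 4]  # the domain of individuals must be determined

def forall(p):
    # "For all x, p(x)" is true when p(i) is true for every individual i.
    return all(p(i) for i in domain)

def exists(p):
    # "There exists an x such that p(x)" is true when p(i) is true
    # for at least one individual i.
    return any(p(i) for i in domain)

print(forall(lambda x: x < 5))   # True
print(exists(lambda x: x > 3))   # True
print(forall(lambda x: x > 0))   # False: it fails for the individual 0
```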
In the statements For all x, E(x) or There exists an x such that E(x) the variable x is bound by the quantifier For all x or There exists an x such that. A variable is free in a statement when it is not bound.
First-order logic only allows quantifiers over a domain of individuals. We can also quantify over the domain of all concepts (properties and relations) and thus define second-order logic. But it suffices to consider concepts as individuals to reformulate second-order logic within the framework of first-order logic. This is why first-order logic is the most fundamental and the only one considered in this chapter.
Negation, conjunction, disjunction, the conditional, and the existential and universal quantifiers are the most fundamental logical connectors. But a few others are also important: the biconditional if and only if, the exclusive disjunction (or alternative) either ... or, and the Sheffer connector neither ... nor.
p | q | p if and only if q
true | true | true
true | false | false
false | true | false
false | false | true
The biconditional is in very common use. In particular, definitions are formulated with a biconditional: the defined expression is true if and only if the defining expression is true.
p | q | either p or q
true | true | false
true | false | true
false | true | true
false | false | false
To distinguish it from exclusive disjunction, ordinary disjunction is said to be inclusive: p or q or both.
p | q | neither p nor q
true | true | false
true | false | false
false | true | false
false | false | true
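These three connectors can be computed and tabulated like the others; a sketch (the function names are mine):

```python
def iff(p, q):          # biconditional: p if and only if q
    return p == q

def either_or(p, q):    # exclusive disjunction: either p or q, but not both
    return p != q

def neither_nor(p, q):  # Sheffer connector: neither p nor q
    return (not p) and (not q)

# Print the three truth tables side by side:
for p in (True, False):
    for q in (True, False):
        print(p, q, iff(p, q), either_or(p, q), neither_nor(p, q))
```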
The fundamental rules of deduction
All relations of logical consequence can be produced with a small number of fundamental rules of deduction, starting from trivial, obviously tautological logical consequences, which are given by the rule of repetition:
Any premise included in a list P of premises is a logical consequence of the premises P.
For each logical connector there are two fundamental deduction rules, an elimination rule and an introduction rule (Gentzen 1934, Fitch 1952). Logic looks like a building game. One composes and decomposes statements by introducing and eliminating logical connectors.
We complete these rules with the rule of transitivity of logical consequences:
If C is a logical consequence of the premises Q and if all the premises Q are logical consequences of the premises P then C is a logical consequence of the premises P.
The fundamental rules of deduction are intuitively obvious, as soon as one understands the concepts of logical consequence and possibility, and the determination of the truth of compound statements from that of atomic statements. One can rigorously prove the truth of these intuitions, with the definition of the relation of logical consequence from the concept of logically possible world.
The rule of repetition, the rule of transitivity and the fundamental rules of deduction can be considered as the principles of logical principles, because they are sufficient to justify all the other logical principles.
We will show later that three (or even two) logical connectors are sufficient to define all the others. Six (or even four) fundamental rules of deduction are therefore sufficient to produce all relations of logical consequence, with the repetition rule and the transitivity rule. One can choose for example negation, conjunction and the universal quantifier as fundamental logical connectors. All the deduction rules for the other logical connectors can then be derived from the six rules of these three fundamental connectors.
The statements of a theory are constructed with its fundamental concepts (properties or relations), the names of individuals and the logical connectors. A name of an individual is a constant or a variable, and it can be constructed with functions. x+y, for example, is a name of an individual constructed with the addition function and the variables x and y. A constant is a name that belongs to one particular individual in its own right. A variable is a somewhat paradoxical name: it is a name of an individual that does not name any particular individual. It is used to name any individual without specifying which one, in a certain domain.
Logical rules say that a statement is a logical consequence of other statements. When they contain free variables (ranging over individuals, properties, relations, functions, statements, or finite lists of statements), they are true if and only if they are true in all cases where the free variables are replaced by constants.
The rule of particularization
If i is an individual then S(i) is a logical consequence of For all x, S(x).
x can be any variable of individual. i can be any name of an individual: a constant, a variable, or a compound expression. S(i) is the statement obtained from S(x) by substituting i for all occurrences of x in S(x).
This rule is the most important of all logic, because the power of reasoning comes from the laws with which we reason. Whenever we apply a law to an individual, we learn what the law teaches us and reveal the power of reasoning it gives us.
The rule of generalization
If S(x) is a logical consequence of the premises P and if x is a variable of individual which is not mentioned in these premises then For all x, S(x) is a logical consequence of the same premises.
In this rule as in the following ones, P is a finite list of statements.
An example of the use of this rule is the philosophical, or Cartesian, I. One can say I without making any particular hypothesis on the individual so named. Therefore all that is said about oneself can be applied to all individuals. If, for example, one has proven I cannot doubt that I doubt when I doubt without making any particular assumption about oneself, one can deduce No one can doubt that one doubts when one doubts.
The detachment rule
B is a logical consequence of the two premises A and If A then B.
The rule of hypothesis incorporation
If B is a logical consequence of the premises P and A, then If A then B is a logical consequence of the premises P.
The principle of reduction to absurdity
If B and not B are both logical consequences of the premises P and A, then not A is a logical consequence of the premises P.
The rule of double negation suppression
A is a logical consequence of not not A.
The rule of analysis
A and B are both logical consequences of the unique premise A and B.
The synthesis rule for conjunction
A and B is a logical consequence of the two premises A and B.
The rule of thesis weakening
A or B and B or A are both logical consequences of A.
The synthesis rule for disjunction
If C is a logical consequence of premises P and A and a logical consequence of premises P and B then C is a logical consequence of premises P and A or B.
The rule of direct proof of existence
If i is an individual, then There exists an x such that S(x) is a logical consequence of S(i).
i can be any name of an individual: a constant, a variable, or a compound expression. S(x) is a formula obtained by substituting x for some, not necessarily all, occurrences of i in S(i). x must be a variable of individual which is not mentioned in S(i).
The rule for introducing a new constant
If c is a new constant of individual then S(c) is a logical consequence of There exists an x such that S(x).
c should not be mentioned in the preceding axioms or formulas. S(c) is obtained from S(x) by substituting c for all occurrences of x in S(x).
A remark about the logic of functions: the functions of a theory can always be represented by relations. For example, a function f with one argument can be represented by the binary relation R: Rxy if and only if f(x)=y. A function f with two arguments can be represented by a ternary relation R: Rxyz if and only if f(x,y)=z. The same goes, of course, for functions with more arguments. Functions are also called operators.

By replacing functions with the relations they define, one can always associate with a structure defined with functions an equivalent structure defined only with relations. This is why it is not necessary to mention functions in the definition of logically possible worlds. We can do without functions and reason only with a logic of relations. But it is often more convenient to reason with a theory which allows functions. The preceding rules are formulated in such a way that they are valid both for a pure logic of relations and for a logic of functions. The only difference is in the formation of the names of individuals.

If we have no functions, the names of individuals are variables or fundamental constants. We can even do without fundamental constants by representing them with properties: the constant c is represented by the property P: Px if and only if x=c, which is true only of c. If we proceed in this way, individuals are always named with variables.
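The representation of a function by a relation can be sketched as follows (the domain and the function are illustrative):

```python
domain = [0, 1, 2, 3]

def f(x):
    # A hypothetical one-argument function on the domain.
    return (x + 1) % 4

# The binary relation R representing f: Rxy if and only if f(x) = y.
R = {(x, y) for x in domain for y in domain if f(x) == y}

# Both formulations make the same atomic statements true:
assert all((x, f(x)) in R for x in domain)
# And R relates each x to exactly one y, as a function must:
assert all(sum(1 for y in domain if (x, y) in R) == 1 for x in domain)
print(sorted(R))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```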
Reasoning without hypothesis and the logical laws
The fundamental rules of deduction can be applied even if no hypothesis is made at the beginning. The rule of hypothesis incorporation and the principle of reduction to absurdity make it possible to pass from a reasoning under hypothesis to a reasoning without hypothesis.
The conclusions of a reasoning without hypothesis are universal logical truths, always true whatever the interpretation of the concepts they mention, except for the interpretation of logical connectors. They are called logical laws, or tautologies.
Some examples of logical laws:
Pure tautology: if p then p
Since p is a logical consequence of p according to the repetition rule, if p then p is a logical law according to the rule of hypothesis incorporation.
The principle of non-contradiction: not (p and not p)
p and not p are both logical consequences of p and not p according to the rule of analysis, not (p and not p) is therefore a logical law according to the principle of reduction to absurdity.
The law of the excluded middle: p or not p
A statement p whose meaning is completely determined is necessarily true or false. There is no third possibility.
To present a proof, it is always necessary to specify the hypotheses on which a consequence depends. The rule of shift to the right makes it possible to present formal proofs conveniently: when we introduce a new hypothesis, we shift it to the right. A consequence depends only on the hypotheses that precede it, above it or to its left, not on the hypotheses to its right.
Suppose that the law of the excluded middle may be false:
- (1) Hypothesis: not (p or not p)
  - (2) Hypothesis: p
  - (3) p or not p according to (2) and the rule of thesis weakening.
  - (4) not (p or not p) according to (1) and the rule of repetition.
- (5) not p according to (2), (3), (4) and the principle of reduction to absurdity.
- (6) p or not p according to (5) and the rule of thesis weakening.
- (7) not (p or not p) according to (1) and the rule of repetition.
(8) not not (p or not p) according to (1), (6), (7) and the principle of reduction to absurdity.
p or not p according to (8) and the rule of double negation suppression.
The fundamental alternative: either p or not p
It is the conjunction of the principle of non-contradiction and the law of the excluded middle. Any statement which has a completely determined meaning is true or false, but not both. When a statement is both true and false, or neither true nor false, its meaning is not completely determined: it is true in one sense, false in another, or it is neither true nor false because nothing allows to decide it.
A law discovered by the Stoics: if (if not p then p) then p
For example: if everything is false then something is true (since it would be true that everything is false), therefore something is true.
- (1) Hypothesis: if not p then p
  - (2) Hypothesis: not p
  - (3) p according to (1), (2) and the detachment rule.
  - (4) not p according to (2) and the repetition rule.
- (5) not not p according to (2), (3), (4) and the principle of reduction to absurdity.
- (6) p according to (5) and the rule of double negation suppression.
if (if not p then p) then p according to (1), (6) and the rule of hypothesis incorporation.
All the deduction rules, fundamental or derived, can be translated into logical laws, because if C is a logical consequence of the premises P then if the conjunction of the P's then C is a logical law. For example, if (A and if A then B) then B is a logical law that translates the detachment rule.
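This translation can be checked mechanically: a statement of propositional calculus is a logical law exactly when it is true under every assignment of truth values to its variables. A brute-force sketch, not the formal system described above:

```python
from itertools import product

def tautology(statement, n):
    """Check that a statement with n propositional variables is true
    under all 2**n assignments of truth values."""
    return all(statement(*vals) for vals in product((True, False), repeat=n))

# if (A and if A then B) then B -- the detachment rule as a logical law
print(tautology(lambda a, b: not (a and ((not a) or b)) or b, 2))  # True

# p or not p -- the law of the excluded middle
print(tautology(lambda p: p or not p, 1))  # True

# if (if not p then p) then p -- the law discovered by the Stoics
print(tautology(lambda p: not ((not (not p)) or p) or p, 1))  # True
```

The conditional is written here with the equivalence of if p then q and not p or q.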
The derivation of logical consequences
The fundamental rules of deduction suffice to derive all relations of logical consequence and all logical laws. This is the completeness theorem of first-order logic, proved by Kurt Gödel in his doctoral dissertation (Gödel 1929, which uses a different but equivalent formal system). The fundamental rules of deduction are therefore a complete solution to the old problem, posed but not resolved by Aristotle, of finding a list of all the logical principles.
Let us show for example that If A then C is a logical consequence of If A then B and If B then C.
(1) Hypotheses: If A then B, If B then C
- (2) Hypothesis: A
- (3) B according to (1), (2) and the detachment rule.
- (4) C according to (1), (3) and the detachment rule.
If A then C according to (2), (4) and the rule of hypothesis incorporation.
Another example is the contraposition rule: if not q then not p is a logical consequence of if p then q.
(1) Hypothesis: if p then q
- (2) Hypothesis: not q
  - (3) Hypothesis: p
  - (4) q according to (1), (3) and the detachment rule.
  - (5) not q according to (2) and the repetition rule.
- (6) not p according to (3), (4), (5) and the principle of reduction to absurdity.
if not q then not p according to (2), (6) and the rule of hypothesis incorporation.
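Derived rules like contraposition can also be confirmed semantically, by checking that no assignment of truth values makes the premise true and the conclusion false, which is the definition of logical consequence restricted to propositional logic. A sketch:

```python
from itertools import product

def consequence(premise, conclusion, n):
    """conclusion is a logical consequence of premise when no assignment
    of truth values makes the premise true and the conclusion false."""
    return all(conclusion(*v)
               for v in product((True, False), repeat=n)
               if premise(*v))

# if not q then not p  is a logical consequence of  if p then q:
print(consequence(lambda p, q: (not p) or q,              # if p then q
                  lambda p, q: (not (not q)) or (not p),  # if not q then not p
                  2))  # True

# By contrast, q is not a logical consequence of p:
print(consequence(lambda p, q: p, lambda p, q: q, 2))  # False
```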
The interdefinability of logical connectors
The logical connectors can be defined from each other. For example, the existential quantifier can be defined from the universal quantifier and the negation:
There exists an x such that p means that it is false that every x is such that not p; otherwise formulated, not (for all x, not p).
We can also adopt the opposite definition:
For all x, p means that it is false that there exists an x such that not p, that is, not (there exists an x such that not p).
In the same way we can define the disjunction starting from the conjunction, or the opposite:
p or q means not (not p and not q)
p and q means not (not p or not q)
The conditional can be defined from conjunction or from disjunction:
If p then q means not (p and not q)
If p then q also means q or not p
The biconditional if and only if can be defined from the conditional and the conjunction:
p if and only if q means (if p then q) and (if q then p)
It can also be defined from the other connectors:
p if and only if q means (p and q) or (not p and not q)
p if and only if q means not ( (p and not q) or (not p and q) )
One could also introduce the logical connector neither nor and define all the other connectors from it:
not p means neither p nor p
p and q means neither not p nor not q
p or q means not (neither p nor q)
If p then q means not (neither not p nor q)
p if and only if q means neither (p and not q) nor (not p and q)
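These definitions can be verified by comparing truth tables. A sketch with neither nor as the only primitive connector:

```python
from itertools import product

def nnor(p, q):
    # neither p nor q: the only primitive connector used here
    return (not p) and (not q)

# The other connectors, defined from neither-nor alone:
def not_(p):        return nnor(p, p)
def and_(p, q):     return nnor(not_(p), not_(q))
def or_(p, q):      return not_(nnor(p, q))
def if_then(p, q):  return not_(nnor(not_(p), q))

# Check the definitions against the usual truth tables:
for p, q in product((True, False), repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
    assert if_then(p, q) == ((not p) or q)
print("all definitions agree with the truth tables")
```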
Why does reasoning enable us to acquire knowledge?
When a reasoning is logical, the conclusion cannot provide more information than that already given by the premises. Otherwise the reasoning is not logical, because the conclusion could be false when the premises are true. Logical conclusions are always reformulations of what is already said in the premises. In fact many arguments tell us nothing because the conclusion is only a repetition of the premises, in a slightly different form. We then say that they are tautological. They are variations on the theme "it is so because it is so."
In the precise sense defined by logicians, tautologies are logical laws, the laws which are always true regardless of the interpretation given to their words (logical connectors excepted). When a reasoning is logical, the statement 'if the premises then the conclusion' is always a tautology, as defined by logicians.
Conclusions only repeat what was already said in the premises. A reasoning must be tautological to be logical. But then why do we reason? It seems that reasoning has nothing to teach us.
The power of reasoning comes from the general principles on which it is based. If we reduce logic to elementary propositional calculus (founded on all the logical principles except those with the universal and existential quantifiers), a logic in which statements are never general, because we do not have the universal quantifier, then yes, the tautological character of our reasoning is usually pretty obvious. When it is not, it is only because our logical intuitions are limited. The propositional calculus serves us especially to rephrase our assertions. This can be very useful, because understanding depends on formulation, but this does not explain why reasoning enables us to know what we do not already know.
A statement is a law when it can be applied to many particular cases. It can always be formulated in the following way:
For all x in D, S(x)
In other words :
For all x, if x is in D then S(x)
D is the scope of the law. S(x) is a statement about x.
All statements of the form S(i), where i is the name of an element of D and S(i) is the statement obtained from S(x) by substituting i for x everywhere, are obvious logical consequences of the law. S(i) is a special case of the law.
When we learn a law, we know at the beginning only one or a few special cases. We cannot think of all the special cases, because they are too numerous. Whenever we apply a known law to a special case we have not thought of before, we learn something.
A law is like condensed information. In one statement it determines a wealth of information about all the special cases to which it can be applied. When we reason with laws, what we discover is not said in the premises, it is only implicitly involved. Reasoning enables us to discover all that laws can teach us.
Justification of logic
We recognize a logical reasoning by verifying that it complies with logical principles. But how do we recognize the logical principles? How do we know they are good principles? How can we justify them? Are we really sure that they always lead to true conclusions from true premises?
With the principles that define the truth of compound statements, one can prove that our logical principles are true, in the sense that they always lead from truth to truth. For example, one only has to reason on the truth table of the conditional to prove the truth of the detachment rule.
A skeptic might object that these justifications of logical principles are worthless because they are circular. When we reason on logical principles to justify them, we use the same principles that we have to justify. If our principles were false, they would prove falsehoods and so they could prove their own truth. That logical principles enable us to prove their truth does not therefore really prove their truth, since false principles could do the same.
This objection is not conclusive. We just have to look at the suspected circular proofs to be convinced of their validity, simply because they are excellent and irrefutable. No doubt is allowed because everything is clearly defined and proven. A skeptic can point out correctly that such proofs can convince only those who are already converted. But in this case it is not difficult to be converted, because logical principles just formulate what we already know when we reason correctly.
The circularity of logical principles is particularly apparent for the particularization rule:
For every statement S(x) and every individual i, S(i) is a logical consequence of for all x, S(x). (1)
For example, If Socrates is a man then Socrates is mortal is a logical consequence of For all x, if x is a man then x is mortal. (2)
To pass from (1) to (2), the particularization rule has been applied twice to itself. The statement S(x) is particularized in If x is a man then x is mortal, the individual i is particularized in Socrates.
The paradox of Lewis Carroll
Thanks to the detachment rule, we can deduce B from A and if A then B. A more complete rule should therefore be that we can deduce B from A, if A then B and the detachment rule. But this rule is not yet complete. A more complete rule, but still incomplete, is that we can deduce B from A, if A then B, the detachment rule and the rule that tells us that we can deduce B from A, if A then B and the detachment rule. But there must be another rule that tells us that we can apply the previous rule, and so on to infinity (Carroll 1895).
If the detachment rule were itself a hypothesis that had to be mentioned in our proof, and from which our conclusions were deduced, then our reasoning could never begin, because a second rule would be needed to justify the deductions from the detachment rule, then a third to justify deductions from the second, and so on to infinity. But logical laws are not hypotheses. We always have the right to adopt them as premises, without any other justification than that they are logical laws, because they cannot be false, because they cannot lead us into error.
The logic of identity
The binding problem and the diversity of names of the same being
The problem of the binding of concepts (are two concepts true of the same individual or of different individuals?) is solved by identifying the individuals to whom concepts are attributed. But the diversity of the names of the same being poses a problem: when two concepts are attributed one to x, the other to y, are they bound because they are attributed to the same individual or not? If x = y they are bound, if x is different from y they are not bound.
x=y means that x and y are names of the same being. We need the relation of identity when we cannot infer the diversity of beings from the diversity of names, because the same being can be named in many ways.
Knowing the diversity of the names of the same being can teach us a great deal about it when the names are compound expressions. Aristotle is the best student of Plato means Aristotle = the best student of Plato. "The best student of Plato" is one of the many names of Aristotle.
"The best student of" is the name of a function that associates a teacher with his or her best student. In a general way, we name all beings by giving them simple names and names composed with functions.
The fundamental rules of the logic of identity
Knowing that x=y means that x and y are names of the same being, the principles of reflexivity of identity x=x, of symmetry, if x=y then y=x, and of transitivity, if x=y and y=z then x=z are true by definition, like the principle of indiscernibility of identicals:
If x=y, all that is true of x is also true of y.
If E(x) and x=y then E(y)
for any statement E(x) about x.
The principle of indiscernibility of identicals makes it possible to prove the principle of transitivity. By replacing E(z) with w=z we get:
If w=x and x=y then w=y
It can also be used to deduce the principle of symmetry from the principle of reflexivity, by replacing E(z) with z=x:
If x=x and x=y then y=x
If x=y then y=x
x=x can be understood in two ways: a being is always identical to itself, or a name x must always name the same being.
The identity of individuals in naturally possible worlds
When we reason about the possibilities that are available to us, we reason about the naturally possible arrangements of actual beings, including ourselves. We therefore reason on different possible worlds that contain the same beings. The same individuals exist virtually in many possible worlds.
When one argues about absolute possibilities, there is not much sense in identifying the same individual in different worlds. For example, if one reasons about two different possible material universes, there is no sense in saying that a point or a particle of one is identical to a point or a particle of the other. And although I can imagine that I could have had other destinies, the other versions of me are never really me. I am not responsible for their virtual acts.
A natural being exists in only one naturally possible world. For us, this world is the actual world. But the nature of a natural being is determined by its natural properties, and the nature of natural properties is determined by their place in all naturally possible worlds. That is why the nature of a natural being is determined by its place in all naturally possible worlds even if a natural being exists in a single naturally possible world.
A reasoning on the same individual in several naturally possible worlds can always be replaced by a reasoning about different individuals who have the same natural properties (Lewis 1986, but his theory of possible worlds is different).
The identity of properties and relations
A property or a natural relation is determined by its place in all naturally possible worlds, therefore by its place in a system of axioms which defines the laws of Nature.
More generally, a property or a theoretical relation is determined by its place in a system of axioms which defines a theory.
Two natural properties that are true of the same beings in all naturally possible worlds occupy the same place. They are therefore essentially the same property. The same goes for natural relations. We have therefore justified the principle of extensionality of properties and natural relations:
Two properties or natural relations are identical if and only if they are true of the same beings in all naturally possible worlds.
The same principle of extensionality is obtained for theoretical properties and relations:
Two properties or theoretical relations are identical if and only if they are true of the same beings in all the models of the theory, that is to say in all the logically possible worlds such that its axioms are true.
Isomorphisms and the identity of structures
When we speak of similarity between two individuals, we mean that some of the properties that are attributed to one can be attributed to the other. When we speak of similarity between two systems, the expression 'what is true of one is equally true of the other' can receive a more subtle meaning. We mean that there exists a projection f which makes it possible to replace the individuals x of the first system by individuals f(x) of the second system, in such a way that true statements about the first system are replaced by true statements about the second system. Such a projection is called in mathematics a morphism, or an isomorphism if it is bijective, to say that the two systems have the same form, or the same structure.
The current use of the concept of structure is ambiguous. The structure designates sometimes the object, the system, sometimes its property. Structures have a structure. From a logical point of view, a structure as an object is a logically possible world or part of such a world. A structure as a property can be defined from the equivalence relation x has the same structure as y. This equivalence relation can be defined with the concept of isomorphism:
Two structures (or two systems) have the same structure if and only if they are isomorphic.
An isomorphism between two structures E and F is a bijective function f which replaces the individuals of E by individuals of F so that all fundamental properties and relations are retained. Formally:
If P is a fundamental property, for all x in E, x has the property P if and only if f(x) has the property P.
If R is a fundamental binary relation, for all x and all y in E, xRy if and only if f(x)Rf(y)
The same goes for the fundamental relations between more terms.
(A relation between the elements of E and the elements of F defines an application of E into F when each element of E is connected to a single element of F. An application of E into F is bijective when each element of F is connected to a single element of E. In other words, a bijective function is an application whose inverse is also an application.)
An isomorphism between two structures makes it possible to transform all true statements about one into true statements about the other, by replacing x everywhere by f(x). When two structures are isomorphic, they are models of the same theories. Any system of axioms true of one is necessarily true of the other.
A complex natural being is a natural structure, defined with natural properties and relations. Two isomorphic complex natural beings are essentially similar, naturally indiscernible. They have the same natural properties. Everything that is naturally possible with one is naturally possible with the other. The nature of a complex natural being is its structure. Two isomorphic complex natural beings have the same nature.
The concept of isomorphism is often defined in a more general way. The bijective function f is allowed to replace not only individuals but also properties and relations, always in such a way that true statements about one system are replaced by true statements about the other. When the similarity between systems is defined in this way, it is commonly said that similar systems are analogous and that the projection f is an analogy. An isomorphism can be defined as a bijective analogy.
We can also define the concept of structure in a more general way:
Two structures have the same structure if and only if they are models of the same theory.
With this second definition, a structure as a property is determined by the axioms of a theory. More precisely, different systems of axioms define the same structure when they have the same models, that is, when any model of one is a model of the other.
A theory is categorical when all its models are isomorphic. The fundamental structures of mathematics, the set of natural numbers and the set of real numbers in particular, are determined with categorical theories. A categorical theory forbids any contingency. There is essentially one logically possible world that obeys its principles. The laws of Nature do not determine a categorical theory of Nature. They leave room for contingency.
When a theory is not categorical, different, non-isomorphic, structures or systems may have the same structure, as defined by the theory. For example, we can say of all vector spaces that they have a vector space structure.
Symmetries are automorphisms
An automorphism of a structure E is an internal isomorphism, an isomorphism from E to E.
Every structure has a trivial automorphism, the identity function defined by id(x) = x.
A structure is symmetrical when it has at least one non-trivial automorphism.
A non-trivial automorphism is a symmetry of a structure.
The automorphisms of a structure form a group, in the algebraic sense, because the inverse of an automorphism is an automorphism and because the composite of two automorphisms is also an automorphism.
The group of all the automorphisms of a structure is also called the group of its symmetries. For example, the group of symmetries of a circle, or a disc, is the group of rotations around their center and reflections with respect to a diameter.
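For a finite structure, the group of automorphisms can be found by brute force, testing every permutation of the individuals. A minimal illustrative sketch (the structure chosen here, a 4-cycle, and the name is_automorphism are assumptions, not from the original text):

```python
from itertools import permutations

# Illustrative sketch: find the automorphism group of a small structure
# by brute force.  The structure is a 4-cycle (a "square"); adjacency
# is its fundamental binary relation.
V = [0, 1, 2, 3]
R = {(0, 1), (1, 2), (2, 3), (3, 0)}
R |= {(y, x) for (x, y) in R}  # the relation is symmetric

def is_automorphism(p):
    """p is a tuple giving the image of each element of V."""
    f = dict(zip(V, p))
    return all(((x, y) in R) == ((f[x], f[y]) in R) for x in V for y in V)

autos = [p for p in permutations(V) if is_automorphism(p)]
print(len(autos))  # 8: the dihedral group of the square (4 rotations, 4 reflections)
```

The group property asserted above can be observed directly: the composite of any two permutations in autos is again in autos.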
When there is an automorphism g such that y = g(x), x and y are essentially indistinguishable within the structure, in the sense that any truth about one can be transformed into an equivalent truth about the other.
The equivalence class, or orbit, of an element x of a symmetric structure is the set of all y such that y = g(x) for some automorphism g of the structure.
An equivalence class is a set of elements that are essentially indistinguishable within the structure. For example, all the points of a circle are in the same equivalence class because there is nothing on the circle to distinguish them. All the points of a disc at the same distance from the center are also in the same equivalence class, but different concentric circles are different equivalence classes, because the points are distinguished by their distance to the center.
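Orbits of a finite structure can likewise be computed by collecting, for each element, its images under all automorphisms. An illustrative sketch, assuming a three-element path as the structure: its two endpoints are essentially indistinguishable, while the middle element forms a class of its own.

```python
from itertools import permutations

# Illustrative sketch: compute the orbits (equivalence classes) of a
# small structure, here a three-element path 0-1-2.
V = [0, 1, 2]
R = {(0, 1), (1, 0), (1, 2), (2, 1)}

def is_automorphism(p):
    f = dict(zip(V, p))
    return all(((x, y) in R) == ((f[x], f[y]) in R) for x in V for y in V)

# The only automorphisms of the path are the identity and the reversal.
autos = [dict(zip(V, p)) for p in permutations(V) if is_automorphism(p)]
# The orbit of x is the set of its images under all automorphisms.
orbits = {x: frozenset(g[x] for g in autos) for x in V}
classes = sorted({tuple(sorted(o)) for o in orbits.values()})
print(classes)  # [(0, 2), (1,)]: the endpoints form one class, the middle another
```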
A structure is symmetrical when it contains distinct but essentially indistinguishable elements, because their properties and their relations within the structure determine distinct but equivalent places.
A natural structure is perfectly symmetrical when it contains naturally indiscernible elements such that their relations within the structure give them equivalent places.
A natural structure is imperfectly symmetrical when it contains naturally very similar elements such that their relations within the structure give them equivalent or almost equivalent places.
When a structure contains many constituents, the more symmetrical it is, the easier it is to know it, because we know all the symmetrical parts as soon as we know one.
Mathematical knowledge
All mathematical knowledge can be considered as knowledge about the logically possible worlds.
A theory is consistent, or non-contradictory, or coherent, when no pair of contradictory statements, p and not-p, are both logical consequences of its axioms. Otherwise it is inconsistent, contradictory, incoherent, absurd.
A true theory of a logically possible world is necessarily consistent, since contradictions are false in all logically possible worlds.
A consistent theory is true of at least one logically possible world. This is Gödel's completeness theorem. If we could find a theory that is necessarily false, that is to say false in all logically possible worlds, without it being possible to prove that its axioms lead to a contradiction, this would show that our logic is incomplete, that it is not sufficient to prove all the necessary logical truths. But Gödel proved in his doctoral thesis that our logic is complete (Gödel 1929).
We develop mathematical knowledge by reflecting on our own words. The logically possible worlds are defined by words, with sets of atomic statements. To know these worlds is to know the words that define them. Mathematical worlds are nothing more than what we define. Nothing is hidden because they are our work. We can know everything about them because we determine what they are.
Is mathematical truth invented or discovered?
Both, because inventing is always discovering a possibility.
When we invent, we change the actual but we do not change the space of all possibilities. What is possible is possible whatever we do. We often act to make accessible what was previously less accessible, but it is never about making the impossible possible, we are only changing the possibilities relative to our current situation. When we make the possible impossible, these are still relative possibilities. The space of absolute possibilities, whether logical or natural, does not depend on us.
When we develop mathematical knowledge we discover a possibility of speech.
We acquire mathematical knowledge about finite structures by reasoning on our own words, because these structures are defined with finite sets of atomic statements.
Knowledge about infinite mathematical structures is more difficult to understand. They are defined with infinite sets of atomic statements. We know these infinite sets from their finite definition. Two processes are fundamental to define infinite sets:
- Recursive constructions
We give ourselves initial elements and rules which make it possible to generate new elements from the initial elements or from already generated elements. For example, we can start from the single initial element 1 and use the rule of generating (x + y) from x and y. The infinite set is then defined by saying that it is the unique set that contains all the initial elements and all the elements generated by a finite number of applications of the rules: (1+1), ((1+1)+1), ((1+1)+(1+1)) ...
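The recursive construction above can be sketched for a bounded number of rule applications. The following Python fragment is illustrative, not from the original text: terms are represented as strings, and the function name generate is an assumption.

```python
# Illustrative sketch of a recursive construction: the single initial
# element is '1' and the rule builds '(x+y)' from already generated
# elements x and y.
def generate(depth):
    """All terms obtainable by at most `depth` rounds of rule applications."""
    terms = {'1'}
    for _ in range(depth):
        # apply the rule to every pair of already generated terms
        terms |= {'(' + x + '+' + y + ')' for x in terms for y in terms}
    return terms

print(len(generate(2)))  # 5 terms: '1', '(1+1)', and three terms of depth two
```

The infinite set of the text is the union of generate(n) over all n; only its finite stages can be listed explicitly.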
- The definition of the set of all subsets
As soon as a set x is defined, the power set axiom allows us to define the unique set that contains all the sets included in x. If x is an infinite set, the power set of x is an even larger infinite set.
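For a finite set, the set of all subsets can be constructed explicitly. A minimal Python sketch (the name power_set is illustrative, not from the original text):

```python
from itertools import combinations

# Illustrative sketch: construct the set of all subsets of a finite set.
def power_set(x):
    elems = list(x)
    # subsets of every size r, from the empty set (r = 0) to x itself
    return [set(c) for r in range(len(elems) + 1)
                   for c in combinations(elems, r)]

print(len(power_set({1, 2, 3})))  # 8 = 2**3 subsets, from set() to {1, 2, 3}
```

A set with n elements has 2**n subsets, which is why the power set of an infinite set is strictly larger than the set itself.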