Directeur de recherche, Professor
I remember having read
the proceedings of a workshop, which took place in Israel around the
seventies and which was devoted to the role of formal logic in
philosophy. The workshop was chaired by Bar-Hillel who, in his opening
speech, launched a challenge to the participants (who included, among
others, Hintikka, Stenius, Montague, Dummett, Soames, Katz and many
others) to evaluate what according to him was one of the greatest
scandals in the intellectual history of humankind: has anybody ever seen
a piece of natural language reasoning validated by an argument in formal
logic? And who was to be blamed for that? I think that the worries
expressed by philosophers and logicians then are more or less the same
that concern us today. Chomsky criticized Bar-Hillel in the fifties for
trying to introduce formal logical relations in linguistics. He used
more or less the same arguments that Quine used against Carnap earlier
on: formal languages and formal relations do not stand by themselves,
and thereby their role cannot but be very limited. Nevertheless, I think
all the parties agreed then as they agree nowadays that relating an
argument or an assertion to a formal schematism (one of the many formal
methods used in philosophy) fills in the gaps left by the
ambiguities, vagueness and vagaries of informal reasoning in natural
language. I shall give just two examples among many.
Take a classical
philosophical problem like that of truth. Tarski's work on the concept
of truth in formalized languages is a beautiful piece of philosophy.
Many logicians before him, among them Frege, Wittgenstein and Ramsey, had
grappled with the same problem, that is, with the problem of defining
truth. Frege and Wittgenstein could only commit themselves to the view
that asserting the truth of a proposition amounts to nothing more than
asserting the proposition itself. When they tried to say more (about the
status of concepts and objects, or about the picture theory of language,
etc.), that was not meant to be taken as a full-fledged semantical
theory, but only as a kind of elucidation, or even as nonsense. Ramsey
almost gave an inductive definition of truth, but finally gave up,
seeing no hope of regimenting natural languages in a structural way that
would make possible the
application of an inductive (compositional) definition. Tarski agreed
with him on this point, but found, nevertheless, a family of languages
for which that kind of definition could work. Those are the formalized
languages, that is, the interpreted languages abstracted from those used
by mathematicians for a long time, like the language of arithmetic or
the language of the calculus of classes, coming in pairs with an
underlying theory with which they had grown together. Once he isolated
them as the object of his considerations, thus literally transforming
them into object languages, he could apply his inductive method and give
a truth-definition for many of them in the metalanguage. Moreover, he
could produce a beautiful result to the effect that the metalanguage had
to be stronger than the object language, and thus show that the truth
predicate cannot be defined in the object languages themselves. The
benefits of Tarski's formal method are undeniable: its setting
(formalized languages) allowed the application of a method
(compositionality) that led to both a positive and a negative result. No
wonder that logicians after him tried to make English look like a formal
language! His negative result bothered, of course, many. Kripke showed
how the truth predicate can be made definable in the object language
itself, provided it is partially interpreted. In the late eighties
Hintikka was convinced that this result could be improved. I was working
with him on the so-called IF languages and at the same time I did some
work with Jouko Väänänen on the model theory of branching quantifiers.
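To fix ideas, the simplest branching (Henkin) quantifier prefix and its standard second-order truth condition can be sketched as follows (a minimal illustration of my own, not drawn from the text itself):

```latex
% The two rows of the prefix are evaluated independently: the choice of
% y may depend only on x, and the choice of v only on u. This is captured
% by the Skolem-function (second-order) reading on the right.
\begin{pmatrix} \forall x \, \exists y \\ \forall u \, \exists v \end{pmatrix} \varphi(x,y,u,v)
\quad\Longleftrightarrow\quad
\exists f \, \exists g \, \forall x \, \forall u \; \varphi(x, f(x), u, g(u))
```

Since the right-hand side quantifies over functions, such prefixes exceed first-order logic, which is why defining truth for languages containing them cannot simply repeat Tarski's construction.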
The idea of putting all these things together into a truth-definability
result for IF-languages came quite naturally. Results like this one have ramifications of all sorts and connect to other kinds of formal methods. Since the time of Hilbert, conservativity arguments have been devised in order to avoid ontological commitments to abstract entities. Hilbert introduced a distinction between real statements (involving only finitistically meaningful entities ) and ideal statements. He was concerned with the status of highly abstract mathematical entities like those figuring in Cantor's set theory. His goal was to show that if we add to a mathematical theory containing only real statement a theory containing ideal statements and if the latter were conservative over the former (it did not prove real statements that the former theory could not prove), then one would not need to bother about the ontological status of the abstract entities. Their role would be purely heuristic. The same idea has been applied over and over again in philosophy. For instance, conservativity arguments have been used to show that mathematics is conservative over physics and that a deflationist theory of truth (i.e. a theory which asserts that everything we need to know about truth is contained in the axioms of the form "It is true that A if and only if A) is conservative over almost any theory. Conservativity is thus supposed to be an indicator of how substantial the ontological commitment over a particular notion is. Roughly, if after adopting the notion in question, I cannot prove more statements than I could prove before endorsing it, then there is a clear sense in which the notion in question is ontologically thin. Using this measure, Tarski's theory of truth is ontologically more committing than disquotational theories and truth in IF languages is certainly more committing than Tarskian truth. True enough, Hilbert’s program did not work in the end, neither did Field's fictionalist program. 
But it is important, I think, to be able to show under what conditions and in what sense something works or does not work.