Features and problems of logic

Three areas of general concern are the following.

Logical semantics

For the purpose of clarifying logical truth, and hence the concept of logic itself, a tool that has turned out to be more important than the idea of logical form is logical semantics, sometimes also known as model theory. By this is meant the study of the relationships of linguistic expressions to the structures in which they may be interpreted and about which they can then convey information. The crucial idea in this theory is that of truth (either absolutely or with respect to an interpretation). It was first analyzed in logical semantics around 1930 by the Polish-American logician Alfred Tarski. In its different variants, logical semantics is the central area in the philosophy of logic. It enables the logician to characterize the notion of logical truth independently of the supply of nonlogical constants that happen to be available for substitution for variables, a supply on which the earlier characterization in terms of logical form had to rely. It also makes it possible to identify the logically true sentences with those that are true in every interpretation (in “every possible world”).
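
The Tarskian idea can be made concrete with a small computational sketch. In the fragment below (a hypothetical illustration, not any standard notation), a first-order formula, encoded as nested tuples, is evaluated for truth in a finite structure under a variable assignment. A sentence counts as logically true if it comes out true in every such structure, not merely in the one supplied.

```python
# A minimal sketch of Tarski-style truth-in-a-structure for first-order
# formulas over a finite domain. The tuple encoding of formulas and the
# example structure are assumptions made for illustration only.

def holds(formula, structure, env):
    """True if `formula` holds in `structure` under the assignment `env`."""
    domain, relations = structure
    op = formula[0]
    if op == "pred":                       # atomic: ("pred", name, variables)
        _, name, args = formula
        return tuple(env[v] for v in args) in relations[name]
    if op == "not":
        return not holds(formula[1], structure, env)
    if op == "and":
        return holds(formula[1], structure, env) and holds(formula[2], structure, env)
    if op == "forall":                     # ("forall", variable, body)
        _, var, body = formula
        return all(holds(body, structure, {**env, var: d}) for d in domain)
    if op == "exists":
        _, var, body = formula
        return any(holds(body, structure, {**env, var: d}) for d in domain)
    raise ValueError(f"unknown connective: {op}")

# "Nothing is both F and not F" -- true in this structure and, indeed,
# in every structure, which is what marks it as logically true.
structure = ({1, 2, 3}, {"F": {(1,), (2,)}})
sentence = ("forall", "x", ("not", ("and", ("pred", "F", ["x"]),
                                    ("not", ("pred", "F", ["x"])))))
print(holds(sentence, structure, {}))      # True
```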

The ideas on which logical semantics is based are not unproblematic, however. For one thing, a semantical approach presupposes that the language in question can be viewed “from the outside”; i.e., considered as a calculus that can be variously interpreted and not as the all-encompassing medium in which all communication takes place (logic as calculus versus logic as language).

Furthermore, in most of the usual logical semantics the very relations that connect language with reality are left unanalyzed and static. Ludwig Wittgenstein, an Austrian-born philosopher, discussed informally the “language-games”—or rule-governed activities connecting a language with the world—that are supposed to give the expressions of language their meanings; but these games have scarcely been related to any systematic logical theory. Only a few other attempts to study the dynamics of the representative relationships between language and reality have been made. The simplest of these suggestions is perhaps that the semantics of first-order logic should be considered in terms of certain games (in the precise sense of game theory) that are, roughly speaking, attempts to verify a given first-order sentence. The truth of the sentence would then mean the existence of a winning strategy in such a game.
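
The game-theoretic proposal can be sketched as follows, under the simplifying assumptions of a finite domain and a negation-free formula, and reusing the hypothetical formula encoding above. The verifier chooses witnesses at existential quantifiers and disjuncts at disjunctions, while the falsifier chooses at universal quantifiers; the sentence is true exactly when the search finds a winning strategy for the verifier.

```python
# A sketch of game-theoretic semantics over a finite domain (a simplifying
# assumption; the games described above are defined for arbitrary
# structures). Returns a winning strategy for the verifier -- a record of
# the choices to make -- or None if no winning strategy exists.

def winning_strategy(formula, structure, env):
    domain, relations = structure
    op = formula[0]
    if op == "pred":                  # game over: verifier wins iff the atom is true
        _, name, args = formula
        return {} if tuple(env[v] for v in args) in relations[name] else None
    if op == "exists":                # verifier's move: pick a witness
        _, var, body = formula
        for d in domain:
            sub = winning_strategy(body, structure, {**env, var: d})
            if sub is not None:
                return {var: d, "then": sub}
        return None
    if op == "forall":                # falsifier's move: verifier must answer every choice
        _, var, body = formula
        answers = {}
        for d in domain:
            sub = winning_strategy(body, structure, {**env, var: d})
            if sub is None:
                return None
            answers[d] = sub
        return {"answers": answers}
    if op == "or":                    # verifier's move: pick a disjunct
        for i in (1, 2):
            sub = winning_strategy(formula[i], structure, env)
            if sub is not None:
                return {"disjunct": i, "then": sub}
        return None
    raise ValueError(f"unsupported connective: {op}")

# "Something is F": the strategy records the verifying witness.
structure = ({1, 2, 3}, {"F": {(2,)}})
print(winning_strategy(("exists", "x", ("pred", "F", ["x"])), structure, {}))
# {'x': 2, 'then': {}}
```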

Limitations of logic

Many philosophers are distinctly uneasy about the wider sense of logic. Some of their apprehensions, voiced with special eloquence by the Harvard University logician Willard Van Orman Quine, are based on the claim that relations of synonymy cannot be fully determined by empirical means. Other apprehensions have to do with the fact that most extensions of first-order logic do not admit of a complete axiomatization; i.e., their truths cannot all be derived from any finite—or recursive (see below)—set of axioms. This fact follows from the important “incompleteness” theorems proved in 1931 by Kurt Gödel, an Austrian (later American) logician, together with their various consequences and extensions. (Gödel showed that any consistent axiomatic theory that comprises a certain amount of elementary arithmetic is incapable of being completely axiomatized.) Higher-order logics are incomplete in this sense, and so are all reasonably powerful systems of set theory. Although a semantical theory can be built for them, they can scarcely be characterized any longer as giving actual rules—in any case complete rules—for right reasoning or for valid argumentation. Because of this shortcoming, several traditional definitions of logic seem to be inapplicable to these parts of logical studies.

These apprehensions do not arise in the case of modal logic, which may be defined, in the narrow sense, as the study of logical necessity and possibility, for even quantified modal logic admits of a complete axiomatization. Other, related problems nevertheless arise in this area. It is tempting to try to interpret a notion such as logical necessity as a syntactical predicate; i.e., as a predicate whose applicability depends only on the form of the sentence claimed to be necessary, rather like the applicability of formal rules of proof. It was shown by the American logician Richard Montague, however, that this cannot be done for the usual systems of modal logic.

Logic and computability

These findings of Gödel and Montague are closely related to the general study of computability, which is usually known as recursive function theory (see mathematics, foundations of: The crisis in foundations following 1900: Logicism, formalism, and the metamathematical method) and which is one of the most important branches of contemporary logic. In this part of logic, functions—or laws governing numerical or other precise one-to-one or many-to-one relationships—are studied with regard to the possibility of their being computed; i.e., of being effectively—or mechanically—calculable. Functions that can be so calculated are called recursive. Several different and historically independent attempts have been made to define the class of all recursive functions, and the resulting definitions have turned out to coincide. The claim that recursive functions exhaust the class of all functions that are effectively calculable (in some intuitive informal sense) is known as Church’s thesis (named after the American logician Alonzo Church).
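
The flavor of such definitions can be conveyed by a small example. The following sketch builds addition and multiplication from the successor function alone by the pattern of primitive recursion; it illustrates only one ingredient of the usual definitions, since full recursiveness also requires an unbounded-search (minimization) operator.

```python
# Arithmetic built up by primitive recursion from the successor function --
# one ingredient of the standard definitions of the recursive functions.
# (An illustration only; general recursiveness also requires minimization.)

def succ(n):
    return n + 1

def add(m, n):
    # add(m, 0) = m;  add(m, n + 1) = succ(add(m, n))
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # mul(m, 0) = 0;  mul(m, n + 1) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(3, 4), mul(3, 4))   # 7 12
```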

One of the definitions of recursive functions is that they are computable by a kind of idealized automaton known as a Turing machine (named after Alan Mathison Turing, a British mathematician and logician). Recursive function theory may therefore be considered a theory of these idealized automata. The main idealization involved (as compared with actually realizable computers) is the availability of a potentially infinite tape.
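
A Turing machine can itself be sketched in a few lines, which makes the idealization explicit. In the fragment below (a hypothetical miniature, not a standard formulation), the potentially infinite tape is modeled by a dictionary that supplies blank cells on demand, and a program is a table mapping the current state and scanned symbol to a symbol to write, a direction to move, and a next state.

```python
# A miniature Turing machine. The potentially infinite tape is modeled by
# a dict that yields blanks (0) for unvisited cells. The sample program is
# a made-up example: it writes three 1s moving right, then halts.

def run(program, state="start", steps=100):
    """program maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    tape, head = {}, 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, 0)            # unvisited cells read as blank
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state, tape

program = {
    ("start", 0): (1, "R", "s1"),
    ("s1", 0):    (1, "R", "s2"),
    ("s2", 0):    (1, "R", "halt"),
}
state, tape = run(program)
print(state, sorted(tape.items()))            # halt [(0, 1), (1, 1), (2, 1)]
```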

The theory of computability prompts many philosophical questions, most of which have not so far been answered satisfactorily. It poses the question, for example, of the extent to which all thinking can be carried out mechanically. Many functions employed in mathematics, including many in elementary number theory, quickly turn out to be nonrecursive. One may therefore wonder whether a mind that thinks about such functions can itself be a mechanism, and whether the possibly nonmechanical character of mathematical thinking has consequences for the problems of determinism and free will. Further work is needed before definitive answers can be given to these important questions.
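
One concrete nonrecursive function is the halting function, which asks whether a given program terminates on a given input; the classic diagonal argument against its computability can be sketched in code. In the fragment below, `halts` stands for a purported halting decider, assumed only for the sake of contradiction.

```python
# The diagonal argument behind such nonrecursiveness results. Suppose, for
# contradiction, that a computable halts(f, x) correctly reported whether
# the call f(x) would terminate. The construction below defeats it.

def make_diagonal(halts):
    """Given a purported halting decider, build its counterexample."""
    def diagonal(f):
        if halts(f, f):       # if the decider says f(f) halts...
            while True:       # ...loop forever instead;
                pass
        return 0              # ...otherwise halt at once.
    return diagonal

# For d = make_diagonal(halts), the call d(d) halts exactly when halts
# says it does not. No computable halts can therefore exist: the halting
# function is nonrecursive.
```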