Inductive reasoning means reasoning from known particular instances to other instances and to generalizations. These two types of reasoning belong together because the principles governing one normally determine the principles governing the other. For pre-20th-century thinkers, induction, referred to by its Latin name inductio or its Greek name epagoge, had a further meaning: reasoning from partial generalizations to more comprehensive ones. Nineteenth-century thinkers such as John Stuart Mill and William Stanley Jevons discussed such reasoning at length.

The most representative contemporary approach to inductive logic is that of the German-born philosopher Rudolf Carnap (1891–1970). His inductive logic is probabilistic. Carnap considered certain simple logical languages that can be thought of as codifying the kind of knowledge one is interested in. He proposed to define measures of a priori probability for the sentences of those languages. Inductive inferences are then probabilistic inferences of the kind known as Bayesian.

If P(—) is the probability measure, then the probability of a proposition A on evidence E is simply the conditional probability P(A/E) = P(A & E)/P(E). If a further item of evidence E* is found, the new probability of A is P(A/E & E*). If an inquirer must choose, on the basis of the evidence E, among a number of mutually exclusive and collectively exhaustive hypotheses A1, A2, …, then the probability of Ai on this evidence will be P(Ai/E) = P(E/Ai)P(Ai)/[P(E/A1)P(A1) + P(E/A2)P(A2) + …]. This is known as Bayes’s theorem.
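The updating procedure can be illustrated with a minimal Python sketch. The hypotheses, priors, and likelihoods below are made-up numbers for illustration, and handling the further evidence E* by a second conditionalization assumes that E and E* are independent given each hypothesis.

```python
def bayes_update(priors, likelihoods):
    """Return the posteriors P(Ai/E) for mutually exclusive, collectively
    exhaustive hypotheses Ai, given priors P(Ai) and likelihoods P(E/Ai)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)                    # P(E), by the law of total probability
    return [j / total for j in joint]

# Three hypotheses A1, A2, A3 with equal a priori probabilities.
priors = [1/3, 1/3, 1/3]
posteriors = bayes_update(priors, [0.9, 0.5, 0.1])      # P(E/Ai) for evidence E

# A further item of evidence E* is brought to bear by conditionalizing again,
# with the previous posteriors serving as the new priors (this presupposes
# that E and E* are independent given each Ai).
posteriors = bayes_update(posteriors, [0.8, 0.4, 0.3])  # P(E*/Ai)
print(posteriors)
```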

Reliance on Bayes’s theorem is not peculiar to Carnap; many different thinkers have used conditionalization as the main way of bringing new information to bear on beliefs. What was peculiar to Carnap, however, was that he tried to define, for the simple logical languages he was considering, a priori probabilities on a purely logical basis. Since the nature of the primitive predicates and of the individuals in the model is left open, Carnap assumed that a priori probabilities must be symmetrical with respect to both.

If one considers a language with only one-place predicates and a fixed finite domain of individuals, the a priori probabilities must determine, and be determined by, the a priori probabilities of what Carnap called state-descriptions (others call them diagrams of the model). These are maximal consistent sets of atomic sentences and their negations. Disjunctions of structurally similar state-descriptions are called structure-descriptions. Carnap first considered distributing probabilities evenly among the different structure-descriptions. Later he generalized his quest and considered an arbitrary classification schema (also known as a contingency table) with k cells, which he treated as on a par. A unique a priori probability distribution can be specified by stating the characteristic function associated with the distribution. This function expresses the probability that the next observed individual belongs to cell i, given that nj individuals have already been observed in cell j, for j = 1, 2, …, k. The total number of observed individuals (n1 + n2 + … + nk) is denoted by n.
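The combinatorics can be made concrete with a small sketch. The toy language below, with two one-place predicates F and G and two individuals a and b, is our illustration rather than Carnap’s own example; each individual falls into one of k = 4 cells according to which combination of predicates it satisfies.

```python
from itertools import product
from collections import Counter

# The k = 2**2 = 4 cells of the classification schema for predicates F and G.
cells = ["F&G", "F&~G", "~F&G", "~F&~G"]
individuals = ["a", "b"]

# A state-description assigns each individual to exactly one cell, which fixes
# the truth value of every atomic sentence F(x), G(x) and of its negation.
state_descriptions = list(product(cells, repeat=len(individuals)))
print(len(state_descriptions))        # 4**2 = 16

# A structure-description records only how many individuals land in each cell,
# i.e., it lumps together state-descriptions that differ by a permutation of
# the individuals.
structure_descriptions = {tuple(sorted(Counter(sd).items()))
                          for sd in state_descriptions}
print(len(structure_descriptions))    # C(4 + 2 - 1, 2) = 10
```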

Carnap proved a remarkable result that had earlier been proposed by the Italian probability theorist Bruno de Finetti and the British logician W.E. Johnson: if one assumes that the characteristic function depends only on k, ni, and n, then it must be of the form (ni + λ/k)/(n + λ), where λ is a positive real-valued constant whose value is left open by Carnap’s assumptions. Carnap called the inductive probabilities defined by this formula the λ-continuum of inductive methods. His formula has a simple interpretation. The probability that the next individual will belong to cell i is not the relative frequency of observed individuals in that cell, which is ni/n, but rather the relative frequency of individuals in cell i in a sample in which, to the actually observed individuals, there is added an imaginary set of λ further individuals divided evenly among the cells. This shows the interpretational meaning of λ: it is an index of caution. If λ = 0, the inquirer follows strictly the observed relative frequencies ni/n. If λ is large, the inquirer lets experience change the a priori probabilities 1/k only very slowly.
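A short sketch makes the role of λ visible; the evidence counts are illustrative.

```python
def next_in_cell(n_i, n, k, lam):
    """Carnap's lambda-continuum: probability that the next individual falls
    in cell i, given n_i of n observed individuals in that cell and k cells."""
    return (n_i + lam / k) / (n + lam)

# Evidence: 8 of 10 observed individuals fell in cell i, with k = 4 cells.
for lam in [0, 4, 100]:
    print(lam, next_in_cell(8, 10, 4, lam))
# lam = 0   -> 0.80  (strictly the observed relative frequency n_i/n)
# lam = 4   -> 0.64  (as if 4 imaginary individuals were spread evenly over the cells)
# lam = 100 -> 0.30  (stays close to the a priori probability 1/k = 0.25)
```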

This remarkable result shows that Carnap’s project cannot be completely fulfilled, for the choice of λ is left open by the purely logical considerations on which Carnap relied. The optimal choice depends on the actual universe of discourse being investigated, including its so-far-unexamined part. It depends on the orderliness of the world, in a sense of order that can be spelled out. Caution in following experience should be the greater the less orderly the universe is. Conversely, in an orderly universe, even a small sample can be taken as a reliable indicator of what the rest of the universe is like.

Carnap’s inductive logic has several limitations. Probabilities on evidence cannot be the sole guides to inductive inference, for the reliability of such inferences may also depend on how firmly established the a priori probability distribution is. In real-life reasoning, one often changes prior probabilities in the light of further evidence. This is a general limitation of Bayesian methods, and it is in evidence in the alleged cognitive fallacies studied by psychologists. Also, inductive inferences, like other ampliative inferences, can be judged on the basis of how much new information they yield.

An intrinsic limitation of the early forms of Carnap’s inductive logic was that they could not cope with inductive generalization: in all the members of the λ-continuum, the a priori probability of a strict generalization in an infinite universe is zero, and it cannot be increased by any evidence. The Finnish philosopher Jaakko Hintikka showed how this defect can be corrected. Instead of assigning equal a priori probabilities to structure-descriptions, one can assign nonzero a priori probabilities to what are known as constituents. A constituent in this context is a sentence that specifies which cells of the contingency table are empty and which ones are not. Furthermore, such probability distributions can be determined by simple dependence assumptions, in analogy with the λ-continuum. Hintikka and Ilkka Niiniluoto have shown that a multiparameter continuum of inductive probabilities is obtained if one assumes that the characteristic function depends only on k, ni, n, and the number of cells left empty by the sample. What is changed from Carnap’s λ-continuum is that there are now different indexes of caution for different dimensions of inductive inference.

These different indexes have general significance. In the theory of induction, a distinction is often made between induction by enumeration and induction by elimination. The former kind of inductive inference relies predominantly on the number of observed positive and negative instances. In a Carnapian framework, this means basing one’s inferences on k, ni, and n. In eliminative induction, the emphasis is on the number of possible laws that are compatible with the given evidence. In a Carnapian situation, this number is determined by the number e of cells left empty by the evidence. Using all four parameters as arguments of the characteristic function thus means combining enumerative and eliminative reasoning in the same method. Some of the indexes of caution will then show the relative importance that an inductive reasoner assigns to enumeration and to elimination.
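The eliminative component admits a minimal sketch as well. The representation below of constituents as tuples marking each cell empty or nonempty is our illustration of the idea: evidence that instantiates a cell rules out every constituent declaring that cell empty, so when e cells remain empty in the sample, 2**e constituents are still compatible with the evidence.

```python
from itertools import product

def compatible_constituents(k, occupied_cells):
    """Enumerate the constituents (one 'empty'/'nonempty' verdict per cell)
    compatible with evidence that has instantiated the given cells."""
    return [c for c in product(["empty", "nonempty"], repeat=k)
            if all(c[i] == "nonempty" for i in occupied_cells)]

k = 4
occupied = {0, 1}                 # the sample has instantiated cells 0 and 1
remaining = compatible_constituents(k, occupied)
print(len(remaining))             # 2**e = 4 constituents, with e = 2 empty cells
# Each further cell the evidence fills halves the number of surviving
# constituents: this is the eliminative force of observing new kinds of
# instances, as opposed to more instances of an already-occupied cell.
```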