An alternative interpretation of probability

In ordinary conversation the word probability is applied not only to variable phenomena but also to propositions of uncertain veracity. The truth of any proposition concerning the outcome of an experiment is uncertain before the experiment is performed. Many other uncertain propositions cannot be defined in terms of repeatable experiments. An individual can be uncertain about the truth of a scientific theory, a religious doctrine, or even about the occurrence of a specific historical event when inadequate or conflicting eyewitness accounts are involved. Using probability as a measure of uncertainty enlarges its domain of application to phenomena that do not meet the requirement of repeatability. The concomitant disadvantage is that probability as a measure of uncertainty is subjective and varies from one person to another.

According to one interpretation, to say that someone has subjective probability p that a proposition is true means that for any integers r and b with r/(r + b) < p, if that individual is offered an opportunity to bet the same amount on the truth of the proposition or on “red in a single draw” from an urn containing r red and b black balls, he prefers the first bet, while, if r/(r + b) > p, he prefers the second bet.
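This betting definition suggests a simple elicitation procedure. The following sketch is a hypothetical illustration in Python: the function `prefers_proposition(r, b)`, a name invented here, stands for the individual's stated preference in a single comparison of bets, and repeated comparisons against urns with 1,000 balls bracket the subjective probability p.

```python
# Hypothetical elicitation of a subjective probability by comparing bets
# against reference urns, following the definition above.

def elicit_subjective_probability(prefers_proposition, total=1000):
    """Bracket p by binary search over urns with `total` balls.

    prefers_proposition(r, b) should return True when the individual
    prefers betting on the proposition to betting on "red in a single
    draw" from an urn with r red and b black balls.
    """
    lo, hi = 0, total              # p lies between lo/total and hi/total
    while hi - lo > 1:
        r = (lo + hi) // 2
        if prefers_proposition(r, total - r):
            lo = r                 # r/(r + b) < p: proposition bet preferred
        else:
            hi = r                 # r/(r + b) > p: urn bet preferred
    return lo / total, hi / total

# Example: simulate an individual whose subjective probability is 0.62.
print(elicit_subjective_probability(lambda r, b: r / (r + b) < 0.62))
# (0.619, 0.62) -- p is bracketed to within 1/1000
```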

An important stimulus to modern thought about subjective probability has been an attempt to understand decision making in the face of incomplete knowledge. It is assumed that an individual, when faced with the necessity of making a decision that may have different consequences depending on situations about which he has incomplete knowledge, can express his personal preferences and uncertainties in a way consistent with certain axioms of rational behaviour. It can then be deduced that the individual has a utility function, which measures the value to him of each course of action when each of the uncertain possibilities is the true one, and a “subjective probability distribution,” which expresses quantitatively his beliefs about the uncertain situations. The individual’s optimal decision is the one that maximizes his expected utility with respect to his subjective probability. The concept of utility goes back at least to Daniel Bernoulli (Jakob Bernoulli’s nephew) and was developed in the 20th century by John von Neumann and Oskar Morgenstern, Frank P. Ramsey, and Leonard J. Savage, among others. Ramsey and Savage stressed the importance of subjective probability as a concomitant ingredient of decision making in the face of uncertainty. An alternative approach to subjective probability without the use of utility theory was developed by Bruno de Finetti.
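To make the decision rule concrete, here is a minimal sketch with invented numbers: a subjective probability distribution over two uncertain states, a utility table giving the value of each action in each state, and the choice of the action that maximizes expected utility.

```python
# Minimal sketch of expected-utility maximization; all numbers are illustrative.

subjective_prob = {"rain": 0.3, "dry": 0.7}     # beliefs about the states

# utility[action][state]: value of taking the action when the state is true
utility = {
    "carry umbrella": {"rain": 5, "dry": 3},
    "leave umbrella": {"rain": -10, "dry": 6},
}

def expected_utility(action):
    return sum(subjective_prob[s] * utility[action][s] for s in subjective_prob)

best = max(utility, key=expected_utility)
print(best, expected_utility(best))     # best action and its expected utility (~3.6)
```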

The mathematical theory of probability is the same regardless of one’s interpretation of the concept, although the importance attached to various results can depend very much on the interpretation. In particular, in the theory and applications of subjective probability, Bayes’s theorem plays an important role.

For example, suppose that an urn contains N balls, r of which are red and b = N − r of which are black, but r (hence b) is unknown. One is permitted to learn about the value of r by performing the experiment of drawing with replacement n balls from the urn. Suppose also that one has a subjective probability distribution giving the probability f(r) that the number of red balls is in fact r, where f(0) +⋯+ f(N) = 1. This distribution is called an a priori distribution because it is specified prior to the experiment of drawing balls from the urn. The binomial distribution is now a conditional probability, given the value of r. Finally, one can use Bayes’s theorem to find the conditional probability that the unknown number of red balls in the urn is r, given that the number of red balls drawn from the urn is i. The result is

f(r|i) = f(r)(r/N)ⁱ(1 − r/N)ⁿ⁻ⁱ/Σₛ f(s)(s/N)ⁱ(1 − s/N)ⁿ⁻ⁱ,

where the sum in the denominator runs over s = 0, 1,…, N. (The binomial coefficient C(n, i) multiplies both the numerator and the denominator and so cancels.)

This distribution, derived by using Bayes’s theorem to combine the a priori distribution with the conditional distribution for the outcome of the experiment, is called the a posteriori distribution.
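The calculation is easy to carry out numerically. The sketch below, which assumes for illustration a uniform a priori distribution and made-up values of N, n, and i, computes the a posteriori distribution directly from the formula above.

```python
from math import comb

def posterior(prior, N, n, i):
    """A posteriori distribution f(r | i) for the number of red balls r.

    prior: list of length N + 1 with prior[r] = f(r), summing to 1.
    The experiment: n draws with replacement yielded i red balls.
    """
    # Binomial likelihood of i red in n draws, given r red balls in the urn
    like = [comb(n, i) * (r / N) ** i * (1 - r / N) ** (n - i)
            for r in range(N + 1)]
    joint = [prior[r] * like[r] for r in range(N + 1)]
    total = sum(joint)                    # denominator in Bayes's theorem
    return [j / total for j in joint]

N, n, i = 10, 20, 14                      # illustrative numbers
prior = [1 / (N + 1)] * (N + 1)           # uniform a priori distribution
post = posterior(prior, N, n, i)
print(max(range(N + 1), key=post.__getitem__))  # most probable r, here 7
```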

The virtue of this calculation is that it makes possible a probability statement about the composition of the urn, which is not directly observable, in terms of observable data: the composition of the sample taken from the urn. The weakness, as indicated above, is that different people may choose different subjective probabilities for the composition of the urn a priori and hence reach different conclusions about its composition a posteriori.

To see how this idea might apply in practice, consider a simple urn model of opinion polling to predict which of two candidates will win an election. The red balls in the urn are identified with voters who will vote for candidate A and the black balls with those voting for candidate B. Choosing a sample from the electorate and asking their preferences is a well-defined random experiment, which in theory and in practice is repeatable. The composition of the urn is uncertain and is not the result of a well-defined random experiment. Nevertheless, to the extent that a vote for a candidate is a vote for a political party, other elections provide information about the content of the urn, which, if used judiciously, should be helpful in supplementing the results of the actual sample to make a prediction. Exactly how to use this information is a difficult problem in which individual judgment plays an important part. One possibility is to incorporate the prior information into an a priori distribution about the electorate, which is then combined via Bayes’s theorem with the outcome of the sample and summarized by an a posteriori distribution.
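As a hedged illustration of this last possibility, the following sketch, with invented numbers throughout, encodes information from past elections as a nonuniform a priori distribution over the composition of the electorate, combines it via Bayes’s theorem with the outcome of a poll, and reports the a posteriori probability that candidate A wins (more than half the “urn” is red).

```python
from math import comb

# Illustrative numbers only: a grid of N + 1 possible compositions.
N = 100                                    # proportion for A measured in percent
poll_n, poll_i = 400, 216                  # poll: 216 of 400 favour candidate A

# A priori distribution suggested by past elections: beliefs concentrated
# near 52 percent for A, encoded here (purely for illustration) as a
# binomial-shaped prior on the grid.
prior = [comb(N, r) * 0.52 ** r * 0.48 ** (N - r) for r in range(N + 1)]

# Binomial likelihood of the poll outcome for each possible composition r
like = [comb(poll_n, poll_i) * (r / N) ** poll_i * (1 - r / N) ** (poll_n - poll_i)
        for r in range(N + 1)]

joint = [p * l for p, l in zip(prior, like)]
total = sum(joint)
post = [j / total for j in joint]

# A posteriori probability that candidate A wins, i.e. that r/N > 1/2
print(sum(post[r] for r in range(N + 1) if r > N / 2))
```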

The law of large numbers, the central limit theorem, and the Poisson approximation

The law of large numbers

The relative frequency interpretation of probability is that if an experiment is repeated a large number of times under identical conditions and independently, then the relative frequency with which an event A actually occurs and the probability of A should be approximately the same. A mathematical expression of this interpretation is the law of large numbers. This theorem says that if X₁, X₂,…, Xₙ are independent random variables having a common distribution with mean μ, then for any number ε > 0, no matter how small,

P{|(X₁ +⋯+ Xₙ)/n − μ| < ε} → 1 as n → ∞. (11)

The law of large numbers was first proved by Jakob Bernoulli in the special case where Xₖ is 1 or 0 according as the kth draw (with replacement) from an urn containing r red and b black balls is red or black. Then E(Xₖ) = r/(r + b), and the last equation says that the probability that “the difference between the empirical proportion of red balls in n draws and the probability of red on a single draw is less than ε” converges to 1 as n becomes infinitely large.
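A quick simulation illustrates Bernoulli’s special case. This sketch, with an arbitrarily chosen urn of 3 red and 7 black balls, tracks how the empirical proportion of red approaches the probability of red as n grows.

```python
import random

random.seed(1)                    # reproducible illustration
p = 3 / 10                        # urn with 3 red and 7 black balls

reds, n = 0, 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000):
    while n < checkpoint:
        reds += random.random() < p    # one draw with replacement
        n += 1
    print(n, reds / n, abs(reds / n - p))   # the gap tends to shrink as n grows
```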

Insofar as an event which has probability very close to 1 is practically certain to happen, this result justifies the relative frequency interpretation of probability. Strictly speaking, however, the justification is circular because the probability in the above equation, which is very close to but not equal to 1, requires its own relative frequency interpretation. Perhaps it is better to say that the weak law of large numbers is consistent with the relative frequency interpretation of probability.

The following simple proof of the law of large numbers is based on Chebyshev’s inequality, which illustrates the sense in which the variance of a distribution measures how the distribution is dispersed about its mean. If X is a random variable with distribution f and mean μ, then by definition Var(X) = Σᵢ(xᵢ − μ)²f(xᵢ). Since all terms in this sum are nonnegative, the sum can only decrease if some of the terms are omitted. Suppose one omits all terms with |xᵢ − μ| < b, where b is an arbitrary given positive number. Each term remaining in the sum has a factor of the form (xᵢ − μ)², which is greater than or equal to b². Hence, Var(X) ≥ b²Σ′f(xᵢ), where the prime on the summation sign indicates that only terms with |xᵢ − μ| ≥ b are included in the sum. Since Σ′f(xᵢ) = P{|X − μ| ≥ b}, Chebyshev’s inequality is this expression rewritten as

P{|X − μ| ≥ b} ≤ Var(X)/b².
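It is instructive to check numerically how conservative the inequality is. The sketch below, using a small made-up distribution, compares the bound Var(X)/b² with the exact tail probability P{|X − μ| ≥ b}.

```python
# Compare Chebyshev's bound Var(X)/b**2 with the exact tail probability
# for a small, arbitrary distribution (illustrative numbers only).

xs = [0, 1, 2, 3, 4]
f  = [0.125, 0.25, 0.25, 0.25, 0.125]   # a distribution on xs

mu  = sum(x * p for x, p in zip(xs, f))                # mean, here 2.0
var = sum((x - mu) ** 2 * p for x, p in zip(xs, f))    # variance, here 1.5

for b in (1, 2):
    tail  = sum(p for x, p in zip(xs, f) if abs(x - mu) >= b)
    bound = var / b ** 2
    print(b, tail, bound)    # the exact tail never exceeds the bound
```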

This inequality can be applied to the complementary event of that appearing in equation (11), with b = ε. The Xs are independent and have the same distribution, so E[(X₁ +⋯+ Xₙ)/n] = μ and Var[(X₁ +⋯+ Xₙ)/n] = Var(X₁)/n, and Chebyshev’s inequality gives

P{|(X₁ +⋯+ Xₙ)/n − μ| ≥ ε} ≤ Var(X₁)/(nε²) → 0 as n → ∞.

This not only proves equation (11), but it also says quantitatively how large n should be in order that the empirical average, n⁻¹(X₁ +⋯+ Xₙ), approximate its expectation to any required degree of precision.

Suppose, for example, that the proportion p of red balls in an urn is unknown and is to be estimated by the empirical proportion of red balls in a sample of size n drawn from the urn with replacement. Chebyshev’s inequality with Xₖ = 1{red ball on the kth draw} implies that, in order that the observed proportion be within ε of the true proportion p with probability at least 0.95, it suffices that n be at least 20 × Var(X₁)/ε². Since Var(X₁) = p(1 − p) ≤ 1/4 for all p, for ε = 0.03 it suffices that n be at least 5,556. It is shown below that this value of n is much larger than necessary, because Chebyshev’s inequality is not sufficiently precise to be useful in numerical calculations.
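A Monte Carlo check makes this conservatism visible. The sketch below, with illustrative parameters and taking the worst case p = 1/2, estimates the actual probability of being within ε = 0.03 at the Chebyshev sample size n = 5,556; it comes out far above the guaranteed 0.95.

```python
import random

random.seed(2)                       # reproducible illustration
p, eps, n, trials = 0.5, 0.03, 5_556, 1_000

hits = 0
for _ in range(trials):
    reds = sum(random.random() < p for _ in range(n))   # n draws
    hits += abs(reds / n - p) < eps                     # within eps of p?
print(hits / trials)                 # close to 1, well above 0.95
```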

Although Jakob Bernoulli did not know Chebyshev’s inequality, the inequality he derived was also imprecise, and, perhaps because of his disappointment in not having a quantitatively useful approximation, he did not publish the result during his lifetime. It appeared in 1713, eight years after his death.