Conditional expectation and least squares prediction

An important problem of probability theory is to predict the value of a future observation Y given knowledge of a related observation X (or, more generally, given several related observations X1, X2,…). Examples include predicting the future course of the national economy or the path of a rocket, given its present state.

Prediction is often just one aspect of a “control” problem. For example, in guiding a rocket, measurements of the rocket’s location, velocity, and so on are made almost continuously; at each reading, the rocket’s future course is predicted, and a control is then used to correct its future course. The same ideas are used to automatically steer large tankers transporting crude oil, for which even slight gains in efficiency result in large financial savings.

Given X, a predictor of Y is just a function H(X). The problem of “least squares prediction” of Y given the observation X is to find that function H(X) that is closest to Y in the sense that the mean square error of prediction, E{[Y − H(X)]²}, is minimized. The solution is the conditional expectation H(X) = E(Y|X).
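
One way to see why the conditional expectation is optimal is the following decomposition, sketched here under the assumption that all the expectations involved are finite:

```latex
\[
E\{[Y - H(X)]^2\}
  = E\{[Y - E(Y \mid X)]^2\} + E\{[E(Y \mid X) - H(X)]^2\} .
\]
```

The cross term 2E{[Y − E(Y|X)][E(Y|X) − H(X)]} vanishes, as can be seen by first conditioning on X, since Y − E(Y|X) then has conditional mean zero. The first term on the right does not depend on H, so the total error is smallest when H(X) = E(Y|X).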

In applications a probability model is rarely known exactly and must be constructed from a combination of theoretical analysis and experimental data. It may be quite difficult to determine the optimal predictor, E(Y|X), particularly if instead of a single X a large number of predictor variables X1, X2,… are involved. An alternative is to restrict the class of functions H over which one searches to minimize the mean square error of prediction, in the hope of finding an approximately optimal predictor that is much easier to evaluate. The simplest possibility is to restrict consideration to linear functions H(X) = a + bX. The coefficients a and b that minimize the restricted mean square prediction error E{(Y − a − bX)²} give the best linear least squares predictor. Treating this restricted mean square prediction error as a function of the two coefficients (a, b) and minimizing it by methods of the calculus yields the optimal coefficients: b̂ = E{[X − E(X)][Y − E(Y)]}/Var(X) and â = E(Y) − b̂E(X). The numerator of the expression for b̂ is called the covariance of X and Y and is denoted Cov(X, Y). Let Ŷ = â + b̂X denote the optimal linear predictor. The mean square error of prediction is E{(Y − Ŷ)²} = Var(Y) − [Cov(X, Y)]²/Var(X).
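
As a numerical illustration, the sketch below (Python with NumPy; the simulated linear-plus-noise model, the sample size, and the variable names are illustrative assumptions, not part of the discussion above) estimates â and b̂ from simulated (X, Y) pairs and compares the empirical mean square error of Ŷ with Var(Y) − [Cov(X, Y)]²/Var(X).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate related observations; the linear-plus-noise model below is an
# arbitrary illustrative choice, not something specified in the discussion.
n = 100_000
X = rng.normal(loc=2.0, scale=1.5, size=n)
Y = 1.0 + 0.8 * X + rng.normal(scale=0.5, size=n)

# Best linear least squares predictor: b_hat = Cov(X, Y)/Var(X) and
# a_hat = E(Y) - b_hat * E(X), with moments estimated from the sample.
cov_xy = np.cov(X, Y, bias=True)[0, 1]
var_x = X.var()
b_hat = cov_xy / var_x
a_hat = Y.mean() - b_hat * X.mean()

# Mean square error of prediction, computed two ways: directly from the
# residuals and from Var(Y) - Cov(X, Y)^2 / Var(X).
Y_hat = a_hat + b_hat * X
mse_empirical = np.mean((Y - Y_hat) ** 2)
mse_formula = Y.var() - cov_xy**2 / var_x

print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
print(f"empirical MSE = {mse_empirical:.4f}, formula = {mse_formula:.4f}")
```

With these (arbitrary) choices the two mean square error values agree, and the estimated coefficients land close to the intercept and slope used to generate the data.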

If X and Y are independent, then Cov(X, Y) = 0, the optimal predictor is just E(Y), and the mean square error of prediction is Var(Y). Hence, |Cov(X, Y)| is a measure of the value X has in predicting Y. In the extreme case that [Cov(X, Y)]² = Var(X)Var(Y), Y is a linear function of X, and the optimal linear predictor gives error-free prediction.
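
Equivalently, writing ρ for the correlation coefficient of X and Y (a normalized form of the covariance, introduced here only to make the comparison compact), the two extremes correspond to ρ = 0 and ρ² = 1:

```latex
\[
\rho = \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}},
\qquad
E\{(Y - \hat{Y})^2\} = \operatorname{Var}(Y)\,\bigl(1 - \rho^2\bigr),
\]
```

so the mean square error of the best linear predictor falls from Var(Y) to 0 as ρ² increases from 0 to 1.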

There is one important case in which the optimal mean square predictor actually is the same as the optimal linear predictor. If X and Y are jointly normally distributed, the conditional expectation of Y given X is just a linear function of X, and hence the optimal predictor and the optimal linear predictor are the same. The form of the bivariate normal distribution as well as expressions for the coefficients â and b̂ and for the minimum mean square error of prediction were discovered by the English eugenicist Sir Francis Galton in his studies of the transmission of inheritable characteristics from one generation to the next. They form the foundation of the statistical technique of linear regression.
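
For jointly normally distributed X and Y, the conditional expectation takes the following explicit form (a standard formula, stated here to make the claimed coincidence concrete):

```latex
\[
E(Y \mid X) = E(Y) + \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)}\,[X - E(X)]
            = \hat{a} + \hat{b}\,X ,
\]
```

which is exactly the optimal linear predictor Ŷ derived above.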

The Poisson process and the Brownian motion process

The theory of stochastic processes attempts to build probability models for phenomena that evolve over time. A primitive example appearing earlier in this article is the problem of gambler’s ruin.

The Poisson process

An important stochastic process described implicitly in the discussion of the Poisson approximation to the binomial distribution is the Poisson process. Modeling the emission of radioactive particles by an infinitely large number of tosses of a coin having infinitesimally small probability for heads on each toss led to the conclusion that the number of particles N(t) emitted in the time interval [0, t] has the Poisson distribution given in equation (13) with expectation μt. The primary concern of the theory of stochastic processes is not this marginal distribution of N(t) at a particular time but rather the evolution of N(t) over time. Two properties of the Poisson process that make it attractive to deal with theoretically are: (i) The times between emission of particles are independent and exponentially distributed with expected value 1/μ. (ii) Given that N(t) = n, the times at which the n particles are emitted have the same joint distribution as n points distributed independently and uniformly on the interval [0, t].

As a consequence of property (i), a picture of the function N(t) is very easily constructed. Initially N(0) = 0. At an exponentially distributed time T1, the function N(t) jumps from 0 to 1. It remains at 1 for another exponentially distributed random time, T2, which is independent of T1, and at time T1 + T2 it jumps from 1 to 2, and so on.
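
The construction just described translates directly into a simulation. The sketch below (Python with NumPy; the rate μ, the horizon t, and the sample size are illustrative choices) builds N(t) from independent exponential gaps, checks that its mean and variance are both close to μt as the Poisson(μt) marginal requires, and, as a rough check of property (ii), verifies that the emission times falling in [0, t] average about t/2.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 3.0          # emission rate (illustrative choice)
t = 2.0           # length of the observation interval [0, t]
n_paths = 100_000

# Property (i): gaps between emissions are independent and exponentially
# distributed with mean 1/mu.  Generate far more gaps than could plausibly
# fit in [0, t], then count how many arrivals land in the interval.
max_jumps = 60    # comfortably larger than mu * t plus fluctuations
gaps = rng.exponential(scale=1.0 / mu, size=(n_paths, max_jumps))
arrival_times = np.cumsum(gaps, axis=1)
N_t = (arrival_times <= t).sum(axis=1)   # N(t) for each simulated path

# The Poisson(mu * t) distribution has mean and variance both equal to mu * t.
print(f"mu*t = {mu * t:.3f}, mean N(t) = {N_t.mean():.3f}, var N(t) = {N_t.var():.3f}")

# Property (ii): given N(t) = n, the emission times behave like n independent
# uniform points on [0, t]; in particular their average should be near t/2.
times_in_window = arrival_times[arrival_times <= t]
print(f"t/2 = {t / 2:.3f}, average emission time = {times_in_window.mean():.3f}")
```

Both printed comparisons should agree to within ordinary sampling fluctuation for these choices of μ, t, and the number of simulated paths.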

Examples of other phenomena for which the Poisson process often serves as a mathematical model are the number of customers arriving at a counter and requesting service, the number of claims against an insurance company, and the number of malfunctions in a computer system. The importance of the Poisson process consists in (a) its simplicity as a test case for which the mathematical theory, and hence the implications, are more easily understood than for more realistic models and (b) its use as a building block in models of complex systems.