Law of large numbers
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
The LLN is important because it "guarantees" stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the LLN only applies (as the name indicates) when a large number of observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).
Examples
For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of a single die roll is

$$\frac{1+2+3+4+5+6}{6} = 3.5.$$
According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) is likely to be close to 3.5, with the precision increasing as more dice are rolled.
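This convergence can be illustrated with a short simulation; the seed and sample sizes below are arbitrary choices, not part of the theorem:

```python
import random

def sample_mean_of_rolls(n_rolls, seed=0):
    """Average of n_rolls independent fair six-sided die rolls."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# The sample mean settles toward the expected value 3.5 as n grows.
for n in (10, 1000, 100_000):
    print(n, sample_mean_of_rolls(n))
```

With a fixed seed the run is reproducible; with different seeds the individual averages differ, but the drift toward 3.5 for large n does not.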
It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, and the average of n such variables (assuming they are independent and identically distributed (i.i.d.)) is precisely the relative frequency.
For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to 1/2. Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips "should be" roughly 1/2. In particular, the proportion of heads after n flips will almost surely converge to 1/2 as n approaches infinity.
Though the proportion of heads (and tails) approaches 1/2, almost surely the absolute (nominal) difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected absolute difference grows, but at a slower rate than the number of flips, as the number of flips grows.
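A quick simulation (a sketch; seed and flip counts are arbitrary) shows both effects at once: the absolute difference between heads and tails tends to grow, while its ratio to the number of flips shrinks toward zero:

```python
import random

def heads_minus_tails(n_flips, seed=1):
    """Signed difference (heads - tails) over n_flips fair coin flips."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads - (n_flips - heads)

for n in (100, 10_000, 1_000_000):
    diff = abs(heads_minus_tails(n))
    # diff typically grows on the order of sqrt(n), while diff/n -> 0.
    print(n, diff, diff / n)
```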
History
The Italian mathematician Gerolamo Cardano (1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials.^{[1]} This was then formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli.^{[2]} It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem" but it became generally known as "Bernoulli's Theorem". This should not be confused with the principle in physics with the same name, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S.D. Poisson further described it under the name "la loi des grands nombres" ("The law of large numbers").^{[3]}^{[4]} Thereafter, it was known under both names, but the "Law of large numbers" is most frequently used.
After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, including Chebyshev,^{[5]} Markov, Borel, Cantelli, Kolmogorov, and Khinchin, who finally provided a complete proof of the LLN for arbitrary random variables.^{[6]} These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak.^{[6]}
Forms
Two different versions of the law of large numbers are described below; they are called the strong law of large numbers and the weak law of large numbers. Both versions of the law state that, with virtual certainty, the sample average

$$\overline{X}_n = \frac{1}{n}(X_1 + \cdots + X_n)$$

converges to the expected value

$$\overline{X}_n \to \mu \qquad \text{as } n \to \infty,$$
where X_{1}, X_{2}, ... is an infinite sequence of i.i.d. Lebesgue integrable random variables with expected value E(X_{1}) = E(X_{2}) = ...= µ. Lebesgue integrability of X_{j} means that the expected value E(X_{j}) exists according to Lebesgue integration and is finite.
An assumption of finite variance Var(X_{1}) = Var(X_{2}) = ... = σ^{2} < ∞ is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway. This assumption is often used because it makes the proofs easier and shorter.
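For instance, a Pareto distribution with shape α = 1.5 and scale 1 has finite mean α/(α − 1) = 3 but infinite variance. A sketch (arbitrary seed and sample sizes) shows the sample mean still settling near 3, just more erratically than in the finite-variance case:

```python
import random

def pareto_sample_mean(n, alpha=1.5, seed=7):
    """Sample mean of n Pareto(alpha) draws with scale 1.
    The mean alpha/(alpha-1) = 3 is finite, the variance is infinite,
    yet the LLN still applies; convergence is just slower."""
    rng = random.Random(seed)
    return sum(rng.paretovariate(alpha) for _ in range(n)) / n

for n in (1000, 200_000):
    print(n, pareto_sample_mean(n))
```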
The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables.
Weak law
The weak law of large numbers (also called Khintchine's law) states that the sample average converges in probability towards the expected value^{[7]}^{[proof]}

$$\overline{X}_n\ \xrightarrow{P}\ \mu \qquad \text{when } n \to \infty.$$

That is to say that for any positive number ε,

$$\lim_{n\to\infty}\Pr\left(\left|\overline{X}_n-\mu\right| > \varepsilon\right) = 0.$$
Interpreting this result, the weak law essentially states that for any nonzero margin specified, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin.
Convergence in probability is also called weak convergence of random variables. This version is called the weak law because random variables may converge weakly (in probability) as above without converging strongly (almost surely) as below.
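The weak law's statement can be sketched with a Monte Carlo estimate (margin, seed, and trial counts below are arbitrary choices) of the probability that the sample mean of Uniform(0, 1) draws lands within a fixed margin of the true mean 0.5:

```python
import random

def prob_within_margin(eps, n, trials=2000, seed=2):
    """Estimate P(|mean of n Uniform(0,1) draws - 0.5| < eps) by simulation."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        hits += abs(mean - 0.5) < eps
    return hits / trials

# The probability of falling within the margin rises toward 1 as n grows.
print(prob_within_margin(0.05, 10))
print(prob_within_margin(0.05, 1000))
```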
Strong law
The strong law of large numbers states that the sample average converges almost surely to the expected value^{[8]}

$$\overline{X}_n\ \xrightarrow{\text{a.s.}}\ \mu \qquad \text{when } n \to \infty.$$

That is,

$$\Pr\left(\lim_{n\to\infty}\overline{X}_n = \mu\right) = 1.$$
The proof is more complex than that of the weak law.^{[9]} This law justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable when sampled repeatedly as the "long-term average".
Almost sure convergence is also called strong convergence of random variables. This version is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). Thus the strong law implies the weak law but not vice versa: when the strong law's conditions hold, the variable converges both strongly (almost surely) and weakly (in probability), whereas the weak law may hold in conditions where the strong law does not, in which case the convergence is only weak (in probability).
There are different views among mathematicians on whether the two laws could be unified into one law, thereby replacing the weak law.^{[10]} To date, however, it has not been proven that the strong law holds under exactly the same conditions as the weak law.
The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem.
Moreover, if the summands are independent but not identically distributed, then

$$\overline{X}_n - \frac{1}{n}\sum_{k=1}^{n}\operatorname{E}(X_k)\ \xrightarrow{\text{a.s.}}\ 0,$$

provided that each X_{k} has a finite second moment and

$$\sum_{k=1}^{\infty} \frac{1}{k^2}\operatorname{Var}(X_k) < \infty.$$

This statement is known as Kolmogorov's strong law; see e.g. Sen & Singer (1993, Theorem 2.3.10).
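Kolmogorov's condition can be sketched numerically (arbitrary choices: zero-mean normal summands whose variance grows like √k, so the series Σ Var(X_k)/k² behaves like Σ k^(−3/2) and converges):

```python
import random

def average_of_heterogeneous(n, seed=3):
    """Average of independent X_k ~ Normal(0, sd = k**0.25), so Var(X_k) = sqrt(k).
    Then sum Var(X_k)/k^2 ~ sum k**(-3/2) < infinity, and Kolmogorov's strong
    law guarantees the centered average converges to 0 almost surely."""
    rng = random.Random(seed)
    total = sum(rng.gauss(0, k ** 0.25) for k in range(1, n + 1))
    return total / n

for n in (100, 100_000):
    print(n, average_of_heterogeneous(n))
```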
Differences between the weak law and the strong law
The weak law states that for a specified large n, the average $\overline{X}_n$ is likely to be near μ. Thus, it leaves open the possibility that $\left|\overline{X}_n - \mu\right| > \varepsilon$ happens an infinite number of times, although at infrequent intervals.
The strong law shows that this almost surely will not occur. In particular, it implies that with probability 1, we have that for any ε > 0 the inequality $\left|\overline{X}_n - \mu\right| < \varepsilon$ holds for all large enough n.^{[11]}
The strong law does not hold in the following cases, but the weak law does.^{[12]}^{[13]}^{[14]}

1. Let X be an exponentially distributed random variable with parameter 1. The random variable $\sin(X)e^{X}X^{-1}$ has no expected value according to Lebesgue integration, but using conditional convergence and interpreting the integral as a Dirichlet integral (an improper Riemann integral), we can say:

$$\operatorname{E}\left[\frac{\sin(X)e^{X}}{X}\right] = \int_0^{\infty}\frac{\sin(x)e^{x}}{x}e^{-x}\,dx = \frac{\pi}{2}.$$

2. Let X be a geometrically distributed random variable with probability of success 0.5. The random variable $2^{X}(-1)^{X}X^{-1}$ does not have an expected value in the conventional sense, but using conditional convergence:

$$\operatorname{E}\left[\frac{2^{X}(-1)^{X}}{X}\right] = \sum_{x=1}^{\infty}\frac{2^{x}(-1)^{x}}{x}\,2^{-x} = -\ln(2).$$

3. If the cumulative distribution function of a random variable is

$$1-F(x)=\frac{e}{2x\ln(x)},\quad x \ge e, \qquad F(x)=\frac{e}{-2x\ln(-x)},\quad x \le -e,$$

then it has no expected value, but the weak law holds.^{[15]}^{[16]}
Uniform law of large numbers
Suppose f(x,θ) is some function defined for θ ∈ Θ, and continuous in θ. Then for any fixed θ, the sequence {f(X_{1},θ), f(X_{2},θ), …} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X,θ)]. This is the pointwise (in θ) convergence.
The uniform law of large numbers states the conditions under which the convergence happens uniformly in θ. If^{[17]}^{[18]}
 Θ is compact,
 f(x,θ) is continuous at each θ ∈ Θ for almost all x's, and a measurable function of x at each θ,
 there exists a dominating function d(x) such that E[d(X)] < ∞ and $\left\| f(x,\theta) \right\| \le d(x)$ for all θ ∈ Θ,
then E[f(X,θ)] is continuous in θ, and

$$\sup_{\theta\in\Theta}\left\| \frac{1}{n}\sum_{i=1}^{n} f(X_i,\theta) - \operatorname{E}[f(X,\theta)] \right\|\ \xrightarrow{P}\ 0.$$
This result is useful to derive consistency of a large class of estimators (see Extremum estimator).
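The uniform statement can be sketched for an illustrative choice (not from the original text): f(x,θ) = (x − θ)² with X ~ Uniform(0, 1), for which E[f(X,θ)] = 1/3 − θ + θ² in closed form. The supremum over a θ-grid of the empirical deviation shrinks as n grows (seed and grid size are arbitrary):

```python
import random

def sup_deviation(n, grid=50, seed=6):
    """Approximate sup over theta in [0,1] of
    |empirical mean of (X_i - theta)^2 - E[(X - theta)^2]|
    for X ~ Uniform(0,1), using a finite grid of theta values."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    worst = 0.0
    for j in range(grid + 1):
        theta = j / grid
        empirical = sum((x - theta) ** 2 for x in xs) / n
        exact = 1 / 3 - theta + theta ** 2  # E[(X - theta)^2] for Uniform(0,1)
        worst = max(worst, abs(empirical - exact))
    return worst

print(sup_deviation(100))
print(sup_deviation(100_000))
```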
Borel's law of large numbers
Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event occurs approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, if E denotes the event in question, p its probability of occurrence, and N_{n}(E) the number of times E occurs in the first n trials, then with probability one,^{[19]}

$$\frac{N_n(E)}{n} \to p \qquad \text{as } n \to \infty.$$
Chebyshev's inequality. Let X be a random variable with finite expected value μ and finite nonzero variance σ^{2}. Then for any real number k > 0,

$$\Pr\left(|X-\mu| \ge k\sigma\right) \le \frac{1}{k^2}.$$
Borel's theorem makes rigorous the intuitive notion of probability as the long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory.
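Chebyshev's bound can be checked empirically; a sketch for X ~ Uniform(0, 1), which has σ = 1/√12 (seed and trial count are arbitrary):

```python
import random

MU = 0.5
SIGMA = (1 / 12) ** 0.5  # mean and standard deviation of Uniform(0, 1)

def tail_prob(k, trials=100_000, seed=4):
    """Empirical P(|X - mu| >= k*sigma) for X ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return sum(abs(rng.random() - MU) >= k * SIGMA for _ in range(trials)) / trials

for k in (1.2, 1.5, 2.0):
    # Chebyshev guarantees the tail probability never exceeds 1/k^2;
    # for this distribution the bound is quite loose.
    print(k, tail_prob(k), "<=", 1 / k ** 2)
```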
Proof
Given X_{1}, X_{2}, ... an infinite sequence of i.i.d. random variables with finite expected value E(X_{1}) = E(X_{2}) = ... = µ < ∞, we are interested in the convergence of the sample average

$$\overline{X}_n = \frac{1}{n}(X_1 + \cdots + X_n).$$

The weak law of large numbers states:

Theorem: $\overline{X}_n\ \xrightarrow{P}\ \mu \quad \text{when } n \to \infty.$
Proof using Chebyshev's inequality
This proof uses the assumption of finite variance $\operatorname{Var}(X_i) = \sigma^2$ (for all i). The independence of the random variables implies no correlation between them, and we have that

$$\operatorname{Var}(\overline{X}_n) = \frac{1}{n^2}\operatorname{Var}(X_1 + \cdots + X_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$$

The common mean μ of the sequence is the mean of the sample average:

$$\operatorname{E}(\overline{X}_n) = \mu.$$

Using Chebyshev's inequality on $\overline{X}_n$ results in

$$\Pr\left(\left|\overline{X}_n-\mu\right| \ge \varepsilon\right) \le \frac{\sigma^2}{n\varepsilon^2}.$$

This may be used to obtain the following:

$$\Pr\left(\left|\overline{X}_n-\mu\right| < \varepsilon\right) = 1 - \Pr\left(\left|\overline{X}_n-\mu\right| \ge \varepsilon\right) \ge 1 - \frac{\sigma^2}{n\varepsilon^2}.$$

As n approaches infinity, the expression approaches 1. And by definition of convergence in probability, we have obtained

$$\overline{X}_n\ \xrightarrow{P}\ \mu \qquad \text{when } n \to \infty.$$
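The Chebyshev bound also yields a concrete (if conservative) sample-size estimate; a sketch for the die example, which has variance 35/12:

```python
import math

def chebyshev_sample_size(sigma_sq, eps, delta):
    """Smallest n for which the Chebyshev bound sigma^2/(n*eps^2) <= delta,
    i.e. n trials guarantee P(|sample mean - mu| >= eps) <= delta."""
    return math.ceil(sigma_sq / (eps ** 2 * delta))

# Fair die (variance 35/12): sample mean within 0.1 of 3.5
# with probability at least 95%.
print(chebyshev_sample_size(35 / 12, eps=0.1, delta=0.05))
```

The bound is worst-case; in practice far fewer trials usually suffice for a given accuracy.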
Proof using convergence of characteristic functions
By Taylor's theorem for complex functions, the characteristic function of any random variable, X, with finite mean μ, can be written as

$$\varphi_X(t) = 1 + it\mu + o(t), \qquad t \to 0.$$
All X_{1}, X_{2}, ... have the same characteristic function, so we will simply denote this φ_{X}.
Among the basic properties of characteristic functions there are

$$\varphi_{\frac{1}{n}X}(t) = \varphi_X\!\left(\frac{t}{n}\right) \qquad \text{and} \qquad \varphi_{X+Y}(t) = \varphi_X(t)\,\varphi_Y(t) \quad \text{if } X \text{ and } Y \text{ are independent.}$$

These rules can be used to calculate the characteristic function of $\overline{X}_n$ in terms of φ_{X}:

$$\varphi_{\overline{X}_n}(t) = \left[\varphi_X\!\left(\frac{t}{n}\right)\right]^n = \left[1 + \frac{i\mu t}{n} + o\!\left(\frac{t}{n}\right)\right]^n \to e^{it\mu} \qquad \text{as } n \to \infty.$$
The limit e^{itμ} is the characteristic function of the constant random variable μ, and hence by the Lévy continuity theorem, $\overline{X}_n$ converges in distribution to μ:

$$\overline{X}_n\ \xrightarrow{D}\ \mu \qquad \text{for } n \to \infty.$$

μ is a constant, which implies that convergence in distribution to μ and convergence in probability to μ are equivalent (see Convergence of random variables). Therefore,

$$\overline{X}_n\ \xrightarrow{P}\ \mu \qquad \text{when } n \to \infty.$$
This shows that the sample mean converges in probability to the derivative of the characteristic function at the origin, as long as the latter exists.
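The limit in this proof can be checked numerically; a sketch for a Bernoulli(1/2) variable, whose characteristic function (1 + e^{it})/2 is known in closed form and whose mean is μ = 1/2:

```python
import cmath

def phi_bernoulli(t, p=0.5):
    """Characteristic function of a Bernoulli(p) random variable."""
    return (1 - p) + p * cmath.exp(1j * t)

def phi_sample_mean(t, n, p=0.5):
    """Characteristic function of the average of n i.i.d. Bernoulli(p) variables."""
    return phi_bernoulli(t / n, p) ** n

t, mu = 2.0, 0.5
for n in (10, 1000, 100_000):
    # The distance to the limit e^{i t mu} shrinks as n grows.
    print(n, abs(phi_sample_mean(t, n) - cmath.exp(1j * t * mu)))
```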
See also
 Asymptotic equipartition property
 Central limit theorem
 Infinite monkey theorem
 Law of averages
 Law of the iterated logarithm
 Lindy effect
 Regression toward the mean
Notes
 ↑ Mlodinow, L. The Drunkard's Walk. New York: Random House, 2008. p. 50.
 ↑ Jakob Bernoulli, Ars Conjectandi: Usum & Applicationem Praecedentis Doctrinae in Civilibus, Moralibus & Oeconomicis, 1713, Chapter 4, (Translated into English by Oscar Sheynin)
 ↑ Poisson names the "law of large numbers" (la loi des grands nombres) in: S. D. Poisson, Probabilité des jugements en matière criminelle et en matière civile, précédées des règles générales du calcul des probabilités (Paris, France: Bachelier, 1837), p. 7. He attempts a two-part proof of the law on pp. 139–143 and pp. 277 ff.
 ↑ Hacking, Ian. (1983) "19th-century Cracks in the Concept of Determinism", Journal of the History of Ideas, 44 (3), 455–475. JSTOR 2709176
 ↑ Lua error in Module:Citation/CS1/Identifiers at line 47: attempt to index field 'wikibase' (a nil value).
 ↑ ^{6.0} ^{6.1} Seneta 2013.
 ↑ Loève 1977, Chapter 1.4, p. 14
 ↑ Loève 1977, Chapter 17.3, p. 251
 ↑ "The strong law of large numbers « What's new". Terrytao.wordpress.com. Retrieved 2012-06-09.
 ↑ Law of large numbers views.
 ↑ Ross (2009)
 ↑ Weak law converges to constant.
 ↑ Hong, Dug Hun; Lee, Sung Ho. "A note on the weak law of large numbers for exchangeable random variables" (PDF).
 ↑ "Weak law of large numbers: proof using characteristic functions vs proof using truncation".
 ↑ Mukherjee, Sayan. "Law of large numbers" (PDF).
 ↑ Geyer, Charles J. "Law of large numbers" (PDF).
 ↑ Newey & McFadden 1994, Lemma 2.4
 ↑ Lua error in Module:Citation/CS1/Identifiers at line 47: attempt to index field 'wikibase' (a nil value).
 ↑ Wen, L. (1991). "An Analytic Technique to Prove Borel's Strong Law of Large Numbers". The American Mathematical Monthly.
References
 Grimmett, G. R.; Stirzaker, D. R. (1992). Probability and Random Processes (2nd ed.). Clarendon Press, Oxford. ISBN 0198536658.
 Durrett, Richard (1995). Probability: Theory and Examples (2nd ed.). Duxbury Press.
 Jacobsen, Martin (1992). Videregående Sandsynlighedsregning (Advanced Probability Theory) (3rd ed.). HCØtryk, Copenhagen. ISBN 8791180716.
 Loève, Michel (1977). Probability Theory 1 (4th ed.). Springer Verlag.
 Newey, Whitney K.; McFadden, Daniel (1994). Large sample estimation and hypothesis testing. Handbook of Econometrics, vol. IV, Ch. 36. Elsevier Science. pp. 2111–2245.
 Ross, Sheldon (2009). A First Course in Probability (8th ed.). Prentice Hall. ISBN 9780136033134.
 Sen, P. K.; Singer, J. M. (1993). Large Sample Methods in Statistics. Chapman & Hall, Inc.
External links
 Hazewinkel, Michiel, ed. (2001), "Law of large numbers", Encyclopedia of Mathematics, Springer, ISBN 9781556080104
 Weisstein, Eric W., "Weak Law of Large Numbers", MathWorld.
 Weisstein, Eric W., "Strong Law of Large Numbers", MathWorld.
 Animations for the Law of Large Numbers by Yihui Xie using the R package animation
 Business Insider article on Apple CEO Tim Cook's remark that would make statisticians cringe: "We don't believe in such laws as laws of large numbers. This is sort of, uh, old dogma, I think, that was cooked up by somebody [..]". As Business Insider explained, the law of large numbers has nothing to do with large companies, large revenues, or large growth rates: it is a fundamental concept in probability theory and statistics, tying together theoretical probabilities that we can calculate with the actual outcomes of experiments that we empirically perform.