# Law of large numbers

The law of large numbers is a fundamental concept in statistics and probability that describes how the average of a large, randomly selected sample from a population is likely to be close to the average of the whole population. The term "law of large numbers" was introduced by S. D. Poisson in 1835 as he discussed a 1713 version of it put forth by James Bernoulli.[1]

In formal language: if an event of probability p is observed repeatedly during independent repetitions, the ratio of the observed frequency of that event to the total number of repetitions converges towards p as the number of repetitions becomes arbitrarily large. In statistics, this means that when a large number of units of something is measured, the sample's average will be close to the true average of all of the units, including those that were not measured. (The term "average" here means the arithmetic mean.)

There are two versions of the law of large numbers, one called the "weak" law and the other the "strong" law. This article describes both versions in technical detail, but in essence the two laws do not describe different phenomena; rather, they refer to different modes of convergence of the sample mean to the population mean. The weak law states that, as the sample size grows, the probability that the sample mean differs from the population mean by more than any fixed amount approaches zero. The strong law states that, with probability 1, the sample mean converges to the population mean as the sample size grows.
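The convergence the law describes is easy to observe in simulation. The following sketch (the function name and the choice of p are illustrative, not from the source) estimates the frequency of an event of probability p over increasingly many independent trials:

```python
import random

random.seed(42)

def bernoulli_sample_mean(p, n):
    """Simulate n independent trials of an event with probability p
    and return the observed frequency (the sample mean)."""
    successes = sum(1 for _ in range(n) if random.random() < p)
    return successes / n

p = 1 / 6  # e.g. the probability that a fair die shows a 3
for n in (10, 1_000, 100_000):
    print(n, bernoulli_sample_mean(p, n))
```

As n grows, the printed frequencies settle ever closer to 1/6, which is exactly what the law of large numbers predicts.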
The phrase "law of large numbers" is also sometimes used in a less technical way to refer to the principle that the probability of any possible event (even an unlikely one) occurring at least once in a series increases with the number of events in the series. For example, the odds that you will win the lottery are very low; however, the odds that someone will win the lottery are quite good, provided that a large enough number of people purchase lottery tickets.

One misperception of the LLN is that if an event has not occurred in many trials, the probability of it occurring in a subsequent trial is increased. For example, the probability of a fair die turning up a 3 is 1 in 6. The LLN says that over a large number of throws, the observed frequency of 3s will be close to 1 in 6. This does not mean, however, that if the first five throws of the die do not turn up a 3, the sixth throw will turn up a 3 with certainty (probability 1). The probability of the sixth throw turning up a 3 remains 1 in 6. In an infinite (or very large) set of observations, the value of any one individual observation cannot be predicted from past observations. Making such predictions is known as the gambler's fallacy.

## The law of large numbers and the central limit theorem

The central limit theorem (CLT) gives the distribution of sums of identically distributed random variables, regardless of the shape of their distribution (as long as the distribution has finite variance), provided the number of random variables added is large. The CLT thus applies to the sample mean of a large sample, since the mean is a (scaled) sum. The variance of the sample mean given by the CLT shrinks as the sample size grows, and it follows that the mean converges to a number (which the CLT identifies as the population mean). This is the LLN.
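The shrinking variance of the sample mean can be demonstrated directly. The sketch below (function name and parameters are illustrative assumptions) repeatedly draws samples of fair-die rolls and measures how the spread of the sample means falls as the sample size n grows, roughly like σ/√n:

```python
import random
import statistics

random.seed(0)

def sd_of_sample_means(n, reps=2000):
    """Draw `reps` samples of n fair-die rolls and return the
    standard deviation of their sample means."""
    means = []
    for _ in range(reps):
        sample = [random.randint(1, 6) for _ in range(n)]
        means.append(statistics.fmean(sample))
    return statistics.pstdev(means)

# For a fair die, sigma = sqrt(35/12) ~ 1.708, so the spread of the
# sample mean should roughly halve each time n is quadrupled.
for n in (4, 16, 64, 256):
    print(n, round(sd_of_sample_means(n), 3))
```

Quadrupling n roughly halves the printed spread, which is the 1/√n collapse that turns the CLT's description of the sample mean into the LLN's convergence.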
So the LLN is a result that can be obtained from the CLT. The CLT enables statisticians to evaluate the reliability of their results, because they can make assumptions about a sample and extrapolate their results or conclusions to the population from which the sample was drawn with a certain degree of confidence. See statistical hypothesis testing. The remainder of this article assumes the reader is familiar with mathematical concepts and notation.

## The weak law

The weak law of large numbers states that if X1, X2, X3, ... is an infinite sequence of random variables, where all the random variables have the same expected value μ and variance σ², and are uncorrelated (i.e., the correlation between any two of them is zero), then the sample average

$$\overline{X}_n = \frac{X_1 + X_2 + \cdots + X_n}{n}$$

converges in probability to μ. Somewhat less tersely: for any positive number ε, no matter how small, we have

$$\lim_{n\to\infty} P\left(\left|\overline{X}_n - \mu\right| < \varepsilon\right) = 1.$$

### Proof

Chebyshev's inequality is used to prove this result. Finite variance $\operatorname{Var}(X_i) = \sigma^2$ (for all $i$) and no correlation yield that

$$\operatorname{Var}(\overline{X}_n) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.$$

The common mean μ of the sequence is the mean of the sample average:

$$E(\overline{X}_n) = \mu.$$

Using Chebyshev's inequality on $\overline{X}_n$ results in

$$P\left(\left|\overline{X}_n - \mu\right| \geq \varepsilon\right) \leq \frac{\sigma^2}{n\varepsilon^2}.$$

This may be used to obtain the following:

$$P\left(\left|\overline{X}_n - \mu\right| < \varepsilon\right) = 1 - P\left(\left|\overline{X}_n - \mu\right| \geq \varepsilon\right) \geq 1 - \frac{\sigma^2}{n\varepsilon^2}.$$

As n approaches infinity, the right-hand side approaches 1, which completes the proof.

The result also holds in the 'infinite variance' case, provided the $X_i$ are mutually independent and their (finite) mean μ exists.

A consequence of the weak law of large numbers is the asymptotic equipartition property.

## The strong law

The strong law of large numbers states that if X1, X2, X3, ... is an infinite sequence of random variables that are pairwise independent and identically distributed with $E(|X_i|) < \infty$, then the sample average converges almost surely to the common expected value μ:

$$P\left(\lim_{n\to\infty} \overline{X}_n = \mu\right) = 1.$$
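The Chebyshev bound at the heart of the weak-law proof can be checked empirically. The sketch below (fair-die samples and the helper name are illustrative assumptions) estimates the deviation probability by simulation and compares it with the bound σ²/(nε²); the bound is loose, so the empirical value should land well below it:

```python
import random

random.seed(1)

def deviation_probability(n, eps, reps=5000):
    """Estimate P(|sample mean - mu| >= eps) over `reps` samples,
    each consisting of n fair-die rolls (mu = 3.5)."""
    mu = 3.5
    count = 0
    for _ in range(reps):
        mean = sum(random.randint(1, 6) for _ in range(n)) / n
        if abs(mean - mu) >= eps:
            count += 1
    return count / reps

n, eps = 100, 0.5
sigma2 = 35 / 12  # variance of a single fair-die roll
chebyshev_bound = sigma2 / (n * eps ** 2)
empirical = deviation_probability(n, eps)
print(f"empirical: {empirical:.4f}, Chebyshev bound: {chebyshev_bound:.4f}")
```

Because Chebyshev's inequality makes no use of the distribution's shape beyond its variance, the observed probability is typically far smaller than the bound; the bound's only job in the proof is to vanish as n grows.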
