


Method of Moments Estimator: Continuous Unbounded Examples

Parameter estimation technique in statistics

In statistics, the method of moments is a method of estimating population parameters. The same principle is used to derive higher moments such as skewness and kurtosis.

It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters.

The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Pearson.[citation needed]

Method

Suppose that the problem is to estimate $k$ unknown parameters $\theta_1, \theta_2, \dots, \theta_k$ characterizing the distribution $f_W(w; \theta)$ of the random variable $W$.[1] Suppose the first $k$ moments of the true distribution (the "population moments") can be expressed as functions of the $\theta$s:

$$\begin{aligned}
\mu_1 &\equiv \operatorname{E}[W] = g_1(\theta_1, \theta_2, \ldots, \theta_k),\\
\mu_2 &\equiv \operatorname{E}[W^2] = g_2(\theta_1, \theta_2, \ldots, \theta_k),\\
&\;\;\vdots\\
\mu_k &\equiv \operatorname{E}[W^k] = g_k(\theta_1, \theta_2, \ldots, \theta_k).
\end{aligned}$$

Suppose a sample of size $n$ is drawn, resulting in the values $w_1, \dots, w_n$. For $j = 1, \dots, k$, let

$$\widehat{\mu}_j = \frac{1}{n} \sum_{i=1}^{n} w_i^j$$

be the $j$-th sample moment, an estimate of $\mu_j$. The method of moments estimator for $\theta_1, \theta_2, \ldots, \theta_k$, denoted by $\widehat{\theta}_1, \widehat{\theta}_2, \dots, \widehat{\theta}_k$, is defined as the solution (if there is one) to the equations:[citation needed]

$$\begin{aligned}
\widehat{\mu}_1 &= g_1(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k),\\
\widehat{\mu}_2 &= g_2(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k),\\
&\;\;\vdots\\
\widehat{\mu}_k &= g_k(\widehat{\theta}_1, \widehat{\theta}_2, \ldots, \widehat{\theta}_k).
\end{aligned}$$
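To make the recipe concrete, here is a minimal Python sketch for a two-parameter family. The gamma distribution and the helper name mom_gamma are illustrative assumptions (the article does not single out a particular family at this point); its first two population moments are $\operatorname{E}[W] = \alpha\beta$ and $\operatorname{E}[W^2] = \alpha(\alpha + 1)\beta^2$, which the sample moments determine in closed form.

```python
import numpy as np

def mom_gamma(w):
    """Method-of-moments estimates for a Gamma(shape=alpha, scale=beta) sample.

    Population moments: E[W] = alpha*beta and E[W^2] = alpha*(alpha + 1)*beta^2,
    so matching them to the first two sample moments has a closed-form solution.
    """
    w = np.asarray(w, dtype=float)
    m1 = np.mean(w)        # first sample moment
    m2 = np.mean(w ** 2)   # second sample moment
    var = m2 - m1 ** 2     # implied (biased) variance estimate
    return m1 ** 2 / var, var / m1   # (alpha_hat, beta_hat)

rng = np.random.default_rng(1)
sample = rng.gamma(shape=3.0, scale=2.0, size=5_000)
print(mom_gamma(sample))   # should come out close to (3.0, 2.0)
```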

Advantages and disadvantages

The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased.

It is an alternative to the method of maximum likelihood.

However, in some cases the likelihood equations may be intractable without computers, whereas the method-of-moments estimators can be computed much more quickly and easily. Due to easy computability, method-of-moments estimates may be used as the first approximation to the solutions of the likelihood equations, and successive improved approximations may then be found by the Newton–Raphson method. In this way the method of moments can assist in finding maximum likelihood estimates.
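As a sketch of that hand-off from moments to likelihood, the snippet below again assumes a gamma-distributed sample (an illustrative choice, as above, with a hypothetical helper name): the method-of-moments shape estimate seeds a Newton–Raphson iteration on the standard gamma likelihood equation $\log\alpha - \psi(\alpha) = \log\bar{w} - \overline{\log w}$.

```python
import numpy as np
from scipy.special import digamma, polygamma

def gamma_mle_from_mom(w, iterations=20):
    """Refine a method-of-moments estimate of the gamma shape parameter into the
    maximum likelihood estimate with Newton-Raphson iterations.

    The shape MLE solves  log(alpha) - digamma(alpha) = log(mean(w)) - mean(log(w)),
    and the scale MLE is then mean(w) / alpha.
    """
    w = np.asarray(w, dtype=float)
    m1, m2 = np.mean(w), np.mean(w ** 2)
    alpha = m1 ** 2 / (m2 - m1 ** 2)              # method-of-moments starting value
    s = np.log(np.mean(w)) - np.mean(np.log(w))   # right-hand side of the MLE equation
    for _ in range(iterations):
        f = np.log(alpha) - digamma(alpha) - s    # function whose root is the shape MLE
        fprime = 1.0 / alpha - polygamma(1, alpha)
        alpha -= f / fprime                       # Newton-Raphson update
    return alpha, np.mean(w) / alpha              # (shape, scale) at the MLE

rng = np.random.default_rng(2)
print(gamma_mle_from_mom(rng.gamma(shape=3.0, scale=2.0, size=5_000)))
```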

In some cases, infrequent with large samples but not so infrequent with small samples, the estimates given by the method of moments are outside of the parameter space (as shown in the example below); it does not make sense to rely on them then. That problem never arises in the method of maximum likelihood.[citation needed] Also, estimates by the method of moments are not necessarily sufficient statistics, i.e., they sometimes fail to take into account all relevant information in the sample.

When estimating other structural parameters (e.g., parameters of a utility function, instead of parameters of a known probability distribution), appropriate probability distributions may not be known, and moment-based estimates may be preferred to maximum likelihood estimation.

Examples

An example application of the method of moments is to estimate polynomial probability density distributions. In this case, an approximating polynomial of order $N$ is defined on an interval $[a, b]$. The method of moments then yields a system of equations, whose solution involves the inversion of a Hankel matrix.[2]
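As a hedged sketch of how such a system might look: if the density is approximated as $f(x) \approx \sum_{j=0}^{N} c_j x^j$ on $[a, b]$, matching the first $N + 1$ moments gives a linear system whose coefficient matrix $H_{kj} = (b^{k+j+1} - a^{k+j+1})/(k + j + 1)$ depends only on $k + j$ and is therefore a Hankel matrix. The function name and the triangular test data below are illustrative; the construction in [2] may differ in detail.

```python
import numpy as np

def polynomial_density_mom(samples, N, a, b):
    """Fit an order-N polynomial density f(x) ~ sum_j c_j * x**j on [a, b]
    by matching its first N + 1 moments to the sample moments.

    H[k, j] = integral of x**(k + j) over [a, b] depends only on k + j,
    so the coefficient matrix of the linear system is a Hankel matrix.
    """
    samples = np.asarray(samples, dtype=float)
    mu = np.array([np.mean(samples ** k) for k in range(N + 1)])   # sample moments, mu_0 = 1
    H = np.array([[(b ** (k + j + 1) - a ** (k + j + 1)) / (k + j + 1)
                   for j in range(N + 1)] for k in range(N + 1)])
    return np.linalg.solve(H, mu)   # polynomial coefficients c_0, ..., c_N

# Example: recover a roughly triangular density on [0, 1].
rng = np.random.default_rng(0)
data = rng.triangular(0.0, 1.0, 1.0, size=10_000)
print(polynomial_density_mom(data, N=2, a=0.0, b=1.0))
```

With 10,000 samples from the triangular density $f(x) = 2x$ on $[0, 1]$, the recovered coefficients should come out close to $(0, 2, 0)$.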

Proving the central limit theorem

Let $X_1, X_2, \dots$ be independent random variables with mean 0 and variance 1, and let $S_n := \frac{1}{\sqrt{n}} \sum_{i=1}^{n} X_i$. We can compute the moments of $S_n$ as

$$E[S_n^0] = 1, \quad E[S_n^1] = 0, \quad E[S_n^2] = 1, \quad E[S_n^3] = 0, \quad \cdots$$

Explicit expansion shows that

$$E[S_n^{2k+1}] = 0; \qquad E[S_n^{2k}] = \frac{\binom{n}{k}\,\frac{(2k)!}{2^k}}{n^k} = \frac{n(n-1)\cdots(n-k+1)}{n^k}\,(2k-1)!!$$

where the numerator counts the ways to select $k$ distinct pairs of balls by picking one ball from each of $2k$ buckets, each containing balls numbered from $1$ to $n$. In the limit $n \to \infty$, every moment converges to the corresponding moment of the standard normal distribution. Further analysis then shows that this convergence of moments implies convergence in distribution.
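As a quick numeric check (not part of the original argument), the even-moment expression above indeed tends to $(2k-1)!!$, the $2k$-th moment of the standard normal, as $n$ grows:

```python
from math import factorial

def normal_even_moment(k):
    """(2k - 1)!! = (2k)! / (2**k * k!), the 2k-th moment of a standard normal."""
    return factorial(2 * k) // (2 ** k * factorial(k))

def even_moment(n, k):
    """E[S_n^(2k)] from the pairing count: (n(n-1)...(n-k+1) / n**k) * (2k - 1)!!."""
    falling = 1.0
    for i in range(k):
        falling *= (n - i) / n
    return falling * normal_even_moment(k)

for k in (1, 2, 3):
    values = [round(even_moment(n, k), 4) for n in (10, 100, 10_000)]
    print(f"2k = {2 * k}: {values} -> limit {normal_even_moment(k)}")
```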

Essentially this argument was published by Chebyshev in 1887.[3]

Uniform distribution

Consider the uniform distribution on the interval $[a, b]$, $U(a, b)$. If $W \sim U(a, b)$ then we have

$$\mu_1 = \operatorname{E}[W] = \tfrac{1}{2}(a + b)$$
$$\mu_2 = \operatorname{E}[W^2] = \tfrac{1}{3}(a^2 + ab + b^2)$$

Solving these equations gives

$$\widehat{a} = \mu_1 - \sqrt{3\left(\mu_2 - \mu_1^2\right)}$$
$$\widehat{b} = \mu_1 + \sqrt{3\left(\mu_2 - \mu_1^2\right)}$$

Given a set of samples $\{w_i\}$ we can use the sample moments $\widehat{\mu}_1$ and $\widehat{\mu}_2$ in these formulae in order to estimate $a$ and $b$.

Note, however, that this method can produce estimates that are inconsistent with the data. For example, the sample $\{0, 0, 0, 0, 1\}$ yields $\widehat{a} = \tfrac{1}{5} - \tfrac{2\sqrt{3}}{5} \approx -0.493$ and $\widehat{b} = \tfrac{1}{5} + \tfrac{2\sqrt{3}}{5} \approx 0.893$. Since $\widehat{b} < 1$ while the value $1$ appears in the sample, the set $\{0, 0, 0, 0, 1\}$ could not have been drawn from $U(\widehat{a}, \widehat{b})$.
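These formulas translate directly into a few lines of Python (the helper name mom_uniform is purely illustrative); running it on the sample above reproduces the out-of-range estimate:

```python
import numpy as np

def mom_uniform(w):
    """Method-of-moments estimates (a_hat, b_hat) for a Uniform(a, b) sample."""
    w = np.asarray(w, dtype=float)
    m1 = np.mean(w)                  # first sample moment
    m2 = np.mean(w ** 2)             # second sample moment
    half_width = np.sqrt(3.0 * (m2 - m1 ** 2))
    return m1 - half_width, m1 + half_width

# The sample from the text: b_hat < 1 even though the value 1 was observed.
print(mom_uniform([0, 0, 0, 0, 1]))   # approximately (-0.493, 0.893)
```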

See also

  • Generalized method of moments
  • Decoding methods

References

  1. ^ Kimiko O. Bowman and L. R. Shenton, "Estimator: Method of Moments", pp 2092–2098, Encyclopedia of statistical sciences, Wiley (1998).
  2. ^ J. Munkhammar, L. Mattsson, J. Rydén (2017) "Polynomial probability distribution estimation using the method of moments". PLoS ONE 12(4): e0174573. https://doi.org/10.1371/journal.pone.0174573
  3. ^ Fischer, Hans (2011). "4. Chebyshev's and Markov's Contributions". History of the central limit theorem: from classical to modern probability theory. New York: Springer. ISBN 978-0-387-87857-7. OCLC 682910965.


Source: https://en.wikipedia.org/wiki/Method_of_moments_(statistics)
