Methods of Point Estimation
By Suruchi Somwanshi
M.Sc. (Mathematics), M.Sc. (Statistics)
TOPICS TO BE COVERED
1. Introduction to statistical inference
2. Theory of estimation
3. Methods of estimation
3.1 Method of maximum likelihood estimation
3.2 Method of moments
1. Introduction to statistical inference
Statistics divides into descriptive statistics (descriptive analysis and graphical presentation) and inferential statistics (estimation and testing of hypothesis).
What do we mean by Statistical Inference?
Drawing conclusions or making decisions about a population based on information collected from a sample: a representative sample is drawn from the population, and conclusions about the population are made from it.
– Statistical inference is further divided into two parts: testing of hypothesis and theory of estimation.
Testing of hypothesis –
➢ The theory of testing of hypothesis was initiated by J. Neyman and E. S. Pearson.
➢ It provides the rules that allow one to decide whether to accept or reject the hypothesis under study.
Theory of estimation –
➢ The theory of estimation was founded by Prof. R. A. Fisher.
➢ It discusses the ways of assigning values to a population parameter based on the values of the corresponding statistics (functions of the sample observations).
2. Theory of estimation
➢ The theory of estimation was founded by R. A. Fisher.
Inferential statistics branches into estimation (point estimation and interval estimation) and testing of hypothesis.
What do we mean by Estimation?
It discusses the ways of assigning values to a population parameter based on the values of the corresponding statistics (functions of the sample observations).
The statistic used to estimate a population parameter is called an estimator.
The value taken by the estimator is called an estimate.
Types of estimation
There are two types of estimation: point estimation and interval estimation.
Point Estimation
It involves the use of sample data to calculate a single value (known as a point estimate) which serves as a best guess or best estimate of an unknown population parameter. More formally, it is the application of a point estimator to the data to obtain a point estimate.
Interval estimation
It is the use of sample data to calculate an interval of possible values of an unknown population parameter; this is in contrast to point estimation, which gives a single value.
The interval is formed by two quantities computed from the sample data, within which the parameter will lie with very high probability.
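To make the contrast concrete, here is a minimal Python sketch, not part of the original slides: the sample values are simulated and the 95% t-interval is just one common way to form an interval estimate.

```python
import numpy as np
from scipy import stats

# Illustrative data: the sample values below are simulated for demonstration.
rng = np.random.default_rng(3)
sample = rng.normal(loc=50.0, scale=10.0, size=100)

# Point estimate: a single number offered as the best guess of the population mean.
point_estimate = sample.mean()

# Interval estimate: two numbers between which the mean is expected to lie
# with high confidence (here a 95% t-interval, one common construction).
sem = stats.sem(sample)  # standard error of the sample mean
low, high = stats.t.interval(0.95, df=sample.size - 1, loc=point_estimate, scale=sem)

print("point estimate   :", round(point_estimate, 2))
print("interval estimate:", (round(low, 2), round(high, 2)))
```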
3. Methods of Estimation
– The following are some of the important methods for obtaining good estimators:
➢ Method of maximum likelihood estimation
➢ Method of moments
3.1 Method of maximum likelihood
estimation
– It was initially formulated by C. F. Gauss.
โ€“ In statistics, maximum likelihood estimation (MLE) is a method
of estimating the parameters of a probability distribution by
maximizing a likelihood function, so that under the
assumed statistical model the observed data is most probable.
The point in the parameter space that maximizes the likelihood
function is called the maximum likelihood estimate.
Likelihood function
It is formed from the joint density function of the sample.
i.e.,
$$L = L(\theta) = f(x_1, \theta) \cdots f(x_n, \theta) = \prod_{i=1}^{n} f(x_i, \theta)$$
where $x_1, x_2, x_3, \ldots, x_n$ is a random sample of size n from a population with density function $f(x, \theta)$.
Steps to perform MLE
1. Define the likelihood, making sure you use the correct distribution for your data.
2. Take the natural log, which reduces the product to a sum.
3. Then compute the parameter by solving
$$\frac{\partial}{\partial \theta} \log L = 0 \quad \text{and checking that} \quad \frac{\partial^2}{\partial \theta^2} \log L < 0.$$
The first of these equations is usually referred to as the likelihood equation for estimating the parameters.
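As a hedged illustration of these three steps (this example is added here, not taken from the slides), the Python sketch below assumes an Exponential(λ) model, writes its log-likelihood, maximizes it numerically, and compares the answer with the closed-form solution of the likelihood equation, λ̂ = 1/x̄:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed model for illustration: X ~ Exponential with rate lam, f(x) = lam * exp(-lam * x).
rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.0, size=500)  # simulated data, true rate lam = 2

def neg_log_likelihood(lam, data):
    """Negative of log L(lam) = n*log(lam) - lam*sum(data)."""
    return -(data.size * np.log(lam) - lam * data.sum())

# Steps 1-2: the log-likelihood above is already a sum; Step 3: maximize it
# (equivalently, minimize its negative) over lam > 0.
result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 50.0), args=(x,), method="bounded")

print("numerical MLE of lam                 :", result.x)
print("likelihood-equation solution 1/mean(x):", 1 / x.mean())
```

The numerical and closed-form answers should agree to several decimal places, which is a quick sanity check that the likelihood equation has been solved correctly.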
Example
Suppose we have a random sample $X_1, X_2, \ldots, X_n$ where:
$X_i = 0$ if a randomly selected student does not own a car, and
$X_i = 1$ if a randomly selected student does own a car.
Assuming that the $X_i$ are independent Bernoulli random variables with unknown parameter p, find the maximum likelihood estimator of p, the proportion of students who own a car.
Answer
If the $X_i$ are independent Bernoulli random variables with unknown parameter p, then the probability mass function of each $X_i$ is:
$$f(x; p) = p^x (1 - p)^{1 - x}$$
for $x = 0$ or $1$, and $0 < p < 1$.
Therefore, the likelihood function L(p) is, by definition:
$$L(p) = \prod_{i=1}^{n} f(x_i; p) = p^{x_1}(1 - p)^{1 - x_1} \times p^{x_2}(1 - p)^{1 - x_2} \times \cdots \times p^{x_n}(1 - p)^{1 - x_n}$$
for $0 < p < 1$.
Simplifying by summing the exponents, we get:
$$L(p) = p^{\sum_{i=1}^{n} x_i}\,(1 - p)^{\,n - \sum_{i=1}^{n} x_i} \qquad \ldots (1)$$
Now, in order to implement the method of maximum likelihood, we need to find the value of the unknown parameter p that maximizes the likelihood L(p) given in equation (1).
To maximize the function, we need to differentiate the likelihood function with respect to p.
To make the differentiation easier, we work with the logarithm of the likelihood function, since the logarithm is an increasing function: if $x_1 < x_2$, then $\log(x_1) < \log(x_2)$. This means that the value of p that maximizes the natural logarithm of the likelihood function, log L(p), is also the value of p that maximizes the likelihood function L(p).
So we take the derivative of log L(p) with respect to p instead of taking the derivative of L(p).
In this case, the log likelihood function is:
$$\log L(p) = \left(\sum_{i=1}^{n} x_i\right) \log p + \left(n - \sum_{i=1}^{n} x_i\right) \log(1 - p) \qquad \ldots (2)$$
Taking the derivative of log L(p) with respect to p and equating it to 0, we get:
$$\frac{\partial \log L(p)}{\partial p} = 0 \;\Rightarrow\; \frac{\sum x_i}{p} - \frac{n - \sum x_i}{1 - p} = 0$$
Now, solving this for p, we get:
$$\hat{p} = \frac{\sum x_i}{n}$$
Here the hat ("^") is used to represent the estimate of the parameter p.
Although we have found a candidate estimate of p, we still have to verify that it is a maximum. For that, the second derivative of log L(p) with respect to p should be negative, i.e.,
$$\frac{\partial^2 \log L(p)}{\partial p^2} = -\frac{\sum x_i}{p^2} - \frac{n - \sum x_i}{(1 - p)^2} < 0 \quad \text{for } 0 < p < 1.$$
Thus, $\hat{p} = \dfrac{\sum x_i}{n}$ is the maximum likelihood estimator of p.
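This result can be checked numerically. The sketch below uses made-up 0/1 data (an assumption for illustration), computes p̂ = Σxᵢ/n, and confirms that it maximizes the log-likelihood of equation (2) on a grid:

```python
import numpy as np

# Made-up 0/1 responses: 1 = the student owns a car, 0 = does not.
x = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 1])
n = x.size

# Closed-form MLE from the derivation above.
p_hat = x.sum() / n

# Cross-check: log L(p) from equation (2), evaluated on a grid of p values.
p_grid = np.linspace(0.01, 0.99, 981)
log_L = x.sum() * np.log(p_grid) + (n - x.sum()) * np.log(1 - p_grid)

print("closed-form MLE p_hat     :", p_hat)
print("grid maximizer of log L(p):", p_grid[np.argmax(log_L)])
```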
3.2 Method of moments
โ€“ This method was discovered and studied in detail by Karl Pearson.
โ€“ The basic idea behind this form of the method is to:
1. Equate the first sample moment about the origin, $M_1 = \frac{1}{n}\sum_{i=1}^{n} X_i = \bar{x}$, to the first theoretical moment E(X).
2. Equate the second sample moment about the origin, $M_2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2$, to the second theoretical moment $E(X^2)$.
3. Continue equating sample moments about the origin, $M_k$, with the corresponding theoretical moments $E(X^k)$, k = 3, 4, ... until you have as many equations as you have parameters.
4. Solve these equations for the parameters.
โ€“ The resulting values are called method of moments estimators. It
seems reasonable that this method would provide good estimates,
since the empirical distribution converges in some sense to the
probability distribution. Therefore, the corresponding moments
should be about equal.
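As an illustration of this recipe with two parameters, the sketch below applies it to a Gamma(shape k, scale θ) model; the Gamma example and the moment formulas quoted in the comments are assumptions added here, not part of the original slides:

```python
import numpy as np

# Assumed example: X ~ Gamma(shape k, scale theta), for which
#   E[X] = k * theta and E[X^2] = k * (k + 1) * theta^2.
# Matching M1 = E[X] and M2 = E[X^2] and solving gives
#   theta_hat = (M2 - M1**2) / M1 and k_hat = M1 / theta_hat.
rng = np.random.default_rng(1)
x = rng.gamma(shape=3.0, scale=2.0, size=5000)  # simulated sample, true k = 3, theta = 2

M1 = x.mean()          # first sample moment about the origin
M2 = np.mean(x ** 2)   # second sample moment about the origin

theta_hat = (M2 - M1 ** 2) / M1
k_hat = M1 / theta_hat

print("method-of-moments estimates: k_hat =", round(k_hat, 2), ", theta_hat =", round(theta_hat, 2))
```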
Another Form of the Method
โ€“ The basic idea behind this form of the method is to:
1. Equate the first sample moment about the origin, $M_1 = \frac{1}{n}\sum_{i=1}^{n} X_i = \bar{x}$, to the first theoretical moment E(X).
2. Equate the second sample moment about the mean, $M_2^* = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2$, to the second theoretical moment about the mean $E[(X - \mu)^2]$.
3. Continue equating sample moments about the mean, $M_k^*$, with the corresponding theoretical moments about the mean $E[(X - \mu)^k]$, k = 3, 4, ... until you have as many equations as you have parameters.
4. Solve for the parameters.
โ€“ Again, the resulting values are called method of moments
estimators.
Example
Let $X_1, X_2, \ldots, X_n$ be normal random variables with mean $\mu$ and variance $\sigma^2$. What are the method of moments estimators of the mean $\mu$ and variance $\sigma^2$?
Answer
The first and second theoretical moments about the origin are:
$$E(X_i) = \mu \quad \text{and} \quad E(X_i^2) = \sigma^2 + \mu^2$$
Here we have two parameters for which we are trying to derive method of moments estimators.
Therefore, we need two equations here. Equating the first theoretical moment about the origin with the corresponding sample moment, we get:
$$E(X_i) = \mu = \frac{1}{n}\sum_{i=1}^{n} X_i \qquad \ldots (1)$$
And equating the second theoretical moment about the origin with the corresponding sample moment, we get:
$$E(X_i^2) = \sigma^2 + \mu^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 \qquad \ldots (2)$$
Now from equation (1) we see that the method of moments estimator for the mean $\mu$ is the sample mean:
$$\hat{\mu}_{MM} = \frac{1}{n}\sum_{i=1}^{n} X_i = \bar{X}$$
And by substituting the sample mean as the estimator of $\mu$ in the second equation and solving for $\sigma^2$, we get that the method of moments estimator for the variance $\sigma^2$ is:
$$\hat{\sigma}^2_{MM} = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \hat{\mu}_{MM}^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar{X}^2 = \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2$$
For this example, if we cross-check, the method of moments estimators turn out to be the same as the maximum likelihood estimators.
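A small numeric sketch of the two estimators just derived; the simulated sample and seed are illustrative assumptions:

```python
import numpy as np

# Simulated normal sample (illustrative): true mu = 5, sigma^2 = 9.
rng = np.random.default_rng(2)
x = rng.normal(loc=5.0, scale=3.0, size=2000)

mu_hat = x.mean()                            # (1/n) * sum(X_i)
sigma2_hat = np.mean(x ** 2) - mu_hat ** 2   # (1/n) * sum(X_i^2) - X_bar^2

# The last equality in the derivation: this equals the mean squared deviation.
print("mu_hat     :", round(mu_hat, 3))
print("sigma2_hat :", round(sigma2_hat, 3))
print("equal to mean((x - x_bar)^2)?", bool(np.isclose(sigma2_hat, np.mean((x - mu_hat) ** 2))))
```

Because a normal family is being fitted here, these values also match the maximum likelihood estimates, as noted above.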
Thank You