Sampling & Estimation (Topic #21 of 33)

Point Estimation

Estimating population parameters: unbiased estimators, maximum likelihood, and method of moments.

Overview

Point estimation uses sample data to calculate a single value (point estimate) as the best guess for an unknown population parameter.

Key Terms

| Term | Definition |
|------|------------|
| Parameter | Unknown population characteristic ($\mu$, $\sigma$, $p$) |
| Estimator | Rule or formula used to calculate an estimate |
| Estimate | Specific value calculated from the data |

Common Point Estimators

| Parameter | Point Estimator |
|-----------|-----------------|
| Population mean ($\mu$) | Sample mean ($\bar{x}$) |
| Population variance ($\sigma^2$) | Sample variance ($s^2$) |
| Population proportion ($p$) | Sample proportion ($\hat{p}$) |
| Population SD ($\sigma$) | Sample SD ($s$) |

Properties of Good Estimators

1. Unbiasedness

An estimator is unbiased if:

$$E(\hat{\theta}) = \theta$$

The expected value of the estimator equals the parameter.

  • $\bar{x}$ is unbiased for $\mu$
  • $s^2$ is unbiased for $\sigma^2$ (with $n-1$ in the denominator)
  • $\hat{p}$ is unbiased for $p$
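Unbiasedness can be checked by simulation: averaging many sample means should land on the population mean. A minimal sketch, with the parameters ($\mu = 10$, $\sigma = 2$, $n = 30$) chosen purely for illustration:

```python
import random
import statistics

# Unbiasedness sketch: the average of many sample means should be
# close to the population mean mu, illustrating E(x-bar) = mu.
# All parameter values here are assumed for illustration.
random.seed(0)
mu, sigma, n, trials = 10.0, 2.0, 30, 20_000

sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(trials)
]

print(round(statistics.fmean(sample_means), 2))  # close to mu = 10.0
```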

2. Efficiency

Among unbiased estimators, the most efficient has the smallest variance.

3. Consistency

As $n \to \infty$, the estimator converges in probability to the true parameter:

$$\hat{\theta} \xrightarrow{p} \theta$$
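Consistency can be sketched numerically: as the sample size grows, the sample mean's typical error shrinks. The sample sizes and normal parameters below are assumed for illustration:

```python
import random
import statistics

# Consistency sketch: the sample mean's average absolute error from
# the true mean (mu = 5.0, sigma = 3.0, both assumed) shrinks as n grows.
random.seed(1)
mu = 5.0

mae_by_n = {}
for n in (10, 100, 1000):
    # mean absolute error over 2000 repeated samples of size n
    mae_by_n[n] = statistics.fmean(
        abs(statistics.fmean(random.gauss(mu, 3.0) for _ in range(n)) - mu)
        for _ in range(2000)
    )

print(mae_by_n)  # the error shrinks roughly like 1/sqrt(n)
```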

4. Sufficiency

A sufficient estimator uses all of the information about the parameter that the sample contains.

Bias and Variance

$$\text{MSE}(\hat{\theta}) = \text{Var}(\hat{\theta}) + \text{Bias}^2 = \text{Var}(\hat{\theta}) + [E(\hat{\theta}) - \theta]^2$$

For unbiased estimators: MSE = Variance
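The decomposition can be verified numerically. A sketch using the variance estimator that divides by $n$ (which is biased, as shown in Example 3 below), with $\sigma^2 = 4$ assumed for illustration:

```python
import random
import statistics

# Numerical check of MSE = Var + Bias^2 for the biased variance
# estimator that divides by n.  True variance sigma2 = 4.0 (assumed).
random.seed(2)
sigma2, n, trials = 4.0, 10, 50_000

def var_n(xs):
    # sample variance with n in the denominator -- biased for sigma^2
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

est = [var_n([random.gauss(0.0, 2.0) for _ in range(n)]) for _ in range(trials)]

mse  = statistics.fmean((e - sigma2) ** 2 for e in est)
var  = statistics.pvariance(est)
bias = statistics.fmean(est) - sigma2

print(round(mse, 3), round(var + bias ** 2, 3))  # the two sides agree
```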

Methods of Estimation

Method of Moments

Match sample moments to population moments:

$$\text{Sample mean} = \text{Population mean} \quad\Rightarrow\quad \bar{x} = \mu$$
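As a worked sketch (not from the text above): for an exponential distribution the population mean is $1/\lambda$, so matching it to $\bar{x}$ gives the method-of-moments estimator $\hat{\lambda} = 1/\bar{x}$. The rate $\lambda = 2$ is assumed for illustration:

```python
import random
import statistics

# Method-of-moments sketch for an exponential distribution:
# setting x-bar equal to the population mean 1/lambda and solving
# gives lambda-hat = 1 / x-bar.  lambda = 2.0 is an assumed value.
random.seed(3)
lam, n = 2.0, 100_000

xs = [random.expovariate(lam) for _ in range(n)]
lam_hat = 1.0 / statistics.fmean(xs)

print(round(lam_hat, 2))  # close to the true lambda = 2.0
```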

Maximum Likelihood Estimation (MLE)

Find the $\theta$ that maximizes the likelihood of observing the data:

$$L(\theta) = P(\text{data} \mid \theta), \qquad \hat{\theta}_{\text{MLE}} = \arg\max_{\theta} L(\theta)$$
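A minimal MLE sketch for Bernoulli data (the 0/1 sample below is hypothetical): a grid search over the log-likelihood recovers the well-known closed-form MLE $\hat{p} = k/n$:

```python
import math

# MLE sketch: for Bernoulli data, L(p) = p^k (1-p)^(n-k).  A coarse
# grid search over p shows the log-likelihood peaks at p-hat = k/n.
data = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # hypothetical 0/1 sample
k, n = sum(data), len(data)

def log_lik(p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
p_mle = max(grid, key=log_lik)

print(p_mle, k / n)  # grid maximizer agrees with the closed form k/n
```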

Examples

Example 1: Estimating Mean

Sample data: 12, 15, 18, 21, 24

$$\bar{x} = \frac{12 + 15 + 18 + 21 + 24}{5} = \frac{90}{5} = 18$$

The point estimate of $\mu$ is 18.
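The same calculation, using the standard library's `statistics.fmean`:

```python
import statistics

# The sample mean from Example 1, computed directly.
data = [12, 15, 18, 21, 24]
xbar = statistics.fmean(data)
print(xbar)  # 18.0
```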

Example 2: Estimating Proportion

In a sample of 200 voters, 120 support a candidate:

$$\hat{p} = \frac{120}{200} = 0.60$$

The point estimate of $p$ is 0.60 (60%).

Example 3: Unbiased Variance

Why divide by $n-1$?

If we used $n$:

$$E\left[\frac{\sum (x_i - \bar{x})^2}{n}\right] = \frac{n-1}{n}\,\sigma^2 \neq \sigma^2 \quad \text{(biased)}$$

Using $n-1$:

$$E\left[\frac{\sum (x_i - \bar{x})^2}{n-1}\right] = \sigma^2 \quad \text{(unbiased)}$$
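A simulation sketch of this bias ($\sigma = 3$ and $n = 5$ are assumed values): averaging the divide-by-$n$ estimator over many samples lands near $\frac{n-1}{n}\sigma^2$, while the divide-by-$(n-1)$ version lands near $\sigma^2$ itself:

```python
import random
import statistics

# Bias sketch: with sigma^2 = 9.0 and n = 5 (both assumed), the /n
# estimator averages near (4/5) * 9 = 7.2, the /(n-1) one near 9.0.
random.seed(4)
sigma2, n, trials = 9.0, 5, 100_000

biased, unbiased = [], []
for _ in range(trials):
    xs = [random.gauss(0.0, 3.0) for _ in range(n)]
    m = statistics.fmean(xs)
    ss = sum((x - m) ** 2 for x in xs)
    biased.append(ss / n)           # divides by n
    unbiased.append(ss / (n - 1))   # divides by n - 1

print(round(statistics.fmean(biased), 2))    # near 7.2
print(round(statistics.fmean(unbiased), 2))  # near 9.0
```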

Standard Error of Estimators

| Estimator | Standard Error |
|-----------|----------------|
| $\bar{x}$ | $\sigma/\sqrt{n}$ (or $s/\sqrt{n}$ when $\sigma$ is unknown) |
| $\hat{p}$ | $\sqrt{p(1-p)/n}$ (or $\sqrt{\hat{p}(1-\hat{p})/n}$ in practice) |
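Applying these formulas in code, using the proportion from Example 2 and an assumed $s = 4$, $n = 25$ for the mean's standard error:

```python
import math

# Standard-error formulas from the table above, applied to
# Example 2 (p-hat = 120/200, n = 200) and to a hypothetical
# sample with s = 4 and n = 25 (assumed values).
p_hat, n_p = 120 / 200, 200
se_p = math.sqrt(p_hat * (1 - p_hat) / n_p)

s, n_x = 4.0, 25
se_xbar = s / math.sqrt(n_x)

print(round(se_p, 4))     # about 0.0346
print(round(se_xbar, 2))  # 0.8
```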

Limitations

Point estimates:

  • Give a single value, not a range
  • Don't indicate uncertainty or precision
  • Are almost never exactly equal to the true parameter (a single value carries no margin of error)

→ This motivates interval estimation (confidence intervals)