Statistics/Point Estimates


Point Estimates

Definition: Suppose a random variable $X$ follows a statistical distribution (or law) $P_\theta$ indexed by a parameter $\theta \in \Theta$. Then a function $g : \mathcal{X} \to \Theta$ from the sample space $\mathcal{X}$ to the parameter space $\Theta$ is called a point estimator of $\theta$.

In general, let $f(\theta)$ be any function of $\theta$. Then any function from the sample space to the range of $f$ will be called a point estimator of $f(\theta)$.

Definition: If $h$ is a point estimator for $\theta$, then for a realization $x$ of the random variable $X$, the quantity $h(x)$ is called a point estimate of $\theta$ and is denoted by $\hat{\theta}$.

Notice that the estimator $h(X)$ is a random variable (unlike the true parameter $\theta$), since it depends on $X$; the estimate $\hat{\theta} = h(x)$ is its realized value for the observed data $x$.
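A minimal Python sketch of this distinction, assuming NumPy is available (the true parameter value, sample size, and choice of the sample mean as $h$ are illustrative, not from the text): applying the same rule $h$ to repeated realizations of $X$ produces a different estimate each time, which is the sense in which $h(X)$ is random while $\theta$ stays fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 5.0   # true (fixed) parameter: here, the mean of the distribution
n = 100       # sample size (illustrative choice)

# The estimator h is a fixed rule; here we take the sample mean.
def h(sample):
    return sample.mean()

# Each realization x of X = (X_1, ..., X_n) yields a different point
# estimate h(x), while theta itself never changes.
for _ in range(3):
    x = rng.normal(loc=theta, scale=2.0, size=n)
    print(f"estimate = {h(x):.4f}   (true theta = {theta})")
```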

Examples

  1. Suppose $X_1, X_2, \ldots, X_n$ follow independent $\mathrm{Normal}(\mu, \sigma^2)$. Then an estimator for the mean $\mu$ is the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$.
  2. Suppose $X_1, X_2, \ldots, X_n$ follow $\mathrm{Uniform}[\theta, \theta+1]$. Then an estimator for $\theta$ is $\min_i X_i$. Another is $\max_i X_i - 1$. Yet another is $\frac{1}{n}\sum_{i=1}^{n} X_i - \frac{1}{2}$, since the distribution has mean $\theta + \frac{1}{2}$. All three are computed in the sketch below.
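Here is a short Python sketch of example 2, again assuming NumPy (the true $\theta$ and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 3.0                                  # true parameter (illustrative)
x = rng.uniform(theta, theta + 1, size=50)   # sample from Uniform[theta, theta + 1]

# The three point estimators of theta from example 2:
est_min  = x.min()         # min X_i      (never below theta, so it overestimates)
est_max  = x.max() - 1     # max X_i - 1  (never above theta, so it underestimates)
est_mean = x.mean() - 0.5  # sample mean - 1/2, since E[X_i] = theta + 1/2

print(f"min X_i        = {est_min:.4f}")
print(f"max X_i - 1    = {est_max:.4f}")
print(f"mean X_i - 1/2 = {est_mean:.4f}")
print(f"true theta     = {theta}")
```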

Notice that the above definition does not restrict point estimators to only the "good" ones. For example, according to the definition it is perfectly fine to estimate the mean $\mu$ in the above example by something absurd, like $10\sum_{i=1}^{n} X_i^2 + \exp(X_1)$. This freedom in the definition is deliberate. In general, however, when we form point estimators we take some measure of goodness into account. It should be kept in mind that a point estimator should always be targeted to be close to the parameter it estimates, intuitively and, where possible, formally; the simulation below illustrates the contrast.
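To see why such a measure of goodness matters, here is a hedged Monte Carlo sketch in Python with NumPy ($\mu$, $\sigma$, the sample size, and the replication count are all illustrative choices): it compares the average distance from $\mu$ of the sample mean against that of the absurd estimator above.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 2.0, 1.0, 30, 10_000   # illustrative settings

err_mean, err_absurd = [], []
for _ in range(reps):
    x = rng.normal(mu, sigma, size=n)
    sensible = x.mean()                          # the sample mean
    absurd = 10 * np.sum(x**2) + np.exp(x[0])    # legal by the definition, but useless
    err_mean.append(abs(sensible - mu))
    err_absurd.append(abs(absurd - mu))

print(f"mean |error|, sample mean:      {np.mean(err_mean):.4f}")
print(f"mean |error|, absurd estimator: {np.mean(err_absurd):.4f}")
```

On typical runs the sample mean lands within a small fraction of a unit of $\mu$, while the absurd estimator is off by orders of magnitude: both satisfy the definition, but only one is close to the parameter it targets.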