Abstract

Most topics in statistical inference concern the estimation of parameters of probability distributions of known form, where the distribution of the observed random variable depends on an unknown parameter or a vector of parameters. In non-Bayesian estimation methods, statisticians assume that the parameter is an unknown fixed constant, so that the information provided by a sample of appropriate size drawn from the population is sufficient for estimation. In many practical applications, however, the parameter is better treated as a random variable rather than a fixed quantity. Its estimation therefore requires prior information in the form of a probability distribution, typically proposed by the statistician and known as the prior distribution. Accordingly, the Bayesian approach is more suitable than methods such as maximum likelihood estimation, because it combines the available information about the parameter with the sample information through a probability distribution called the posterior distribution. This leads to more accurate and realistic results in statistical inference, whether in hypothesis testing or estimation. Furthermore, the Bayesian estimator maximizes expected utility as a function of the observed data; in particular, under the squared error loss function it is the mean of the posterior distribution. It is also characterized as a biased yet admissible estimator that is more efficient than any other non-Bayesian estimator. This research derives a Bayesian confidence interval for the mean of a normally distributed random variable with known variance, assuming that the prior distribution proposed for the parameter is also normal with known mean and variance.
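For the normal-normal model the abstract describes (normal likelihood with known variance and a normal prior on the mean), the posterior is available in closed form. The following minimal sketch computes the posterior mean and a symmetric credible interval; the numeric values in the example are illustrative assumptions, not figures from the paper.

```python
from statistics import NormalDist

def bayes_normal_interval(x_bar, n, sigma2, mu0, tau2, level=0.95):
    """Posterior mean and credible interval for a normal mean with known
    variance sigma2, under a conjugate N(mu0, tau2) prior on the mean."""
    # Posterior precision is the sum of the prior and data precisions.
    post_prec = 1.0 / tau2 + n / sigma2
    post_var = 1.0 / post_prec
    # Posterior mean is a precision-weighted average of mu0 and x_bar;
    # under squared error loss this is the Bayes estimator.
    post_mean = post_var * (mu0 / tau2 + n * x_bar / sigma2)
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    half_width = z * post_var ** 0.5
    return post_mean, (post_mean - half_width, post_mean + half_width)

# Illustrative values: prior N(0, 1), sigma^2 = 4, n = 25, sample mean 1.
mean, (lo, hi) = bayes_normal_interval(x_bar=1.0, n=25, sigma2=4.0,
                                       mu0=0.0, tau2=1.0)
```

Note how the posterior mean shrinks the sample mean toward the prior mean, with the amount of shrinkage governed by the relative precisions of the prior and the data.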

DOI

10.33095/t0ez4n03

Subject Area

Statistics

First Page

273

Last Page

278

Rights

https://creativecommons.org/licenses/by-nc/4.0
