Entropy Inference Based on An Objective Bayesian Approach for Upper Record Values Having the Two-Parameter Logistic Distribution

Abstract: This paper provides an entropy inference method based on an objective Bayesian approach for upper record values having the two-parameter logistic distribution. We derive the entropy based on the i-th upper record value and the joint entropy based on the upper record values, and examine their properties. For objective Bayesian analysis, we provide objective priors such as the Jeffreys and reference priors for the unknown parameters of the logistic distribution based on upper record values. Then, an entropy inference method based on these objective priors is developed. In a real data analysis, we assess the quality of the proposed models under the objective priors.


Introduction
Shannon [1] proposed information theory to quantify information loss and introduced statistical entropy. Baratpour et al. [2] provided the entropy of a continuous probability distribution with upper record values and several bounds for this entropy by using the hazard rate function. Abo-Eleneen [3] suggested an efficient computation method for entropy in progressively Type-II censored samples. Kang et al. [4] derived estimators of the entropy of a double-exponential distribution based on multiply Type-II censored samples by using maximum likelihood estimators (MLEs) and approximate MLEs (AMLEs). Seo and Kang [5] developed estimation methods for entropy by using estimators of the shape parameter in the generalized half-logistic distribution based on Type-II censored samples.
This paper provides an entropy inference method based on an objective Bayesian approach for upper record values having the two-parameter logistic distribution. The cumulative distribution function (cdf) and probability density function (pdf) of a random variable X with this distribution are given by
$$F(x) = \left[1 + \exp\left(-\frac{x-\mu}{\sigma}\right)\right]^{-1}, \qquad f(x) = \frac{\exp\left(-\frac{x-\mu}{\sigma}\right)}{\sigma\left[1 + \exp\left(-\frac{x-\mu}{\sigma}\right)\right]^{2}}, \quad x \in \mathbb{R}, \tag{1}$$
where µ is the location parameter and σ is the scale parameter. The rest of this paper is organized as follows: Section 2 provides the Jeffreys and reference priors, and derives the entropy inference method based on the provided noninformative priors. Section 3 analyzes a real data set to show the validity of the proposed method, and Section 4 concludes this paper.
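The cdf and pdf in (1) are easy to evaluate numerically; the following minimal Python sketch (function names are ours, not from the paper) implements both:

```python
import math

def logistic_cdf(x, mu, sigma):
    # F(x) = [1 + exp(-(x - mu)/sigma)]^{-1}
    return 1.0 / (1.0 + math.exp(-(x - mu) / sigma))

def logistic_pdf(x, mu, sigma):
    # f(x) = exp(-(x - mu)/sigma) / (sigma * [1 + exp(-(x - mu)/sigma)]^2)
    z = math.exp(-(x - mu) / sigma)
    return z / (sigma * (1.0 + z) ** 2)
```

At the location parameter x = µ the cdf equals 0.5 and the pdf equals 1/(4σ), which gives a quick sanity check of the formulas.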

Objective Priors
Let X_{U(1)}, . . ., X_{U(k)} be the upper record values from a sample X_1, . . ., X_n of the logistic distribution with pdf (1). Then the corresponding likelihood function is given by
$$L(\mu, \sigma) = f\left(x_{U(k)}\right) \prod_{i=1}^{k-1} \frac{f\left(x_{U(i)}\right)}{1 - F\left(x_{U(i)}\right)}.$$
The Fisher information matrix for (µ, σ) is given in (2). By the result provided in Asgharzadeh et al. [9], all elements of the Fisher information matrix (2) are proportional to 1/σ². Therefore, because the Jeffreys prior is by definition proportional to the square root of the determinant of the Fisher information matrix, it is
$$\pi_J(\mu, \sigma) \propto \frac{1}{\sigma^2}. \tag{3}$$
However, the Jeffreys prior has some drawbacks in the multi-parameter case, such as the marginalization paradox and the Neyman-Scott problem. Alternatively, Bernardo [10] introduced the reference prior, and Berger and Bernardo [11,12] provided a general algorithm for deriving it. By using this algorithm, we obtain the reference prior
$$\pi_R(\mu, \sigma) \propto \frac{1}{\sigma}, \tag{4}$$
regardless of which parameter is of interest. Unfortunately, the marginal posterior distributions of µ and σ under the objective priors (3) and (4) cannot be expressed in closed form. To generate Markov chain Monte Carlo (MCMC) samples from these marginal distributions, we employ an MCMC technique.
The full conditional posterior distributions for µ and σ under a joint prior π(µ, σ) are given by (5) and (6), respectively.
Under both objective priors (3) and (4), the full conditional posterior distribution (5) is log-concave. Therefore, we can draw the MCMC samples µ_i (i = 1, . . ., N) from the conditional posterior distribution (5) by using the method proposed by Devroye [13]. We also note that σ ∈ R+, while µ ∈ R and X_{U(i)} ∈ R. In this case, it is not easy to find a suitable proposal distribution for drawing the MCMC samples σ_i (i = 1, . . ., N) from the full conditional posterior distribution (6). Therefore, we employ a random-walk Metropolis algorithm based on a normal proposal distribution truncated at zero.
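A random-walk Metropolis step with a zero-truncated normal proposal can be sketched as follows. This is a generic illustration, not the paper's implementation: `log_post` stands for the (unnormalized) log full conditional (6), and the Hastings correction accounts for the asymmetry introduced by truncating the proposal at zero.

```python
import math
import random

def norm_cdf(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rw_metropolis_positive(log_post, start, prop_sd, n_iter, seed=0):
    """Random-walk Metropolis for a parameter with support (0, inf),
    using a normal proposal truncated at zero."""
    rng = random.Random(seed)
    cur = start
    chain = []
    for _ in range(n_iter):
        cand = rng.gauss(cur, prop_sd)
        while cand <= 0.0:                      # draw from the truncated proposal
            cand = rng.gauss(cur, prop_sd)
        # Hastings ratio: target ratio times q(cur | cand) / q(cand | cur),
        # where the truncation contributes the Phi(.) normalizing terms
        log_ratio = (log_post(cand) - log_post(cur)
                     + math.log(norm_cdf(cur / prop_sd))
                     - math.log(norm_cdf(cand / prop_sd)))
        if math.log(rng.random()) < log_ratio:
            cur = cand
        chain.append(cur)
    return chain
```

For example, with `log_post = lambda s: -s` (an Exp(1) target on σ > 0), the chain's long-run mean should be close to 1. The proposal standard deviation plays the role of the tuning variance mentioned in the application section.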

Entropy
Theorem 1. The entropy based on the ith upper record value X_{U(i)} is given by (7).

Remark 1. The entropy (7) is clearly an increasing function of σ. Therefore, the larger σ is, the less information the distribution provides, because the entropy increases.
Remark 2. We can obtain the following relationship between two adjacent entropies, which is, as in Remark 1, an increasing function of σ.
Proof. The joint entropy based on the upper record values X_{U(1)}, . . ., X_{U(k)} is defined by Park [14] as (8), where the integrand involves the joint density function of x_{U(1)}, . . ., x_{U(k)}. In addition, Rad et al. [15] simplified it to the single-integral form (9). Evaluating the integral term in (9) completes the proof.
We present these changes of the entropy (7) and the joint entropy (8) in Tables 1 and 2 and Figure 1. Note that σ is an unknown parameter and must be estimated when real data are observed. The following theorem provides an estimator of the joint entropy (8) in a Bayesian framework.

Theorem 3. The Bayes estimator of the joint entropy is given by (10), where E_{π|x}(·) denotes the posterior expectation.
Proof. In the Bayesian view, the entropy estimator based on X_{U(1)}, . . ., X_{U(k)} is defined as (11). Then, the estimator (11) yields (10). The term E_{π|x}(log σ) in (10) is approximated by the ergodic average
$$E_{\pi|x}(\log \sigma) \approx \frac{1}{N - M} \sum_{i=M+1}^{N} \log \sigma_i,$$
where N is the total number of MCMC samples and M is the number of burn-in samples. This completes the proof.
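The posterior expectation above is simply an average of log σ over the draws retained after burn-in; a minimal sketch (the function name is ours):

```python
import math

def posterior_mean_log_sigma(chain, burn_in):
    """Approximate E_{pi|x}(log sigma) by averaging log(sigma_i)
    over the N - M draws kept after discarding M burn-in samples."""
    kept = chain[burn_in:]
    return sum(math.log(s) for s in kept) / len(kept)
```

Plugging this average into (10) gives the Monte Carlo approximation of the Bayes estimator of the joint entropy.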
The following section examines the validity of the proposed objective Bayesian method by analyzing a real data set.

Application
Asgharzadeh et al. [9] analyzed the upper record values 2.70, 3.78, 4.83, 8.02, 8.37 from the total annual rainfall (in inches) during March recorded at the Los Angeles Civic Center from 1973 to 2006. The MCMC samples are generated by using the MCMC algorithm described in Subsection 2.1. To obtain the optimal acceptance rate (see Robert and Rosenthal [7]) under the provided priors (3) and (4), the variances of the truncated normal proposal are set to 0.7 and 0.8, respectively. Based on 5,500 MCMC samples with 500 burn-in samples, the Bayes estimates under the squared error loss function and the corresponding 95% HPD CrIs are computed and compared with the MLEs. The results are reported in Table 3. To check the validity of the MCMC samples, we present their autocorrelation functions (ACFs) and trace plots. From Figures 2 and 3, we can see that the MCMC samples mix well and converge to the stationary distribution. Further, we assess the quality of the Bayesian models under the provided priors (3) and (4) based on the replications X^rep_{U(i)} (i = 1, . . ., 5) of the observed upper record values drawn from the posterior predictive distributions, where f_{X^rep}(x^rep) is the marginal density function of X^rep and X^{rep(j)}_{U(i)} denotes a sample from it. The replications and their mean and standard deviation (std) are given in Table 4. The mean and standard deviation of the observed upper record values are 5.54 and 2.541, respectively. The model under the Jeffreys prior (3) shows better performance for the replications X^rep_{U(i)} (i = 1, 2, 3, 5) and the mean, while that under the reference prior (4) shows better performance for the replication X^rep_{U(4)} and the std. However, there is no significant difference between the replications under the priors (3) and (4); that is, we cannot conclude which model is better for the observed upper record values. Therefore, we report the estimation results of the joint entropy (10) under both priors (3) and (4) in Table 5. Further, we present kernel densities of the joint entropies based on the MCMC samples to display them graphically.
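The paper does not spell out how the HPD credible intervals are computed; a common approximation takes the shortest contiguous interval of sorted MCMC draws containing the desired posterior mass. A minimal sketch under that assumption:

```python
import math

def hpd_interval(samples, prob=0.95):
    """Approximate highest posterior density interval from MCMC draws:
    the shortest interval over sorted samples that contains about
    `prob` of the draws (assumes a unimodal posterior)."""
    s = sorted(samples)
    n = len(s)
    m = int(math.ceil(prob * n))       # number of draws inside the interval
    widths = [s[i + m - 1] - s[i] for i in range(n - m + 1)]
    j = min(range(len(widths)), key=widths.__getitem__)
    return s[j], s[j + m - 1]
```

Unlike an equal-tailed interval, this construction shifts toward the mode for skewed posteriors, which is why HPD CrIs are the natural summary for the scale parameter σ and the joint entropy here.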

Conclusions
This paper proposed an entropy inference method based on an objective Bayesian approach for upper record values having the two-parameter logistic distribution. First, we provided noninformative priors, namely the Jeffreys and reference priors, for the unknown parameters of the two-parameter logistic distribution; we then derived the entropy based on the i-th upper record value and the joint entropy based on the upper record values, and examined their properties. We evaluated the objective Bayesian models under the provided objective priors through posterior predictive checking, conducted on replications of the observed upper record values. The proposed objective Bayesian approach is useful when there is not enough prior information, and it saves the effort and time needed to elicit prior information.

Figure 1. Entropy for upper record values.

Table 1. Entropy based on the ith upper record value X_{U(i)}.

Table 3. Estimates and the corresponding 95% HPD CrIs for µ and σ.

Table 4. Replications and their mean and standard deviation (std).

Table 5. Estimates and the corresponding 95% HPD CrIs of the joint entropy Ĥ_B.