Comparison of Two Parametric Estimators for the Entropy of the Lognormal Distribution

Title & Authors
Choi, Byung-Jin

Abstract
This paper considers two parametric estimators of the entropy of the lognormal distribution, the minimum variance unbiased estimator (MVUE) and the maximum likelihood estimator (MLE), and compares their properties. The variances of both estimators are derived, and the effect of the bias of the MLE on estimation is analyzed. The distributions of the two estimators, obtained by the delta method, are also presented. A performance comparison yields the following observations: the MSE efficacy of the MVUE is consistently high and increases rapidly as the sample size $n$ and the variance $\sigma^2$ become simultaneously small. In conclusion, the MVUE outperforms the MLE.
Keywords
Lognormal distribution; entropy; parametric entropy estimator; consistency; normal approximation; MSE efficacy
Language
Korean
References
1. Aitchison, J. and Brown, J. A. C. (1957). The Lognormal Distribution, Cambridge University Press, Cambridge.

2. Burbea, J. and Rao, C. R. (1982). Entropy differential metric, distance and divergence measures in probability spaces: A unified approach, Journal of Multivariate Analysis, 12, 576-579.

3. Crow, E. L. and Shimizu, K. (1988). Lognormal Distributions: Theory and Applications, Marcel Dekker, New York.

4. Davies, G. R. (1929). The analysis of frequency distributions, Journal of the American Statistical Association, 24, 467-480.

5. Finney, D. J. (1941). On the distribution of a variate whose logarithm is normally distributed, Journal of the Royal Statistical Society, Series B, 7, 155-161.

6. Galton, F. (1879). The geometric mean in vital and social statistics, Proceedings of the Royal Society of London, 29, 365-367.

7. Havrda, J. and Charvat, F. (1967). Quantification method in classification processes: Concept of structural $\alpha$-entropy, Kybernetika, 3, 30-35.

8. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, Volume 1, John Wiley & Sons, New York.

9. Kapteyn, J. C. (1903). Skew Frequency Curves in Biology and Statistics, Astronomical Laboratory Noordhoff, Groningen.

10. Kapteyn, J. C. and van Uven, M. J. (1916). Skew Frequency Curves in Biology and Statistics, Hoitsema Brothers, Groningen.

11. Kapur, J. N. and Kesavan, H. K. (1992). Entropy Optimization Principles with Applications, Academic Press, San Diego.

12. Koopmans, L. H., Owen, D. B. and Rosenblatt, J. I. (1964). Confidence intervals for the coefficient of variation for the normal and lognormal distributions, Biometrika, 51, 25-32.

13. Kullback, S. and Leibler, R. A. (1951). On information and sufficiency, The Annals of Mathematical Statistics, 22, 79-86.

14. Nakamura, T. (1991). Existence of maximum likelihood estimates for interval-censored data from some three-parameter models with a shift origin, Journal of the Royal Statistical Society, Series B, 53, 211-220.

15. Nydell, S. (1919). The mean errors of the characteristics in logarithmic-normal distribution, Skandinavisk Aktuarietidskrift, 1, 134-144.

16. Olshen, A. C. (1937). Transformations of the Pearson Type III distributions, The Annals of Mathematical Statistics, 8, 176-200.

17. Shannon, C. E. (1948). A mathematical theory of communication, Bell System Technical Journal, 27, 379-423, 623-656.

18. Soofi, E. S. and Retzer, J. J. (2002). Information indices: Unification and applications, Journal of Econometrics, 107, 17-40.

19. Ullah, A. (1996). Entropy, divergence and distance measures with econometric applications, Journal of Statistical Planning and Inference, 49, 137-162.

20. Wu, C. Y. (1966). The types of limit distribution for some terms of variational series, Scientia Sinica, 15, 745-762.