Publication server of the University Library of Marburg

Title: Identification and estimation of hidden Markov models
Author: Alexandrovich, Grigory
Further contributors: Holzmann, Hajo (Prof. Dr.)
Published: 2014
URI: https://archiv.ub.uni-marburg.de/diss/z2014/0393
URN: urn:nbn:de:hebis:04-z2014-03934
DOI: https://doi.org/10.17192/z2014.0393
DDC: Mathematics
Title (trans.): Identifizierung und Schätzung von hidden Markov Modellen
Publication date: 2014-09-25
License: https://rightsstatements.org/vocab/InC-NC/1.0/

Document

Keywords:
maximum likelihood, identification, estimation, statistics, consistency, hidden Markov models, mixture models

Summary:
This thesis considers several selected aspects of two related latent-class models: finite mixtures and hidden Markov models. The computation of the MLE of a Gaussian mixture with Newton's method and the consistency of the penalized MLE are investigated. A penalized MLE procedure for univariate Gaussian hidden Markov models is introduced and shown to be consistent. Furthermore, a result on the identifiability of nonparametric hidden Markov models is derived.
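The penalized-MLE idea for Gaussian mixtures can be illustrated with a minimal sketch. The ordinary mixture likelihood is unbounded as a component scale tends to zero, so a penalty on the scale parameters (in the spirit of the Chen–Tan and Tanaka penalties cited in the bibliography) is added before maximization. This is not the thesis's actual procedure; the function names, parametrization, and penalty weight below are illustrative assumptions.

```python
# Minimal sketch: penalized MLE for a two-component univariate Gaussian mixture.
# The penalty diverges as a component scale tends to 0, ruling out the
# degenerate maximizers of the unpenalized likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def penalized_nll(theta, x, a_n):
    """Negative penalized log-likelihood in an unconstrained parametrization."""
    pi = 1.0 / (1.0 + np.exp(-theta[0]))          # mixing weight via logit
    mu1, mu2 = theta[1], theta[2]
    s1, s2 = np.exp(theta[3]), np.exp(theta[4])   # scales via log-transform
    dens = pi * norm.pdf(x, mu1, s1) + (1.0 - pi) * norm.pdf(x, mu2, s2)
    loglik = np.sum(np.log(dens + 1e-300))
    # Scale penalty, minimized at the sample variance and unbounded as s -> 0.
    sx2 = np.var(x)
    pen = sum(sx2 / s**2 + np.log(s**2 / sx2) for s in (s1, s2))
    return -loglik + a_n * pen

# Simulated data from a well-separated two-component mixture.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 0.5, 200)])
a_n = 1.0 / np.sqrt(len(x))                       # penalty weight shrinking with n
res = minimize(penalized_nll, x0=np.array([0.0, -1.0, 1.0, 0.0, 0.0]),
               args=(x, a_n), method="BFGS")
mu_hat = sorted([res.x[1], res.x[2]])             # estimated component means
```

Because the weight a_n shrinks with the sample size, the penalty vanishes asymptotically and, under suitable conditions, the penalized estimator remains consistent while the optimization problem stays well posed.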

Bibliographie / References

  1. Tanaka, K. (2009). Strong consistency of the maximum likelihood estimator for finite mixtures of location-scale distributions when penalty is imposed on the ratios of the scale parameters. Scandinavian Journal of Statistics, 36 171–184.
  2. Chen, J. and Tan, X. (2009). Inference for multivariate normal mixtures. Journal of Multivariate Analysis, 100 1367–1383.
  3. Gassiat, E., Cleynen, A. and Robin, S. (2013). Finite state space nonparametric hidden Markov models are in general identifiable. Preprint.
  4. Alexandrovich, G. and Holzmann, H. (2014). Nonparametric identification of hidden Markov models. ArXiv e-prints. 1404.4210.
  5. Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. Wiley, New York.
  6. Nocedal, J. and Wright, S. J. (2006). Numerical Optimization. Springer.
     Norris, J. (1998). Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press. URL http://books.google.de/books?id=qM65VRmOJZAC.
  7. Ciuperca, G., Ridolfi, A. and Idier, J. (2003). Penalized maximum likelihood estimator for normal mixtures. Scandinavian Journal of Statistics, 30 645–59.
  8. Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B, 39 1–38.
  9. Lindgren, G. (1978). Markov Regime Models for Mixed Distributions and Switching Regressions. Scandinavian Journal of Statistics 81–91.
  10. Xu, L. and Jordan, M. I. (1995). On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation, 8 129–151.
  11. Celeux, G., Chauveau, D. and Diebolt, J. (1995). On stochastic versions of the EM algorithm. Inria, Rapport de recherche no. 2514.
  12. Fraley, C. and Raftery, A. E. (2002). Model-based clustering, discriminant analysis, and density estimation. Journal of the American Statistical Association, 97 611–631.
  13. Hennig, C. (2010). Methods for merging Gaussian mixture components. Advances in Data Analysis and Classification. URL http://dx.doi.org/10.1007/s11634-010-0058-3.
  14. Kruskal, J. B. (1977). Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications 95–138.
  15. Petrie, T. (1969). Probabilistic functions of finite state Markov chains. The Annals of Mathematical Statistics, 40 97–115.
  16. McLachlan, G. J. and Basford, K. E. (1988). Mixture Models: Inference and Applications to Clustering. Marcel Dekker, New York.
  17. Titterington, D. M., Smith, A. F. M. and Makov, U. E. (1985). Statistical Analysis of Finite Mixture Distributions. John Wiley & Sons.
  18. Everitt, B. S. (1984). Maximum Likelihood Estimation of the Parameters in a Mixture of Two Univariate Normal Distributions; A Comparison of Different Algorithms. Journal of the Royal Statistical Society. Series D, 33 205–215.
  19. Peters, B. C. and Walker, H. F. (1978). An Iterative Procedure for Obtaining Maximum-Likelihood Estimates of the Parameters for a Mixture of Normal Distributions. SIAM Journal on Applied Mathematics, 25 362–378.
  20. Holladay, J. C. and Varga, R. S. (1958). On powers of non-negative matrices. Proceedings of the American Mathematical Society.
  21. Aitkin, M. and Aitkin, I. (1996). A hybrid EM/Gauss-Newton algorithm for maximum likelihood in mixture distributions. Statistics and Computing, 6 127–130.
  22. Merlevède, F., Peligrad, M. and Rio, E. (2011). A Bernstein type inequality and moderate deviations for weakly dependent sequences. Probability Theory and Related Fields, 151 435–474.
  23. Lange, K. (1995). A quasi-Newton acceleration of the EM algorithm. Statistica Sinica, 5 1–18.
  24. Bickel, P. J., Ritov, Y. and Rydén, T. (1998). Asymptotic Normality of the maximum likelihood estimator for general Hidden Markov Models. The Annals of Statistics, 26 1614–1635.
  25. McLachlan, G. and Peel, D. (2000). Finite Mixture Models. Wiley, New York.
  26. Rabiner, L. and Juang, B.-H. (1993). Fundamentals of Speech Recognition. Prentice-Hall International, Inc.
  27. Allman, E. S., Matias, C. and Rhodes, J. A. (2009). Identifiability of parameters in latent structure models with many observed variables. The Annals of Statistics, 37 3099–3132.
  28. Chen, J., Tan, X. and Zhang, R. (2008). Inference for normal mixtures in mean and variance. Statistica Sinica, 18 443–465.
  29. Kelley, C. T. (1995). Iterative methods for linear and nonlinear equations. SIAM Publications.
  30. Fraley, C. and Raftery, A. E. (2006). mclust version 3 for R: normal mixture modeling and model-based clustering.
  31. Redner, R. A. and Walker, H. F. (1984). Mixture Densities, Maximum Likelihood and the EM Algorithm. SIAM Review, 26 195–239.
  32. Alexander, K. S. (1984). Probability inequalities for empirical processes and a law of the iterated logarithm. Annals of Probability, 12 1041–1067.
  33. Lebret, R., Iovleff, S. and Langrognet, F. (2013). Rmixmod: an interface of MIXMOD, v.1.1.3.
  34. Jank, W. (2006). The EM Algorithm, Its Stochastic Implementation and Global Optimization: Some Challenges and Opportunities for OR. Inria, Rapport de recherche no. 2514.
  35. Akama, Y. and Irie, K. (2011). VC dimension of ellipsoids. ArXiv e-prints. 1109.4347.
  36. van der Vaart, A. and Wellner, J. A. (2000). Weak Convergence and Empirical Processes. Springer.
  37. Teicher, H. (1967). Identifiability of mixtures of product measures.
  38. Kiefer, J. and Wolfowitz, J. (1956). Consistency of the Maximum Likelihood Estimator in the Presence of Infinitely Many Incidental Parameters. Annals of Mathematical Statistics, 27 887–906.
  39. Wald, A. (1949). Note on the consistency of the maximum likelihood esti- mate. Annals of Mathematical Statistics, 20 595–601.
  40. Billingsley, P. (1986). Probability and Measure. John Wiley & Sons.
  41. Hathaway, R. J. (1985). A Constrained Formulation of Maximum-Likelihood Estimation for Normal Mixture Distributions. Annals of Statistics, 13 795–800.
  42. Yakowitz, S. J. and Spragins, J. D. (1968). On the identifiability of finite mixtures. The Annals of Mathematical Statistics 209–214.
     Jamshidian, M. and Jennrich, R. I. (1997). Acceleration of the EM Algorithm by Using Quasi-Newton Methods. Journal of the Royal Statistical Society. Series B, 59 569–587.
  43. Baudry, J. P., Raftery, A. E., Celeux, G., Lo, K. and Gottardo, R. (2008). Combining mixture components for clustering. Inria, Rapport de recherche no. 6644.
  44. Leroux, B. G. (1990). Maximum-likelihood estimation for hidden Markov models. Stochastic Processes and their Applications, 40 127–143.
  45. Ferguson, T. S. (1996). A course in large sample theory. Chapman & Hall.


* The document is freely accessible on the Internet - notes on terms of use