You searched for subject:(scale usage heterogeneity). One record found.

No search limiters apply to these results.

The Ohio State University

1. Olsen, Andrew Nolan. When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods.

Degree: PhD, Statistics, 2015, The Ohio State University

Markov chains are an incredibly powerful tool for statisticians and other practitioners. They allow for random draws, though autocorrelated, to be obtained from a vast array of target distributions, even when the distribution is known only up to a constant. These draws may then be used to answer key questions of interest. Markov chains are used in many settings and are the predominant method for performing inference for Bayesian methods. The utility of Markov chains lies largely in the simplicity with which they are implemented. The most basic algorithms are easily understood and are not challenging to program. The trade-off with ease of implementation, however, is that issues with Markov chains, particularly with respect to convergence, can occasionally be left undiagnosed. For example, a Markov chain may not have been run long enough to accurately capture the features of the distribution of interest, or perhaps the error of the resulting estimates is grossly underrepresented, if it is considered at all.

The study of Markov chain convergence can be summarized by two main questions:

Question 1: Was the simulation run long enough?
Question 2: How accurate are the resulting estimates?

While simple and clear, these questions are often left unanswered when Markov chain Monte Carlo methods are implemented. This is largely due to the fact that these answers require theoretical analysis of the convergence of the Markov chain, which can be challenging. This dissertation discusses the theory of Markov chains and their convergence, including how to rigorously answer Question 1 and Question 2. A variety of methods are available, and several are illustrated with examples.

One approach answers Question 1 by obtaining draws that approximate the target distribution closely. Markov chains may then be started from these draws, resulting in immediate closeness to the target distribution. Several algorithms for accomplishing this are introduced and developed. Results are provided which quantify the quality of the approximations. A comparison of the efficiency of the algorithms is also provided.

Another approach is the formal establishment of convergence rates. Once these are established, one method to answer Question 1 is to compute the number of iterations required so that the ultimate distribution obtained is close to the target distribution. This approach is also illustrated with examples.

A final approach is to compute standard errors of the resulting estimates, which directly answers Question 2. Question 1, however, is also answered because when estimates are accurate enough, the chain has been run for a sufficient duration. This is similarly illustrated with examples.

Bayesian scale-usage models are used to analyze surveys where individual respondents differ in their use of a rating scale. The convergence rate theory for these models, which guarantees answers to Question 1 and Question 2, is fully established. The methods are then extended to a setting where demographics can govern the way in which respondents differ in their answer…

Advisors/Committee Members: Herbei, Radu (Advisor).
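As an illustration of the abstract's last two approaches (this sketch is not taken from the dissertation): if a geometric convergence bound of the form ||P^n(x, ·) − π||_TV ≤ M ρ^n is available, with hypothetical constants M and ρ of the kind such a convergence-rate analysis produces, the run length needed for Question 1 follows directly, and a batch means estimator gives a Monte Carlo standard error for Question 2. A minimal Python sketch under those assumptions:

    import math
    import numpy as np

    def iterations_for_tv_bound(M, rho, eps):
        """Smallest n with M * rho**n <= eps, given an assumed geometric
        bound ||P^n(x, .) - pi||_TV <= M * rho**n on distance to the target."""
        return math.ceil(math.log(eps / M) / math.log(rho))

    def batch_means_mcse(chain, n_batches=30):
        """Monte Carlo standard error of the sample mean of a 1-D chain,
        estimated by the batch means method."""
        draws = np.asarray(chain, dtype=float)
        n = draws.size
        b = n // n_batches                                  # draws per batch
        means = draws[:n_batches * b].reshape(n_batches, b).mean(axis=1)
        sigma2_hat = b * means.var(ddof=1)                  # asymptotic variance estimate
        return math.sqrt(sigma2_hat / n)

    # Hypothetical usage: run length for Question 1, accuracy for Question 2.
    # iterations_for_tv_bound(M=50.0, rho=0.95, eps=0.01)   # -> 167 iterations
    # batch_means_mcse(posterior_draws)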

Subjects/Keywords: Statistics; Markov chain Monte Carlo convergence; Markov chain Monte Carlo standard errors; geometric ergodicity; scale-usage heterogeneity



APA (6th Edition):

Olsen, A. N. (2015). When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods. (Doctoral dissertation). The Ohio State University. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=osu1433770406

Chicago Manual of Style (16th Edition):

Olsen, Andrew Nolan. “When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods.” 2015. Doctoral Dissertation, The Ohio State University. Accessed January 19, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1433770406.

MLA Handbook (7th Edition):

Olsen, Andrew Nolan. “When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods.” 2015. Web. 19 Jan 2021.

Vancouver:

Olsen AN. When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods. [Internet] [Doctoral dissertation]. The Ohio State University; 2015. [cited 2021 Jan 19]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1433770406.

Council of Science Editors:

Olsen AN. When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods. [Doctoral dissertation]. The Ohio State University; 2015. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=osu1433770406
