You searched for subject: (Item response theory). Showing records 1 – 30 of 403 total matches.

University of Georgia
1.
Ra, Jongmin.
Sensitivity of prior specification within testlet model.
Degree: PhD, Educational Psychology, 2011, University of Georgia
URL: http://purl.galileo.usg.edu/uga_etd/ra_jongmin_201112_phd
There have been enormous statistical advances in the analysis of standardized educational and psychological tests. In parallel, the practical advantages of the Bayesian approach have been recognized in item response theory (IRT) and adopted to provide more detailed information about item parameters and an individual's underlying latent ability. The purpose of this study is to examine the sensitivity of different prior distributions within the three-parameter logistic testlet (3PLT) model. First, the efficacy of the 3PLT model in the WinBUGS 1.4 program (Spiegelhalter, Thomas, Best, & Lunn, 2003) is compared to the 3PLT model in the SCORIGHT 3.0 (Wang, Bradlow, & Wainer, 2004) and Gibbs (Du, 1998) programs, neither of which can manipulate prespecified prior distributions. The impacts of different prior distributions in the 3PLT model are then discussed.
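The response function at the heart of this record is easy to sketch. Below is a minimal illustration of a 3PL testlet item response function in Python, assuming one common parameterization; the function name and all parameter values are illustrative and are not taken from the dissertation or its WinBUGS/SCORIGHT code.

```python
import numpy as np

def p_correct_3plt(theta, a, b, c, gamma):
    """P(correct) under a 3PL testlet model: the testlet effect gamma shifts
    the person-item distance for every item within the same testlet."""
    z = a * (theta - b - gamma)
    return c + (1.0 - c) / (1.0 + np.exp(-z))

# Example: a mildly difficult item with guessing, inside a testlet for which
# this examinee has a small positive testlet effect.
print(p_correct_3plt(theta=0.5, a=1.2, b=0.3, c=0.2, gamma=0.1))
```

A prior-sensitivity study then amounts to re-estimating these parameters (and the testlet variance) under different prior choices and comparing the resulting posteriors.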
Advisors/Committee Members: Seock-Ho Kim.
Subjects/Keywords: Item response theory

University of Georgia
2.
Sen, Sedat.
Robustness of mixture IRT models to violations of latent normality.
Degree: PhD, Educational Psychology, 2014, University of Georgia
URL: http://purl.galileo.usg.edu/uga_etd/sen_sedat_201405_phd
Unlike traditional item response theory (IRT) models, mixture IRT (MixIRT) models can be useful when subpopulations are suspected. The usual MixIRT model is typically estimated assuming normally distributed latent ability. Research on finite mixture models suggests that spurious latent classes can be extracted even in the absence of population heterogeneity if the distribution of the data is non-normal. In this study, we conducted two simulation studies and an empirical study to examine the robustness of MixIRT models to violations of latent normality. Single-class IRT data sets were generated using different ability distributions and then analyzed with MixIRT models to determine the impact of these distributions on the extraction of latent classes. Results suggest that estimation of mixed Rasch models produced spurious latent classes when the distributions were bimodal or uniform. Mixture 2PL and mixture 3PL IRT models were found to be more robust to latent non-normality. Akaike's information criterion (AIC) and the Bayesian information criterion (BIC) were used to inform model selection. For most conditions, BIC performed better than AIC in selecting the correct model.
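The data-generating side of the simulation design described above is straightforward to sketch: single-class Rasch data drawn from a non-normal (here bimodal) ability distribution, to which a mixture model is then fit. The mixture settings and item difficulties below are illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 1000, 20

# Bimodal latent ability: equal mixture of N(-1.5, 0.5) and N(1.5, 0.5).
component = rng.integers(0, 2, n_persons)
theta = rng.normal(np.where(component == 0, -1.5, 1.5), 0.5)

b = rng.normal(0.0, 1.0, n_items)                         # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # Rasch model
x = rng.binomial(1, p)                                    # 0/1 response matrix

# The model-selection indices compared in the study, given a fitted model's
# log-likelihood logL and number of free parameters k:
aic = lambda logL, k: 2 * k - 2 * logL
bic = lambda logL, k: k * np.log(n_persons) - 2 * logL
```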
Advisors/Committee Members: Seock-Ho Kim.
Subjects/Keywords: Mixture item response theory

University of Georgia
3.
Kim, Insuk.
A comparison of a Bayesian and maximum likelihood algorithms for estimation of a multilevel IRT model.
Degree: PhD, Educational Psychology, 2007, University of Georgia
URL: http://purl.galileo.usg.edu/uga_etd/kim_insuk_200705_phd
Multilevel item response theory (IRT) models provide an analytic approach that formally incorporates the hierarchical structure characteristic of much educational and psychological data. This study examined maximum likelihood (ML) estimation, the method most widely used in applied multilevel IRT analyses, and Bayesian estimation, which has become a viable alternative to ML-based techniques. Item and ability parameter estimates from the Bayesian and ML methods were compared using both empirical and simulated data. Bayesian estimation using WinBUGS performed better than ML estimation in all conditions with regard to the item parameter estimates. For the individual (level-2) variance estimates, PQL estimation using HLM showed less bias than the other methods. However, the Bayesian and ML estimates were similar for the group (level-3) variance parameters.
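The hierarchical structure such a model formalizes can be sketched as persons nested in groups, with Rasch responses at the lowest level. The variance components below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_groups, n_per_group, n_items = 30, 25, 15

group_mean = rng.normal(0.0, np.sqrt(0.3), n_groups)     # level-3 variance
theta = rng.normal(np.repeat(group_mean, n_per_group),   # level-2 variance
                   np.sqrt(0.7))
b = rng.normal(0.0, 1.0, n_items)                        # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
x = rng.binomial(1, p)   # the data a Bayesian (WinBUGS) or ML fit would see
```

A Bayesian fit places priors on the difficulties and the two variance components; ML approaches such as PQL instead approximate the integrals over the latent levels.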
Advisors/Committee Members: Deborah Bandalos.
Subjects/Keywords: Multilevel Item Response Theory

Wright State University
4.
Alarcon, Gene Michael.
THE DEVELOPMENT OF THE WRIGHT WORK ENGAGEMENT SCALE.
Degree: PhD, Human Factors and Industrial/Organizational Psychology, 2009, Wright State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=wright1260985760
Recent developments in organizational attitude research have focused on the concept of engagement. Despite the growing literature on engagement, there is little agreement on its conceptualization. The current study sought to conceptualize and measure work engagement using item response theory. The Wright Work Engagement Scale was created using two samples: a student sample for exploratory analyses and a working sample for item analyses. Results indicate that engagement is a unidimensional construct. The 12-item Work Engagement Scale was created and demonstrated sufficient convergent and discriminant validity.
Advisors/Committee Members: Edwards, Jean (Committee Chair).
Subjects/Keywords: Psychology; engagement; item response theory

University of Aberdeen
5.
Jackson, Jeanette.
Tackling measurement issues in health predictors and outcomes using item response theory.
Degree: PhD, 2008, University of Aberdeen
URL: http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=225747 ; http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.521322
The Functional Limitation Profile (FLP), the Hospital Anxiety and Depression Scale (HADS) and the Recovery Locus of Control scale (RLOC) are three well-established and useful measures in health psychology. However, the reliable and valid measurement of these health predictors and outcomes has associated problems. The present thesis tackles measurement issues in all three instruments using item response theory (IRT). The Scientific Advisory Committee of the Medical Outcomes Trust has suggested that the methodological and theoretical rationale for the conceptual and measurement model of available measurement instruments should be reported. The introduction chapter provides the theoretical background for understanding activity limitations and participation restrictions as behaviours affected by a given health condition, as well as by thoughts and feelings. Within this framework, the thesis investigates the measurement of mood using the HADS and of functional limitations using the FLP in three health conditions: (1) stroke patients, (2) patients with myocardial infarction, and (3) patients who underwent joint replacement surgery. The measurement of perceived personal control beliefs using the RLOC scale, and the relationships between control cognitions, mood and functional limitations, were examined in stroke patients, since all three measures were available for secondary analysis in this sample. The main findings are that (1) highly sensitive FLP items precisely measure different levels of disability and handicap, (2) removing two HADS items results in precise measurement of different levels of anxiety and depression, and (3) internal, but not external, perceived personal control beliefs sensitively measured different levels of the underlying construct.
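Claims that items "precisely measure different levels" of a construct are usually read through item information functions. A minimal two-parameter logistic sketch (the discriminations and locations are illustrative, not the FLP/HADS/RLOC estimates):

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)
a = np.array([1.8, 1.2, 0.5])     # discriminations
b = np.array([-1.0, 0.0, 1.5])    # locations along the latent trait

info = item_information_2pl(theta[:, None], a, b)   # per-item information
test_info = info.sum(axis=1)                        # test information
sem = 1.0 / np.sqrt(test_info)                      # conditional standard error
```

Dropping an item with low information (here the a = 0.5 item) barely raises the conditional standard error, which is the kind of rationale invoked when removing weakly performing items such as the two HADS items.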
Subjects/Keywords: 616.89; Item response theory

Rutgers University
6.
Chiu, Ting-Wei, 1976-.
Correction for guessing in the framework of the 3PL item response theory.
Degree: PhD, Education, 2010, Rutgers University
URL: http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000053033
Guessing behavior is an important topic in assessing proficiency on multiple-choice tests, particularly for examinees at lower levels of proficiency, because of the greater potential for systematic error or bias that inflates observed test scores. Methods that incorporate a correction for guessing on high-stakes tests generally rely on a scoring model that aims to minimize the potential benefit of guessing. In some cases, a formula score based on classical test theory (CTT) is applied with the intention of eliminating the influence of guessing from the number-right score (e.g., Holzinger, 1924). However, since its inception, significant controversy has surrounded the use and consequences of classical corrections for guessing. More recently, item response theory (IRT) has been used to conceptualize and describe the effects of guessing. Yet CTT remains a dominant aspect of many assessment programs, and IRT models are rarely used for estimating proficiency with MC items, where guessing is most likely to exert an influence. Although there has been tremendous growth in formal IRT-based modeling of guessing, none of these IRT approaches has seen widespread application. This dissertation provides a conceptual analysis of how the "correction for guessing" works within the framework of a 3PL model, and two new IRT-based guessing-correction formulas are derived for improving observed score estimates. To demonstrate the utility of the new formula scores, they are applied as conditioning variables in two different approaches to DIF: the Mantel-Haenszel and logistic regression procedures. Two IRT formula scores were developed using Taylor approximations. Each of these formula scores requires the use of sample statistics in lieu of IRT parameters for estimating corrected true scores, and these statistics were obtained in two different ways, referred to as the pseudo-Bayes and conditional probability methods. It is shown that the IRT formula scores adjust the number-correct score based on both an examinee's proficiency and the examinee's pattern of responses across items. In two different simulation studies, the classical formula score performed better in terms of bias statistics, but the IRT formula scores showed notable improvement in bias and r² statistics compared to the number-correct score. The advantage of the IRT formula scores accounted for about 10% more of the variance in corrected true scores in the first quartile. Results also suggested that little information was lost due to the use of Taylor approximations. The pseudo-Bayes and conditional probability methods also resulted in little information loss. When applied to DIF analyses, the IRT formula scores had lower bias in both the log-odds ratios and Type I error rates compared to the number-correct score. Overall, the IRT formula scores decreased bias in the log-odds ratio by about 6% and in the Type I error rate by about 10%.
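For reference, the classical correction the dissertation contrasts with its IRT-based formulas is the familiar CTT formula score, FS = R − W/(k − 1) for k-option items (rights R, wrongs W, omits unpenalized). A small illustrative helper:

```python
def formula_score(n_right, n_wrong, n_options):
    """Classical formula score: subtract the expected gain from blind guessing."""
    return n_right - n_wrong / (n_options - 1)

# 40 five-option items: 28 right, 8 wrong, 4 omitted.
print(formula_score(28, 8, 5))  # 26.0
```

The IRT formula scores derived in the dissertation instead adjust the number-correct score using 3PL quantities; their exact form is specific to the dissertation and is not reproduced here.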
Advisors/Committee Members: Chiu, Ting-Wei, 1976- (author), Camilli, Gregory A. (chair), Penfield, Douglas A. (internal member), Chiu, Chia-Yi (internal member), Nichols, Paul (outside member).
Subjects/Keywords: Item response theory; Ability – Testing

University of Minnesota
7.
Feuerstahler, Leah.
Exploring Alternate Latent Trait Metrics with the Filtered Monotonic Polynomial IRT Model.
Degree: PhD, Psychology, 2016, University of Minnesota
URL: http://hdl.handle.net/11299/182267
Item response theory (IRT) is a broad modeling framework that makes precise predictions about item response behavior given individuals' locations on a latent (unobserved) variable. If the item-trait regressions, also known as item response functions (IRFs), are monotonically increasing, and if assumptions about unidimensionality and local independence are satisfied, then examinees can be ordered uniquely on the latent trait. Scales that satisfy these three assumptions can be transformed monotonically without altering scale properties; that is, they define an ordinal-level scale (Stevens, 1946). When fitting an IRT model, however, the scale of the latent variable (its location and interval spacing) must be identified by introducing extra assumptions. In practice, the scale is identified by specifying either the parametric form of the IRF (parametric IRT) or the distribution of the latent trait (nonparametric IRT). Filtered monotonic polynomial IRT (FMP) has been proposed as a type of nonparametric IRT method (Liang & Browne, 2015), but it shares important properties with parametric methods. In this dissertation, it is demonstrated that any IRF defined within the FMP framework can be re-expressed as another FMP IRF by taking linear or nonlinear transformations of the latent trait. A general form for these transformations is presented in terms of matrix algebra. Finally, I propose a composite FMP IRT model in which nonlinear transformations of the latent trait are modeled explicitly by a monotonic composite function. I argue that the composite model offers many advantages over existing methods. First, the composite FMP model narrows the methodological gap between parametric and nonparametric item response models, allowing for item banking and adaptive testing within a flexible modeling framework. Second, this composite model suggests a sequential NIRT curve-fitting method that allows users to explore both alternate (e.g., non-normal) latent densities and flexible IRF shapes. Finally, the composite FMP model allows users to explore and employ alternate scalings of the latent trait without sacrificing the methodological advantages of parametric models.
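The core FMP idea (an IRF that is a logistic function of a monotonic polynomial, closed under monotone transformations of the latent metric) can be sketched as follows. The cubic with positive odd coefficients is an illustrative special case, not the general filtered parameterization.

```python
import numpy as np

def fmp_irf(theta, b0=0.0, b1=1.0, b3=0.2):
    """logistic(b0 + b1*theta + b3*theta^3); monotone increasing whenever
    b1 > 0 and b3 >= 0, since the derivative b1 + 3*b3*theta^2 stays positive."""
    m = b0 + b1 * theta + b3 * theta**3
    return 1.0 / (1.0 + np.exp(-m))

theta = np.linspace(-3, 3, 7)
p_original = fmp_irf(theta)

# A monotone (here linear) rescaling of the latent trait: probabilities are
# unchanged once the model is recomposed with the inverse transform, which is
# the sense in which only an ordinal scale is identified.
theta_star = 0.5 * theta + 1.0
p_recovered = fmp_irf((theta_star - 1.0) / 0.5)
assert np.allclose(p_original, p_recovered)
```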
Subjects/Keywords: item response theory; measurement

University of Illinois – Urbana-Champaign
8.
Liu, Liwen.
New model-data fit indices for item response theory (IRT): an evaluation and application.
Degree: PhD, Psychology, 2015, University of Illinois – Urbana-Champaign
URL: http://hdl.handle.net/2142/88154
I reviewed the limited-information model fit statistics recently developed by Maydeu-Olivares and colleagues (e.g., Maydeu-Olivares & Joe, 2005; Maydeu-Olivares & Liu, 2012; Liu & Maydeu-Olivares, 2014) and conducted a simulation study to explore the properties of these new statistics under conditions often seen in practice. The results showed that the overall and piecewise fit statistics were somewhat sensitive to misfit caused by multidimensionality, although the limited-information fit statistics tended to flag more item pairs as misfitting than the heuristic fit statistics. I also applied the fit statistics to three AP® exams, one personality inventory, and a rating scale used in organizational settings. Although a unidimensional IRT model was expected to fit the Physics B Exam better than the English Literature Exam, the average piecewise fit statistics showed no such difference. The fit statistics also suggested that a more advanced IRT model should be fitted to the self-rated personality inventory. Finally, the fit statistics seemed to be effective in detecting misfit caused by data skewness.
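Limited-information statistics of this family work from low-order margins rather than the full contingency table. A toy bivariate residual for an item pair, comparing the observed proportion of (1,1) responses with the model-implied proportion under a 2PL with a normal latent trait, conveys the idea; it is a schematic stand-in, not the M2-type statistics themselves.

```python
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def implied_joint_11(a1, b1, a2, b2, n_quad=61):
    """Model-implied P(X_i=1, X_j=1): integrate over a normal latent trait."""
    nodes = np.linspace(-5, 5, n_quad)
    w = np.exp(-0.5 * nodes**2)
    w /= w.sum()                                     # normalized grid weights
    return np.sum(w * p_2pl(nodes, a1, b1) * p_2pl(nodes, a2, b2))

def bivariate_residual(x_i, x_j, a1, b1, a2, b2):
    """z-like residual for one item pair from 0/1 response vectors."""
    obs = np.mean((np.asarray(x_i) == 1) & (np.asarray(x_j) == 1))
    exp = implied_joint_11(a1, b1, a2, b2)
    n = len(x_i)
    return (obs - exp) / np.sqrt(exp * (1 - exp) / n)
```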
Advisors/Committee Members: Drasgow, Fritz (advisor), Drasgow, Fritz (Committee Chair), Chang, Hua-Hua (committee member), Roberts, Brent W. (committee member), Carpenter, Nichelle (committee member), Newman, Daniel A. (committee member).
Subjects/Keywords: model fit; Item response theory

University of Minnesota
9.
Su, Shiyang.
Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency.
Degree: PhD, Educational Psychology, 2017, University of Minnesota
URL: http://hdl.handle.net/11299/190489
With online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized, and the literature on modeling response times has grown over the last few decades. Previous studies have formulated models and theories to explain the construct underlying response times and the relationship between response times and response accuracy, and to understand examinees' behaviors. Unlike most existing psychometric models, the current study builds on the idea of reading comprehension fluency in the reading literature and proposes several IRT-based models combining response times and response accuracy. To better understand the construct of reading comprehension fluency, the study used a new computer-administered assessment of reading comprehension and recorded both the response and the response time for each item. Response times connect examinees' performance on the reading comprehension test to the concepts of fluency and automaticity in the reading literature, concepts evidenced by responses that are both accurate and appropriately fast. The study evaluates reading comprehension fluency through two approaches: one with polytomously scored variables and one with conditional variables. The models show the benefits of using response time information for improving construct validity when the measured latent construct is reading comprehension fluency. The study contributes to an interpretation of the latent trait of reading fluency, and the models can be used to identify the intervals along the comprehension continuum in which students tend to read fluently.
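The first of the two approaches, polytomous scoring from accuracy plus speed, can be illustrated with a toy recode; the 0/1/2 scheme and the time threshold are assumptions for illustration, not the dissertation's scoring rule.

```python
import numpy as np

def fluency_score(correct, rt, fast_threshold):
    """2 = correct and fast, 1 = correct but slow, 0 = incorrect."""
    correct = np.asarray(correct, dtype=bool)
    fast = np.asarray(rt) <= fast_threshold
    return np.where(correct & fast, 2, np.where(correct, 1, 0))

print(fluency_score(correct=[1, 1, 0], rt=[8.2, 15.0, 5.1], fast_threshold=10.0))
# -> [2 1 0]; the recoded variable can then be fit with a polytomous IRT model.
```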
Subjects/Keywords: Comprehension fluency; Item response theory; Response times

University of North Carolina – Greensboro
10.
Ames, Allison Jennifer.
Bayesian model criticism: prior sensitivity of the posterior predictive checks method.
Degree: 2015, University of North Carolina – Greensboro
URL: http://libres.uncg.edu/ir/listing.aspx?styp=ti&id=18056
Use of noninformative priors with the posterior predictive checks (PPC) method requires more attention. Previous research on the PPC has treated noninformative priors as always noninformative in relation to the likelihood, regardless of model-data fit. However, as model-data fit deteriorates and the steepness of the likelihood's curvature diminishes, the prior can become more informative than initially intended. The objective of this dissertation was to investigate whether specification of the prior distribution has an effect on the conclusions drawn from the PPC method. Findings indicated that the choice of discrepancy measure is an important factor in the overall success of the method, and that some discrepancy measures are affected more than others by prior specification.
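A posterior predictive check itself is mechanical once posterior draws are available: simulate replicated data from each draw, compute a discrepancy measure on the real and replicated data, and report the tail proportion. Everything in this sketch is schematic; the Rasch likelihood and the total-score-variance discrepancy are illustrative choices, not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(3)

def ppc_pvalue(x, posterior_draws, discrepancy):
    """x: persons-by-items 0/1 matrix; posterior_draws: iterable of (theta, b)
    Rasch draws. Returns the posterior predictive p-value
    Pr(D(x_rep) >= D(x_obs))."""
    d_obs = discrepancy(x)
    exceed = []
    for theta, b in posterior_draws:
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        x_rep = rng.binomial(1, p)
        exceed.append(discrepancy(x_rep) >= d_obs)
    return float(np.mean(exceed))

total_score_variance = lambda x: np.var(x.sum(axis=1))  # one possible measure
```

Extreme p-values (near 0 or 1) flag misfit; the dissertation's point is that this verdict can shift with the prior when the likelihood is weak.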
Advisors/Committee Members: Randall Penfield (advisor).
Subjects/Keywords: Item response theory; Model-data fit; Posterior predictive checks; Prior distribution; Bayesian statistical decision theory

University of Georgia
11.
Alagoz, Cigdem.
Scoring tests with dichotomous and polytomous items.
Degree: MA, Educational Psychology, 2005, University of Georgia
URL: http://purl.galileo.usg.edu/uga_etd/alagoz_cigdem_200505_ma
This study applies item response theory methods to tests combining multiple-choice (MC) and constructed-response (CR) item types. Issues discussed include: (1) the selection of the best-fitting model from the three most widely used combinations of item response models; (2) the estimation of ability and item parameters; and (3) the potential loss of information from both simultaneous and separate calibration runs. Empirical results are presented from a mathematics achievement test that includes both item types. Both the two-parameter logistic (2PL) and three-parameter logistic (3PL) models fit the MC items better than the one-parameter logistic (1PL) model. Both the graded response (GR) and generalized partial credit (GPC) models fit the CR items better than the partial credit (PC) model. The 2PL&GR and 3PL&GPC model combinations provided better fit than the 1PL&PC combination. Item and ability parameter estimates from separate and simultaneous calibration runs across the various models were highly consistent. Calibrating the MC and CR items together or separately did not cause information loss. Use of the CR items in the test increased reliability. Simultaneous calibration of the MC and CR items provided consistent estimates and an implicitly weighted ability measure.
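The generalized partial credit model named above gives the category probabilities for a CR item. A compact implementation (parameters illustrative; the graded response model would be coded analogously):

```python
import numpy as np

def gpc_probs(theta, a, d):
    """P(X = k | theta), k = 0..m, under the generalized partial credit model.

    a : item slope; d : sequence of m step parameters d_1..d_m.
    Uses z_k = sum_{j<=k} a*(theta - d_j), with z_0 = 0, then a softmax.
    """
    z = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(d)))))
    ez = np.exp(z - z.max())            # subtract max for numerical stability
    return ez / ez.sum()

print(gpc_probs(theta=0.5, a=1.1, d=[-0.8, 0.2, 1.0]).round(3))
```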
Advisors/Committee Members: Seock-Ho Kim.
Subjects/Keywords: Item response theory

Texas A&M University
12.
Fan, Yinan.
Psychometric Validation of the Hispanic Bilingual Gifted Screening Instrument: An Item Response Theory Approach.
Degree: 2014, Texas A&M University
URL: http://hdl.handle.net/1969.1/153649
Demographics in the United States continue to shift with a rapidly growing Hispanic population. Nevertheless, a mismatch still exists between Hispanic students' enrollment in gifted and talented (G/T) programs and in general programs. The under-representation of Hispanic students in G/T programs has been attributed to the lack of a proper instrument for identifying gifted students who are linguistically and culturally diverse, insufficient preparation of teachers in the initial teacher-referral phases, and ambiguous definitions of intelligence and giftedness. In this study I investigated the psychometric properties of the Hispanic Bilingual Gifted Screening Instrument (HBGSI) within an item response theory (IRT) framework. The HBGSI was developed with the social-cultural context in mind and has been recommended for use in the first phase of the teacher-referral process. Participants in this study were Hispanic bilingual students in first to third grade who participated in a large-scale longitudinal randomized study carried out in a Texas urban school district. The purpose of this study was to further validate the HBGSI within the framework of IRT, exploring the factor structure and dimensionality of the instrument at the item level. I further tested the possibility of constructing an abbreviated version of the HBGSI with fewer items for ease of administration, which would potentially lower the demand on teachers' time, enhance accessibility, and facilitate increased use of the instrument. Results revealed a bifactor structure with a strong general factor corresponding to overall giftedness among Hispanic bilingual students and five domain factors covering social responsibility, academic achievement, creative performance, problem solving, and native-language proficiency. The multidimensional bifactor IRT model provided information about each item's discriminating power and thresholds and about the latent constructs. The best items were selected while preserving the integrity of the original HBGSI and cutting its length almost in half; an abbreviated version of the HBGSI was thus feasible, and the adaptation is presented. Overall, this study further validated that the HBGSI holds promise for screening potentially gifted Hispanic bilingual students in elementary grades.
Advisors/Committee Members: Lara-Alecio, Rafael (advisor), Tong, Fuhui (advisor), Yoon, Myeongsun (committee member), Irby, Beverly J. (committee member), Li, Yeping (committee member).
Subjects/Keywords: Hispanic bilingual gifted; Item response theory

The Ohio State University
14.
Keum, EunHee.
Applying Longitudinal IRT Models to Small Samples for Scale Evaluation.
Degree: PhD, Psychology, 2016, The Ohio State University
URL: http://rave.ohiolink.edu/etdc/view?acc_num=osu1460996452
Item response theory (IRT) modeling can provide detailed information about the performance of questionnaires and scales. Despite the benefits of IRT models, they often require larger sample sizes for reliable estimation than simpler models, and small sample sizes can impact the calibration of even simple unidimensional IRT models. In psychological assessment, however, the same scale is often administered to a small number of respondents on multiple occasions. To the extent that the repeated measures from the same individual are not perfectly correlated, we gain extra information about both the respondents and the scale of interest. To obtain useful psychometric information about a scale, we have to be able to recover the parameters of IRT models fairly well and demonstrate that other ancillary procedures also function well. We therefore explored through simulation under what conditions the proposed longitudinal IRT models can be reliably estimated and their parameters reasonably recovered, and we discuss how these models can be applied in the psychometric assessment of a scale when only a small sample is available.
Advisors/Committee Members: Edwards, Michael (Advisor).
Subjects/Keywords: Quantitative Psychology; Psychology; Item response theory

California State University – Sacramento
15.
Byington, Bryan Kent.
Dichotomous or polytomous data: which is best when analyzing job analysis ratings using item response theory?.
Degree: MA, Psychology (Industrial/Organizational Psychology), 2010, California State University – Sacramento
URL: http://hdl.handle.net/10211.9/376
Job analysis surveys typically involve rating scales (e.g., frequency, importance) with multiple response options, and the resulting data are typically analyzed with descriptive statistics (e.g., means, standard deviations, percentages). Recently, item response theory (IRT) has been explored as a technique for analyzing job analysis data. At times, when analyzing such data with IRT, the polytomous ratings are collapsed to only two data points. This thesis reviews the consequences of treating job analysis data as polytomous versus dichotomous for IRT analysis, using archival data not previously analyzed with IRT. Focusing on common task statements from three entry-level job positions, it demonstrates certain advantages to keeping the data polytomous, including increased clarity about how individuals rated the tasks and better overall item fit. However, the dichotomized ratings do adequately answer the question of whether or not a task is part of the job.
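The dichotomization at issue can be shown in two lines; the 5-point scale and the cut point are illustrative assumptions.

```python
import numpy as np

ratings = np.array([0, 1, 3, 4, 2, 0, 4])   # e.g., 0 = never ... 4 = constantly
part_of_job = (ratings >= 1).astype(int)    # collapse to "performed at all?"
print(part_of_job)                          # [0 1 1 1 1 0 1]
# The binary version still answers "is this task part of the job?" but
# discards the frequency gradations a polytomous IRT model can exploit.
```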
Advisors/Committee Members: Hurtz, Gregory Matthew.
Subjects/Keywords: Task statements; Job analysis; Item response theory

Penn State University
16.
Sie, Haskell.
Statistical Aspects Of Computerized Adaptive Testing.
Degree: PhD, Statistics, 2014, Penn State University
URL: https://etda.libraries.psu.edu/catalog/22715
In the past several decades, Computerized Adaptive Testing (CAT) has received much attention in educational and psychological research due to its efficiency in achieving the goal of assessment, whether that is to estimate the latent trait of test takers with high precision or to accurately classify them into one of several latent classes. In the latter case, the adaptive nature of CAT is used in educational testing to make inferences about the location of examinees' latent ability relative to one or more pre-specified cut-off points along the ability continuum. When there is only one cut-off point and two proficiency groups, this type of CAT is commonly referred to as Adaptive Mastery Testing (AMT). A well-known approach in AMT is to combine the Sequential Probability Ratio Test (SPRT) stopping rule with item selection that maximizes Fisher information at the mastery threshold. In the first part of this dissertation, a new approach is proposed in which a time limit is defined for the test and examinees' response times are considered in both item selection and test termination. Item selection is performed by maximizing Fisher information per time unit rather than Fisher information itself. The test is terminated once the SPRT makes a classification decision, the time limit is exceeded, or no remaining item has a high enough probability of being answered before the time limit. In a simulation study, the new procedure showed a substantial reduction in average testing time while slightly improving classification accuracy compared to the original method. In addition, the new procedure reduced the percentage of examinees who exceeded the time limit. Another well-known stopping rule in AMT is to terminate the assessment once the examinee's two-sided ability confidence interval lies entirely above or below the cut score. The second part of this dissertation proposes new procedures that seek to improve this variable-length stopping rule by coupling it with curtailment and stochastic curtailment. Under the new procedures, test termination can occur earlier if the probability is high enough that the current classification decision would remain the same should the test continue. Computation of this probability utilizes the normality of an asymptotically equivalent version of the maximum likelihood estimate (MLE) of ability. In two simulation studies, the new procedures showed a substantial reduction in average test length (ATL) while maintaining classification accuracy similar to the original stopping rule based on the ability confidence interval. In the last part of this dissertation, generalization to multidimensional CAT (MCAT) is examined. Research has shown that MCAT improves the precision of both subscores and overall scores compared to its unidimensional counterpart. Several studies have investigated the performance of MCAT in recovering examinees' multiple abilities depending on the item selection methods. None of these studies, however, considered an item pool containing a mixture of multiple-choice (MC) and…
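The item-selection rule from the first study, maximize Fisher information per expected time unit, is easy to sketch for 2PL items with lognormal response times. All parameters and the fixed lognormal spread are illustrative assumptions, not the dissertation's settings.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_item(theta_hat, a, b, mu_log_time, available, sigma=0.4):
    """Index of the available item maximizing information per expected second.

    mu_log_time : log-scale mean of each item's lognormal response time,
    so the expected time is exp(mu + sigma^2 / 2)."""
    expected_time = np.exp(mu_log_time + 0.5 * sigma**2)
    rate = fisher_info_2pl(theta_hat, a, b) / expected_time
    rate = np.where(available, rate, -np.inf)   # mask administered items
    return int(np.argmax(rate))

a = np.array([1.0, 1.6, 0.8])
b = np.array([0.0, 0.4, -0.5])
mu_log_time = np.log([30.0, 60.0, 20.0])
print(select_item(0.2, a, b, mu_log_time, available=np.array([True, True, True])))
```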
Subjects/Keywords: Item response theory; multidimensional model; likelihood function

Georgia Tech
17.
King, David R.
Stochastic approximation of the multidimensional generalized graded unfolding model.
Degree: PhD, Psychology, 2017, Georgia Tech
URL: http://hdl.handle.net/1853/60115
The multidimensional generalized graded unfolding model (MGGUM; Roberts & Shim, 2010) is a distance-based, unfolding multidimensional item response theory (MIRT) model for measuring person and item characteristics from graded or binary disagree-agree responses to Thurstone- or Likert-style questionnaire items. The current paper examined the utility of the Metropolis-Hastings Robbins-Monro (MH-RM; Cai, 2010a, 2010b, 2010c) algorithm for estimating item parameters in the MGGUM. Initial attempts to estimate the MGGUM with the MH-RM resulted in severe misestimation of item parameters, although estimation accuracy was markedly improved through modifications to the MH-RM; namely, the Newton-Raphson step for updating item parameters was replaced with the L-BFGS-B method for constrained optimization (Byrd, Lu, Nocedal, & Zhu, 1995). Runtime and estimation accuracy of the modified MH-RM were examined through a parameter recovery study that varied test length (10, 20, or 30 items), sample size (1000, 1500, or 2000 persons), number of response categories (2, 4, or 6), dimensional structure of items (simple or complex), and dimensionality (2 or 3 dimensions). Furthermore, the practical utility of the method was explored through a real-data analysis of facial affect responses. Results indicated that the modified MH-RM is an efficient method for estimating high-dimensional MGGUMs and that its estimation accuracy is comparable to other commonly used methods.
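The Robbins-Monro half of MH-RM is a stochastic approximation with a diminishing gain sequence. The generic root-finding step below shows only that idea; it is not the MGGUM-specific update, which the dissertation replaces with L-BFGS-B constrained optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

def robbins_monro(noisy_grad, x0, n_iter=2000):
    """Drive E[noisy_grad(x)] to zero with gains 1/k (their sum diverges,
    their squared sum converges), the classic Robbins-Monro conditions."""
    x = x0
    for k in range(1, n_iter + 1):
        x = x - (1.0 / k) * noisy_grad(x)
    return x

# Root of E[g(x)] = x - 2, observed only with noise; converges near 2.
print(robbins_monro(lambda x: (x - 2.0) + rng.normal(0.0, 0.5), x0=0.0))
```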
Advisors/Committee Members: Roberts, James S. (advisor), Embretson, Susan E. (committee member), Habing, Brian (committee member), Hertzog, Christopher (committee member), Spieler, Daniel (committee member).
Subjects/Keywords: Psychometrics; Item response theory; Parameter estimation

University of Sydney
18.
Joslyn, Cassandra.
Re-examining adolescent bipolar disorder and related psychopathology using meta-analysis and item response theory.
Degree: 2016, University of Sydney
URL: http://hdl.handle.net/2123/16771
The aims of this thesis were to summarise and synthesise the current research into bipolar disorder (BD); critically evaluate existing literature to assess whether age of onset is associated with poorer outcomes in BD; and examine whether individual symptoms may be clinically useful as risk markers in childhood and adolescence. Study one was a meta-analysis of existing research investigating outcomes associated with an early onset of BD. Data were analysed from fifteen papers that compared clinical presentation and outcomes in BD grouped according to age of onset (total n = 7370). Clinical features found to have the strongest relationship with an earlier age of onset were those amenable to intervention, such as comorbid anxiety, substance use, and treatment delay.
Study two used a novel analytical approach to evaluate whether individual clinical symptoms differed in their capacity to discriminate between those scoring high and low on underlying traits of depression and mania, or in the information they provided in relation to severity. The sample consisted of n = 186 participants aged 12–21 years, including n = 105 with a first-degree relative diagnosed with BD (at risk), n = 63 control participants, and n = 18 with a confirmed diagnosis of BD. Results support hypotheses from previous research that specific mood symptoms are more informative of risk in BD than general symptoms, and are in line with previous findings indicating that increased energy is a core feature of mania. These findings are important in relation to ongoing controversy around diagnoses of paediatric BD and the broadening of diagnostic criteria. Overall, the studies in this thesis provide information useful to clinicians in identifying at-risk populations that may benefit from early support, monitoring, and intervention, and identify key risk areas in adolescent populations, informing important areas of future research.
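The item-level "discrimination" and "information" analyses referenced above come from IRT; under the two-parameter logistic (2PL) model, an item's Fisher information at trait level θ is I(θ) = a²P(θ)[1 − P(θ)]. A minimal illustrative sketch (the parameter values and the "increased energy" item are hypothetical, not taken from the thesis):

import numpy as np

def item_information(theta, a, b):
    # 2PL item information: I(theta) = a^2 * P * (1 - P)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 121)
info_energy = item_information(theta, a=2.0, b=1.0)  # hypothetical symptom item
print(theta[np.argmax(info_energy)])                 # information peaks near b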
Subjects/Keywords: Bipolar Disorder; Item Response Theory; Adolescence

McGill University
19.
Rossi, Natasha T.
Nonparametric estimation of item response functions using the EM algorithm.
Degree: MA, Department of Psychology., 2001, McGill University
URL: http://digitool.library.mcgill.ca/thesisfile32939.pdf
Bock and Aitkin (1981) developed an EM algorithm for the maximum marginal likelihood estimation of parametric item response curves, such that these estimates could be obtained in the absence of the estimation of examinee parameters. Using functional data analytic techniques described by Ramsay and Silverman (1997), this algorithm is extended to achieve nonparametric estimates of item response functions. Unlike their parametric counterparts, nonparametric functions have the freedom to adopt any possible shape, making the current approach an attractive alternative to the popular three-parameter logistic model. A basis function expansion is described for the item response functions, as is a roughness penalty which mediates a compromise between the fit of the data and the smoothness of the estimate. The algorithm is developed and applied to both actual and simulated data to illustrate its performance and how the nonparametric estimates compare to results obtained through more classical methods.
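In Ramsay's functional data analytic framework, the basis expansion and roughness penalty described above are commonly written as follows (generic notation; the thesis's exact formulation may differ):

\[
P_j(\theta) \;=\; \frac{1}{1 + \exp\{-W_j(\theta)\}},
\qquad
W_j(\theta) \;=\; \sum_{k=1}^{K} c_{jk}\, B_k(\theta),
\]
\[
\ell_\lambda(\mathbf{c}_j) \;=\; \ell(\mathbf{c}_j) \;-\; \lambda \int \bigl[ W_j''(\theta) \bigr]^2 \, d\theta .
% B_k: spline basis functions; c_jk: coefficients for item j; lambda: smoothing
% parameter trading fit to the data against smoothness of the estimate.
\]

The EM algorithm then alternates between posterior expectations over θ (E-step) and maximization of the penalized log-likelihood ℓ_λ (M-step).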
Advisors/Committee Members: Ramsay, James O. (advisor).
Subjects/Keywords: Item response theory.

Oregon State University
20.
Cole, Emily Lynne.
An application of item response theory to the test of gross motor development.
Degree: MS, Movement Studies for the Disabled, 1989, Oregon State University
URL: http://hdl.handle.net/1957/39726
The purposes of this study were to (a) provide insight into the use of item response theory (IRT) with psychomotor skills, (b) assess the psychometric properties of the Test of Gross Motor Development (TGMD) using IRT, and (c) provide a basis for future studies of the TGMD using IRT. The dichotomously scored TGMD is a test instrument which measures psychomotor skills in a framework similar to cognitive tests, thus providing a convenient "transitional" type of test which can be used to examine the use of IRT with psychomotor skill tests. The present study employed data used by Ulrich (1985) in the original psychometric analysis of the TGMD. The data consisted of 913 subjects aged 3 to 10 years, nonhandicapped and 20 mildly mentally handicapped. Since IRT cannot provide accurate ability estimates at mastery levels of 0% and 100%, 32 subjects were deleted from the record. Since the TGMD was found to be multidimensional, the test was analyzed by subtests so as not to violate the unidimensionality assumption of IRT. Interpretation of traditional item statistics using classical test theory (CTT) and IRT item parameters revealed that item difficulty and item discrimination were closely related. The locomotor IRT difficulty parameters revealed a high negative correlation (r = -.87) with the CTT difficulty statistics, while the object control IRT difficulty parameters displayed a very high negative correlation (r = -.98) with their CTT counterparts. Item response theory discrimination parameters correlated highly with CTT discrimination statistics within the locomotor (r = .91) and the object control (r = .94) subtests. IRT analysis revealed that the locomotor subtest was less difficult (median difficulty = -.944) than the object control subtest (median difficulty = .053), and the object control subtest displayed a better discrimination index (median = 2.17) than the locomotor subtest (median = 1.54). In addition to difficulty and discrimination indices, IRT also provided the amount of information given by each item and subtest, which indicated the precision in measuring various ability levels. The locomotor subtest information was reported at I = 15.50, indicating adequate precision to measure low ability (θ = -1.857). The object control information function showed that the subtest displayed more information (I = 18.24) at a slightly higher ability level (θ = -1.643). The results of the item analysis revealed that all items (behavioral criteria) of the hop, leap, and overhand throw displayed effective psychometric properties, while 9 out of 12 skills contained items that displayed poor psychometric characteristics and/or did not fit the two-parameter model. The run (items 1, 3, and 4), gallop (items 6 and 8), horizontal jump (item 18), skip (item 20), slide (items 23, 24, 25, 26), strike (items 27, 28, and 29), stationary bounce (item 32), catch (items 34 and 35), and kick (item 38) should be revised. Since the TGMD is also used as a criterion-referenced test, the decision validity of the mastery classification…
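The large negative IRT-versus-CTT difficulty correlations reported above follow from the two scales running in opposite directions: the CTT difficulty statistic is the proportion correct (higher = easier), while the IRT b parameter increases with difficulty. A toy illustration on simulated 2PL data (values made up, not the TGMD data):

import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 900, 12
theta = rng.normal(size=(n_persons, 1))
a = rng.uniform(0.8, 2.2, size=n_items)      # discriminations
b = rng.normal(0.0, 1.0, size=n_items)       # IRT difficulties
p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
x = rng.binomial(1, p)                       # 0/1 item scores

p_values = x.mean(axis=0)                    # CTT difficulty (proportion correct)
r = np.corrcoef(b, p_values)[0, 1]
print(round(r, 2))                           # strongly negative, as in the TGMD results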
Advisors/Committee Members: Dunn, John (advisor), Ringle, John (committee member).
Subjects/Keywords: Item response theory

University of British Columbia
21.
Liu, Xiufeng.
Robustness redressed: an exploratory study on the relationships among overall assumption violation, model-data-fit, and invariance properties for item response theory models.
Degree: 1993, University of British Columbia
URL: http://hdl.handle.net/2429/1215
This study compares item and examinee properties, studies the robustness of IRT models, and examines the difference in robustness when using model-data-fit as a robustness criterion. A conceptualization of robustness as a statistical relationship between model assumption violation and invariance properties was developed based on the current understanding of IRT models. Using real data from the British Columbia Science Assessments, a series of regression and canonical analyses were conducted, and scatterplots were used to study possible non-linear relationships. The means and standard deviations of "a" and "c" parameter estimates obtained by applying the three-parameter model to a data sample were used as indices of violation of the equal-discrimination and non-guessing assumptions of the Rasch model. The assumption of local independence was taken as being equivalent to the assumption of unidimensionality, and Humphreys' pattern index "p" was used to assess the degree of unidimensionality assumption violation. Means and standard deviations of Yen's Q_i were used to assess the model-data-fit of items at the total test level. Another statistic, D_i, for assessing the model-data-fit of examinees was created and validated in this study; its mean and standard deviation were used to assess model-data-fit of examinees at the total test level. The statistics used for assessing the invariance of item and ability parameter estimates were correlations between estimates obtained from a sample and estimates obtained from the full assessment data file. It was found that model-data-fit of items and model-data-fit of examinees are two statistically independent total test properties of model-data-fit; there is therefore a need in practice to differentiate the two. It was also found that item estimate invariance and ability estimate invariance are statistically independent total test properties of invariance, so item invariance and ability invariance must likewise be differentiated. When invariance is used as the criterion for robustness, the three-parameter model is robust for all combinations of sample size and test length. The Rasch model is not robust in terms of ability estimate invariance when a large sample size is combined with a moderate test length, or when a moderate sample size is combined with a long test length. Finally, no significant relationship between model-data-fit and invariance was found; results of robustness studies obtained when model-data-fit is used as a criterion may therefore differ from, or even contradict, results obtained when invariance is used as a criterion. Because invariance is the fundamental premise of IRT models, invariance properties rather than model-data-fit should be used as the criterion for robustness.
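The invariance criterion described above reduces to a correlation between two calibrations of the same items. A minimal sketch, using a centered log-odds (PROX-style) approximation in place of a full Rasch calibration; all settings are illustrative:

import numpy as np

def logodds_difficulty(x):
    p = x.mean(axis=0).clip(0.01, 0.99)      # proportion correct per item
    b = np.log((1 - p) / p)                  # log-odds approximation to Rasch b
    return b - b.mean()                      # center to fix the scale origin

rng = np.random.default_rng(2)
theta = rng.normal(size=(5000, 1))
b_true = rng.normal(size=40)
x = rng.binomial(1, 1 / (1 + np.exp(-(theta - b_true))))  # Rasch-type data

b_full = logodds_difficulty(x)                            # full "assessment file"
b_sample = logodds_difficulty(x[rng.choice(5000, 400, replace=False)])
print(round(np.corrcoef(b_full, b_sample)[0, 1], 3))      # invariance index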
Subjects/Keywords: Item response theory

University of British Columbia
22.
Scales, Michael J.
Examinee control of item order effects on latent trait model and classical model test statistics.
Degree: 1990, University of British Columbia
URL: http://hdl.handle.net/2429/29353
The purpose of this study was to determine what effect changes in item order had on classical and on latent trait test statistics. Comparisons were also made between students who were allowed to answer the questions in any order and students who were required to answer the questions in the order presented in the test booklet. The results were then analyzed using the student's ability level as an additional independent factor.
Four different formats of a forty-item mathematics test were used with 590 students in grade eight. Half of the booklets had the items sequenced from easiest to hardest; the other booklets were sequenced from hardest to easiest. In addition, half of the tests of each sequence had special directions which prevented students from altering the given item difficulty sequence. The classroom teachers provided a rating of each student's ability in mathematics.
The order of the items was found to have a significant effect: tests which were sequenced from hard to easy had a lower mean score. Although students with test booklets with restrictive directions had lower scores on average, the difference was not statistically significant, and no significant interactions were found. Classical and latent trait item difficulty statistics showed a high degree of correlation.
It was concluded that under certain circumstances the order of the items can affect both classical and latent trait statistics. It was also recommended that care be taken when assumptions are made about parallel forms or local independence.
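The factorial comparison described above (item order by directions, with test score as the outcome) corresponds to a standard two-way ANOVA; a sketch on simulated placeholder data, not the study's data:

import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 590
order = rng.choice(["easy_to_hard", "hard_to_easy"], size=n)
directions = rng.choice(["free", "restricted"], size=n)
score = (rng.normal(25, 5, size=n)
         - 2.0 * (order == "hard_to_easy")        # order main effect
         - 0.5 * (directions == "restricted"))    # small, non-significant effect
df = pd.DataFrame({"score": score, "order": order, "directions": directions})

model = ols("score ~ C(order) * C(directions)", data=df).fit()
print(anova_lm(model, typ=2))   # F tests for main effects and the interaction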
Subjects/Keywords: Item response theory

Rutgers University
23.
Pawlak, Anthony P.
A classical test theory and item response theory analysis of the DSM-IV symptom criteria for a major depressive episode using data from the National Comorbidity Survey – Replication.
Degree: PhD, Education, 2010, Rutgers University
URL: http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000053148
Formal psychiatric symptom criteria are used to delineate the boundary between “normal” and “abnormal” behavior. In North America, the current official psychodiagnostic criteria for a multitude of psychiatric disorders are codified in the Diagnostic and Statistical Manual of Mental Disorders (4th edition, text revision) (APA, 2000). Psychodiagnostic symptom criteria are indicators of psychopathological constructs that are clearly latent; it is therefore somewhat astonishing that the formal psychometric techniques developed to model latent constructs have not been used to develop and evaluate psychodiagnostic symptom criteria (Aggen, Neale, & Kendler, 2005; Zimmerman, McGlinchey, Young, & Chelminski, 2006a, 2006b). There are two main psychometric paradigms currently in use: classical test theory and item response theory (Crocker & Algina, 1986). Classical test theory has been used extensively on both cognitive and noncognitive constructs (Crocker & Algina, 1986; Embretson & Hershberger, 1999). Item response theory is considered theoretically superior to classical test theory and has revolutionized the creation and evaluation of cognitive constructs (Crocker & Algina, 1986; Embretson & Hershberger, 1999; McDonald, 1999). However, item response theory has not been extensively utilized for the creation and evaluation of noncognitive constructs, even though it holds great promise in this regard (Reise, 1999; Reise & Henson, 2003). The proposed study will use classical test theory and item response theory to assess the psychodiagnostic symptom criteria for depression as found in the Diagnostic and Statistical Manual of Mental Disorders (4th edition, text revision) (APA, 2000). The data to be used were collected in the National Comorbidity Survey – Replication, a nationally representative epidemiological community survey (Kessler et al., 2004; Kessler & Merikangas, 2004). The results of such a study will give a sophisticated psychometric perspective on the psychodiagnostic symptom criteria for depression that has not previously been available and will provide valuable information for improving and refining future diagnostic symptom criteria for depression.
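On the classical test theory side, such an analysis typically begins with endorsement rates, corrected item-total correlations, and Cronbach's alpha, α = k/(k−1) · (1 − Σσᵢ²/σ²_total). A minimal sketch on simulated binary symptom data (illustrative only, not the NCS-R analysis):

import numpy as np

rng = np.random.default_rng(4)
trait = rng.normal(size=(1000, 1))               # latent depression severity
severity = np.linspace(-1.5, 1.5, 9)             # nine symptom thresholds
x = rng.binomial(1, 1 / (1 + np.exp(-1.2 * (trait - severity))))

endorsement = x.mean(axis=0)                     # symptom endorsement rates
total = x.sum(axis=1)
item_total = np.array([np.corrcoef(x[:, j], total - x[:, j])[0, 1]
                       for j in range(x.shape[1])])  # corrected item-total r

k = x.shape[1]
alpha = k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / total.var(ddof=1))
print(endorsement.round(2), item_total.round(2), round(alpha, 2))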
Advisors/Committee Members: Pawlak, Anthony P. (author), Penfield, Douglas A (chair), Camilli, Gregory (internal member), Tomlinson-Clarke, Saundra (internal member), Langenbucher, James (outside member).
Subjects/Keywords: Item response theory; Psychometrics; Depression, Mental – Diagnosis

University of Missouri – Columbia
24.
He, Yong, 1973-.
Robust scale transformation methods in IRT true score equating under common-item nonequivalent groups design.
Degree: 2013, University of Missouri – Columbia
URL: http://hdl.handle.net/10355/37615
Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods focus extensively on the detection and elimination of outlying common items, which usually leads to enlarged random equating error and inadequate content representation of the common items. New robust scale transformation methods based on robust regression were proposed: the robust Deming regression method, the robust Haebara method, and the least absolute values (LAV) method. In simulation studies, the performance of the proposed methods was compared to the Stocking-Lord method, which yields the smallest equating errors among the traditional methods, and to outlier removal methods. The results indicate that: 1) the robust Haebara method and the LAV method usually outperform the robust Deming regression method; 2) the robust Haebara method and the LAV method perform as well as the Stocking-Lord method when no outliers are present; 3) the robust Haebara method and the LAV method perform better than the Stocking-Lord method when a single outlying common item is simulated; 4) the LAV method and the robust Haebara method are better than, or at least comparable to, the existing outlier removal methods in the presence of a single outlying common item; and 5) the LAV method and the robust Haebara method yield smaller equated scores than the Stocking-Lord method using the CBASE data for English and Mathematics.
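Two of the linking flavors above can be sketched compactly: the classical mean/sigma transformation, A = σ(b_old)/σ(b_new) and B = mean(b_old) − A·mean(b_new), and a least-absolute-values fit that downweights an outlying common item. Simulated difficulty estimates stand in for the old- and new-form calibrations:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
b_old = rng.normal(0.0, 1.0, size=15)            # common-item b's, old-form scale
b_new = 0.9 * b_old - 0.3 + rng.normal(0, 0.05, 15)
b_new[0] += 1.5                                  # one outlying common item

# Mean/sigma: sensitive to the outlier.
A_ms = b_old.std(ddof=1) / b_new.std(ddof=1)
B_ms = b_old.mean() - A_ms * b_new.mean()

# LAV: minimize sum |b_old - (A * b_new + B)|, which downweights the outlier.
lav = minimize(lambda p: np.abs(b_old - (p[0] * b_new + p[1])).sum(),
               x0=[A_ms, B_ms], method="Nelder-Mead")
print((A_ms, B_ms), tuple(lav.x))                # LAV lands near (1.11, 0.33)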
Advisors/Committee Members: Osterlind, Steven J. (advisor).
Subjects/Keywords: test equating; item response theory; scale transformation

University of Missouri – Columbia
25.
Wang, Ting.
Use of Score-based Tests in IRT Models.
Degree: 2015, University of Missouri – Columbia
URL: http://hdl.handle.net/10355/47159
Measurement invariance is a fundamental assumption in item response theory models, where the relationship between a latent construct (ability) and observed item responses is of interest. Violation of this assumption would render the scale misinterpreted or cause systematic bias against certain groups of people. While a number of methods have been proposed to detect measurement invariance violations, they all require definition of the problematic model parameters and respondent grouping information in advance. However, these "locating" pieces of information are typically unknown in practice. As an alternative, this dissertation focuses on a family of recently proposed tests based on stochastic processes of casewise derivatives of the likelihood function (i.e., scores). These score-based tests require estimation only of the null model (under which the measurement invariance assumption holds), and they have been shown to identify problematic subgroups of respondents and model parameters in a factor-analytic, continuous-data context. In this dissertation, I aim to generalize these tests to item response theory models for categorical data. The tests' theoretical background and implementation are detailed, their ability to identify problematic subgroups and model parameters is studied via simulation, and an empirical example is provided. In the end, potential applications and future development are discussed.
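The core idea of the score-based tests can be sketched as follows: order the cases by a covariate, accumulate their casewise score (gradient) contributions for a focal parameter, and flag a violation when the scaled cumulative process drifts too far from zero. The Rasch-item score function and all settings below are illustrative, not the dissertation's implementation:

import numpy as np

rng = np.random.default_rng(6)
n = 800
age = np.sort(rng.uniform(12, 80, size=n))        # ordering covariate
theta = rng.normal(size=n)
b_hat = 0.0                                       # difficulty under the null model
p = 1 / (1 + np.exp(-(theta - b_hat)))
y = rng.binomial(1, p)

scores = -(y - p)                                 # casewise d/db of the log-likelihood
scores -= scores.mean()                           # empirically centered
info = scores.var()
process = np.cumsum(scores) / np.sqrt(n * info)   # Brownian-bridge-like under H0
stat = np.abs(process).max()                      # double-max style test statistic
print(round(stat, 3))                             # compare to a boundary-crossing critical value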
Advisors/Committee Members: Merkle, Edgar (advisor).
Subjects/Keywords: Item response theory; Invariant measures; Psychometrics

University of Notre Dame
26.
Jeffrey M. Patton.
Some Consequences of Response Time Model Misspecification in Educational Measurement.
Degree: PhD, Psychology, 2014, University of Notre Dame
URL: https://curate.nd.edu/show/n296ww74p1r
Response times (RTs) on test items are a valuable source of information concerning examinees and the items themselves. As such, they have the potential to improve a wide variety of measurement activities. However, researchers have found that empirical RT distributions can exhibit a variety of shapes among the items within a single test. Though a number of semiparametric and "flexible" parametric models are available, no single model can accommodate all plausible shapes of empirical RT distributions. Thus the goal of this research was to study a few of the potential consequences of RT model misspecification in educational measurement. In particular, two promising applications of RT models were of interest: examinee ability estimation and item selection in computerized adaptive testing (CAT).
First, by jointly modeling RTs and item responses, RTs can be used as collateral information in the estimation of examinee ability. This can be accomplished by embedding separate models for RTs and item responses in Level 1 of a hierarchical model and allowing their parameters to correlate in Level 2. If the RT model is misspecified, a potential drawback of this hierarchical structure is that any negative impact on estimates of the RT model parameters may, in turn, negatively impact ability estimates. However, a simulation study found that estimates of the RT model parameters were robust to misspecification of the RT model. In turn, ability estimates were also robust.
Second, by considering the time intensity of items during item selection in CAT, test completion times can be reduced without sacrificing the precision of ability estimates. This can be done by choosing items that maximize the ratio of item information to the examinee's predicted RT. However, an RT model is needed to make the prediction; if the RT model is misspecified, this method may not perform as intended. A simulation study found that whether or not the correct RT model was used to make the prediction had no bearing on test completion times. Additionally, using a simple average RT as the prediction was just as effective as model-based prediction in reducing test completion times.
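The information-per-time selection rule described in the last paragraph can be sketched directly; the lognormal-style RT prediction below is a generic stand-in rather than the specific model studied in the dissertation:

import numpy as np

def information_2pl(theta, a, b):
    # 2PL Fisher information at the current ability estimate
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

rng = np.random.default_rng(7)
a = rng.uniform(0.8, 2.0, 30)          # item discriminations
b = rng.normal(0, 1, 30)               # item difficulties
beta = rng.normal(4.0, 0.3, 30)        # log time intensities per item
tau = 0.1                              # examinee speed (higher = faster)

theta_hat = 0.3
pred_rt = np.exp(beta - tau)           # predicted RT under a lognormal-style model
ratio = information_2pl(theta_hat, a, b) / pred_rt
print(int(np.argmax(ratio)))           # next item under the info-per-second rule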
Advisors/Committee Members: Ying Cheng, Committee Chair, Ke-Hai Yuan, Committee Member, Zhiyong Zhang, Committee Member, Scott E. Maxwell, Committee Member.
Subjects/Keywords: adaptive testing; model misfit; item response theory

University of Notre Dame
27.
Errol J. Philip.
The 6th Vital Sign in Medicine: Evaluation of a Comprehensive Model of Distress in Cancer Care.
Degree: PhD, Psychology, 2011, University of Notre Dame
URL: https://curate.nd.edu/show/3484zg66p9w
Cancer is the second leading cause of mortality in America and has a profound impact upon individuals, families, health care providers, and society at large. A substantial minority of cancer patients experience clinically significant symptoms of psychological distress, which can be associated with a wide range of negative health outcomes. Currently available screening measures define distress narrowly, are relatively primitive tools, and their relationship to patient behavior is largely unknown. The current study sought to evaluate the psychometric properties and clinical utility of the Distress Screening System (DSS), a comprehensive measure developed to address these limitations. In the current study, 492 individuals diagnosed with cancer were assessed through mail-out questionnaires and follow-up phone interviews. The majority were female (71.3%), married (59.6%), and Caucasian (68.6%) or African American (20.7%), with a mean age of 61.41 years (SD = 12.92).
Preliminary and exploratory factor analysis revealed the DSS to possess appropriate internal and concurrent validity, and to be sufficiently unidimensional to proceed with further examination. Item response analysis revealed moderate overall model fit, with the majority of items performing well, with adequate levels of discrimination and difficulty across the distress continuum. The DSS was most accurate in assessing moderate to high levels of distress. Examination of clinical utility revealed no significant advantage for the DSS in predicting quality of life or referral preference. The DSS appears to be a psychometrically valid measure of distress and provides the basis for a broader conceptualization of this construct. The majority of items performed well within a unidimensional item response framework and may therefore be suitable for use within a computerized assessment format. Despite these results, the DSS did not demonstrate a significant advantage in the prediction of participants' quality of life or referral preference. The current study provides a foundation for future work examining the conceptualization of distress and the development of screening tools within an advanced psychometric framework and computerized administration. Advances in empirical measurement and clinical assessment are vital in addressing the ongoing challenge of providing nationwide comprehensive cancer care.
Advisors/Committee Members: Scott M. Monroe, Committee Member, Thomas V. Merluzzi, Committee Chair, Anita E. Kelly, Committee Member, Ying Cheng, Committee Member.
Subjects/Keywords: Distress Screening; Cancer; Item Response Theory; Survivorship

University of Iowa
28.
Peterson, Jaime Leigh.
Multidimensional item response theory observed score equating methods for mixed-format tests.
Degree: PhD, Psychological and Quantitative Foundations, 2014, University of Iowa
URL: https://ir.uiowa.edu/etd/1379
The purpose of this study was to build upon the existing MIRT equating literature by introducing a full multidimensional item response theory (MIRT) observed score equating method for mixed-format exams, because no such method currently exists. At this time, the MIRT equating literature is limited to full MIRT observed score equating methods for multiple-choice-only exams and bifactor observed score equating methods for mixed-format exams. Given the high frequency with which mixed-format exams are used and the accumulating evidence that some tests are not purely unidimensional, it was important to present a full MIRT equating method for mixed-format tests. The performance of the full MIRT observed score method was compared with the traditional equipercentile method, the unidimensional IRT (UIRT) observed score method, and the bifactor observed score method. With the bifactor methods, group-specific factors were defined according to item format or content subdomain. With the full MIRT methods, two- and four-dimensional models were included, and correlations between latent abilities were freely estimated or set to zero. All equating procedures were carried out using three end-of-course exams: Chemistry, Spanish Language, and English Language and Composition. For all subjects, two separate datasets were created using pseudo-groups in order to have two separate equating criteria. The specific equating criteria that served as baselines for comparisons with all other methods were the theoretical identity and the traditional equipercentile procedures. Several important conclusions were made. In general, the multidimensional methods performed better for datasets that evidenced more multidimensionality, whereas the unidimensional methods worked better for unidimensional datasets. In addition, the scale on which scores are reported influenced the comparative conclusions made among the studied methods. For performance classifications, which are most important to examinees, there typically were not large discrepancies among the UIRT, bifactor, and full MIRT methods. However, this study was limited by its sole reliance on real data, which was not very multidimensional and for which the true equating relationship was not known. Therefore, plans for improvements, including the addition of a simulation study to introduce a variety of dimensional data structures, are also discussed.
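The equipercentile criterion against which the MIRT methods are compared maps a Form X score to the Form Y score with the same percentile rank, e_Y(x) = Q_Y(P_X(x)). A sketch with simulated score distributions and simple interpolation (illustrative only):

import numpy as np

rng = np.random.default_rng(8)
scores_x = rng.binomial(60, 0.55, size=5000)     # Form X total scores
scores_y = rng.binomial(60, 0.60, size=5000)     # Form Y is slightly easier

grid = np.arange(61)
# Empirical CDF of Form X at each score point.
p_x = np.searchsorted(np.sort(scores_x), grid, side="right") / scores_x.size
equated = np.quantile(scores_y, p_x.clip(0, 1))  # e_Y(x) = Q_Y(P_X(x))
print(np.column_stack([grid, equated.round(2)])[25:30])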
Advisors/Committee Members: Lee, Won-Chan (supervisor).
Subjects/Keywords: Bifactor; Dimensionality; Equating; Item Response Theory; Multidimensional Item Response Theory; Educational Psychology

University of Kansas
29.
Barri, Moatasim Asaad.
THE IMPACT OF ANCHOR ITEM EXPOSURE ON MEAN/SIGMA LINKING AND IRT TRUE SCORE EQUATING UNDER THE NEAT DESIGN.
Degree: M.S.Ed., Psychology & Research in Education, 2013, University of Kansas
URL: http://hdl.handle.net/1808/15087
To compare examinees' true ability and their actual competence on the content being measured across different test administrations, test scores must be equated. One of the most common equating designs is the nonequivalent anchor test (NEAT) design. This design requires two forms of a test, each of which is given to a group of examinees one year apart. The two forms have a set of items in common, usually called the anchor set, in order to control for differences in examinee ability. The anchor set can be treated as internal or external according to whether or not examinees' responses to it contribute to their total score. However, the anchor set is subject to exposure when it is used repeatedly, which can become a serious threat to test fairness and validity. Therefore, from time to time, the items in the anchor set must be evaluated for exposure. This study employed a Monte Carlo investigation to evaluate the impact of internal anchor item exposure on the equating process under the NEAT design. The study addressed a general scenario in which two forms of a small-scale, dichotomously scored test were given to two small groups of examinees. Since mean/sigma linking and true score equating are the main components of the equating process in item response theory (IRT), the recovery of equated true scores and of the linking coefficients, slope and intercept, was assessed under various combinations of testing conditions using bias and mean squared error (MSE). Three testing conditions were manipulated: (a) the number of exposed anchor items, (b) the percentage of examinees with preknowledge of the exposed anchor items, and (c) the difference in the means of the ability distributions of the groups taking the original and new forms. In each combination of testing conditions, the simulation process was replicated 100 times. The results indicated that anchor item exposure caused all examinees to receive inflated equated true scores. When anchor items were subject to low levels of exposure, the accuracy of equated true scores was still noticeably degraded, while high levels of exposure distorted the test scores completely. Anchor item exposure became a serious threat to test fairness to the extent that unqualified examinees might receive an unfair benefit over qualified examinees who completed an unexposed test form.
Advisors/Committee Members: Frey, Bruce (advisor), Skorupski, William (cmtemember), Peyton, Vicki (cmtemember).
Subjects/Keywords: Educational tests & measurements; Anchor item exposure; Equating; Item response theory

University of Iowa
30.
Wall, Nathan Lane.
Augmented testing and effects on item and proficiency estimates in different calibration designs.
Degree: PhD, Psychological and Quantitative Foundations, 2011, University of Iowa
URL: https://ir.uiowa.edu/etd/1100
Broadening the term augmented testing to include a combination of multiple measures to assess examinee performance on a single construct, the issues of IRT item parameter and proficiency estimation were investigated. The intent of this dissertation is to determine whether different IRT calibration designs result in differences in item and proficiency parameter estimates and to understand the nature of those differences. Examinees were sampled from a testing program in which each examinee was administered three mathematics assessments measuring a broad mathematics domain at the high school level. This sample of examinees was used to perform a real data analysis investigating the item and proficiency estimates; a simulation study based upon the real data was also conducted. The factors investigated in the real data study included three IRT calibration designs and two IRT models. The calibration designs were: separately calibrating each assessment, calibrating all assessments in one joint calibration, and separately calibrating items in three distinct content areas. Joint calibration refers to the use of IRT methodology to calibrate two or more tests, which have been administered to a single group, together so as to place all of the items on a common scale. The two IRT models were the one- and three-parameter logistic models. Also investigated were five proficiency estimators: maximum likelihood estimates, expected a posteriori, maximum a posteriori, summed-score EAP, and test characteristic curve estimates. The simulation study included the same calibration designs and IRT models, but the data were simulated with varying levels of correlation among the proficiencies to determine the effect upon the item parameter estimates. The main findings indicate that item parameter and proficiency estimates are affected by the IRT calibration design. The discrimination parameter estimates of the three-parameter model were larger under the joint calibration design for one assessment but not for the other two. Noting that equal item discrimination is an assumption of the 1-PL model, this finding raises questions as to the degree of model fit when the 1-PL model is used. Items on a second assessment had lower difficulty parameters in the joint calibration design, while the item parameter estimates of the other two assessments were higher. Differences in proficiency estimates between calibration designs were also discovered and were found to result in examinees being inconsistently classified into performance categories. Differences were also observed with regard to the choice of IRT model. Finally, as the level of correlation among proficiencies increased in the simulation data, the differences observed in the item parameter estimates decreased. Based upon these findings, IRT item parameter estimates resulting from differing calibration designs should not be used interchangeably. Practitioners who use item pools should base the pool refreshment calibration design upon the one used to…
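Of the five proficiency estimators listed above, the expected a posteriori (EAP) estimate is the easiest to sketch: average the quadrature points weighted by prior times likelihood. The item difficulties and response pattern below are hypothetical:

import numpy as np
from scipy.stats import norm

b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])     # hypothetical 1-PL item difficulties
y = np.array([1, 1, 1, 0, 0])                 # one examinee's responses

q = np.linspace(-4, 4, 81)                    # quadrature points
w = norm.pdf(q)                               # N(0, 1) prior weights
p = 1 / (1 + np.exp(-(q[:, None] - b)))       # Rasch P(correct) on the grid
like = np.prod(np.where(y == 1, p, 1 - p), axis=1)

posterior = like * w
theta_eap = (q * posterior).sum() / posterior.sum()
print(round(theta_eap, 3))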
Advisors/Committee Members: Kolen, Michael J. (supervisor).
Subjects/Keywords: Item Parameter Estimate Calibration; Item Response Theory; Proficiency Estimates; Educational Psychology