You searched for +publisher:"University of Illinois – Chicago" +contributor:("Smith, Jr., Everett V."). Showing records 1 – 3 of 3 total matches.


University of Illinois – Chicago

1. McNaughton, Tara M. Rater Stability in a High-Stakes Performance Assessment: A Longitudinal Investigation.

Degree: 2018, University of Illinois – Chicago

The certification of medical practitioners frequently includes a performance assessment to ensure competence. Although such assessments offer richer evaluations of examinee performance than other exam types, the reliance on expert judgment in evaluating examinees presents some concerns. The subjective nature of the rating task may allow factors unrelated to examinee performance to influence ratings, and raters may have idiosyncratic perceptions of performance levels. To assess inter- and intra-rater differences, I used the Many-Facet Rasch Measurement model to quantify rater severity and rating scale category use. Applying a partial credit model to the rater facet, I used rater category thresholds to calculate a category breadth measure that identifies central tendency and extremism. This method compares favorably with other indices used to identify these rater effects: it flags a slightly larger proportion of raters as exhibiting effects while providing more precise feedback to raters and rater trainers. Using hierarchical linear models, I assessed the stability of rater severity and consistency measures longitudinally. Most raters demonstrated stable severity; however, a sizable minority did not, so caution is warranted when using rater severity in common element equating designs. Conversely, nearly all raters demonstrated stable consistency measures, suggesting that rater consistency does not improve with experience. More intensive training for new raters, or the use of practice ratings as a screening tool for rater selection, may be necessary to improve rating quality.

Advisors/Committee Members: Smith, Jr., Everett V (advisor), Yin, Yue (committee member), Thomas, Michael K (committee member), Dobria, Lidia (committee member), Incrocci, Maria (committee member), Smith, Jr., Everett V (chair).
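
For context, the Many-Facet Rasch Measurement model referred to above, with a partial credit parameterization on the rater facet, is conventionally written as shown below; the notation is a standard convention, not reproduced from the thesis:

\ln\left( \frac{P_{nijk}}{P_{nij(k-1)}} \right) = \theta_n - \delta_i - \lambda_j - \tau_{jk}

where \theta_n is the ability of examinee n, \delta_i the difficulty of task i, \lambda_j the severity of rater j, and \tau_{jk} the threshold between categories k-1 and k, estimated separately for each rater. Under this parameterization, one natural category breadth measure for a middle category is the distance between its adjacent thresholds, \tau_{j(k+1)} - \tau_{jk}: unusually wide middle categories point to central tendency (overuse of the middle of the scale), and unusually narrow ones to extremism.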

Subjects/Keywords: rater effects; rater severity; rating quality; Longitudinal; Rasch

APA (6th Edition):

McNaughton, T. M. (2018). Rater Stability in a High-Stakes Performance Assessment: A Longitudinal Investigation. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/22632

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

Chicago Manual of Style (16th Edition):

McNaughton, Tara M. “Rater Stability in a High-Stakes Performance Assessment: A Longitudinal Investigation.” 2018. Thesis, University of Illinois – Chicago. Accessed December 05, 2020. http://hdl.handle.net/10027/22632.

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

MLA Handbook (7th Edition):

McNaughton, Tara M. “Rater Stability in a High-Stakes Performance Assessment: A Longitudinal Investigation.” 2018. Web. 05 Dec 2020.

Vancouver:

McNaughton TM. Rater Stability in a High-Stakes Performance Assessment: A Longitudinal Investigation. [Internet] [Thesis]. University of Illinois – Chicago; 2018. [cited 2020 Dec 05]. Available from: http://hdl.handle.net/10027/22632.

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

Council of Science Editors:

McNaughton TM. Rater Stability in a High-Stakes Performance Assessment: A Longitudinal Investigation. [Thesis]. University of Illinois – Chicago; 2018. Available from: http://hdl.handle.net/10027/22632

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).


University of Illinois – Chicago

2. Risk, Nicole M. The Impact of Item Parameter Drift in Computer Adaptive Testing (CAT).

Degree: 2015, University of Illinois – Chicago

A series of computer adaptive testing (CAT) simulations was conducted to evaluate the impact of item parameter drift (IPD). The simulations varied the amount and magnitude of IPD, as well as the size of the item pool. A baseline condition without drift was established, and each altered IPD condition was compared against it. A number of criteria were used to evaluate the effects of IPD on measurement precision, classification, and test efficiency: bias, root mean square error (RMSE), absolute average difference (AAD), total percentages of misclassification, the numbers of false positives and false negatives, total test lengths, and item exposure rates. The results revealed negligible differences between the IPD conditions and the baseline condition for all measures of precision, classification accuracy, and test efficiency. The magnitude of drift appeared to have a larger impact on measurement precision than the number of items with drift.

Advisors/Committee Members: Smith, Jr., Everett V. (advisor), Myford, Carol (committee member), Yin, Yue (committee member), Stahl, John (committee member), Lawless, Kimberly (committee member).
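
The evaluation criteria listed above are simple to compute once the simulated ("true") and CAT-recovered ability estimates are in hand. A minimal Python sketch, assuming NumPy arrays of abilities and a hypothetical pass/fail cut score of 0.0 (none of these names or values come from the thesis):

import numpy as np

def evaluate_recovery(theta_true, theta_est, cut=0.0):
    """Precision and classification criteria for a CAT simulation."""
    err = theta_est - theta_true
    bias = err.mean()                          # signed mean error
    rmse = np.sqrt((err ** 2).mean())          # root mean square error
    aad = np.abs(err).mean()                   # absolute average difference
    # Classification against a hypothetical pass/fail cut score
    false_pos = int(np.sum((theta_est >= cut) & (theta_true < cut)))
    false_neg = int(np.sum((theta_est < cut) & (theta_true >= cut)))
    misclass_pct = 100.0 * (false_pos + false_neg) / len(theta_true)
    return bias, rmse, aad, false_pos, false_neg, misclass_pct

# Illustrative use with synthetic data (not the study's data):
rng = np.random.default_rng(0)
theta_true = rng.normal(size=1000)
theta_est = theta_true + rng.normal(scale=0.3, size=1000)
print(evaluate_recovery(theta_true, theta_est))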

Subjects/Keywords: CAT; IPD; Rasch; Certification Testing

APA (6th Edition):

Risk, N. M. (2015). The Impact of Item Parameter Drift in Computer Adaptive Testing (CAT). (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/19455

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

Chicago Manual of Style (16th Edition):

Risk, Nicole M. “The Impact of Item Parameter Drift in Computer Adaptive Testing (CAT).” 2015. Thesis, University of Illinois – Chicago. Accessed December 05, 2020. http://hdl.handle.net/10027/19455.

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

MLA Handbook (7th Edition):

Risk, Nicole M. “The Impact of Item Parameter Drift in Computer Adaptive Testing (CAT).” 2015. Web. 05 Dec 2020.

Vancouver:

Risk NM. The Impact of Item Parameter Drift in Computer Adaptive Testing (CAT). [Internet] [Thesis]. University of Illinois – Chicago; 2015. [cited 2020 Dec 05]. Available from: http://hdl.handle.net/10027/19455.

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

Council of Science Editors:

Risk NM. The Impact of Item Parameter Drift in Computer Adaptive Testing (CAT). [Thesis]. University of Illinois – Chicago; 2015. Available from: http://hdl.handle.net/10027/19455

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).


University of Illinois – Chicago

3. LaForte, Erica M. Validation of Score Interpretations for the BDI-2 Using Rasch Methodology.

Degree: 2014, University of Illinois – Chicago

Research has shown the positive impacts of early intervention for children who experience developmental delay. Several challenges exist for professionals tasked with identifying children with developmental delay, designing intervention programs, and tracking the progress of the children who receive intervention services. The Battelle Developmental Inventory, Second Edition (BDI-2) is a measure of early childhood development that can provide a psychometrically sound solution to several of the challenges facing early childhood educators. In this study, I use the validity framework proposed by Wolfe and Smith (2007) and Rasch measurement analyses to gather evidence relevant to the structural, substantive, and generalizability aspects of validity for the BDI-2 Gross Motor subdomain scores. The results of my analyses provide evidence to support the structural and generalizability aspects of validity: the Rasch model assumptions of unidimensionality and local independence are met, and the item and examinee separation indices and separation reliabilities are high. The evidence I gathered relevant to the substantive aspect of validity suggests that examiners may not have used the three-category BDI-2 scoring system as the test developer intended; however, an optimized two-category scoring system produced an examinee ability rank order nearly identical to that from the three-category system. Additionally, I identified some anomalous examinee score strings in the dataset; removal of these unexpected scores did not impact the rank order of the item difficulty measures.

Advisors/Committee Members: Smith, Jr., Everett V. (advisor), Myford, Carol (committee member), Maggin, Daniel (committee member), Schrank, Fredrick (committee member), Ledbetter, Mark (committee member).
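
The separation statistics mentioned above have standard Rasch definitions: the observed variance of the measures is corrected for measurement error to estimate "true" variance, and separation is the ratio of true spread to average error. A minimal Python sketch, assuming NumPy arrays of Rasch measures and their standard errors (variable names are illustrative, not taken from the thesis):

import numpy as np

def separation_stats(measures, std_errors):
    """Rasch separation index G and separation reliability R."""
    obs_var = measures.var(ddof=1)            # observed variance of measures
    mse = np.mean(std_errors ** 2)            # mean square measurement error
    true_var = max(obs_var - mse, 0.0)        # error-corrected variance
    reliability = true_var / obs_var          # R = true / observed
    separation = np.sqrt(true_var / mse)      # G, so that R = G^2 / (1 + G^2)
    return separation, reliability

Applied to both the item and examinee facets, high values of G and R correspond to the high separation indices and separation reliabilities reported above.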

Subjects/Keywords: BDI-2; Battelle Developmental Inventory, 2nd Edition; Rasch model; test validity

APA (6th Edition):

LaForte, E. M. (2014). Validation of Score Interpretations for the BDI-2 Using Rasch Methodology. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/18766

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

Chicago Manual of Style (16th Edition):

LaForte, Erica M. “Validation of Score Interpretations for the BDI-2 Using Rasch Methodology.” 2014. Thesis, University of Illinois – Chicago. Accessed December 05, 2020. http://hdl.handle.net/10027/18766.

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

MLA Handbook (7th Edition):

LaForte, Erica M. “Validation of Score Interpretations for the BDI-2 Using Rasch Methodology.” 2014. Web. 05 Dec 2020.

Vancouver:

LaForte EM. Validation of Score Interpretations for the BDI-2 Using Rasch Methodology. [Internet] [Thesis]. University of Illinois – Chicago; 2014. [cited 2020 Dec 05]. Available from: http://hdl.handle.net/10027/18766.

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).

Council of Science Editors:

LaForte EM. Validation of Score Interpretations for the BDI-2 Using Rasch Methodology. [Thesis]. University of Illinois – Chicago; 2014. Available from: http://hdl.handle.net/10027/18766

Note: this citation may be lacking information needed for this citation format (not specified: Masters Thesis or Doctoral Dissertation).
