You searched for +publisher:"University of Texas – Austin" +contributor:("Dodd, Barbara G"). One record found.

No search limiters apply to these results.

1. Jensen, Mishan G. B. Extension of the item pocket method allowing for response review and revision to a computerized adaptive test using the generalized partial credit model.

Degree: PhD, Educational Psychology, 2017, University of Texas – Austin

The use of Computerized Adaptive Testing (CAT) has increased in the last few decades, due in part to the increased availability of personal computers, but also to the benefits CATs offer. CATs provide greater measurement precision of ability estimates while reducing the demand on examinees through shorter tests. This is accomplished by tailoring the test to each examinee, selecting items that are neither too difficult nor too easy based on the examinee's interim ability estimate and responses to previous items. These benefits come at the cost of the flexibility to move through the test as an examinee would with a paper-and-pencil (P & P) test. The algorithms used in CATs for item selection and ability estimation require restrictions on response review and revision; however, a large portion of examinees desire options for reviewing and revising responses (Vispoel, Clough, Bleiler, Hendrickson, and Ihrig, 2002). Previous research examined response review and revision in CATs with limited options, typically permitted only after all items had been administered. The Item Pocket (IP) method (Han, 2013) relaxes these restrictions by allowing response review and revision during the test while maintaining an acceptable level of measurement precision. This is achieved by creating an item pocket: items placed in the pocket are excluded from the interim ability estimation and item selection procedures. The initial simulation study, conducted by Han (2013), investigated the IP method with a dichotomously scored, fixed-length test. The findings indicated that the IP method does not substantially decrease measurement precision, and bias in the ability estimates was within acceptable ranges for operational tests. The present simulation study extended the IP method to a CAT with polytomously scored items under the Generalized Partial Credit model, with exposure control and content balancing.
The IP method was implemented in tests crossing three IP sizes (2, 3, and 4), two termination criteria (fixed and variable), two test lengths (15 and 20), and two item-completion conditions (forced to answer and ignored) for items remaining in the IP at the end of the test. Additionally, four traditional CAT conditions, without the IP method, were included in the design. Results showed that the longer, 20-item IP conditions using the forced-answer method had higher measurement precision, with higher mean correlations between known and estimated theta and lower mean bias and RMSE, and measurement precision increased as IP size increased. The two item-completion conditions (forced to answer and ignored) yielded similar measurement precision. The variable-length IP conditions yielded measurement precision comparable to that of the corresponding fixed-length IP conditions. Implications of the findings, limitations, and suggestions for future research are also discussed. Advisors/Committee Members: Whittaker, Tiffany A. (advisor), Beretvas, Susan N. (committee member), Dodd, Barbara G. (committee member), Hersh, Matthew A. (committee member), Pituch, Keenan A. (committee member).
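The core mechanics the abstract describes can be illustrated in a few lines: Generalized Partial Credit Model category probabilities, and an interim ability estimate that skips items currently sitting in the Item Pocket. This is a minimal sketch, not code from the dissertation; the function names (`gpcm_probs`, `eap_estimate`), the EAP estimator, the quadrature grid, and the item parameters below are all illustrative assumptions.

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities under the Generalized Partial Credit Model.

    theta: examinee ability; a: item discrimination;
    b: step difficulties (length m gives m + 1 score categories).
    """
    # Cumulative numerator sum_{v<=k} a*(theta - b_v), with the k=0 term = 0.
    z = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    z -= z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def eap_estimate(responses, params, pocket, nodes=np.linspace(-4, 4, 81)):
    """Interim EAP ability estimate that excludes pocketed items.

    responses: observed category per administered item
    params:    list of (a, b) tuples per item
    pocket:    indices of items currently in the Item Pocket
    """
    prior = np.exp(-0.5 * nodes ** 2)          # standard-normal prior (unnormalized)
    like = np.ones_like(nodes)
    for i, (resp, (a, b)) in enumerate(zip(responses, params)):
        if i in pocket:
            continue                           # pocketed items do not enter the estimate
        like *= np.array([gpcm_probs(t, a, b)[resp] for t in nodes])
    post = prior * like
    return float((nodes * post).sum() / post.sum())
```

Moving an item into the pocket simply drops its likelihood contribution, which is why the method can permit mid-test revision without invalidating the adaptive item-selection machinery.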

Subjects/Keywords: Computerized adaptive testing; Response review; Response revision; Polytomous item response theory model; Generalized partial credit model



APA (6th Edition):

Jensen, M. G. B. (2017). Extension of the item pocket method allowing for response review and revision to a computerized adaptive test using the generalized partial credit model. (Doctoral Dissertation). University of Texas – Austin. Retrieved from http://hdl.handle.net/2152/63011

Chicago Manual of Style (16th Edition):

Jensen, Mishan G. B. “Extension of the item pocket method allowing for response review and revision to a computerized adaptive test using the generalized partial credit model.” 2017. Doctoral Dissertation, University of Texas – Austin. Accessed June 06, 2020. http://hdl.handle.net/2152/63011.

MLA Handbook (7th Edition):

Jensen, Mishan G. B. “Extension of the item pocket method allowing for response review and revision to a computerized adaptive test using the generalized partial credit model.” 2017. Web. 06 Jun 2020.

Vancouver:

Jensen MGB. Extension of the item pocket method allowing for response review and revision to a computerized adaptive test using the generalized partial credit model. [Internet] [Doctoral dissertation]. University of Texas – Austin; 2017. [cited 2020 Jun 06]. Available from: http://hdl.handle.net/2152/63011.

Council of Science Editors:

Jensen MGB. Extension of the item pocket method allowing for response review and revision to a computerized adaptive test using the generalized partial credit model. [Doctoral Dissertation]. University of Texas – Austin; 2017. Available from: http://hdl.handle.net/2152/63011
