A Comparison of IRT and Rasch Procedures in a Mixed-Item Format Test

PDF Version Also Available for Download.

Creation Information

Kinsey, Tari L. August 2003.

Context

This dissertation is part of the collection entitled UNT Theses and Dissertations and was provided by the UNT Libraries to the Digital Library, a digital repository hosted by the UNT Libraries. It has been viewed 1,997 times, with 26 in the last month. More information about this dissertation can be viewed below.

Who

People and organizations associated with either the creation of this dissertation or its content.

Chair

Committee Members

Publisher

Rights Holder

For guidance see Citations, Rights, Re-Use.

  • Kinsey, Tari L.

Provided By

UNT Libraries

Library facilities at the University of North Texas function as the nerve center for teaching and academic research. In addition to a major collection of electronic journals, books and databases, five campus facilities house just under six million cataloged holdings, including books, periodicals, maps, documents, microforms, audiovisual materials, music scores, full-text journals and books. A branch library is located at the University of North Texas Dallas Campus.

What

Descriptive information to help identify this dissertation. Follow the links below to find similar items in the Digital Library.

Degree Information

Description

This study investigated the effects of test length (10, 20, and 30 items), scoring schema (proportion of dichotomous and polytomous scoring), and item analysis model (IRT and Rasch) on the ability estimates, test information levels, and optimization criteria of mixed item format tests. Polytomous item responses to 30 items for 1000 examinees were simulated using the generalized partial-credit model and SAS software. Portions of the data were re-coded dichotomously over 11 structured proportions to create 33 sets of test responses, including mixed item format tests. MULTILOG software was used to calculate the examinee ability estimates, standard errors, item and test information, reliability, and fit indices. A comparison of IRT and Rasch item analysis procedures was made using SPSS software across ability estimates and standard errors of ability estimates using a 3 x 11 x 2 fixed factorial ANOVA. Effect sizes and power were reported for each procedure. Scheffé post hoc procedures were conducted on significant factors. Test information was analyzed and compared across the range of ability levels for all 66 design combinations. The results indicated that both test length and the proportion of items scored polytomously had a significant impact on the amount of test information produced by mixed item format tests. Generally, tests with 100% of the items scored polytomously produced the highest overall information; this seemed to be especially true for examinees with lower ability estimates. Optimality comparisons were made between IRT and Rasch procedures based on standard error rates for the ability estimates, marginal reliabilities, and fit indices (-2LL). The only significant differences reported involved the standard error rates for both the IRT and Rasch procedures, a result that must be viewed in light of the negligible effect size reported. Optimality was found to be highest when longer tests and higher proportions of polytomous scoring were applied.
Some indications were given that IRT procedures may produce slightly better results in recovering available test information. Overall, significant differences were not found between the IRT and Rasch procedures when analyzing the mixed item format tests. Further research should be conducted in the areas of test difficulty, examinee test scores, and automated partial-credit scoring, along with a comparison to other traditional psychometric measures and how they address challenges related to mixed item format tests.
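The data-generation step described in the abstract (simulating polytomous responses under the generalized partial-credit model, then re-coding a proportion of items dichotomously) can be sketched as follows. This is a minimal illustration in Python rather than the SAS code actually used in the study; the item parameters, number of response categories, and the dichotomous re-coding rule (full credit scores 1, anything less scores 0) are all hypothetical assumptions, since the abstract does not specify the generating values.

```python
import numpy as np

rng = np.random.default_rng(42)

n_examinees, n_items, n_cats = 1000, 30, 4  # 4 categories (0..3); category count is an assumption

# Hypothetical generating parameters (not taken from the study).
theta = rng.normal(0.0, 1.0, n_examinees)             # examinee abilities
a = rng.uniform(0.5, 2.0, n_items)                    # item discriminations
b = np.sort(rng.normal(0.0, 1.0, (n_items, n_cats - 1)))  # ordered step difficulties

def gpcm_probs(theta_i, a_j, b_j):
    """Category probabilities for one examinee on one item under the GPCM.

    P(X = k) is proportional to exp(sum_{v<=k} a_j * (theta_i - b_jv)),
    with the k = 0 term defined as exp(0) = 1.
    """
    steps = a_j * (theta_i - b_j)                     # shape (n_cats - 1,)
    logits = np.concatenate(([0.0], np.cumsum(steps)))
    logits -= logits.max()                            # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Simulate the polytomous response matrix.
responses = np.empty((n_examinees, n_items), dtype=int)
for i in range(n_examinees):
    for j in range(n_items):
        p = gpcm_probs(theta[i], a[j], b[j])
        responses[i, j] = rng.choice(n_cats, p=p)

# Re-code one structured proportion of items dichotomously (here, the first
# half of the test, one of the 11 proportions in the study's design):
# the top category counts as correct (1), all others as incorrect (0).
dich = responses.copy()
half = n_items // 2
dich[:, :half] = (responses[:, :half] == n_cats - 1).astype(int)
```

Repeating the re-coding over the 11 proportions (0% through 100% dichotomous) for each of the three test lengths yields the 33 response sets analyzed in the study; the resulting matrices would then be passed to the item-calibration software for ability and information estimates.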

Language

Identifier

Unique identifying numbers for this dissertation in the Digital Library or other systems.

Collections

This dissertation is part of the following collection of related materials.

UNT Theses and Dissertations

Theses and dissertations represent a wealth of scholarly and artistic content created by masters and doctoral students in the degree-seeking process. Some ETDs in this collection are restricted to use by the UNT community.

What responsibilities do I have when using this dissertation?

When

Dates and time periods associated with this dissertation.

Creation Date

  • August 2003

Added to The UNT Digital Library

  • Feb. 15, 2008, 2:51 p.m.

Usage Statistics

When was this dissertation last used?

Yesterday: 0
Past 30 days: 26
Total Uses: 1,997


Citations, Rights, Re-Use

Kinsey, Tari L. A Comparison of IRT and Rasch Procedures in a Mixed-Item Format Test, dissertation, August 2003; Denton, Texas. (digital.library.unt.edu/ark:/67531/metadc4316/: accessed May 25, 2017), University of North Texas Libraries, Digital Library, digital.library.unt.edu.