
Search Results

Construct Validation and Measurement Invariance of the Athletic Coping Skills Inventory for Educational Settings

Description: The present study examined the factor structure and measurement invariance of the revised version of the Athletic Coping Skills Inventory (ACSI-28), following adjustment of the wording of items so that they were appropriate for assessing coping skills in an educational setting. A sample of middle school students (n = 1,037) completed the revised inventory. An initial confirmatory factor analysis led to the hypothesis of a better-fitting model with two items removed. Reliability of the subscales and the instrument as a whole was acceptable. Items were examined for sex invariance with differential item functioning (DIF) analyses using item response theory, and five items were flagged for significant sex non-invariance. Following removal of these items, comparison of mean coping scores revealed no significant difference between male and female students. Further examination of the generalizability of the coping construct and of the potential transfer of psychosocial skills between athletic and academic settings is warranted.
Date: May 2017
Creator: Sanguras, Laila Y
Partner: UNT Libraries

Using Posterior Predictive Checking of Item Response Theory Models to Study Invariance Violations

Description: The common practice for testing measurement invariance is to constrain parameters to be equal over groups and then evaluate the model-data fit to reject, or fail to reject, the restrictive model. Posterior predictive checking (PPC) provides an alternative approach to evaluating model-data discrepancy. This paper explores the utility of PPC in evaluating measurement invariance. The simulation results show that the posterior predictive p (PP p) values of item parameter estimates respond to various invariance violations, whereas the PP p values of the item-fit index may fail to detect such violations. The paper suggests comparing group estimates and restrictive-model estimates against posterior predictive distributions in order to demonstrate the pattern of misfit graphically. (A brief code sketch of the PP p computation follows this record.)
Date: May 2017
Creator: Xin, Xin
Partner: UNT Libraries
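
The following is a minimal sketch of the posterior predictive p-value logic described above, for a single item under a Rasch model. The posterior draws, the discrepancy measure (item proportion correct), and all numeric values are illustrative placeholders, not the study's actual models or data:

```python
# Minimal sketch of a posterior predictive p-value for one item under a Rasch model.
# Posterior draws of ability (theta) and difficulty (b) would normally come from an
# MCMC sampler; here they are fabricated purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_draws = 200, 500
y_obs = rng.integers(0, 2, n_persons)              # observed 0/1 responses to one item
theta = rng.normal(0, 1, (n_draws, n_persons))     # hypothetical posterior draws of ability
b = rng.normal(0.2, 0.1, n_draws)                  # hypothetical posterior draws of difficulty

def discrepancy(y):
    return y.mean()                                # simple statistic: item proportion correct

t_obs = discrepancy(y_obs)
count = 0
for d in range(n_draws):
    p = 1.0 / (1.0 + np.exp(-(theta[d] - b[d])))   # Rasch response probabilities
    y_rep = rng.binomial(1, p)                     # replicated data from this posterior draw
    count += discrepancy(y_rep) >= t_obs

pp_p = count / n_draws                             # PP p near 0 or 1 signals misfit
print(f"posterior predictive p = {pp_p:.3f}")
```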

How Attitudes towards Statistics Courses and the Field of Statistics Predicts Statistics Anxiety among Undergraduate Social Science Majors: A Validation of the Statistical Anxiety Scale

Description: The aim of this study was to validate an instrument that can be used by instructors or social scientists who are interested in evaluating statistics anxiety. The psychometric properties of the English version of the Statistical Anxiety Scale (SAS) were examined through a confirmatory factor analysis of scores from a sample of 323 undergraduate social science majors enrolled in colleges and universities in the United States. In previous studies, the psychometric properties of the Spanish and Italian versions of the SAS were validated; however, the English version of the SAS had never been assessed. Inconsistent with previous studies, scores on the English version of the SAS did not produce psychometrically acceptable values of validity. However, the results of this study suggested the potential value of a revised two-factor SAS to measure statistics anxiety. Additionally, the Attitudes Towards Statistics (ATS) scale was used to examine the convergent and discriminant validity of the two-factor SAS. As expected, the correlations between the two factors of the SAS and the two factors of the ATS uncovered a moderately negative correlation between examination anxiety and attitudes towards the course. Additionally, the results of a structural regression model of attitudes towards statistics as a predictor of statistics anxiety suggested that attitudes towards the course and attitudes towards the field of statistics moderately predict examination anxiety, with attitudes towards the course having the greater influence. It is recommended that future studies examine the relationship between attitudes towards statistics, statistics anxiety, and other variables such as academic achievement and instructional style.
Date: August 2017
Creator: Obryant, Monique J
Partner: UNT Libraries

The Use Of Effect Size Estimates To Evaluate Covariate Selection, Group Separation, And Sensitivity To Hidden Bias In Propensity Score Matching.

Description: Covariate quality has been primarily theory driven in propensity score matching, with a general aversion to the interpretation of group prediction. However, effect sizes are well supported in the literature and may help to inform the method. Specifically, the I index can be used as a measure of effect size in logistic regression to evaluate group prediction. As such, simulation was used to create 35 conditions of I, initial bias, and sample size to examine statistical differences in (a) post-matching bias reduction and (b) treatment effect sensitivity. The results of this study suggest these conditions do not explain statistical differences in percent bias reduction of treatment likelihood after matching. However, I and sample size do explain statistical differences in treatment effect sensitivity. Treatment effect sensitivity was lower when sample sizes and I increased. However, this relationship was mitigated within smaller sample sizes as I increased above I = .50. (A brief code sketch follows this record.)
Date: December 2011
Creator: Lane, Forrest C.
Partner: UNT Libraries
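
A minimal sketch of the group-prediction effect size, assuming the I index refers to an improvement-over-chance classification index of the form I = (observed hit rate - chance hit rate) / (1 - chance hit rate), as in Huberty's effect size for logistic regression; the propensity model and data below are fabricated for illustration:

```python
# Sketch of an improvement-over-chance effect size for group prediction in a
# logistic-regression propensity model (assumed formulation of the I index).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=(n, 3))                       # covariates
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]             # treatment-assignment model
z = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # treatment indicator

model = LogisticRegression().fit(X, z)            # propensity (group-prediction) model
hit_rate = (model.predict(X) == z).mean()         # observed classification accuracy
chance = max(z.mean(), 1 - z.mean())              # best guess ignoring covariates
I = (hit_rate - chance) / (1 - chance)            # improvement over chance
print(f"I index = {I:.3f}")
```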

Stratified item selection and exposure control in unidimensional adaptive testing in the presence of two-dimensional data.

Description: It is not uncommon to use unidimensional item response theory (IRT) models to estimate ability from multidimensional data. It is therefore important to understand the implications of summarizing multiple dimensions of ability into a single parameter estimate, especially if effects are confounded when applied to computerized adaptive testing (CAT). Previous studies have investigated the effects of different IRT models and ability estimators by manipulating the relationships between item and person parameters. However, in all cases the maximum information criterion was used as the item selection method. Because maximum information is heavily influenced by the item discrimination parameter, investigating a-stratified item selection methods is tenable. The current Monte Carlo study compared the maximum information, a-stratification, and a-stratification with b-blocking item selection methods, alone as well as in combination with the Sympson-Hetter exposure control strategy. The six testing conditions were crossed with three levels of interdimensional item difficulty correlations and four levels of interdimensional examinee ability correlations. Measures of fidelity, estimation bias, error, and item usage were used to evaluate the effectiveness of the methods. Results showed that either stratified item selection strategy is warranted if the goal is to obtain precise estimates of ability when using unidimensional CAT in the presence of two-dimensional data. If the goal also includes limiting bias of the estimate, Sympson-Hetter exposure control should be included. Results also confirmed that Sympson-Hetter is effective in optimizing item pool usage. Given these results, existing unidimensional CAT implementations might consider employing a stratified item selection routine plus Sympson-Hetter exposure control rather than recalibrating the item pool under a multidimensional model. (A brief code sketch of a-stratified selection follows this record.)
Date: August 2009
Creator: Kalinowski, Kevin E.
Partner: UNT Libraries
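
A minimal sketch of a-stratified item selection with difficulty matching, under assumed pool sizes and parameter distributions (not the study's actual design); Sympson-Hetter exposure control and ability re-estimation are omitted for brevity:

```python
# Sketch of a-stratified CAT item selection: the pool is split into strata by
# ascending discrimination (a); early items come from low-a strata, and within
# the active stratum the item whose difficulty (b) is closest to the current
# ability estimate is chosen. Pool values are fabricated.
import numpy as np

rng = np.random.default_rng(3)
a = rng.lognormal(0, 0.3, 300)              # discrimination parameters
b = rng.normal(0, 1, 300)                   # difficulty parameters

order = np.argsort(a)                       # sort pool by discrimination
strata = np.array_split(order, 4)           # four strata, low a -> high a
test_length, per_stratum = 20, 5
used = set()
theta_hat = 0.0                             # running ability estimate

administered = []
for pos in range(test_length):
    stratum = strata[pos // per_stratum]    # move to higher-a strata as the test proceeds
    candidates = [i for i in stratum if i not in used]
    item = min(candidates, key=lambda i: abs(b[i] - theta_hat))  # b matching
    used.add(item)
    administered.append(item)
    # theta_hat would be re-estimated here after scoring the response
print(administered)
```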

An Investigation of the Effect of Violating the Assumption of Homogeneity of Regression Slopes in the Analysis of Covariance Model upon the F-Statistic

Description: The study seeks to determine the effect upon the F-statistic of violating the assumption of homogeneity of regression slopes in the one-way, fixed-effects analysis of covariance model. The study employs a Monte Carlo simulation technique to vary the degree of heterogeneity of regression slopes across varied sample sizes within experiments. One hundred eighty-three simulations were used. (A brief code sketch of one such simulation follows this record.)
Date: August 1972
Creator: McClaran, Virgil Rutledge
Partner: UNT Libraries
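
A minimal sketch of one such Monte Carlo probe, assuming illustrative slope values, sample sizes, and replication counts rather than the study's actual 183 experimental configurations:

```python
# Sketch of probing the ANCOVA F-statistic when the homogeneity-of-slopes
# assumption is violated: two groups with equal means but different covariate
# slopes, analyzed with the usual common-slope one-way ANCOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
reps, n, alpha = 2000, 30, 0.05
reject = 0
for _ in range(reps):
    x1, x2 = rng.normal(0, 1, n), rng.normal(0, 1, n)
    y1 = 0.2 * x1 + rng.normal(0, 1, n)            # slope 0.2 in group 1
    y2 = 1.0 * x2 + rng.normal(0, 1, n)            # slope 1.0 in group 2 (violation)
    x, y = np.concatenate([x1, x2]), np.concatenate([y1, y2])
    g = np.repeat([0.0, 1.0], n)
    X_full = np.column_stack([np.ones(2 * n), g, x])   # common-slope ANCOVA model
    X_red = np.column_stack([np.ones(2 * n), x])       # same model without the group effect
    sse = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    f = (sse(X_red) - sse(X_full)) / (sse(X_full) / (2 * n - 3))
    reject += stats.f.sf(f, 1, 2 * n - 3) < alpha
print(f"empirical Type I error = {reject / reps:.3f}")   # group effect is truly null
```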

Establishing the utility of a classroom effectiveness index as a teacher accountability system.

Description: How to identify effective teachers who improve student achievement despite diverse student populations and school contexts is an ongoing discussion in public education. The need to show communities and parents how well teachers and schools improve student learning has led districts and states to seek a fair, equitable, and valid measure of student growth using student achievement. This study investigated a two-stage hierarchical model for estimating teacher effect on student achievement. This measure was entitled a Classroom Effectiveness Index (CEI). Consistency of this model over time, outlier influences in individual CEIs, variance among CEIs across four years, and correlations of second-stage student residuals with first-stage student residuals were analyzed. The statistical analysis used four years of student residual data from a state-mandated mathematics assessment (n = 7,086) and a state-mandated reading assessment (n = 7,572), aggregated by teacher. The study produced the following results. Four years of district grand slopes and grand intercepts were analyzed to assess consistency over time. Repeated measures analyses of grand slopes and intercepts in mathematics were statistically significant at the .01 level; repeated measures analyses in reading were not statistically significant. The analyses indicated consistent results over time for reading but not for mathematics. Data were analyzed to assess outlier effects. Nineteen statistically significant outliers in 15,378 student residuals were identified; however, the impact on individual teachers was extreme in eight of the 19 cases. Further study is indicated. Subsets of teachers in the same assignment at the same school for four consecutive years and for three consecutive years indicated CEIs were stable over time, with no statistically significant differences in either mathematics or reading. Correlations between Level One student residuals and HLM residuals were statistically significant in reading and in mathematics. This implied that the second stage of ...
Date: May 2002
Creator: Bembry, Karen L.
Partner: UNT Libraries

A Comparison of Two Criterion-Referenced Item-Selection Techniques Utilizing Simulated Data with Item Pools that Vary in Degrees of Item Difficulty

Description: The problem of this study was to examine the equivalency of two different types of criterion-referenced item-selection techniques on simulated data as item pools varied in degrees of item difficulty. A pretest-posttest design was employed in which pass-fail scores were randomly generated for item pools of twenty-five items. From the item pools, the two techniques determined which items were to be used to make up twelve-item criterion-referenced tests. The twenty-five items also were rank ordered according to the discrimination power of the two techniques.
Date: May 1974
Creator: Davis, Robbie G.
Partner: UNT Libraries

The Supply and Demand of Physician Assistants in the United States: A Trend Analysis

Description: The supply of non-physician clinicians (NPCs), such as physician assistants (PAs), could significantly influence demand requirements in medical workforce projections. This study predicts the supply of and demand for PAs from 2006 to 2020. The PA supply model utilized the number of certified PAs, educational capacity (at 10% and 25% expansion) with assumed attrition rates, and retirement assumptions. Gross domestic product (GDP), chained in 2000 dollars, and US population were utilized in a transfer function trend analysis with the number of PAs as the dependent variable for the PA demand model. Historical analyses revealed strong correlations of GDP and US population with the number of PAs. The number of currently certified PAs represents approximately 75% of the projected demand. At 10% growth, the supply and demand equilibrium for PAs will be reached in 2012; a 25% increase in new entrants causes equilibrium to be met one year earlier. Robust application trends in PA education enrollment (2.2 applicants per seat, the same ratio as for allopathic medical school applicants) support predicted increases. However, other implications for PA educational institutions include recruitment and retention of qualified faculty, clinical site maintenance, and diversity of matriculants. Further research on factors affecting the supply of and demand for PAs is needed in the areas of retirement age rates, gender, and lifestyle influences. Specialization trends and visit intensity levels are potential variables.
Date: May 2007
Creator: Orcutt, Venetia L.
Partner: UNT Libraries

A Quantitative Modeling Approach to Examining High School, Pre-Admission, Program, Certification and Career Choice Variables in Undergraduate Teacher Preparation Programs

Description: The purpose of this study was to examine whether there is an association between effective supervision and communication competence in divisions of student affairs at Christian higher education institutions. The investigation examined chief student affairs officers (CSAOs) and their direct reports at 45 institutions across the United States using the Synergistic Supervision Scale and the Communication Competence Questionnaire. A positive, significant association was found between the direct report's evaluation of the CSAO's level of synergistic supervision and the direct report's evaluation of the CSAO's level of communication competence. The findings of this study will advance the supervision and communication competence literature while informing practice for student affairs professionals. This study provides a foundation of research in the context-specific field of student affairs, where there has been a dearth of literature regarding effective supervision, and can be used as a platform for future research to further the understanding of characteristics that define effective supervision.
Date: December 2007
Creator: Williams, Cynthia Savage
Partner: UNT Libraries

A comparison of traditional and IRT factor analysis.

Description: This study investigated the item parameter recovery of two methods of factor analysis. The methods researched were a traditional factor analysis of tetrachoric correlation coefficients and an IRT approach to factor analysis that utilizes marginal maximum likelihood estimation with an EM algorithm (MMLE-EM). Dichotomous item response data were generated under the 2-parameter normal ogive model (2PNOM) using PARDSIM software. Examinee abilities were sampled from both the standard normal and uniform distributions. True item discrimination, a, was normal with a mean of .75 and a standard deviation of .10. True item difficulty, b, was specified as uniform [-2, 2]. The two distributions of abilities were completely crossed with three test lengths (n = 30, 60, and 100) and three sample sizes (N = 50, 500, and 1000). Each of the 18 conditions was replicated 5 times, resulting in 90 datasets. PRELIS software was used to conduct a traditional factor analysis on the tetrachoric correlations. The IRT approach to factor analysis was conducted using BILOG 3 software. Parameter recovery was evaluated in terms of root mean square error, average signed bias, and Pearson correlations between estimated and true item parameters. ANOVAs were conducted to identify systematic differences in error indices. Based on many of the indices, it appears the IRT approach to factor analysis recovers item parameters better than the traditional approach studied. Future research should compare other methods of factor analysis to MMLE-EM under various non-normal distributions of abilities. (A brief code sketch of the 2PNOM data generation follows this record.)
Date: December 2004
Creator: Kay, Cheryl Ann
Partner: UNT Libraries
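
A minimal sketch of the data-generation step, using the parameter distributions stated in the abstract (a ~ N(.75, .10), b ~ uniform [-2, 2], standard normal abilities); this stands in for the PARDSIM step only, not the PRELIS or BILOG analyses:

```python
# Sketch of generating dichotomous responses under the two-parameter normal
# ogive model, P(correct) = Phi(a * (theta - b)).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
N, n_items = 500, 30
theta = rng.normal(0, 1, N)                    # standard normal abilities
a = rng.normal(0.75, 0.10, n_items)            # true discriminations
b = rng.uniform(-2, 2, n_items)                # true difficulties

p = norm.cdf(a * (theta[:, None] - b))         # N x n_items response probabilities
responses = rng.binomial(1, p)                 # simulated 0/1 data
print(responses.shape, responses.mean())
```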

Spatial Ability, Motivation, and Attitude of Students as Related to Science Achievement

Description: Understanding student achievement in science is important given the increasing reliance of the U.S. economy on math, science, and technology-related fields despite the declining number of youth seeking college degrees and careers in math and science. A series of structural equation models was tested using scores from a statewide science exam for 276 students from a suburban north Texas public school district at the end of their 5th grade year, along with the latent variables of spatial ability, motivation to learn science, and science-related attitude. Spatial ability was tested as a mediating variable on motivation and attitude; however, while spatial ability had statistically significant regression coefficients with motivation and attitude, it was found to be the sole statistically significant predictor of science achievement for these students, explaining 23.1% of the variance in science scores.
Date: May 2011
Creator: Bolen, Judy Ann
Partner: UNT Libraries

Structural Validity and Item Functioning of the LoTi Digital-Age Survey.

Description: The present study examined the structural construct validity of the LoTi Digital-Age Survey, a measure of teacher instructional practices with technology in the classroom. Teacher responses (N = 2,840) from across the United States were used to assess the factor structure of the instrument using both exploratory and confirmatory analyses. Parallel analysis suggests retaining a five-factor solution, whereas the MAP test suggests retaining a three-factor solution. Both analyses (EFA and CFA) indicate that changes need to be made to the current factor structure of the survey. The last two factors were composed of items that did not cover or accurately measure the content of the latent trait. Problematic items, such as items with cross-loadings, were discussed. Suggestions were provided to improve the factor structure, items, and scale of the survey. (A brief code sketch of parallel analysis follows this record.)
Date: May 2011
Creator: Mehta, Vandhana
Partner: UNT Libraries
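
A minimal sketch of Horn's parallel analysis, the retention method named above: observed eigenvalues are compared with the mean eigenvalues of same-sized random data. The data and number of variables here are fabricated, not the LoTi survey responses:

```python
# Sketch of Horn's parallel analysis: retain factors whose observed eigenvalues
# exceed the mean eigenvalues of correlation matrices from pure-noise data of
# the same dimensions.
import numpy as np

rng = np.random.default_rng(9)
n_obs, n_vars, n_sims = 2840, 37, 100          # sample size from the abstract; 37 vars illustrative
X = rng.normal(size=(n_obs, n_vars))
X[:, 1] += X[:, 0]                              # induce some shared variance
obs_eigs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

rand_eigs = np.zeros((n_sims, n_vars))
for s in range(n_sims):
    R = rng.normal(size=(n_obs, n_vars))        # pure-noise comparison data
    rand_eigs[s] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]

retain = int(np.sum(obs_eigs > rand_eigs.mean(axis=0)))
print(f"factors to retain: {retain}")
```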

Attenuation of the Squared Canonical Correlation Coefficient Under Varying Estimates of Score Reliability

Description: Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability. Monte Carlo simulation methodology was used to fulfill the purpose of this study. Initially, data populations with various manipulated conditions were generated (N = 100,000). Subsequently, 500 random samples were drawn with replacement from each population, and the data were subjected to canonical correlation analyses. The canonical correlation results were then analyzed using descriptive statistics and an ANOVA design to determine under which condition(s) the squared canonical correlation coefficient was most attenuated when compared to population Rc² values. This information was used to determine what effect, if any, the different conditions considered in this study had on Rc². The results from this Monte Carlo investigation clearly illustrated the importance of score reliability when interpreting study results. As evidenced by the outcomes presented, the more measurement error (lower reliability) present in the variables included in an analysis, the more attenuation experienced by the effect size(s) produced in the analysis, in this case Rc². These results also demonstrated the roles that between- and within-set correlation, variable set size, and sample size played in the attenuation levels of the squared canonical correlation coefficient. (A brief code sketch of the attenuation mechanism follows this record.)
Date: August 2010
Creator: Wilson, Celia M.
Partner: UNT Libraries
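
A minimal sketch of the attenuation mechanism under study: measurement error is mixed into error-free variable sets according to a target score reliability, and the largest squared canonical correlation is computed before and after. The set structure and reliability value are illustrative assumptions:

```python
# Sketch of Rc^2 attenuation under reduced score reliability.
import numpy as np

def max_squared_canonical_r(X, Y):
    # first squared canonical correlation via whitened cross-covariance SVD
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx, Syy = np.cov(Xc, rowvar=False), np.cov(Yc, rowvar=False)
    Sxy = (Xc.T @ Yc) / (len(X) - 1)
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    s = np.linalg.svd(Wx @ Sxy @ Wy.T, compute_uv=False)
    return s[0] ** 2

rng = np.random.default_rng(13)
n, rel = 100_000, 0.70                       # population size from the abstract; reliability illustrative
common = rng.normal(size=(n, 1))
X = common + rng.normal(0, 1, (n, 3))        # true-score variable set one
Y = common + rng.normal(0, 1, (n, 2))        # true-score variable set two
# classical mixing that keeps total variance constant while reliability = rel
noise = lambda Z: np.sqrt(rel) * Z + np.sqrt(1 - rel) * Z.std(0) * rng.normal(size=Z.shape)
print(f"true Rc^2       = {max_squared_canonical_r(X, Y):.3f}")
print(f"attenuated Rc^2 = {max_squared_canonical_r(noise(X), noise(Y)):.3f}")
```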

Convergent Validity of Variables Residualized By a Single Covariate: the Role of Correlated Error in Populations and Samples

Description: This study examined the bias and precision of four residualized variable validity estimates (C0, C1, C2, C3) across a number of study conditions. Validity estimates that considered measurement error, correlations among error scores, and correlations between error scores and true scores (C3) performed the best, yielding no estimates that were practically significantly different from their respective population parameters across study conditions. Validity estimates that considered measurement error and correlations among error scores (C2) did a good job of yielding unbiased, valid, and precise results; only in a small number of study conditions did C2 estimates fail to compute or produce results with enough variance to affect the interpretation of results. Validity estimates based on observed scores (C0) fared well in producing valid, precise, and unbiased results. Validity estimates based on observed scores corrected only for measurement error (C1) performed the worst. Not only did C1 fail to reliably produce estimates even when the level of modeled correlated error was low, but it also produced values higher than the theoretical limit of 1.0 across a number of study conditions. Estimates based on C1 also produced the greatest number of conditions that were practically significantly different from their population parameters. (A brief code sketch of the C1-style correction follows this record.)
Date: May 2013
Creator: Nimon, Kim
Partner: UNT Libraries
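
A minimal sketch of the classical correction for attenuation that a C1-style estimate relies on (corrected r = observed r / sqrt(rxx * ryy)); as the abstract notes, this quotient can exceed the theoretical limit of 1.0. The numeric inputs are illustrative:

```python
# Sketch of the classical disattenuation formula underlying a C1-style estimate.
import numpy as np

def disattenuate(r_xy: float, rxx: float, ryy: float) -> float:
    # corrects an observed correlation for measurement error only
    return r_xy / np.sqrt(rxx * ryy)

print(disattenuate(0.45, 0.80, 0.70))   # plausible corrected validity
print(disattenuate(0.60, 0.50, 0.60))   # ~1.10: exceeds 1.0, an inadmissible value
```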

An Empirical Comparison of Random Number Generators: Period, Structure, Correlation, Density, and Efficiency

Description: Random number generators (RNGs) are widely used in conducting Monte Carlo simulation studies, which are important in the field of statistics for comparing power, mean differences, or distribution shapes between statistical approaches. Statistical results, however, may differ when different random number generators are used. Older methods have often been used blindly, with no understanding of their limitations, and many random functions supplied with computers today have been found to be comparatively unsatisfactory. In this study, five multiplicative linear congruential generators (MLCGs) were chosen from the following statistical packages: RANDU (IBM), RNUN (IMSL), RANUNI (SAS), UNIFORM (SPSS), and RANDOM (BMDP). Using a personal computer (PC), an empirical investigation was performed using five criteria: period length before repeating random numbers, distribution shape, correlation between adjacent numbers, density of the distributions, and approximation of the normal distribution. All RNG FORTRAN programs were rewritten in Pascal, a more efficient language for the PC. Sets of random numbers were generated using different starting values. A good RNG should have the following properties: a long enough period; a well-structured pattern in distribution; independence between random number sequences; a random and uniform distribution; and a good approximation to the normal distribution. Findings in this study suggested that the above five criteria need to be examined when conducting a simulation study with large enough sample sizes and various starting values, because the RNG selected can affect the statistical results. Furthermore, a study intended to be reproducible and valid should indicate the source of the RNG, the type of RNG used, evaluation results of the RNG, and any pertinent information related to the computer used in the study. Recommendations for future research are suggested in the area of other RNGs and methods not used in this study, such as additive, combined, ... (A brief code sketch of an MLCG follows this record.)
Date: August 1995
Creator: Bang, Jung Woong
Partner: UNT Libraries
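
A minimal sketch of a multiplicative linear congruential generator, using the published constants of IBM's RANDU (one of the five generators studied); the seed and output count are arbitrary:

```python
# Sketch of an MLCG with RANDU's constants: x_{n+1} = 65539 * x_n mod 2^31,
# the generator whose lattice structure made it notoriously unsatisfactory.
def randu(seed: int, n: int):
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x / 2**31)          # scale to the unit interval
    return out

u = randu(seed=1, n=9)
# RANDU's defect: consecutive triples fall on only 15 planes in the unit cube,
# visible in the exact relation x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2**31).
print(u[:3])
```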

A Comparison of Traditional Norming and Rasch Quick Norming Methods

Description: The simplicity and ease of use of the Rasch procedure are decided advantages. The test user needs only two numbers: the frequency of persons who answered each item correctly and the Rasch-calibrated item difficulty, usually part of an existing item bank. Norms can be computed quickly for any specific group of interest. In addition, once the selected items from the calibrated bank are normed, any test built from the item bank is automatically norm-referenced. Thus, it was concluded that the Rasch quick norm procedure is a meaningful alternative to traditional classical true score norming for test users who desire normative data. (A brief code sketch of the raw-score-to-logit step follows this record.)
Date: August 1993
Creator: Bush, Joan Spooner
Partner: UNT Libraries
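
A minimal sketch of the core quick-norming step, assuming a small bank of calibrated difficulties: each raw score is mapped to a logit ability by inverting the Rasch test characteristic curve, from which a norm table can be built. The difficulty values are illustrative:

```python
# Sketch of mapping raw scores to Rasch logit abilities via the test
# characteristic curve (TCC), using bisection to invert the monotone TCC.
import numpy as np

b = np.array([-1.5, -0.8, -0.3, 0.0, 0.4, 0.9, 1.6])   # calibrated item difficulties

def expected_score(theta):
    return np.sum(1 / (1 + np.exp(-(theta - b))))       # Rasch TCC

def theta_for_raw(raw, lo=-6.0, hi=6.0):
    for _ in range(60):                                  # bisection on the TCC
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if expected_score(mid) < raw else (lo, mid)
    return (lo + hi) / 2

for raw in range(1, len(b)):                             # raw scores 1..n-1
    print(f"raw {raw}: theta = {theta_for_raw(raw):+.2f} logits")
```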

Measurement Disturbance Effects on Rasch Fit Statistics and the Logit Residual Index

Description: The effects of random guessing as a measurement disturbance on Rasch fit statistics (unweighted total, weighted total, and unweighted ability between) and the Logit Residual Index (LRI) were examined through simulated data sets of varying sample sizes, test lengths, and distribution types. Three test lengths (25, 50, and 100), three sample sizes (25, 50, and 100), two item difficulty distributions (normal and uniform), and three levels of guessing (0% [no guessing], 25%, and 50%) were used in the simulations, resulting in 54 experimental conditions. The mean logit person ability for each experiment was +1. Each experimental condition was simulated once in an effort to approximate what could happen on a single administration of a four-option multiple choice test to a group of relatively high ability persons. Previous research has shown that varying item and person parameters has no effect on Rasch fit statistics. Consequently, these parameters were used in the present study to establish realistic test conditions, but were not interpreted as effect factors in determining the results of this study. (A brief code sketch of the guessing disturbance follows this record.)
Date: August 1997
Creator: Mount, Robert E. (Robert Earl)
Partner: UNT Libraries
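
A minimal sketch of injecting random guessing as a measurement disturbance into Rasch-generated data, using conditions named in the abstract (mean ability +1 logit, 25% guessing, four-option items); the fit statistics and LRI themselves are not computed here:

```python
# Sketch of a guessing disturbance: for a chosen percentage of responses,
# the Rasch model-based answer is replaced by a 1-in-4 random guess.
import numpy as np

rng = np.random.default_rng(17)
n_persons, n_items, guess_rate = 100, 50, 0.25
theta = rng.normal(1.0, 1.0, n_persons)        # mean person ability +1 logit
b = rng.normal(0.0, 1.0, n_items)              # normal item difficulties

p = 1 / (1 + np.exp(-(theta[:, None] - b)))    # Rasch response probabilities
data = rng.binomial(1, p)                      # undisturbed responses
mask = rng.random(data.shape) < guess_rate     # responses hit by guessing
data[mask] = rng.binomial(1, 0.25, mask.sum()) # 25% chance of a lucky guess
print(f"proportion disturbed: {mask.mean():.2f}")
```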

The Generalization of the Logistic Discriminant Function Analysis and Mantel Score Test Procedures to Detection of Differential Testlet Functioning

Description: Two procedures for the detection of differential item functioning (DIF) in polytomous items were generalized to the detection of differential testlet functioning (DTLF). The methods compared were the logistic discriminant function analysis procedure for uniform and non-uniform DTLF (LDFA-U and LDFA-N) and the Mantel score test procedure. Further analysis compared the results of DTLF analysis using the Mantel procedure with DIF analysis of individual testlet items using the Mantel-Haenszel (MH) procedure. Over 600 chi-squares were analyzed and compared for rejection of null hypotheses. Samples of 500, 1,000, and 2,000 were drawn by gender subgroups from the NELS:88 data set, which contains demographic and test data from over 25,000 eighth graders. Three types of testlets (totaling 29) from the NELS:88 test were analyzed for DTLF. The first type, the common passage testlet, followed the conventional testlet definition: items grouped together by a common reading passage, figure, or graph. The other two types were based upon common content and common process, as outlined in the NELS test specifications.
Date: August 1994
Creator: Kinard, Mary E.
Partner: UNT Libraries

Outliers and Regression Models

Description: The mitigation of outliers serves to increase the strength of a relationship between variables. This study defined outliers in three different ways and used five regression procedures to describe the effects of outliers on 50 data sets. This study also examined the relationship among the shape of the distribution, skewness, and outliers.
Date: May 1992
Creator: Mitchell, Napoleon
Partner: UNT Libraries

A Comparison of Two Differential Item Functioning Detection Methods: Logistic Regression and an Analysis of Variance Approach Using Rasch Estimation

Description: Differential item functioning (DIF) detection rates were examined for the logistic regression and analysis of variance (ANOVA) DIF detection methods. The methods were applied to simulated data sets of varying test length (20, 40, and 60 items) and sample size (200, 400, and 600 examinees) for both equal and unequal underlying ability between groups, as well as for both fixed and varying item discrimination parameters. Each test contained 5% uniform DIF items, 5% non-uniform DIF items, and 5% combination DIF (simultaneous uniform and non-uniform DIF) items. The factors were completely crossed, and each experiment was replicated 100 times. For both methods and all DIF types, a test length of 20 was sufficient for satisfactory DIF detection. The detection rate increased significantly with sample size for each method. With the ANOVA DIF method and uniform DIF, there was a difference in detection rates between discrimination parameter types, which favored varying discrimination and decreased with increased sample size. The detection rate of non-uniform DIF using the ANOVA DIF method was higher with fixed discrimination parameters than with varying discrimination parameters when relative underlying ability was unequal. In the combination DIF case, there was a three-way interaction among the experimental factors (discrimination type, relative ability, and sample size) for both detection methods. The error rate for the ANOVA DIF detection method decreased as test length increased and increased as sample size increased. For both methods, the error rate was slightly higher with varying discrimination parameters than with fixed. For logistic regression, the error rate increased with sample size when relative underlying ability was unequal between groups. The logistic regression method detected uniform and non-uniform DIF at a higher rate than the ANOVA DIF method. Because the type of DIF present in real data is rarely known, the logistic regression method is recommended for ... (A brief code sketch of the logistic regression DIF procedure follows this record.)
Date: August 1995
Creator: Whitmore, Marjorie Lee Threet
Partner: UNT Libraries
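
A minimal sketch of the logistic regression DIF procedure: nested models in which the group term tests uniform DIF and the score-by-group interaction tests non-uniform DIF, compared by likelihood-ratio chi-square. The data are fabricated, with uniform DIF built in:

```python
# Sketch of logistic-regression DIF detection via nested model comparisons.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(19)
n = 600
score = rng.normal(0, 1, n)                    # matching criterion (total score)
group = rng.integers(0, 2, n)                  # focal vs. reference group
logit = 1.2 * score - 0.6 * group              # item with built-in uniform DIF
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def ll(cols):                                   # log-likelihood of a nested model
    X = sm.add_constant(np.column_stack(cols))
    return sm.Logit(y, X).fit(disp=0).llf

m1 = ll([score])                                # score only
m2 = ll([score, group])                         # + group (uniform DIF)
m3 = ll([score, group, score * group])          # + interaction (non-uniform DIF)
print(f"uniform DIF     p = {chi2.sf(2 * (m2 - m1), df=1):.4f}")
print(f"non-uniform DIF p = {chi2.sf(2 * (m3 - m2), df=1):.4f}")
```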

A Comparison of Three Criteria Employed in the Selection of Regression Models Using Simulated and Real Data

Description: Researchers who make predictions from educational data are interested in choosing the best regression model possible. Many criteria have been devised for choosing a full or restricted model, and also for selecting the best subset from an all-possible-subsets regression. This study compared the relative practical usefulness of three of the criteria used in selecting a regression model: (a) Mallows' C_p, (b) Amemiya's prediction criterion, and (c) Hagerty and Srinivasan's method involving predictive power. Target correlation matrices with 10,000 cases were simulated so that the matrices had varying degrees of effect sizes. The amount of power for each matrix was calculated after one or two predictors were dropped from the full regression model, for sample sizes ranging from n = 25 to n = 150. The null case, in which one predictor was uncorrelated with the other predictors, was also considered. In addition, comparisons of regression models selected using C_p and the prediction criterion were performed using data from the National Educational Longitudinal Study of 1988. (A brief code sketch of a C_p subset search follows this record.)
Date: December 1994
Creator: Graham, D. Scott
Partner: UNT Libraries
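
A minimal sketch of Mallows' C_p over all possible subsets, using the standard form C_p = SSE_p / MSE_full - n + 2p (with p counting parameters, including the intercept); the data are fabricated, with only two of four predictors truly related to the criterion:

```python
# Sketch of an all-possible-subsets search scored by Mallows' C_p;
# subsets with C_p close to p are preferred.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(23)
n, k = 150, 4
X = rng.normal(size=(n, k))
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n)   # two real predictors

def sse(cols):
    Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - Xd @ np.linalg.lstsq(Xd, y, rcond=None)[0]
    return resid @ resid

mse_full = sse(range(k)) / (n - k - 1)          # full-model error variance
for r in range(1, k + 1):
    for cols in combinations(range(k), r):
        p = r + 1                               # parameters incl. intercept
        cp = sse(cols) / mse_full - n + 2 * p
        print(cols, f"C_p = {cp:6.2f} (p = {p})")
```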

The Effect of Psychometric Parallelism among Predictors on the Efficiency of Equal Weights and Least Squares Weights in Multiple Regression

Description: There are several conditions for applying equal weights as an alternative to least squares weights. Psychometric parallelism, one of these conditions, has been suggested as a necessary and sufficient condition for equal-weights aggregation. The purpose of this study is to investigate the effect of psychometric parallelism among predictors on the efficiency of equal weights and least squares weights. Target correlation matrices with 10,000 cases were simulated so that the matrices had varying degrees of psychometric parallelism. Five hundred samples at each of six observation-to-predictor ratios (5/1, 10/1, 20/1, 30/1, 40/1, and 50/1) were drawn from each population. Efficiency is interpreted as the accuracy and the predictive power produced by the weighting methods. Accuracy is defined by the deviation between the population R² and the sample R²; predictive power refers to the population cross-validated R² and the population mean square error of prediction. The findings indicate there is no statistically significant relationship between the level of psychometric parallelism and the accuracy of least squares weights. In contrast, the correlation between the level of psychometric parallelism and the accuracy of equal weights is significantly negative. Under different conditions, the minimum p value of χ² for testing psychometric parallelism among predictors must also differ in order for equal weights to prove more powerful than least squares weights: the higher the number of predictors, the higher the minimum p value; the higher the ratio of observations to predictors, the higher the minimum p value; and the higher the magnitude of intercorrelations among predictors, the lower the minimum p value. This study demonstrates that the most frequently used levels of significance, 0.05 and 0.01, are no longer the only p values for testing the null hypotheses of psychometric parallelism among predictors when replacing least squares weights ... (A brief code sketch comparing the two weighting schemes follows this record.)
Date: May 1996
Creator: Zhang, Desheng
Partner: UNT Libraries
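
A minimal sketch of the efficiency comparison: least squares weights are estimated in a small calibration sample and pitted against equal (unit) weights in a large holdout that approximates the population cross-validated R². The data-generating model, with roughly parallel predictors, is an illustrative assumption:

```python
# Sketch of equal weights vs. least squares weights under near-parallel predictors.
import numpy as np

rng = np.random.default_rng(29)
def make_data(n):
    f = rng.normal(size=(n, 1))
    X = f + rng.normal(0, 1, (n, 4))          # near-parallel predictors
    y = f[:, 0] + rng.normal(0, 1, n)
    return X, y

Xc, yc = make_data(50)                        # small calibration sample (~10/1 ratio)
Xv, yv = make_data(100_000)                   # quasi-population holdout

beta = np.linalg.lstsq(np.column_stack([np.ones(len(yc)), Xc]), yc, rcond=None)[0]
pred_ols = np.column_stack([np.ones(len(yv)), Xv]) @ beta
pred_eq = Xv.mean(axis=1)                     # equal (unit) weights

r2 = lambda yhat: np.corrcoef(yhat, yv)[0, 1] ** 2
print(f"cross-validated R^2, OLS weights:   {r2(pred_ols):.3f}")
print(f"cross-validated R^2, equal weights: {r2(pred_eq):.3f}")
```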

A Comparison of IRT and Rasch Procedures in a Mixed-Item Format Test

Description: This study investigated the effects of test length (10, 20, and 30 items), scoring schema (proportion of dichotomous and polytomous scoring), and item analysis model (IRT and Rasch) on the ability estimates, test information levels, and optimization criteria of mixed item format tests. Polytomous item responses to 30 items for 1,000 examinees were simulated using the generalized partial-credit model and SAS software. Portions of the data were re-coded dichotomously over 11 structured proportions to create 33 sets of test responses, including mixed item format tests. MULTILOG software was used to calculate the examinee ability estimates, standard errors, item and test information, reliability, and fit indices. A comparison of IRT and Rasch item analysis procedures was made in SPSS across ability estimates and standard errors of ability estimates using a 3 x 11 x 2 fixed factorial ANOVA; effect sizes and power were reported for each procedure, and Scheffé post hoc procedures were conducted on significant factors. Test information was analyzed and compared across the range of ability levels for all 66 design combinations. The results indicated that both test length and the proportion of items scored polytomously had a significant impact on the amount of test information produced by mixed item format tests. Generally, tests with 100% of the items scored polytomously produced the highest overall information. This seemed to be especially true for examinees with lower ability estimates. Optimality comparisons were made between IRT and Rasch procedures based on standard error rates for the ability estimates, marginal reliabilities, and fit indices (-2LL). The only significant differences reported involved the standard error rates for both the IRT and Rasch procedures; this result must be viewed in light of the fact that the effect size reported was negligible. Optimality was found to be highest when longer tests and higher proportions of polytomous ... (A brief code sketch of generalized partial-credit data generation follows this record.)
Date: August 2003
Creator: Kinsey, Tari L.
Partner: UNT Libraries
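
A minimal sketch of simulating polytomous responses under the generalized partial-credit model, the data-generating model named above; the item parameters and sample size are illustrative, and the MULTILOG analyses are not reproduced:

```python
# Sketch of GPCM data generation: category probabilities are built from
# cumulative step differences a * (theta - b_v), then sampled per examinee.
import numpy as np

rng = np.random.default_rng(31)

def gpcm_probs(theta, a, steps):
    # cumulative sums of a*(theta - b_v); leading 0 for category 0
    z = np.concatenate([[0.0], np.cumsum(a * (theta - steps))])
    ez = np.exp(z - z.max())                  # numerically stabilized softmax
    return ez / ez.sum()

theta = rng.normal(0, 1, 1000)                # examinee abilities
a = 1.1                                       # item discrimination (illustrative)
steps = np.array([-0.7, 0.1, 0.9])            # three step difficulties -> 4 categories

responses = np.array([rng.choice(4, p=gpcm_probs(t, a, steps)) for t in theta])
print(np.bincount(responses, minlength=4))    # category frequencies
```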