Search Results

Establishing the utility of a classroom effectiveness index as a teacher accountability system.
How to identify effective teachers who improve student achievement despite diverse student populations and school contexts is an ongoing discussion in public education. The need to show communities and parents how well teachers and schools improve student learning has led districts and states to seek a fair, equitable, and valid measure of student growth based on student achievement. This study investigated a two-stage hierarchical model for estimating teacher effects on student achievement. This measure was termed a Classroom Effectiveness Index (CEI). Consistency of this model over time, outlier influences on individual CEIs, variance among CEIs across four years, and correlations of second-stage student residuals with first-stage student residuals were analyzed. The statistical analysis used four years of student residual data from a state-mandated mathematics assessment (n = 7,086) and a state-mandated reading assessment (n = 7,572), aggregated by teacher. The study yielded the following results. Four years of district grand slopes and grand intercepts were analyzed for consistency over time. Repeated-measures analyses of grand slopes and intercepts in mathematics were statistically significant at the .01 level; repeated-measures analyses of grand slopes and intercepts in reading were not statistically significant. The analyses therefore indicated consistent results over time for reading but not for mathematics. Data were also analyzed for outlier effects. Nineteen statistically significant outliers were identified among 15,378 student residuals; however, the impact on individual teachers was extreme in eight of the 19 cases, so further study is indicated. Subsets of teachers in the same assignment at the same school for four consecutive years and for three consecutive years indicated that CEIs were stable over time, with no statistically significant differences in either mathematics or reading.
Correlations between Level One student residuals and HLM residuals were statistically significant in reading and in mathematics. This implied that the second stage of …
Ability Estimation Under Different Item Parameterization and Scoring Models
A Monte Carlo simulation study investigated the effects of scoring format, item parameterization, threshold configuration, and prior ability distribution on the accuracy of ability estimation under various IRT models. Item response data on 30 items from 1,000 examinees were simulated using known item parameters and abilities. The item response data sets were submitted to seven dichotomous or polytomous IRT models with different item parameterizations to estimate examinee ability. The accuracy of ability estimation for a given IRT model was assessed by the recovery rate and the root mean square error. The results indicated that the polytomous models produced more accurate ability estimates than the dichotomous models under all combinations of research conditions, as indicated by higher recovery rates and lower root mean square errors. For the item parameterization models, the one-parameter model outperformed the two-parameter and three-parameter models under all research conditions. Among the polytomous models, the partial credit model produced more accurate ability estimates than the other three polytomous models. The nominal categories model performed better than the generalized partial credit model and the multiple-choice model, with the multiple-choice model the least accurate. The results further indicated that certain prior ability distributions affected the accuracy of ability estimation; however, no clear ordering of accuracy among the four prior distribution groups emerged, due to an interaction between prior ability distribution and threshold configuration. The recovery rate was lower when the test items had categories with unequal threshold distances that clustered at one end of the ability/difficulty continuum and were administered to a sample of examinees whose population ability distribution was skewed toward the same end of the continuum.
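Accuracy criteria like those above are straightforward to compute. A minimal sketch follows; note that the abstract does not define "recovery rate," so operationalizing it as the Pearson correlation between true and estimated abilities is an assumption (a common choice in parameter-recovery studies).

```python
import numpy as np

def rmse(theta_true, theta_hat):
    """Root mean square error between true and estimated abilities."""
    theta_true, theta_hat = np.asarray(theta_true, float), np.asarray(theta_hat, float)
    return float(np.sqrt(np.mean((theta_hat - theta_true) ** 2)))

def recovery_rate(theta_true, theta_hat):
    """Pearson correlation between true and estimated abilities
    (an assumed operationalization of 'recovery rate')."""
    return float(np.corrcoef(theta_true, theta_hat)[0, 1])

# Illustrative data: estimates equal to true abilities plus noise
rng = np.random.default_rng(0)
theta = rng.standard_normal(1000)
theta_hat = theta + 0.3 * rng.standard_normal(1000)
```

Higher recovery rates and lower RMSE values together indicate more accurate estimation, which is exactly how the models above were ranked.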
The Supply and Demand of Physician Assistants in the United States: A Trend Analysis
The supply of non-physician clinicians (NPCs), such as physician assistants (PAs), could significantly influence demand requirements in medical workforce projections. This study predicts the supply of and demand for PAs from 2006 to 2020. The PA supply model utilized the number of certified PAs, educational capacity (at 10% and 25% expansion) with assumed attrition rates, and retirement assumptions. Gross domestic product (GDP), chained in 2000 dollars, and US population were utilized in a transfer-function trend analysis with the number of PAs as the dependent variable for the PA demand model. Historical analyses revealed strong correlations of GDP and US population with the number of PAs. The number of currently certified PAs represents approximately 75% of the projected demand. At 10% growth, the supply and demand equilibrium for PAs will be reached in 2012; a 25% increase in new entrants causes equilibrium to be met one year earlier. Robust application trends in PA education enrollment (2.2 applicants per seat, the same ratio as for allopathic medical school applicants) support the predicted increases. However, other implications for PA educational institutions include recruitment and retention of qualified faculty, clinical site maintenance, and diversity of matriculants. Further research on factors affecting the supply of and demand for PAs is needed in the areas of retirement rates, gender, and lifestyle influences. Specialization trends and visit intensity levels are potential variables.
Comparisons of Improvement-Over-Chance Effect Sizes for Two Groups Under Variance Heterogeneity and Prior Probabilities
The distributional properties of improvement-over-chance (I) effect sizes derived from linear and quadratic predictive discriminant analysis (PDA) and from logistic regression analysis (LRA) for two-group univariate classification were examined. Data were generated under varying levels of four data conditions: population separation, variance pattern, sample size, and prior probabilities. None of the indices provided acceptable estimates of effect for all the conditions examined, and there were only a small number of conditions under which both accuracy and precision were acceptable. The results indicate that the choice of method is primarily determined by variance pattern and prior probabilities. Under variance homogeneity, any of the methods may be recommended; however, LRA is recommended when priors are equal or extreme, and linear PDA is recommended when priors are moderate. Under variance heterogeneity, selecting a recommended method is more complex, and in many cases more than one method could be used appropriately.
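The improvement-over-chance index studied above follows Huberty's general form, I = (observed hit rate − chance hit rate) / (1 − chance hit rate). A minimal sketch, assuming chance classification means always assigning cases to the group with the larger prior probability (one common baseline; the dissertation may use a different one):

```python
def improvement_over_chance(hit_rate, priors):
    """Huberty-style I index: proportional reduction in classification
    error relative to a chance rule that always picks the most probable group."""
    chance = max(priors)  # assumed chance baseline: the larger prior
    return (hit_rate - chance) / (1.0 - chance)

# Example: classifying 80% correctly with equal priors (chance = 50%)
# removes 60% of the error a chance rule would make.
```

I = 0 means the classifier does no better than chance, and I = 1 means perfect classification, which is why its distributional behavior under unequal priors and heterogeneous variances matters for interpretation.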
A Comparison of IRT and Rasch Procedures in a Mixed-Item Format Test
This study investigated the effects of test length (10, 20, and 30 items), scoring schema (proportion of dichotomous and polytomous scoring), and item analysis model (IRT and Rasch) on the ability estimates, test information levels, and optimization criteria of mixed-item format tests. Polytomous item responses to 30 items for 1,000 examinees were simulated using the generalized partial credit model and SAS software. Portions of the data were re-coded dichotomously over 11 structured proportions to create 33 sets of test responses, including mixed-item format tests. MULTILOG software was used to calculate the examinee ability estimates, standard errors, item and test information, reliability, and fit indices. A comparison of IRT and Rasch item analysis procedures was made using SPSS software across ability estimates and standard errors of ability estimates using a 3 x 11 x 2 fixed factorial ANOVA. Effect sizes and power were reported for each procedure, and Scheffé post hoc procedures were conducted on significant factors. Test information was analyzed and compared across the range of ability levels for all 66 design combinations. The results indicated that both test length and the proportion of items scored polytomously had a significant impact on the amount of test information produced by mixed-item format tests. Generally, tests with 100% of the items scored polytomously produced the highest overall information; this seemed especially true for examinees with lower ability estimates. Optimality comparisons were made between IRT and Rasch procedures based on standard error rates for the ability estimates, marginal reliabilities, and fit indices (-2LL). The only significant differences reported involved the standard error rates for both the IRT and Rasch procedures, a result that must be viewed in light of the negligible effect size. Optimality was found to be highest when longer tests and higher proportions of polytomous …
A comparison of traditional and IRT factor analysis.
This study investigated the item parameter recovery of two methods of factor analysis. The methods researched were a traditional factor analysis of tetrachoric correlation coefficients and an IRT approach to factor analysis that utilizes marginal maximum likelihood estimation with an EM algorithm (MMLE-EM). Dichotomous item response data were generated under the 2-parameter normal ogive model (2PNOM) using PARDSIM software. Examinee abilities were sampled from both the standard normal and uniform distributions. True item discrimination, a, was normally distributed with a mean of .75 and a standard deviation of .10; true item difficulty, b, was specified as uniform [-2, 2]. The two distributions of abilities were completely crossed with three test lengths (n = 30, 60, and 100) and three sample sizes (N = 50, 500, and 1,000). Each of the 18 conditions was replicated 5 times, resulting in 90 datasets. PRELIS software was used to conduct a traditional factor analysis on the tetrachoric correlations, and the IRT approach to factor analysis was conducted using BILOG 3 software. Parameter recovery was evaluated in terms of root mean square error, average signed bias, and Pearson correlations between estimated and true item parameters. ANOVAs were conducted to identify systematic differences in the error indices. Based on many of the indices, it appears the IRT approach to factor analysis recovers item parameters better than the traditional approach studied. Future research should compare other methods of factor analysis to MMLE-EM under various non-normal distributions of abilities.
A Quantitative Modeling Approach to Examining High School, Pre-Admission, Program, Certification and Career Choice Variables in Undergraduate Teacher Preparation Programs
The purpose of this study was to examine if there is an association between effective supervision and communication competence in divisions of student affairs at Christian higher education institutions. The investigation examined chief student affairs officers (CSAOs) and their direct reports at 45 institutions across the United States using the Synergistic Supervision Scale and the Communication Competence Questionnaire. A positive significant association was found between the direct report's evaluation of the CSAO's level of synergistic supervision and the direct report's evaluation of the CSAO's level of communication competence. The findings of this study will advance the supervision and communication competence literature while informing practice for student affairs professionals. This study provides a foundation of research in the context specific field of student affairs where there has been a dearth of literature regarding effective supervision. This study can be used as a platform for future research to further the understanding of characteristics that define effective supervision.
Bias and Precision of the Squared Canonical Correlation Coefficient under Nonnormal Data Conditions
This dissertation (a) investigated the degree to which the squared canonical correlation coefficient is biased in multivariate nonnormal distributions and (b) identified formulae that adjust the squared canonical correlation coefficient (Rc2) such that it most closely approximates the true population effect under normal and nonnormal data conditions. Five conditions were manipulated in a fully-crossed design to determine the degree of bias associated with Rc2: distribution shape, variable sets, sample size to variable ratios, and within- and between-set correlations. Very few of the condition combinations produced acceptable amounts of bias in Rc2, but those that did were all found with first-function results. The sample size to variable ratio (n:v) was determined to have the greatest impact on the bias associated with Rc2 for the first, second, and third functions. The variable set condition also affected the accuracy of Rc2, but for the second and third functions only. The kurtosis levels of the marginal distributions (b2) and the between- and within-set correlations demonstrated little or no impact on the bias associated with Rc2. Therefore, it is recommended that researchers use n:v ratios of at least 10:1 in canonical analyses, although greater n:v ratios have the potential to produce even less bias. Furthermore, because it was determined that b2 did not impact the accuracy of Rc2, one can be somewhat confident that, with marginal distributions possessing homogeneous kurtosis levels ranging anywhere from -1 to 8, Rc2 will likely be as accurate as that resulting from a normal distribution. Because the majority of Rc2 estimates were extremely biased, it is recommended that all Rc2 effects, regardless of the function from which they result, be adjusted using an appropriate adjustment formula. If no rationale exists for the use of another formula, the Rozeboom-2 would likely be a safe choice given that it produced the greatest …
Investigating the hypothesized factor structure of the Noel-Levitz Student Satisfaction Inventory: A study of the student satisfaction construct.
College student satisfaction is a concept that has become more prevalent in higher education research journals. Little attention has been given to the psychometric properties of previous instrumentation, and few studies have investigated the structure of current satisfaction instrumentation. This dissertation (a) investigated the tenability of the theoretical dimensional structure of the Noel-Levitz Student Satisfaction Inventory™ (SSI), (b) investigated an alternative factor structure using exploratory factor analyses (EFA), and (c) used multiple-group CFA procedures to determine whether an alternative SSI factor structure would be invariant across three demographic variables: gender (men/women), race/ethnicity (Caucasian/Other), and undergraduate classification level (lower level/upper level). For this study, there was little evidence for the multidimensional structure of the SSI. A single factor, termed General Satisfaction with College, was the lone unidimensional construct that emerged from the iterative CFA and EFA procedures. A revised 20-item model was developed, and a series of multigroup CFAs were used to test measurement invariance for three variables: student gender, race/ethnicity, and class level. No violations of measurement invariance were noted for the revised 20-item model: results of the invariance tests indicated equivalence across the comparison groups for (a) the number of factors, (b) the pattern of indicator-factor loadings, (c) the factor loadings, and (d) the item error variances. Because little attention has been given to the psychometric properties of satisfaction instrumentation, it is recommended that research continue on the SSI and on any additional instrumentation developed to measure student satisfaction. It is possible that invariance issues may explain a portion of the inconsistent findings noted in the review of literature.
Although measurement analyses are time-consuming, they are essential for understanding the psychometric properties of a set of scores obtained from a survey or any other form of assessment instrument.
Stratified item selection and exposure control in unidimensional adaptive testing in the presence of two-dimensional data.
It is not uncommon to use unidimensional item response theory (IRT) models to estimate ability with multidimensional data. It is therefore important to understand the implications of summarizing multiple dimensions of ability into a single parameter estimate, especially if effects are confounded when applied to computerized adaptive testing (CAT). Previous studies have investigated the effects of different IRT models and ability estimators by manipulating the relationships between item and person parameters; however, in all cases the maximum information criterion was used as the item selection method. Because maximum information is heavily influenced by the item discrimination parameter, investigating a-stratified item selection methods is tenable. The current Monte Carlo study compared maximum information, a-stratification, and a-stratification with b-blocking item selection methods, both alone and in combination with the Sympson-Hetter exposure control strategy. The six testing conditions were conditioned on three levels of interdimensional item difficulty correlations and four levels of interdimensional examinee ability correlations. Measures of fidelity, estimation bias, error, and item usage were used to evaluate the effectiveness of the methods. Results showed that either stratified item selection strategy is warranted if the goal is to obtain precise estimates of ability when using unidimensional CAT in the presence of two-dimensional data. If the goal also includes limiting bias of the estimate, Sympson-Hetter exposure control should be included. Results also confirmed that Sympson-Hetter is effective in optimizing item pool usage. Given these results, existing unidimensional CAT implementations might consider employing a stratified item selection routine plus Sympson-Hetter exposure control rather than recalibrating the item pool under a multidimensional model.
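The a-stratified selection idea above can be sketched briefly: the item pool is partitioned into strata of ascending discrimination, and early test stages draw from low-a strata, saving highly discriminating items for later stages when the ability estimate is more precise. The stratum-per-stage mapping and nearest-difficulty rule below are illustrative simplifications, not the study's exact design.

```python
import numpy as np

def a_stratified_strata(a, n_strata):
    """Partition item indices into equal-sized strata of ascending discrimination."""
    order = np.argsort(a)                 # low-a items come first
    return np.array_split(order, n_strata)

def select_item(strata, stage, b, theta, administered):
    """Within the stratum for the current stage, pick the unused item
    whose difficulty b is closest to the current ability estimate theta."""
    pool = [i for i in strata[stage] if i not in administered]
    return min(pool, key=lambda i: abs(b[i] - theta))

# Illustrative pool of six items with discriminations a and difficulties b
a = np.array([1.5, 0.4, 0.9, 2.0, 0.6, 1.2])
b = np.array([0.0, -1.0, 0.5, 1.0, -0.5, 0.2])
strata = a_stratified_strata(a, n_strata=2)
```

Because maximum-information selection would repeatedly favor the high-a items, this scheme spreads exposure across the pool, which is why it pairs naturally with Sympson-Hetter control.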
Determination of the Optimal Number of Strata for Bias Reduction in Propensity Score Matching.
Previous research implementing stratification on the propensity score has generally relied on five strata, based on prior theoretical groundwork and minimal empirical evidence as to the suitability of quintiles to adequately reduce bias in all cases and across all sample sizes. This study investigates bias reduction across varying numbers of strata and sample sizes via a large-scale simulation to determine the adequacy of quintiles for bias reduction under all conditions. Sample sizes ranged from 100 to 50,000 and strata from 3 to 20. Both the percentage of bias reduction and the standardized selection bias were examined. The results show that, while the particular covariates in the simulation met certain criteria with five strata, greater bias reduction could be achieved by increasing the number of strata, especially with larger sample sizes. Simulation code written in R is included.
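The dissertation's simulation code is in R; a minimal Python analogue of quantile-based stratification shows the mechanism being evaluated. The data-generating values below are illustrative assumptions, not the study's design: a confounder drives both treatment and outcome, and averaging treated-control differences within propensity-score strata strips out most of the confounding bias relative to the raw group difference.

```python
import numpy as np

def stratified_effect(y, treat, pscore, n_strata=5):
    """Stratum-size-weighted average of within-stratum treated-control
    mean differences, with strata cut at propensity-score quantiles."""
    edges = np.quantile(pscore, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, pscore, side="right") - 1,
                     0, n_strata - 1)
    diffs, weights = [], []
    for s in range(n_strata):
        m = strata == s
        t, c = m & (treat == 1), m & (treat == 0)
        if t.any() and c.any():          # need both groups present
            diffs.append(y[t].mean() - y[c].mean())
            weights.append(m.sum())
    return float(np.average(diffs, weights=weights))

# Illustrative confounded data: covariate x raises both treatment odds and y
rng = np.random.default_rng(1)
n = 20_000
x = rng.standard_normal(n)
p = 1.0 / (1.0 + np.exp(-x))                     # true propensity score
treat = (rng.random(n) < p).astype(int)
y = 2.0 * treat + x + rng.standard_normal(n)     # true effect = 2
raw = y[treat == 1].mean() - y[treat == 0].mean()
adj = stratified_effect(y, treat, p, n_strata=5)
```

With five strata the raw difference's confounding bias is largely removed; increasing `n_strata` further, as the study's results suggest, removes more of the residual within-stratum bias when the sample is large enough to fill the strata.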
Attenuation of the Squared Canonical Correlation Coefficient Under Varying Estimates of Score Reliability
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability. Monte Carlo simulation methodology was used to fulfill this purpose. Initially, data populations with various manipulated conditions were generated (N = 100,000). Subsequently, 500 random samples were drawn with replacement from each population, and the data were subjected to canonical correlation analyses. The canonical correlation results were then analyzed using descriptive statistics and an ANOVA design to determine under which condition(s) the squared canonical correlation coefficient was most attenuated relative to population Rc2 values. This information was used to determine what effect, if any, the different conditions considered in this study had on Rc2. The results from this Monte Carlo investigation clearly illustrated the importance of score reliability when interpreting study results. As evidenced by the outcomes presented, the more measurement error (lower reliability) present in the variables included in an analysis, the more attenuation experienced by the effect size(s) produced in the analysis, in this case Rc2. These results also demonstrated the roles that between- and within-set correlation, variable set size, and sample size played in the attenuation levels of the squared canonical correlation coefficient.
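In the bivariate case, the attenuation mechanism the study examines follows the classical Spearman relation r_observed = r_true · √(rxx · ryy). The canonical-correlation analogue is more involved, so the sketch below illustrates only the principle that lower score reliability shrinks observed effects:

```python
import math

def attenuated_r(r_true, rel_x, rel_y):
    """Observed correlation after measurement error (Spearman attenuation)."""
    return r_true * math.sqrt(rel_x * rel_y)

def disattenuated_r(r_obs, rel_x, rel_y):
    """Classical correction: estimate the true-score correlation."""
    return r_obs / math.sqrt(rel_x * rel_y)

# With true r = .60 and reliabilities of .70 on both sides, the observed
# correlation shrinks to .60 * .70 = .42; squaring (as with Rc2) magnifies
# the loss: .36 drops to about .18.
```

This is why the study's low-reliability conditions produced the most attenuated Rc2 values: the shrinkage enters multiplicatively through every measured variable in both sets.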
A Hierarchical Regression Analysis of the Relationship Between Blog Reading, Online Political Activity, and Voting During the 2008 Presidential Campaign
The advent of the Internet has increased access to information and affected many aspects of life, including politics. The present study utilized Pew Internet & American Life survey data from the November 2008 presidential election period to investigate the degree to which political blog reading predicted online political discussion, online political participation, whether or not a person voted, and voting choice, over and above the prediction that could be explained by the demographic measures of age, education level, gender, income, marital status, race/ethnicity, and region. Ordinary least squares hierarchical regression revealed that political blog reading was positively and statistically significantly related to online political discussion and online political participation. Hierarchical logistic regression analysis indicated that the odds of a political blog reader voting were 1.98 times the odds of a nonreader voting, but vote choice was not predicted by reading political blogs. These results are interpreted within the uses and gratifications framework and the understanding that blogs add an interpersonal communication aspect to a mass medium. As more people use blogs and the nature of the blog-reading audience shifts, continuing to track and describe the blog audience with valid measures will be important for researchers and practitioners alike. The subsequent potential effects of political blog reading on engagement, discussion, and participation will be important to understand, as these effects could impact the political landscape of this country and, therefore, the world.
Structural Validity and Item Functioning of the LoTi Digital-Age Survey.
The present study examined the structural construct validity of the LoTi Digital-Age Survey, a measure of teacher instructional practices with technology in the classroom. Teacher responses (N = 2,840) from across the United States were used to assess the factor structure of the instrument using both exploratory and confirmatory analyses. Parallel analysis suggested retaining a five-factor solution, whereas the MAP test suggested retaining a three-factor solution. Both analyses (EFA and CFA) indicated that changes need to be made to the current factor structure of the survey. The last two factors were composed of items that did not cover or accurately measure the content of the latent trait. Problematic items, such as items with cross-loadings, were discussed, and suggestions were provided to improve the factor structure, items, and scale of the survey.
Spatial Ability, Motivation, and Attitude of Students as Related to Science Achievement
Understanding student achievement in science is important given the increasing reliance of the U.S. economy on math, science, and technology-related fields, despite the declining number of youth seeking college degrees and careers in math and science. A series of structural equation models were tested using scores from a statewide science exam for 276 students from a suburban north Texas public school district at the end of their 5th grade year, together with the latent variables of spatial ability, motivation to learn science, and science-related attitude. Spatial ability was tested as a mediating variable for motivation and attitude. Although spatial ability had statistically significant regression coefficients with motivation and attitude, it was found to be the sole statistically significant predictor of science achievement for these students, explaining 23.1% of the variance in science scores.
Missing Data Treatments at the Second Level of Hierarchical Linear Models
The current study evaluated the performance of traditional versus modern missing data treatments (MDTs) in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 study conditions. Variables manipulated in the analysis included (a) the number of Level-2 variables with missing data, (b) the percentage of missing data, and (c) the Level-2 sample size. Listwise deletion outperformed all other methods across all study conditions in the estimation of both fixed effects and variance components. The model-based procedures evaluated, EM and MI, outperformed the other traditional MDTs, mean and group mean substitution, in the estimation of the variance components, and outperformed mean substitution in the estimation of the fixed effects as well. Group mean substitution performed well in the estimation of the fixed effects but poorly in the estimation of the variance components. Data in the current study were modeled as missing completely at random (MCAR). Further research is suggested to compare the performance of model-based versus traditional MDTs, specifically listwise deletion, when data are missing at random (MAR), a condition more likely to occur in practical research settings.
Parent Involvement and Science Achievement: A Latent Growth Curve Analysis
This study examined science achievement growth across elementary and middle school and parent school involvement using the Early Childhood Longitudinal Study, Kindergarten Class of 1998–1999 (ECLS-K). The ECLS-K is a nationally representative kindergarten cohort of students from public and private schools who attended full-day or half-day kindergarten in 1998–1999. The present study's sample (N = 8,070) was based on students who had a sampling weight available in the public-use data file. Students were assessed in science achievement at third, fifth, and eighth grades, and their parents were surveyed at the same time points. Analyses using latent growth curve modeling with time-invariant and time-varying covariates in an SEM framework revealed a positive relationship between science achievement and parent involvement at eighth grade. Furthermore, there were gender and racial/ethnic differences in parents' school involvement as a predictor of science achievement. Findings indicated that students with lower initial science achievement scores had a faster rate of growth across time, and the achievement gap between low and high achievers in earth, space, and life sciences lessened from elementary to middle school. Parents' involvement with school usually tapers off after elementary school, but because parent school involvement was a significant predictor of eighth-grade science achievement, later school involvement may need to be supported and better implemented in secondary schooling.
The Use Of Effect Size Estimates To Evaluate Covariate Selection, Group Separation, And Sensitivity To Hidden Bias In Propensity Score Matching.
Covariate quality in propensity score matching has been primarily theory driven, with a general aversion to interpreting group prediction. However, effect sizes are well supported in the literature and may help to inform the method; specifically, the I index can be used as a measure of effect size in logistic regression to evaluate group prediction. Accordingly, simulation was used to create 35 conditions of I, initial bias, and sample size to examine statistical differences in (a) post-matching bias reduction and (b) treatment effect sensitivity. The results of this study suggest these conditions do not explain statistical differences in the percent bias reduction of treatment likelihood after matching. However, I and sample size do explain statistical differences in treatment effect sensitivity: treatment effect sensitivity was lower as sample size and I increased, although this relationship was mitigated within smaller sample sizes as I increased above .50.
An Investigation of the Effect of Violating the Assumption of Homogeneity of Regression Slopes in the Analysis of Covariance Model upon the F-Statistic
The study sought to determine the effect on the F-statistic of violating the assumption of homogeneity of regression slopes in the one-way, fixed-effects analysis of covariance model. A Monte Carlo simulation technique was employed to vary the degree of heterogeneity of regression slopes, with varied sample sizes within experiments, to determine the effect of such conditions. One hundred eighty-three simulations were used.
Convergent Validity of Variables Residualized By a Single Covariate: the Role of Correlated Error in Populations and Samples
This study examined the bias and precision of four residualized variable validity estimates (C0, C1, C2, C3) across a number of study conditions. Validity estimates that considered measurement error, correlations among error scores, and correlations between error scores and true scores (C3) performed the best, yielding no estimates that were practically significantly different from their respective population parameters across study conditions. Validity estimates that considered measurement error and correlations among error scores (C2) did a good job of yielding unbiased, valid, and precise results; only in a select number of study conditions were C2 estimates unable to be computed or found to produce results with sufficient variance to affect the interpretation of results. Validity estimates based on observed scores (C0) fared well in producing valid, precise, and unbiased results. Validity estimates based on observed scores corrected only for measurement error (C1) performed the worst: not only did C1 fail to reliably produce estimates even when the level of modeled correlated error was low, it also produced values higher than the theoretical limit of 1.0 across a number of study conditions. Estimates based on C1 also produced the greatest number of conditions that were practically significantly different from their population parameters.
An Empirical Comparison of Random Number Generators: Period, Structure, Correlation, Density, and Efficiency
Random number generators (RNGs) are widely used in conducting Monte Carlo simulation studies, which are important in the field of statistics for comparing power, mean differences, or distribution shapes between statistical approaches. Statistical results, however, may differ when different random number generators are used, and older methods have often been used blindly with no understanding of their limitations. Many random functions supplied with computers today have been found to be comparatively unsatisfactory. In this study, five multiplicative linear congruential generators (MLCGs) were chosen from the following statistical packages: RANDU (IBM), RNUN (IMSL), RANUNI (SAS), UNIFORM (SPSS), and RANDOM (BMDP). Using a personal computer (PC), an empirical investigation was performed using five criteria: period length before repeating random numbers, distribution shape, correlation between adjacent numbers, density of distributions, and the accuracy of the generator's output when transformed to a normal distribution. All RNG FORTRAN programs were rewritten in Pascal, a more efficient language for the PC, and sets of random numbers were generated using different starting values. A good RNG should have the following properties: a sufficiently long period; freedom from structured patterns in its output; independence between random number sequences; a random and uniform distribution; and a good approximation when transformed to the normal distribution. Findings in this study suggested that the above five criteria need to be examined when conducting a simulation study, with large enough sample sizes and various starting values, because the RNG selected can affect the statistical results. Furthermore, for purposes of reproducibility and validity, a study should indicate the source of the RNG, the type of RNG used, evaluation results for the RNG, and any pertinent information related to the computer used in the study.
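RANDU, the IBM generator listed above, is the canonical example of why such testing matters. It is the MLCG x(n+1) = 65539·x(n) mod 2^31, and because 65539 = 2^16 + 3, every triple of successive outputs satisfies x(k+2) = 6·x(k+1) − 9·x(k) (mod 2^31), so all triples fall on just 15 planes in the unit cube. A short sketch reproducing the generator and verifying that lattice defect:

```python
def randu(seed, n):
    """IBM's RANDU: multiplicative LCG with multiplier 65539, modulus 2^31."""
    m, a = 2 ** 31, 65539
    out, x = [], seed
    for _ in range(n):
        x = (a * x) % m
        out.append(x)
    return out

xs = randu(seed=1, n=1000)
# RANDU's famous defect: successive triples are linearly dependent mod 2^31,
# i.e. 6*x[k+1] - 9*x[k] - x[k+2] == 0 (mod 2^31) for every k.
defects = [(6 * xs[i + 1] - 9 * xs[i] - xs[i + 2]) % 2 ** 31
           for i in range(len(xs) - 2)]
```

A criterion like "correlation between adjacent numbers" can miss this: RANDU's pairwise correlations look acceptable, and the structure only appears when triples are examined, which is why multiple criteria and multiple starting values are needed.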
Recommendations for future research are suggested in the area of other RNGs and methods not used in this study, such as additive, combined, …
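The MLCG family evaluated above can be sketched in a few lines (an illustration, not the study's FORTRAN or Pascal programs); the constants below are IBM RANDU's, a generator whose failure of the structure and correlation criteria is well documented.

```python
# Multiplicative linear congruential generator (MLCG): x_{n+1} = a*x_n mod m.
# Constants are IBM RANDU's (a = 65539, m = 2**31).

def mlcg(seed, a=65539, m=2**31):
    x = seed
    while True:
        x = (a * x) % m
        yield x

gen = mlcg(1)
x1, x2, x3 = next(gen), next(gen), next(gen)

# RANDU's structural flaw: since 65539**2 ≡ 6*65539 - 9 (mod 2**31), every
# three successive values obey a fixed linear relation, so scaled triples
# fall on only 15 planes in the unit cube.
assert x3 == (6 * x2 - 9 * x1) % 2**31
```

Plotting successive triples (x_k, x_{k+1}, x_{k+2}) scaled to [0, 1) makes the planes visible, which is exactly the kind of density/structure check the abstract recommends before trusting a generator.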
A Comparison of Two Differential Item Functioning Detection Methods: Logistic Regression and an Analysis of Variance Approach Using Rasch Estimation
Differential item functioning (DIF) detection rates were examined for the logistic regression and analysis of variance (ANOVA) DIF detection methods. The methods were applied to simulated data sets of varying test length (20, 40, and 60 items) and sample size (200, 400, and 600 examinees) for both equal and unequal underlying ability between groups as well as for both fixed and varying item discrimination parameters. Each test contained 5% uniform DIF items, 5% non-uniform DIF items, and 5% combination DIF (simultaneous uniform and non-uniform DIF) items. The factors were completely crossed, and each experiment was replicated 100 times. For both methods and all DIF types, a test length of 20 was sufficient for satisfactory DIF detection. The detection rate increased significantly with sample size for each method. With the ANOVA DIF method and uniform DIF, there was a difference in detection rates between discrimination parameter types, which favored varying discrimination and decreased with increased sample size. The detection rate of non-uniform DIF using the ANOVA DIF method was higher with fixed discrimination parameters than with varying discrimination parameters when relative underlying ability was unequal. In the combination DIF case, there was a three-way interaction among the experimental factors discrimination type, relative ability, and sample size for both detection methods. The error rate for the ANOVA DIF detection method decreased as test length increased and increased as sample size increased. For both methods, the error rate was slightly higher with varying discrimination parameters than with fixed. For logistic regression, the error rate increased with sample size when relative underlying ability was unequal between groups. The logistic regression method detected uniform and non-uniform DIF at a higher rate than the ANOVA DIF method. Because the type of DIF present in real data is rarely known, the logistic regression method is recommended for …
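The logistic regression model behind this comparison can be sketched on simulated data (the sample size, effect size, and plain gradient-ascent fitting here are illustrative assumptions, not the study's design): an item is modeled as P(correct) = logistic(b0 + b1·ability + b2·group), and a non-zero group coefficient b2 signals uniform DIF.

```python
import math, random

random.seed(42)
n = 800
ability = [random.gauss(0, 1) for _ in range(n)]
group = [i % 2 for i in range(n)]              # 0 = reference, 1 = focal
b_true = (0.0, 1.2, 1.0)                       # one logit of uniform DIF

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulated responses to a single item.
y = [int(random.random() < logistic(b_true[0] + b_true[1]*a + b_true[2]*g))
     for a, g in zip(ability, group)]

# Fit by gradient ascent on the mean log-likelihood (illustrative fitting
# choice; any maximum-likelihood routine would do).
b = [0.0, 0.0, 0.0]
for _ in range(1500):
    g0 = g1 = g2 = 0.0
    for a, gr, yi in zip(ability, group, y):
        r = yi - logistic(b[0] + b[1]*a + b[2]*gr)
        g0 += r; g1 += r*a; g2 += r*gr
    b = [b[0] + 2.0*g0/n, b[1] + 2.0*g1/n, b[2] + 2.0*g2/n]
```

In practice b2 is judged with a Wald or likelihood-ratio test, and adding an ability-by-group product term extends the model to non-uniform DIF — the flexibility that favors logistic regression in the comparison above.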
A Comparison of Traditional Norming and Rasch Quick Norming Methods
The simplicity and ease of use of the Rasch procedure is a decided advantage. The test user needs only two numbers: the frequency of persons who answered each item correctly and the Rasch-calibrated item difficulty, usually a part of an existing item bank. Norms can be computed quickly for any specific group of interest. In addition, once the selected items from the calibrated bank are normed, any test, built from the item bank, is automatically norm-referenced. Thus, it was concluded that the Rasch quick norm procedure is a meaningful alternative to traditional classical true score norming for test users who desire normative data.
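The raw-score-to-measure step can be sketched as follows (the item difficulties are made-up values standing in for a calibrated bank): for each raw score r, solve sum_i P_i(theta) = r for the person measure theta with Newton's method.

```python
import math

# Made-up bank-calibrated item difficulties (logits) for a 7-item test.
difficulties = [-1.5, -0.8, -0.2, 0.0, 0.4, 0.9, 1.6]

def theta_for_score(r, items, iters=50):
    """Rasch person measure for raw score r, found with Newton's method."""
    t = 0.0
    for _ in range(iters):
        p = [1 / (1 + math.exp(-(t - d))) for d in items]
        f = sum(p) - r                         # expected minus observed score
        info = sum(pi * (1 - pi) for pi in p)  # derivative = test information
        t -= f / info
    return t

# Zero and perfect scores have no finite measure, so norm scores 1..6.
measures = {r: theta_for_score(r, difficulties) for r in range(1, 7)}
```

Percentile norms for any group of interest then come from tabulating that group's raw-score frequencies against this fixed score-to-measure table — which is why any test built from the calibrated bank is automatically norm-referenced.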
Outliers and Regression Models
The mitigation of outliers serves to increase the strength of a relationship between variables. This study defined outliers in three different ways and used five regression procedures to describe the effects of outliers on 50 data sets. This study also examined the relationship among the shape of the distribution, skewness, and outliers.
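A one-variable illustration (made-up data) of why outlier handling changes the story a regression tells:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

x = [1, 2, 3, 4, 5, 6]
y = [1.1, 2.0, 2.9, 4.2, 5.0, 6.1]     # roughly the line y = x
y_out = y[:-1] + [20.0]                 # one gross outlier in place of 6.1

slope_clean = ols_slope(x, y)           # near 1
slope_out = ols_slope(x, y_out)         # pulled far above 1 by a single point
```

Each way of defining outliers (distance in y, leverage in x, influence on the fit) flags different points, and regression procedures differ precisely in how much a point like the 20.0 above is allowed to move the estimates.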
The Generalization of the Logistic Discriminant Function Analysis and Mantel Score Test Procedures to Detection of Differential Testlet Functioning
Two procedures for detection of differential item functioning (DIF) for polytomous items were generalized to detection of differential testlet functioning (DTLF). The methods compared were the logistic discriminant function analysis procedure for uniform and non-uniform DTLF (LDFA-U and LDFA-N), and the Mantel score test procedure. Further analysis included comparison of results of DTLF analysis using the Mantel procedure with DIF analysis of individual testlet items using the Mantel-Haenszel (MH) procedure. Over 600 chi-squares were analyzed and compared for rejection of null hypotheses. Samples of 500, 1,000, and 2,000 were drawn by gender subgroups from the NELS:88 data set, which contains demographic and test data from over 25,000 eighth graders. Three types of testlets (totalling 29) from the NELS:88 test were analyzed for DTLF. The first type, the common passage testlet, followed the conventional testlet definition: items grouped together by a common reading passage, figure, or graph. The other two types were based upon common content and common process, as outlined in the NELS test specification.
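The Mantel-Haenszel step used for the item-level comparison can be sketched like this (all tables are made up): examinees are stratified by total score, each stratum yields a 2x2 group-by-correctness table, and the strata combine into one common odds ratio.

```python
# Each stratum: (a, b, c, d) =
# (reference correct, reference wrong, focal correct, focal wrong).
strata = [
    (30, 20, 28, 22),   # low scorers
    (40, 10, 38, 12),   # middle scorers
    (45,  5, 44,  6),   # high scorers
]

# Mantel-Haenszel common odds ratio across strata; 1.0 means no uniform DIF.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den
```

Values far from 1 in the same direction at every stratum indicate uniform DIF for the item; the testlet-level (DTLF) analyses aggregate over the items a common-passage, common-content, or common-process testlet contains.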
The Effect of Psychometric Parallelism among Predictors on the Efficiency of Equal Weights and Least Squares Weights in Multiple Regression
There are several conditions for applying equal weights as an alternative to least squares weights. Psychometric parallelism, one of the conditions, has been suggested as a necessary and sufficient condition for equal-weights aggregation. The purpose of this study is to investigate the effect of psychometric parallelism among predictors on the efficiency of equal weights and least squares weights. Target correlation matrices with 10,000 cases were simulated so that the matrices had varying degrees of psychometric parallelism. Five hundred samples at each of six observation-to-predictor ratios (5/1, 10/1, 20/1, 30/1, 40/1, and 50/1) were drawn from each population. Efficiency is interpreted as the accuracy and the predictive power estimated by the weighting methods. Accuracy is defined by the deviation between the population R² and the sample R². Predictive power refers to the population cross-validated R² and the population mean square error of prediction. The findings indicate there is no statistically significant relationship between the level of psychometric parallelism and the accuracy of least squares weights. In contrast, the correlation between the level of psychometric parallelism and the accuracy of equal weights is significantly negative. Under different conditions, the minimum p value of the χ² test of psychometric parallelism among predictors required for equal weights to outperform least squares weights also differs. The higher the number of predictors, the higher the minimum p value. The higher the ratio of observations to predictors, the higher the minimum p value. The higher the magnitude of intercorrelations among predictors, the lower the minimum p value. This study demonstrates that the most frequently used levels of significance, 0.05 and 0.01, are no longer the only p values for testing the null hypotheses of psychometric parallelism among predictors when replacing least squares weights …
Measurement Disturbance Effects on Rasch Fit Statistics and the Logit Residual Index
The effects of random guessing as a measurement disturbance on Rasch fit statistics (unweighted total, weighted total, and unweighted between-ability) and the Logit Residual Index (LRI) were examined through simulated data sets of varying sample sizes, test lengths, and distribution types. Three test lengths (25, 50, and 100), three sample sizes (25, 50, and 100), two item difficulty distributions (normal and uniform), and three levels of guessing (no guessing (0%), 25%, and 50%) were used in the simulations, resulting in 54 experimental conditions. The mean logit person ability for each experiment was +1. Each experimental condition was simulated once in an effort to approximate what could happen on the single administration of a four option per item multiple choice test to a group of relatively high ability persons. Previous research has shown that varying item and person parameters have no effect on Rasch fit statistics. Consequently, these parameters were used in the present study to establish realistic test conditions, but were not interpreted as effect factors in determining the results of this study.
A Comparison of Three Criteria Employed in the Selection of Regression Models Using Simulated and Real Data
Researchers who make predictions from educational data are interested in choosing the best regression model possible. Many criteria have been devised for choosing a full or restricted model, and also for selecting the best subset from an all-possible-subsets regression. The relative practical usefulness of three of the criteria used in selecting a regression model was compared in this study: (a) Mallows' C_p, (b) Amemiya's prediction criterion, and (c) Hagerty and Srinivasan's method involving predictive power. Target correlation matrices with 10,000 cases were simulated so that the matrices had varying degrees of effect sizes. The amount of power for each matrix was calculated after one or two predictors was dropped from the full regression model, for sample sizes ranging from n = 25 to n = 150. Also, the null case, when one predictor was uncorrelated with the other predictors, was considered. In addition, comparisons for regression models selected using C_p and prediction criterion were performed using data from the National Educational Longitudinal Study of 1988.
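Mallows' C_p, the first of the three criteria, can be shown with a small worked example (all numbers are made up): C_p = SSE_p / s² − (n − 2p), where s² is the full model's residual mean square and p counts the subset model's parameters; subsets with C_p close to p show little bias.

```python
n = 50
p_full = 5
s2_full = 4.0                         # residual mean square of the full model
sse_full = s2_full * (n - p_full)     # = 180, by the definition of s^2

candidates = {                        # subset model: (p, SSE_p), made-up SSEs
    "x1+x2":    (3, 190.0),
    "x1+x2+x3": (4, 181.0),
    "full":     (5, sse_full),
}

cp = {name: sse / s2_full - (n - 2 * p)
      for name, (p, sse) in candidates.items()}
# The full model always lands exactly at C_p = p.
```

Here both subsets sit near their own p, so the smaller models lose little to the full model — the same trade-off that Amemiya's prediction criterion and the predictive-power method evaluate in their own metrics.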
A Monte Carlo Study of the Robustness and Power of Analysis of Covariance Using Rank Transformation to Violation of Normality with Restricted Score Ranges for Selected Group Sizes
The study seeks to determine the robustness and power of parametric analysis of covariance and analysis of covariance using rank transformation to violation of the assumption of normality. The study employs a Monte Carlo simulation procedure with varying conditions of population distribution, group size, equality of group size, scale length, regression slope, and Y-intercept. The procedure was performed on raw data and ranked data with untied ranks and tied ranks.
The Effectiveness of a Mediating Structure for Writing Analysis Level Test Items From Text Based Instruction
This study is concerned with the effect of placing text into a mediated structure form upon the generation of test items for analysis level domain referenced test construction. The item writing methodology used is the linguistic (operationally defined) item writing technology developed by Bormuth, Finn, Roid, Haladyna and others. This item writing methodology is compared to 1) the intuitive method based on Bloom's definition of analysis level test questions and 2) the intuitive with keywords identified method of item writing. A mediated structure was developed by coordinating or subordinating sentences in an essay by following five simple grammatical rules. Three test writers each composed a ten-item test using each of the three methodologies based on a common essay. Tests were administered to 102 Composition 1 community college students. Students were asked to read the essay and complete one test form. Test forms by writer and method were randomly delivered. Analysis of variance showed no significant differences among either methods or writers. Item analysis showed that no method of item writing resulted in items of consistent difficulty across test item writers. While the results of this study show no significant difference from the intuitive, traditional methods of item writing, analysis level test item generation using a mediating structure may yet prove useful to the classroom teacher with access to a computer. All three test writers agreed that test items were easier to write using the generative rules and mediated structure. Also, some relief was felt by the writers in that the method theoretically assured that an analysis level item was written.
A Monte Carlo Analysis of Experimentwise and Comparisonwise Type I Error Rate of Six Specified Multiple Comparison Procedures When Applied to Small k's and Equal and Unequal Sample Sizes
The problem of this study was to determine the differences in experimentwise and comparisonwise Type I error rate among six multiple comparison procedures when applied to twenty-eight combinations of normally distributed data. These were the Least Significant Difference, the Fisher-protected Least Significant Difference, the Student Newman-Keuls Test, the Duncan Multiple Range Test, the Tukey Honestly Significant Difference, and the Scheffe Significant Difference. The Spjøtvoll-Stoline and Tukey-Kramer HSD modifications were used for unequal n conditions. A Monte Carlo simulation was used for twenty-eight combinations of k and n. The scores were normally distributed (µ=100; σ=10). Specified multiple comparison procedures were applied under two conditions: (a) all experiments and (b) experiments in which the F-ratio was significant (0.05). Error counts were maintained over 1000 repetitions. The FLSD held experimentwise Type I error rate to nominal alpha for the complete null hypothesis. The FLSD was more sensitive to sample mean differences than the HSD while protecting against experimentwise error. The unprotected LSD was the only procedure to yield comparisonwise Type I error rate at nominal alpha. The SNK and MRT error rates fell between the FLSD and HSD rates. The SSD error rate was the most conservative. Use of the harmonic mean of the two unequal sample n's (HSD-TK) yielded uniformly better results than use of the minimum n (HSD-SS). Bernhardson's formulas controlled the experimentwise Type I error rate of the LSD and MRT to nominal alpha, but pushed the HSD below the 0.95 confidence interval. Use of the unprotected HSD produced fewer significant departures from nominal alpha. The formulas had no effect on the SSD.
The Analysis of the Accumulation of Type II Error in Multiple Comparisons for Specified Levels of Power to Violation of Normality with the Dunn-Bonferroni Procedure: a Monte Carlo Study
The study seeks to determine the degree of accumulation of Type II error rates, while violating the assumptions of normality, for different specified levels of power among sample means. The study employs a Monte Carlo simulation procedure with three different specified levels of power, methodologies, and population distributions. On the basis of the comparisons of actual and observed error rates, the following conclusions appear to be appropriate. 1. Under the strict criteria for evaluation of the hypotheses, experimentwise Type II error accumulates to the point that the probability of accepting at least one null hypothesis in a family of tests, when in theory all of the alternate hypotheses are true, is high, precluding valid tests from the outset of the study. 2. The Dunn-Bonferroni procedure of setting the critical value based on the beta value per contrast did not significantly reduce the probability of committing a Type II error in a family of tests. 3. The use of an adequate sample size and orthogonal contrasts, or limiting the number of pairwise comparisons to the number of means, is the best method to control the accumulation of Type II errors. 4. The accumulation of Type II error occurs regardless of the population distribution.
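The error-splitting logic behind the Dunn-Bonferroni procedure can be made concrete (the numbers are illustrative):

```python
alpha, m = 0.05, 10            # familywise alpha, number of planned contrasts
per_contrast = alpha / m       # each contrast tested at 0.005

# Familywise Type I error for m independent tests:
fw_unadjusted = 1 - (1 - alpha) ** m          # about 0.40 if each runs at 0.05
fw_adjusted = 1 - (1 - per_contrast) ** m     # about 0.049, back near alpha

# The study's point: splitting beta the same way per contrast does not
# similarly cap the chance of at least one Type II error in the family.
```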
The Characteristics and Properties of the Threshold and Squared-Error Criterion-Referenced Agreement Indices
Educators who use criterion-referenced measurement to ascertain the current level of performance of an examinee in order that the examinee may be classified as either a master or a nonmaster need to know the accuracy and consistency of their decisions regarding assignment of mastery states. This study examined the sampling distribution characteristics of two reliability indices that use the squared-error agreement function: Livingston's k²(X,Tx) and Brennan and Kane's M(C). The sampling distribution characteristics of five indices that use the threshold agreement function were also examined: Subkoviak's Pc, Huynh's p and k, and Swaminathan's p and k. These seven methods of calculating reliability were also compared under varying conditions of sample size, test length, and criterion or cutoff score. Computer-generated data provided randomly parallel test forms for N = 2000 cases. From this, 1000 samples were drawn, with replacement, and each of the seven reliability indices was calculated. Descriptive statistics were collected for each sample set and examined for distribution characteristics. In addition, the mean value for each index was compared to the population parameter value of consistent mastery/nonmastery classifications. The results indicated that the sampling distribution characteristics of all seven reliability indices approach normal characteristics with increased sample size. The results also indicated that Huynh's p was the most accurate estimate of the population parameter, with the smallest degree of negative bias. Swaminathan's p was the next best estimate of the population parameter, but it has the disadvantage of requiring two test administrations, while Huynh's p index only requires one administration.
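The threshold agreement function underlying the p and k indices reduces to counting consistent mastery calls across two parallel forms; a bare-bones sketch with made-up scores:

```python
# (form 1 score, form 2 score) pairs for eight examinees; cutoff = 7.
pairs = [(9, 8), (7, 7), (6, 5), (8, 6), (4, 3), (7, 9), (5, 6), (3, 2)]
cut = 7

m1 = [int(a >= cut) for a, _ in pairs]   # mastery calls on form 1
m2 = [int(b >= cut) for _, b in pairs]   # mastery calls on form 2
n = len(pairs)

# p0: proportion classified identically on both forms.
p0 = sum(int(a == b) for a, b in zip(m1, m2)) / n

# kappa: p0 corrected for the agreement expected by chance.
p_m1, p_m2 = sum(m1) / n, sum(m2) / n
pc = p_m1 * p_m2 + (1 - p_m1) * (1 - p_m2)
kappa = (p0 - pc) / (1 - pc)
```

Indices such as Huynh's estimate these quantities from a single administration by modeling the second form, which is the practical advantage the abstract credits to Huynh's p over Swaminathan's.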
A Comparison of Three Methods of Detecting Test Item Bias
This study compared three methods of detecting test item bias: the chi-square approach, the transformed item difficulties approach, and the Linn-Harnish three-parameter item response approach, which is the only Item Response Theory (IRT) method that can be utilized with relatively small minority samples. The items on two tests which measured writing and reading skills were examined for evidence of sex and ethnic bias. Eight sets of samples, four from each test, were randomly selected from the population (N=7287) of sixth, seventh, and eighth grade students enrolled in a large, urban school district in the southwestern United States. Each set of samples, male/female, White/Hispanic, White/Black, and White/White, contained 800 examinees in the majority group and 200 in the minority group. In an attempt to control differences in ability that may have existed between the various population groups, examinees with scores greater or less than two standard deviations from their group's mean were eliminated. Ethnic samples contained equal numbers of each sex. The White/White sets of samples were utilized to provide baseline bias estimates because the tests could not logically be biased against these groups. Bias indices were then calculated for each set of samples with each of the three methods. Findings of this study indicate that the percent agreement between the Linn-Harnish IRT method and the chi-square and transformed difficulties methods is similar to that found in previous studies comparing the latter approaches with other IRT methods requiring large minority samples. Therefore, it appears that the Linn-Harnish IRT approach can be used in lieu of other more restrictive IRT methods. Ethnic bias appears to exist in the two tests as measured by the large mean bias indices for the White/Hispanic and White/Black samples. Little sex bias was found as evidenced by the low mean bias indices of the male/ …
The Effects of the Ratio of Utilized Predictors to Original Predictors on the Shrinkage of Multiple Correlation Coefficients
This study dealt with shrinkage in multiple correlation coefficients computed for sample data when these coefficients are compared to the multiple correlation coefficients for populations and the effect of the ratio of utilized predictors to original predictors on the shrinkage in R square. The study sought to provide the rationale for selection of the shrinkage formula when the correlations between the predictors and the criterion are known and determine which of the three shrinkage formulas (Browne, Darlington, or Wherry) will yield the R square from sample data that is closest to the R square for the population data.
Comparison of Methods for Computation and Cumulation of Effect Sizes in Meta-Analysis
This study examined the statistical consequences of employing various methods of computing and cumulating effect sizes in meta-analysis. Six methods of computing effect size, and three techniques for combining study outcomes, were compared. Effect size metrics were calculated with one-group and pooled standardizing denominators, corrected for bias and for unreliability of measurement, and weighted by sample size and by sample variance. Cumulating techniques employed as units of analysis the effect size, the study, and an average study effect. In order to determine whether outcomes might vary with the size of the meta-analysis, mean effect sizes were also compared for two smaller subsets of studies. An existing meta-analysis of 60 studies examining the effectiveness of computer-based instruction was used as a database for this investigation. Recomputation of the original study data under the six different effect size formulas showed no significant difference among the metrics. Maintaining the independence of the data by using only one effect size per study, whether a single or averaged effect, produced a higher mean effect size than averaging all effect sizes together, although the difference did not reach statistical significance. The sampling distribution of effect size means approached that of the population of 60 studies for subsets consisting of 40 studies, but not for subsets of 20 studies. Results of this study indicated that the researcher may choose any of the methods for effect size calculation or cumulation without fear of biasing the outcome of the meta-analysis. If weighted effect sizes are to be used, care must be taken to avoid giving undue influence to studies which may have large sample sizes, but not necessarily be the most meaningful, theoretically representative, or elegantly designed. It is important for the researcher to locate all relevant studies on the topic under investigation, since selective or even random …
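Two of the computation choices compared above, shown concretely (the group statistics are made up): a pooled-denominator standardized mean difference and its small-sample bias correction.

```python
import math

n1, mean1, sd1 = 30, 54.0, 10.0     # treatment group
n2, mean2, sd2 = 30, 50.0, 12.0     # control group

# Pooled standardizing denominator (vs. the one-group, control-SD choice).
sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                      / (n1 + n2 - 2))
d = (mean1 - mean2) / sd_pooled

# Small-sample bias correction (Hedges' approximate factor).
g = d * (1 - 3 / (4 * (n1 + n2) - 9))
```

Cumulation then averages such values per effect, per study, or per averaged study effect — the three units of analysis whose consequences the study compares.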
A State-Wide Survey on the Utilization of Instructional Technology by Public School Districts in Texas
Effective utilization of instructional technology can provide a valuable method for the delivery of a school program, and enable a teacher to individualize according to student needs. Implementation of such a program is costly and requires careful planning and adequate staff development for school personnel. This study examined the degree of commitment by Texas school districts to the use of the latest technologies in their efforts to revolutionize education. Quantitative data were collected by using a survey that included five informational areas: (1) school district background, (2) funding for budget, (3) staff, (4) technology hardware, and (5) staff development. The study included 137 school districts representing the 5 University Interscholastic League (UIL) classifications (A through AAAAA). The survey was mailed to the school superintendents requesting that the persons most familiar with instructional technology be responsible for completing the questionnaires. Analysis of data examined the relationship between UIL classification and the amount of money expended on instructional technology. Correlation coefficients were determined between teachers receiving training in the use of technology and total personnel assigned to technology positions. Coefficients were calculated between a district providing a plan for technology and employment of a coordinator for instructional technology. Significance was established at the .05 level. A significant relationship was determined between the total district budget and the amount of money allocated to instructional technology. There was a significant relationship between the number of teachers receiving training in technology and the number of personnel assigned to technology positions. A significant negative relationship was determined between the district having a long-range plan for technology and the employment of a full-time coordinator for one of the subgroups. 
An attempt was made to provide information concerning the effort by local school districts to provide technology for instructional purposes. Progress has been made, although additional funds will be …
An Empirical Investigation of Marascuilo's Ú₀ Test with Unequal Sample Sizes and Small Samples
The study seeks to determine the effect upon the Marascuilo Ú₀ statistic of violating the small sample assumption. The study employed a Monte Carlo simulation technique varying the degree of sample size and the inequality of sample sizes within experiments to determine the effect of such conditions. Twenty-two simulations, with 1200 trials each, were used. The following conclusion appeared to be appropriate: the Marascuilo Ú₀ statistic should not be used with small sample sizes, and it is recommended that the statistic be used only if sample sizes are larger than ten.
Short-to-Medium Term Enrollment Projection Based on Cycle Regression Analysis
Short-to-medium term projections were made of student semester credit hour enrollments for North Texas State University and the Texas Public and Senior Colleges and Universities (as defined by the Coordinating Board, Texas College and University System). Undergraduate, Graduate, Doctorate, Total, Education, Liberal Arts, and Business enrollments were projected. Projections were made for the following time periods: Fall + Spring, Fall, Summer I + Summer II, and Summer I. A new regression analysis called "cycle regression," which employs nonlinear regression techniques to extract multifrequential phenomena from time-series data, was employed for the analysis of the enrollment data. The heuristic steps employed in cycle regression analysis are similar to those used in fitting polynomial models. A trend line and one or more sine waves (cycles) are simultaneously estimated using a partial F test. The process of adding cycle(s) to the model continues until no more significant terms can be estimated.
A comparison of the Effects of Different Sizes of Ceiling Rules on the Estimates of Reliability of a Mathematics Achievement Test
This study compared the estimates of reliability made using one, two, three, four, five, and unlimited consecutive failures as ceiling rules in scoring a mathematics achievement test which is part of the Iowa Tests of Basic Skills (ITBS), Form 8. There were 700 students randomly selected from a population (N=2640) of students enrolled in the eighth grade in a large urban school district in the southwestern United States. These 700 students were randomly divided into seven subgroups so that each subgroup had 100 students. The responses of all these students to three subtests of the mathematics achievement battery, which included mathematical concepts (44 items), problem solving (32 items), and computation (45 items), were analyzed to obtain the item difficulties and a total score for each student. The items in each subtest were then rearranged from the highest to the lowest item difficulty. In each subgroup, the methods using one, two, three, four, five, and unlimited consecutive failures as the ceiling rules were applied to score the individual responses. The total score for each individual was the sum of the correct responses prior to the point described by the ceiling rule; correct responses after the ceiling rule were not part of the total score. The estimate of reliability for each method was computed with the alpha coefficient procedure of SPSS-X. The results of this study indicated that the estimates of reliability using two, three, four, and five consecutive failures as the ceiling rules were an improvement over the methods using one and unlimited consecutive failures.
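The scoring rule itself is simple to state in code (the response pattern is made up; items are assumed already ordered from easiest to hardest, as in the study):

```python
def ceiling_score(responses, k):
    """Count correct responses before the first run of k straight misses."""
    score = run = 0
    for r in responses:            # responses ordered by item difficulty
        if r:
            score += 1
            run = 0
        else:
            run += 1
            if run == k:
                return score       # stop scoring at the ceiling
    return score

responses = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]
scores = {k: ceiling_score(responses, k) for k in (1, 2, 3, 99)}  # 99 ≈ unlimited
```

Stricter rules (small k) truncate more of the response string, which is how the choice of ceiling rule feeds through to the alpha reliability estimates the study compares.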
A Comparison of Some Continuity Corrections for the Chi-Squared Test in 3 x 3, 3 x 4, and 3 x 5 Tables
This study was designed to determine whether chi-squared based tests for independence give reliable estimates (as compared to the exact values provided by Fisher's exact probabilities test) of the probability of a relationship between the variables in 3 X 3, 3 X 4, and 3 X 5 contingency tables when the sample size is 10, 20, or 30. In addition to the classical (uncorrected) chi-squared test, four methods for continuity correction were compared to Fisher's exact probabilities test. The four methods were Yates' correction, two corrections attributed to Cochran, and Mantel's correction. The study was modeled after a similar comparison conducted on 2 X 2 contingency tables and published by Michael Haber.
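For a 2x2 table the classical and Yates-corrected statistics look like this (the study's corrections for 3 x k tables follow the same spirit of pulling chi-squared toward Fisher's exact value; this 2x2 sketch is only the canonical case):

```python
def chi2_uncorrected_2x2(a, b, c, d):
    """Classical Pearson chi-squared for the table (a, b; c, d)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def chi2_yates_2x2(a, b, c, d):
    """Yates' continuity correction: shrink |ad - bc| by n/2 first."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    return num / ((a + b) * (c + d) * (a + c) * (b + d))

chi_plain = chi2_uncorrected_2x2(10, 5, 3, 12)   # made-up small-sample table
chi_yates = chi2_yates_2x2(10, 5, 3, 12)
```

The corrected statistic is always the smaller of the two, making the test more conservative in exactly the small-sample settings (n = 10, 20, 30) the study examines.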
Willingness of Educators to Participate in a Descriptive Research Study as a Function of a Monetary Incentive
The problem considered involved assessing willingness of educators to participate in a study offering monetary incentives. Determination of willingness was implemented by sending educators a packet requesting return of a postcard to indicate willingness to participate. The purpose was twofold: to determine the effect of a monetary incentive upon willingness of educators to participate in a research study, and to analyze implications for mail questionnaire studies. A sample of 600 educators was chosen from directories of eleven public schools in north Texas. It included equal numbers of male and female teachers and male and female administrators. Subjects were assigned to one of twelve groups. No two from a school were assigned to different levels of the inducement variable.
Effect of Rater Training and Scale Type on Leniency and Halo Error in Student Ratings of Faculty
The purpose of this study was to determine if leniency and halo error in student ratings could be reduced by training the student raters and by using a Behaviorally Anchored Rating Scale (BARS) rather than a Likert scale. Two hypotheses were proposed. First, the ratings collected from the trained raters would contain less halo and leniency error than those collected from the untrained raters. Second, within the group of trained raters the BARS would contain less halo and leniency error than the Likert instrument.
An Application of Ridge Regression to Educational Research
Behavioral data are frequently plagued with highly intercorrelated variables. Collinearity is an indication of insufficient information in the model or in the data. It therefore contributes to the unreliability of the estimated coefficients. One result of collinearity is that regression weights derived in one sample may lead to poor prediction in another sample. One technique developed to deal with highly intercorrelated independent variables is ridge regression. It was first proposed by Hoerl and Kennard in 1970 as a method that allows the data analyst to both stabilize the estimates and improve upon the squared error loss. The problem of this study was the application of ridge regression in the analysis of data resulting from educational research.
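The ridge idea in its smallest form (two standardized, highly intercorrelated predictors; all numbers made up): the ridge estimator b = (X'X + kI)⁻¹ X'y adds a constant k to the diagonal before inverting, trading a little bias for much more stable weights.

```python
def ridge_2pred(xtx, xty, k):
    """Solve (X'X + kI) b = X'y for two predictors via the 2x2 inverse."""
    (a, b), (c, d) = xtx
    a, d = a + k, d + k                 # ridge constant on the diagonal
    det = a * d - b * c
    return ((d * xty[0] - b * xty[1]) / det,
            (a * xty[1] - c * xty[0]) / det)

xtx = ((1.0, 0.95), (0.95, 1.0))        # predictor intercorrelation: 0.95
xty = (0.60, 0.58)                      # predictor-criterion correlations

b_ols = ridge_2pred(xtx, xty, 0.0)      # least squares: unstable split
b_ridge = ridge_2pred(xtx, xty, 0.1)    # shrunken, more even weights
```

With k = 0 the near-singular X'X spreads the two nearly identical predictors far apart; a small k pulls the weights together, which is the stabilization Hoerl and Kennard's ridge trace is used to choose.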
An Empirical Investigation of Tukey's Honestly Significant Difference Test with Variance Heterogeneity and Equal Sample Sizes, Utilizing Box's Coefficient of Variance Variation
This study sought to determine boundary conditions for robustness of the Tukey HSD statistic when the assumptions of homogeneity of variance were violated. Box's coefficient of variance variation, C², was utilized to index the degree of variance heterogeneity. A Monte Carlo computer simulation technique was employed to generate data under controlled violation of the homogeneity of variance assumption. For each sample size and number of treatment groups condition, an analysis of variance F-test was computed, and Tukey's multiple comparison technique was calculated. When the two additional sample size cases were added to investigate the large sample sizes, the Tukey test was found to be conservative when C² was set at zero. The actual significance level fell below the lower limit of the 95 per cent confidence interval around the 0.05 nominal significance level.
A Comparison of Three Item Selection Methods in Criterion-Referenced Tests
This study compared three methods of selecting the best discriminating test items and the resultant test reliability of mastery/nonmastery classifications. These three methods were (a) the agreement approach, (b) the phi coefficient approach, and (c) the random selection approach. Test responses from 1,836 students on a 50-item physical science test were used, from which 90 distinct data sets were generated for analysis. These 90 data sets contained 10 replications of the combination of three different sample sizes (75, 150, and 300) and three different numbers of test items (15, 25, and 35). The results of this study indicated that the agreement approach was an appropriate method to be used for selecting criterion-referenced test items at the classroom level, while the phi coefficient approach was an appropriate method to be used at the district and/or state levels. The random selection method did not have similar characteristics in selecting test items and produced the lowest reliabilities, when compared with the agreement and the phi coefficient approaches.
The Robustness of O'Brien's r Transformation to Non-Normality
A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and number of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population was 1.414 or less and (b) inflated Type I error rates when the index of kurtosis was three. Third, the r transformation should not be used if sample size is smaller than twelve. Fourth, the r transformation is more robust to non-normality in all instances, but the Bartlett test is superior in controlling Type I error when samples are from a population with a normal distribution. In light of these conclusions, the r transformation may be used as a general utility test of homogeneity of variances when either the distribution of the parent population is unknown or is known …
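O'Brien's transformation replaces each score with a value whose group mean equals that group's sample variance, so an ordinary ANOVA F-test on the transformed scores becomes a test of homogeneity of variance. A minimal sketch, using the common w = 0.5 version of the transformation (the abstract does not state which variant the study used):

```python
import numpy as np
from scipy.stats import f_oneway, bartlett

def obrien_r(group):
    """O'Brien's r transformation (w = 0.5) of one group's scores.
    By construction the transformed values average to the group's
    sample variance, so group-mean differences reflect variance
    differences."""
    x = np.asarray(group, dtype=float)
    n = len(x)
    dev2 = (x - x.mean()) ** 2
    s2 = dev2.sum() / (n - 1)               # sample variance
    return ((n - 1.5) * n * dev2 - 0.5 * s2 * (n - 1)) / ((n - 1) * (n - 2))

def obrien_test(*groups):
    """Homogeneity-of-variance test: ANOVA F on the transformed scores."""
    return f_oneway(*(obrien_r(g) for g in groups))

rng = np.random.default_rng(0)
samples = [rng.normal(0.0, 1.0, 24) for _ in range(4)]
print(obrien_test(*samples).pvalue)   # compare with bartlett(*samples).pvalue
```

Repeating such draws from normal, uniform, and leptokurtic parent populations and tallying rejections at .05 reproduces the study's design in miniature.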
Boundary Conditions of Several Variables Relative to the Robustness of Analysis of Variance Under Violation of the Assumption of Homogeneity of Variances
The purpose of this study was to determine boundary conditions associated with the number of treatment groups (K), the common treatment group sample size (n), and an index of the extent to which the assumption of equality of treatment population variances is violated (Q), with regard to user confidence in application of the one-way analysis of variance F-test for determining equality of treatment population means. The study concludes that the analysis of variance F-test is robust when the number of treatment groups is less than seven and when the extreme ratio of variances is less than 1:5, but when the violation of the assumption is more severe or the number of treatment groups is seven or more, serious discrepancies between actual and nominal significance levels occur. It was also concluded that for seven treatment groups, confidence in the application of the analysis of variance should be limited to values of Q and n such that n is greater than or equal to 10 ln((1/2)Q). For nine treatment groups, it was concluded that confidence be limited to those values of Q and n such that n is greater than or equal to (-2/3) + 12 ln((1/2)Q). No definitive boundary could be developed for analyses with five treatment groups.
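The two boundary formulas can be turned into a small calculator. The function below is a hypothetical illustration that solves each inequality for the minimum common group size n at a given value of the heterogeneity index Q:

```python
import math

def min_sample_size(Q, k):
    """Minimum common group size n for confident use of the one-way
    ANOVA F-test, per the study's boundary formulas: n >= 10 ln((1/2)Q)
    for k = 7 groups and n >= -2/3 + 12 ln((1/2)Q) for k = 9 groups."""
    if k == 7:
        return math.ceil(10 * math.log(Q / 2))
    if k == 9:
        return math.ceil(-2 / 3 + 12 * math.log(Q / 2))
    raise ValueError("boundaries were developed only for k = 7 and k = 9")

print(min_sample_size(Q=5, k=7))   # → 10
print(min_sample_size(Q=5, k=9))   # → 11
```

Note that both bounds grow with Q, matching the finding that more severe variance heterogeneity demands larger groups.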
Cross Categorical Scoring: An Approach to Treating Sociometric Data
The purpose of this study was to use a cross categorical scoring method for sociometric data focusing upon those individuals who have made the selections. A cross category selection was defined as choosing an individual on a sociometric instrument who was not within one's own classification. The classifications used for this study were sex, race, and perceived achievement level. A cross category score was obtained by summing the number of cross category selections. The following conclusions resulted from this study. Cross categorical scoring provides a useful method of scoring sociometric data. This method successfully focuses on those individuals who make sociometric choices rather than those who receive them. Each category utilized provides a unique contribution. The categories used in this study were sex, race, and achievement level; these are, however, only reflective of any number of variables which could be used. The categories must be chosen to reflect the needs of the particular study in which they are included. Multiple linear regression analysis can be used to provide the researcher with enough scope to handle numerous nominal and ordinal independent variables simultaneously. The sociometric criterion or question does make a difference in the results on cross categorical scores. Therefore, in a group that has more than one identifiable activity, a question pertaining to each activity should be included.
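The scoring rule is simple to state in code. A minimal sketch with hypothetical data, where `categories` maps each individual to a single classification (sex, race, or perceived achievement level would each be scored separately):

```python
def cross_category_score(chooser, choices, categories):
    """Cross category score: the number of sociometric selections the
    chooser makes outside his or her own classification."""
    own = categories[chooser]
    return sum(1 for chosen in choices if categories[chosen] != own)

# Hypothetical group: each member's classification and one member's choices.
categories = {"ann": "A", "ben": "B", "cal": "A", "dee": "B"}
print(cross_category_score("ann", ["ben", "cal", "dee"], categories))  # → 2
```

Scores computed this way per classification could then serve as dependent variables in the multiple linear regression analysis the study describes.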
Influence of Item Response Theory and Type of Judge on a Standard Set Using the Iterative Angoff Standard Setting Method
The purpose of this investigation was to determine the influence of item response theory and different types of judges on a standard. The iterative Angoff standard setting method was employed by all judges to determine a cut-off score for a public school district-wide criterion-referenced test. The analysis of variance of the effect of judge type and standard setting method on the central tendency of the standard revealed the existence of an ordinal interaction between judge type and method. Without any knowledge of p-values, one judge group set an unrealistic standard. A significant disordinal interaction was found concerning the effect of judge type and standard setting method on the variance of the standard. A positive covariance was detected between judges' minimum pass level estimates and empirical item information. With both p-values and b-values, judge groups had mean minimum pass levels that were positively correlated (ranging from .77 to .86), regardless of the type of information given to the judges. No differences in correlations were detected between different judge types or different methods. The generalizability coefficients and phi indices for 12 judges included in any method or judge type were acceptable (ranging from .77 to .99). The generalizability coefficient and phi index for all 24 judges were quite high (.99 and .96, respectively).
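The core Angoff computation behind each iteration can be sketched: every judge estimates, for each item, the probability that a minimally competent examinee answers it correctly (the minimum pass level), and the cut score is the sum over items of the mean judge estimate. The ratings below are hypothetical.

```python
import numpy as np

def angoff_cut_score(mpl):
    """Angoff cut score from a judges-by-items matrix of minimum pass
    levels: average each item's estimates across judges, then sum the
    item averages to get the expected score of a minimally competent
    examinee."""
    mpl = np.asarray(mpl, dtype=float)
    return mpl.mean(axis=0).sum()

# Hypothetical ratings: 3 judges x 4 items.
ratings = [[0.6, 0.7, 0.5, 0.8],
           [0.5, 0.8, 0.6, 0.7],
           [0.7, 0.6, 0.4, 0.9]]
print(angoff_cut_score(ratings))
```

In the iterative version studied here, judges would see feedback (such as p-values or b-values) between rounds and revise these estimates before the final cut score is computed.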