Reactions and learning as predictors of job performance in a United States Air Force technical training program
This study is based on Kirkpatrick's (1996) four-level evaluation model. The study assessed the correlations among three levels of data that resulted from evaluation processes used in U.S. Air Force technical training. The three levels of evaluation were trainee reaction (Level 1), test scores (Level 2), and job performance (Level 3). Level 1 data were obtained from a 20-item survey rated on a 5-point Likert scale. Written test scores were used for Level 2 data. Level 3 data were collected from supervisors of new graduates using a 5-point Likert scale survey. The study was conducted on an existing database of Air Force technical training graduates. The subjects were trainees who graduated after the collection and storage of Level 1 and Level 2 data in a computerized database began. All subjects graduated between March 1997 and January 1999. A total of 188 graduates from five Air Force specialties were included: 34 from a single course in the aircrew protection specialty area, 12 from a single course in the munitions and weapons specialty area, and 142 from three separate courses in the manned aerospace maintenance specialty area. Pearson product-moment correlation coefficients were computed between Levels 1 and 2, Levels 1 and 3, and Levels 2 and 3 for each course. Multiple linear regression was used to determine the relationship between the composite of Levels 1 and 2 and Level 3. Significant correlation coefficients were found between Levels 1 and 2 and between Levels 2 and 3 for only one of the five courses. The regression analysis revealed no significant relationship when the composite of Levels 1 and 2 was used as a predictor of Level 3.
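The analyses described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction only: the variable names and the randomly generated scores are hypothetical stand-ins for the study's actual data, which are not available here. It computes the three pairwise Pearson product-moment correlations and then regresses Level 3 on the Level 1 and Level 2 composite.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for 34 graduates of one course (illustrative only;
# the study's real Level 1-3 data are not reproduced here).
level1 = rng.uniform(1, 5, 34)               # reaction survey means (5-point Likert)
level2 = 70 + 10 * rng.standard_normal(34)   # written test scores
level3 = rng.uniform(1, 5, 34)               # supervisor job-performance ratings

# Pearson product-moment correlation for each pair of levels.
r12 = np.corrcoef(level1, level2)[0, 1]
r13 = np.corrcoef(level1, level3)[0, 1]
r23 = np.corrcoef(level2, level3)[0, 1]

# Multiple linear regression: Level 3 on the composite of Levels 1 and 2.
X = np.column_stack([np.ones_like(level1), level1, level2])
coef, *_ = np.linalg.lstsq(X, level3, rcond=None)
predicted = X @ coef

# Proportion of Level 3 variance explained by the composite predictor.
ss_res = np.sum((level3 - predicted) ** 2)
ss_tot = np.sum((level3 - level3.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"r12={r12:.3f}  r13={r13:.3f}  r23={r23:.3f}  R^2={r_squared:.3f}")
```

With purely random inputs the correlations hover near zero, mirroring the study's finding that the Level 1 and 2 composite was not a significant predictor of Level 3.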
The Relationship Between Time-on-Task in Computer-Aided Instruction and the Progress of Developmental Reading Students at the University of Texas at Brownsville and Texas Southmost College
This research sought to determine what relationship exists between time-on-task in computer-aided instruction (CAI) using Destinations courseware and progress in the reading ability of developmental reading students, as indicated by the reading portion of the Texas Academic Skills Program (TASP) test. Time-on-task is the time during which a student actively works on Destinations activities, as recorded by the software management system. TASP, an exam required of all students in Texas public colleges, assesses reading, math, and writing skills. The population comprised 482 students who took the TASP exam before and after CAI and who used Destinations CAI for remediation of reading skills. Null hypotheses were explored using Pearson correlation and linear multiple regression. The findings for the null hypotheses were the following: Ho1 - Correlation and linear regression analysis showed that time-on-task in Destinations CAI had no significant effect on the TASP scores of the population studied. Ho2 - Correlation and linear regression analysis showed that females made significantly better gains on the TASP test from CAI than males. Ho3 - Correlation and linear regression analysis showed that low-achieving students made no better gains on the TASP test from time-on-task in CAI than high-achieving students; the difference between the two groups' gains was not statistically significant. Ho4 - The regression equations predicted the gain in TASP reading scores for less than 1% of the population studied; only the regression equations for male students and female students separately were statistically significant. The researcher recommends replicating this study each semester to determine the effectiveness of CAI. Regular and systematic evaluation using pretest and posttest data will provide benchmarks so that the value of changes in instructional methods can be measured. This method of research can help to clarify questions that should be answered through other research methods.
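The gain-score analysis in this abstract can likewise be sketched with NumPy. This is a hypothetical illustration, not the study's actual computation: the time-on-task values and score gains are randomly generated placeholders. It correlates time-on-task with pretest-to-posttest gain and fits a simple linear regression; with a single predictor, R-squared equals the squared Pearson correlation, so an R-squared below 0.01 corresponds to predicting less than 1% of the variance in gains.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 482  # population size reported in the study

# Hypothetical data (illustrative only): minutes of Destinations
# time-on-task and TASP reading gains (posttest minus pretest).
time_on_task = rng.uniform(0, 1200, n)
gain = rng.normal(5, 15, n)

# Pearson correlation between time-on-task and score gain.
r = np.corrcoef(time_on_task, gain)[0, 1]

# Simple linear regression of gain on time-on-task.
slope, intercept = np.polyfit(time_on_task, gain, 1)

# With one predictor, R^2 = r^2; below 0.01 means the equation
# explains less than 1% of the variance in gains.
r_squared = r ** 2

print(f"r={r:.3f}  slope={slope:.4f}  R^2={r_squared:.4f}")
```

Fitting the same model separately for subgroups (e.g., by gender or by achievement level, as in Ho2 and Ho3) only requires filtering the arrays before the same two calls.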