Description: Data-driven decision making is an integral part of higher education, and it needs to be rooted in strong methodological and statistical practices. Key practices include the use and interpretation of effect sizes as well as a correct understanding of null hypothesis significance testing (NHST). Effect size reporting and interpretation practices in higher education journal articles therefore represent an important area of inquiry. This study examined the effect size reporting and interpretation practices of published quantitative studies in three core higher education journals: Journal of Higher Education, Review of Higher Education, and Research in Higher Education. The review covered a three-year publication period, 2013 through 2015, during which the three journals published a total of 249 articles; the number of articles published did not vary appreciably across the three years. The majority of these studies employed quantitative methods (71.1%), about a quarter used qualitative methods (25.7%), and the remaining 3.2% used mixed methods. Seventy-three studies were removed from further analysis because they did not feature any quantitative analyses, leaving 176 quantitative articles as the sample pool. Overall, 52.8% of the 176 studies in the final analysis reported effect size measures as part of their major findings. Of the 93 articles reporting effect sizes, 91.4% interpreted those effect sizes for their major findings. Most of the reporting articles offered only a minimal level of interpretation (60.2% of the 93), 26.9% provided an average level of interpretation, and the remaining 4.3% provided strong interpretation, discussing their findings in light of previous studies in the field.
Date: August 2016
Creator: Stafford, Mehary T.
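The abstract does not specify which effect size measures the reviewed articles used. As a minimal illustration of the kind of statistic and interpretation practice at issue, the sketch below computes Cohen's d (one common standardized effect size, not necessarily the one each reviewed study reported) and labels it with Cohen's conventional benchmarks of 0.2 (small), 0.5 (medium), and 0.8 (large); the function names and sample data are hypothetical.

```python
import statistics

def cohens_d(group_a, group_b):
    # Cohen's d: standardized mean difference using the pooled standard deviation.
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def interpret(d):
    # Cohen's (1988) conventional benchmarks; many authors caution that these
    # should yield to field-specific context, which is the "strong" practice
    # the study describes.
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

d = cohens_d([2, 4, 6], [1, 2, 3])  # hypothetical scores for two groups
print(f"d = {d:.2f} ({interpret(d)})")
```

Reporting the d value alone corresponds to the "minimal" interpretation level the study describes; relating the labeled magnitude to prior findings in the field corresponds to the "strong" level.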