Abstract

Academic publishing depends on peer review to ensure quality, scholarly rigor, and credibility, yet the traditional peer review system faces mounting criticism for its inefficiency, biased judgments, and inconsistent standards. This study explores how artificial intelligence (AI) can improve journal peer review by addressing several persistent issues, including reviewer exhaustion, bias, inconsistency, and slow turnaround times. Current literature presents AI as a promising yet controversial solution that may provide structured feedback with improved consistency and efficiency. We examined 52 human reviews and 26 AI-generated reviews from EdTechnica--An Open Encyclopedia of Educational Technology, using an exploratory mixed-methods design that combined qualitative and quantitative analyses to evaluate review quality from each source. The AI reviews were generated with an evaluation rubric-based prompt that underwent extensive refinement, and all reviews were assessed across eight criteria: accuracy, thoroughness, contextuality, depth, evaluative nature, supportiveness, consistency, and efficiency. Results indicate that AI reviews outperformed human reviews in accuracy, thoroughness, and supportiveness, while human reviews demonstrated greater depth and contextual comprehension. These findings suggest that combining AI with human reviewers' disciplinary expertise offers the most promising path to enhancing review quality and sustainability.

Degree

MS

College and Department

David O. McKay School of Education; Instructional Psychology and Technology

Rights

https://lib.byu.edu/about/copyright/

Date Submitted

2025-12-09

Document Type

Thesis

Keywords

artificial intelligence, peer review, human review, academic publishing

Language

English

Included in

Education Commons
