Reliability of ChatGPT-5.0 as an Automated Essay Scoring Tool: What Matters?
Keywords: Automated essay scoring, Reliability measurements, Prompt engineering, Data feeding
Abstract
The aims of the present study were twofold: to explore the variability of ChatGPT-5.0's rubric-based essay scoring across three prompting designs and two essay feeding methods, and to test the reliability of ChatGPT-generated scores against human ratings. The analysis drew upon three reliability measurements: Spearman's correlation, intraclass correlation (Koraishi, 2024; Bui & Barrot, 2025), and quadratic weighted kappa (QWK) (Mizumoto & Eguchi, 2023; Poole & Coss, 2024). The findings revealed that although the reliability coefficients ranged from moderate to substantial, ChatGPT-5.0's essay scoring performance depended greatly on users' expertise in engineering prompts and their choice of essay feeding method. This study highlights the importance of continued efforts to validate this technology as an automated essay scoring tool and emphasizes the irreplaceability of human professional judgment in this field.
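For readers unfamiliar with the three reliability measurements named above, the following is a minimal sketch (not the authors' analysis code) of how they could be computed in Python for a pair of raters, assuming integer rubric scores; the score values shown are hypothetical and used only for illustration.

```python
# Sketch of the three reliability measurements mentioned in the abstract:
# Spearman's correlation, intraclass correlation (ICC), and quadratic weighted kappa (QWK).
import pandas as pd
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score
import pingouin as pg

# Hypothetical rubric scores for six essays: human rater vs. ChatGPT-generated ratings.
human = [4, 3, 5, 2, 4, 3]
gpt = [4, 4, 5, 2, 3, 3]

# Spearman's rank-order correlation between the two sets of scores.
rho, p_value = spearmanr(human, gpt)

# Quadratic weighted kappa: Cohen's kappa with quadratic weights,
# which penalizes larger score disagreements more heavily.
qwk = cohen_kappa_score(human, gpt, weights="quadratic")

# Intraclass correlation: reshape to long format (one row per rating),
# then use pingouin's intraclass_corr, which reports several ICC forms.
long = pd.DataFrame({
    "essay": list(range(len(human))) * 2,
    "rater": ["human"] * len(human) + ["chatgpt"] * len(gpt),
    "score": human + gpt,
})
icc = pg.intraclass_corr(data=long, targets="essay", raters="rater", ratings="score")

print(f"Spearman rho = {rho:.2f}, QWK = {qwk:.2f}")
print(icc[["Type", "ICC"]])
```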