AI Prompt Writing Rubric: A Validity and Reliability Study

ABSTRACT

This study describes the development and validation of an analytical rubric designed to help sixth-grade students write effective prompts. An initial draft of the rubric was developed on the basis of a literature review on ChatGPT and prompt engineering, together with the judgments of seven experts, and was piloted with 32 sixth-grade students. We re-evaluated content validity, assessed construct validity through factor analysis, and measured internal consistency using Cronbach’s alpha. During validation, four items were removed due to low communalities (shared common variance), and item 10 was excluded as redundant. The final version demonstrated robust construct validity and internal consistency, and a Fleiss’ kappa of 0.29 indicated fair interrater agreement. Implications for Practice or Policy: (1) Policymakers can use the rubric as a guide when creating assessment tools aligned with AI-integrated curricula. (2) Educators can use the rubric for lesson planning, assessing prior knowledge, and measuring skill development. (3) Researchers can build on this work as a foundation for K-12 assessment studies. (4) Students can enhance their communication with AI by writing clearer, more polite, and more purposeful prompts, thereby improving their written expression and self-assessment skills.