EVALUATING THE VALIDITY AND RELIABILITY OF SPEAKING RUBRICS IN MULTILINGUAL CONTEXTS

Authors

  • Toshpolatova Sabina
  • Knonsaidova Maktuba

Keywords:

speaking rubrics, validity, reliability, multilingual assessment, language testing, rater agreement, oral performance

Abstract

This thesis investigates the validity and reliability of speaking assessment rubrics used in multilingual educational settings. As classrooms become increasingly diverse, language instructors face the challenge of evaluating oral performance fairly and consistently across speakers of multiple linguistic backgrounds. The study examines how rubric design, rater training, and cultural-linguistic assumptions affect the accuracy and fairness of speaking assessments. Through a mixed-methods approach combining rubric analysis, inter-rater reliability testing, and qualitative interviews with assessors, the research identifies key sources of validity threats and proposes evidence-based recommendations for improving rubric quality. Findings suggest that culturally neutral descriptors, clear performance benchmarks, and systematic rater calibration are critical to achieving equitable and consistent speaking assessment in multilingual contexts.
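The inter-rater reliability testing mentioned in the abstract is commonly quantified with a chance-corrected agreement coefficient. As a minimal illustrative sketch (the scores below are hypothetical and not data from the study), the quadratic-weighted Cohen's kappa for two raters scoring the same speaking performances on an ordinal rubric can be computed as:

```python
# Illustrative sketch: quadratic-weighted Cohen's kappa for two raters
# scoring the same performances on an ordinal rubric (levels 0..k-1).
# All score data here is hypothetical, not drawn from the study.

def weighted_kappa(rater_a, rater_b, num_levels):
    """Quadratic-weighted kappa; 1.0 = perfect agreement, 0.0 = chance."""
    n = len(rater_a)
    # Observed joint distribution of the two raters' scores
    observed = [[0.0] * num_levels for _ in range(num_levels)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1.0 / n
    # Marginal distributions, used to model chance agreement
    marg_a = [sum(observed[i]) for i in range(num_levels)]
    marg_b = [sum(observed[i][j] for i in range(num_levels))
              for j in range(num_levels)]
    # Quadratic disagreement weights: larger score gaps penalized more
    w = [[(i - j) ** 2 / (num_levels - 1) ** 2 for j in range(num_levels)]
         for i in range(num_levels)]
    disagree_obs = sum(w[i][j] * observed[i][j]
                       for i in range(num_levels) for j in range(num_levels))
    disagree_exp = sum(w[i][j] * marg_a[i] * marg_b[j]
                       for i in range(num_levels) for j in range(num_levels))
    return 1.0 - disagree_obs / disagree_exp

# Hypothetical example: two raters, four rubric levels (0-3)
print(weighted_kappa([0, 1, 2, 3], [0, 1, 2, 2], 4))  # one near-miss: 0.875
print(weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # perfect agreement: 1.0
```

Values near 1.0 indicate strong agreement; values near 0.0 indicate agreement no better than chance, a signal that rubric descriptors or rater calibration need revision.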

Published

2026-04-27