Name
Transforming Assessment: Using AI to Scale Dialogue in Speaking Tests
Speakers
Yasin Karatay, Cambridge University Press & Assessment
Evelina Galaczi, Cambridge University Press & Assessment
Yoichi Matsuyama, Equmenopolis, Inc.
Description
Effective communication in a foreign language (L2) is crucial but challenging to assess at scale because of the tension between quality and cost. Traditional computer-based L2 English speaking tests rely on monologic tasks that lack interactivity, while human-delivered interviews, though rich in interaction, are costly and hard to scale. This session explores how generative AI-powered spoken dialogue systems (SDSs) offer a scalable and valid alternative for assessing L2 speaking skills. We present research on a prototype SDS that simulates examiner behavior and on a fully operational multimodal SDS used in educational settings. Findings highlight user experience, reliability, and construct relevance. We conclude by reflecting on the role of AI in enabling accessible, cost-effective, and high-quality speaking assessments, advocating a shift toward assessing real-world communicative ability in line with best-practice assessment guidelines.
Session Type
Presentation
Session Area
Certification/Licensure, Education, Industrial/Organizational, Workforce Skills Credentialing
Primary Topic
Test Administration and Delivery