Name
Transforming Assessment: Using AI to Scale Dialogue in Speaking Tests
Description

Effective communication in a foreign language (L2) is crucial but challenging to assess at scale because of the tension between quality and cost. Traditional computer-based L2 English speaking tests use monologic tasks that lack interactivity, while human-delivered interviews, though rich in interaction, are costly and hard to scale. This session explores how generative AI-powered spoken dialogue systems (SDSs) offer a scalable and valid alternative for assessing L2 speaking skills. We present research on a prototype SDS that simulates examiner behavior and on a fully operational multimodal SDS used in educational settings. Findings highlight user experience, reliability, and construct relevance. We conclude by reflecting on the role of AI in enabling accessible, cost-effective, and high-quality speaking assessments, and we advocate a shift toward assessing real-world communicative ability in line with best-practice assessment guidelines.

Date & Time
Monday, March 2, 2026, 10:15 AM - 11:00 AM
Location Name
Celestin G - 3rd Fl