Name
Leveraging LLMs for Item Generation in Specialized Assessment Domains
Speakers
Nikole Gregg, National Commission on Certification of Physician Assistants
Brittany Corrigan, National Commission on Certification of Physician Assistants
Joseph Betts, National Council of State Boards of Nursing (NCSBN)
William Muntean, National Council of State Boards of Nursing (NCSBN)
Description
Generative AI methods using Large Language Models (LLMs) have emerged as tools to alleviate the resource-intensive demands of generating item content. However, the use of LLMs for Automated Item Generation (AIG) may underperform for highly domain-specific exams (e.g., medicine). In this presentation, two national medical certifying organizations discuss their implementation of various generative AI approaches to AIG. Specifically, we discuss the impact of different LLMs, prompting strategies, and the use of Retrieval-Augmented Generation (RAG) on content accuracy, content relevance, and item development efficiency gains. Additionally, we discuss how Subject Matter Experts (SMEs) interacted with AIG tools to generate items, and how they reviewed and revised generated content. Attendees interested in 1) optimizing resource allocation in specialized assessment domains through generative AIG approaches and 2) increasing the involvement of SMEs in AIG processes will find value in this session.
Session Type
Presentation
Session Area
Certification/Licensure, Education, Industrial/Organizational, Workforce Skills Credentialing
Primary Topic
Test Development and Psychometrics