It has been almost three years since ChatGPT was released to the public and gained 100 million users in under two months, faster than any previous consumer application. In that time, the capabilities and performance of the major large language models (OpenAI’s ChatGPT, Anthropic’s Claude, Google DeepMind’s Gemini, Meta’s Llama, and others) have increased dramatically, so much so that many experts predict that AGI, or Artificial General Intelligence, may be anywhere from two to ten years away. In this session, we will look specifically at what has changed about generative AI from the time large language models first became available to the public to the approach of their three-year anniversary, including new methods and research for assessing AI’s test-taking abilities. As AI-driven tools grow increasingly sophisticated, both professionals in their fields and subject matter experts in testing organizations must understand the implications for job roles and tasks, test security, content development, and the validity of assessments. Attendees will gain insight into how generative AI is being leveraged both by professionals in their fields and by exam developers, and will discuss the potential ethical and regulatory challenges posed by these advanced technologies. Real-world examples and frameworks will equip attendees to proactively address AI-related disruptions in their respective fields.
Learning Objectives:
Identify specific ways generative AI is reshaping content creation, test security, and validity in high-stakes credentialing assessments.
Describe how professionals in various fields are applying AI to their work and practice.
Apply best practices and innovative strategies to integrate generative AI effectively and responsibly into credentialing programs.