An organisation specialising in Executive Development localised their entire portfolio into 7 European languages. Their portfolio includes leadership and development assessments, 360° and pulse feedback surveys, learning materials and follow-up activities based on client assessment and survey results.
The user interface of their executive development platform formed a separate component of the same portfolio.
They partnered with a provider who could ensure consistency across their platform and who proposed a different approach for each type of content, leveraging technological efficiencies where appropriate.
The assignment specifications included direct access to the platform files, so that platform updates could be implemented during development sprints: when new features become available or new patches are released, the different language versions can be updated without cumbersome exports or file exchanges.
This case study presents the resulting collaboration.
The first stage of the collaboration was a workflow analysis: the types of materials to be translated were classified into separate but interconnected workflows according to the characteristics of each content type.
1. Human Translation with Subject-Matter Expert Review
The first materials to be translated and adapted were psychometric assessments and related surveys. Given that machine translation, in its current form, cannot reliably preserve the psychometric properties of this text type, it was agreed that these materials would undergo a fully human-driven process.
A customised, secure version of an online CAT tool was used to fetch the organisation's materials directly from a GitHub repository, so translators could see the text to be translated without any exchange of files. The translators and revisers received training on accessing the materials and instruction in key principles of assessment and survey translation. Following this training, the materials underwent an initial translation and revision by the linguists.
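As an illustration of what this kind of direct repository access involves, the sketch below pulls a source file through the GitHub contents API. This is a minimal sketch, not the CAT tool's actual connector; the repository name, file path, and token are hypothetical placeholders.

```python
# Minimal sketch of fetching a source file straight from a GitHub
# repository, roughly as a CAT tool connector would. Owner, repo,
# path, and token are hypothetical placeholders.
import base64
import requests

GITHUB_API = "https://api.github.com"
OWNER, REPO = "example-org", "assessment-content"  # hypothetical
TOKEN = "ghp_..."  # a personal access token with read scope

def fetch_source_file(path: str, branch: str = "main") -> str:
    """Fetch one file's text content via the GitHub contents API."""
    resp = requests.get(
        f"{GITHUB_API}/repos/{OWNER}/{REPO}/contents/{path}",
        params={"ref": branch},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    # File bodies come back base64-encoded in the "content" field.
    return base64.b64decode(resp.json()["content"]).decode("utf-8")

source_text = fetch_source_file("surveys/pulse/en.json")
```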
Then, Subject-Matter Experts (SMEs) in the field of psychometrics -- I/O psychologists or psychometricians who are native speakers of the target languages with professional fluency in English -- reviewed these translations to ensure correct terminology and confirm that the psychometric properties of the items were preserved. The translations could be exported from the online environment into a format that SMEs without CAT-tool experience could comfortably use.
Finally, proofreaders implemented the SME edits and made a final pass to ensure that no minor linguistic issues, such as typos or punctuation errors, remained in the delivery. The finished translations were then synced back to the client's repository, ensuring everything arrived in the desired format without any exchange of files.
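The reverse step, writing a finished translation back to the repository, works through the same contents API; updating an existing file additionally requires its current blob SHA. Again a hedged sketch with hypothetical names, not the actual integration.

```python
# Sketch of syncing a finished translation back to the repository.
# Names are hypothetical, as in the fetch sketch above.
import base64
import requests

GITHUB_API = "https://api.github.com"
OWNER, REPO = "example-org", "assessment-content"  # hypothetical
TOKEN = "ghp_..."  # token with write scope
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

def push_translation(path: str, text: str, branch: str = "main") -> None:
    """Create or update one translated file via the contents API."""
    url = f"{GITHUB_API}/repos/{OWNER}/{REPO}/contents/{path}"
    # Updating an existing file requires its current blob SHA.
    current = requests.get(url, params={"ref": branch}, headers=HEADERS, timeout=30)
    body = {
        "message": f"Sync translation for {path}",
        "content": base64.b64encode(text.encode("utf-8")).decode("ascii"),
        "branch": branch,
    }
    if current.ok:
        body["sha"] = current.json()["sha"]
    requests.put(url, json=body, headers=HEADERS, timeout=30).raise_for_status()

push_translation("surveys/pulse/de.json", '{"q1": "..."}')
```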
2. Standard Translation and Revision
The associated reports, as well as all platform UI elements, underwent a standard single-translation-and-revision process. Since these materials sit in the same GitHub repository as the assessments, they were accessible to translators in the same CAT tool and could be checked for consistency against all of the client's materials.
3. Machine Translation with AI Quality Estimation
Content such as learning materials and follow-up activities for the client's end-users was later added to the overall portfolio. This material was organised in a separate system from the ongoing work on the assessment and platform content, which made it possible to set up a dedicated workflow that better leverages AI and MT advances to translate the content on the fly.
As a first step, a sample of the learning materials was run through three different MT systems and evaluated by human translators for accuracy and fluency. This formed the basis for selecting the most appropriate MT engine per target language, as output quality varies considerably across commonly used engines from one language to another. Once the appropriate engine was selected, a larger sample was run first through that MT system and then through an AI quality estimation (AIQE) tool, COMET, which scores the MT output on a scale of 0 to 1. This translation was post-edited to human-quality output by our trained linguists, and the edit distance of their work was compared against the AIQE score to establish a quality threshold.
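The scoring step can be sketched with the open-source COMET library (pip install unbabel-comet). The case study names COMET but not a specific model, so the reference-free CometKiwi checkpoint below is an assumption; note that downloading it requires a Hugging Face account that has accepted the model licence.

```python
# Sketch of reference-free quality estimation with the COMET library.
# The checkpoint is an assumed choice; QE models score source/MT pairs
# directly, with no human reference translation needed.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)

segments = [
    {"src": "Please rate your confidence in your team.",
     "mt": "Bitte bewerten Sie Ihr Vertrauen in Ihr Team."},
]
output = model.predict(segments, batch_size=8, gpus=0)
print(output.scores)        # one score per segment, roughly 0 to 1
print(output.system_score)  # corpus-level average
```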
Each target language gets its own quality threshold, and this approach helped the linguists focus on segments that were most likely to be problematic for the MT.
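One plausible way to derive such a threshold, sketched under the assumption that post-editing effort is measured as normalised character-level edit distance (TER or a word-level metric would work the same way): score every post-edited sample segment, then find the lowest QE score at which segments still average under an acceptable edit-distance tolerance.

```python
# Sketch of calibrating a per-language QE threshold from post-editing
# data. The 10% tolerance and character-level distance are assumptions.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalised_edit_distance(mt: str, post_edited: str) -> float:
    """Share of the segment the linguist had to change (0 = untouched)."""
    return levenshtein(mt, post_edited) / max(len(mt), len(post_edited), 1)

def calibrate_threshold(qe_scores, edit_distances, tolerance=0.10):
    """Lowest QE score at which segments scoring at or above it still
    average under the edit-distance tolerance."""
    pairs = sorted(zip(qe_scores, edit_distances), reverse=True)
    running_sum, threshold = 0.0, None
    for n, (score, dist) in enumerate(pairs, 1):
        running_sum += dist
        if running_sum / n <= tolerance:
            threshold = score
    return threshold
```

Segments scoring above the calibrated threshold can then be routed to light post-editing, while those below it receive full human attention.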
This collaboration, which has now reached a cruising speed with bi-weekly check-in meetings and a service-level agreement (SLA), started with a thorough review of the organisation's source materials: it is essential to determine the specific requirements of each type of content.
Peak periods involved running both the human-driven workflows with subject-matter expertise and the AI- and MT-assisted workflows for high-volume, on-the-fly content. The organisation now has a robust, replicable translation and localisation process for its entire Executive Development portfolio.
Several years into the collaboration, their materials continue to be updated.