AI-Powered Assessments (private beta)

Gain insights into simulation conversations and evaluate performance and skills with AI-driven assessment

Written by Quentin Veillas
Updated over a week ago

OVERVIEW

With its AI-powered assessment feature, Wonda provides a powerful tool to evaluate learning participants’ performance and assess their mastery against specific criteria in depth.

This feature is particularly valuable for providing feedback to participants at scale and assessing simulation performance across participants.

As the creator of any simulation that includes an AI companion, you can now define, in detail, up to 6 criteria and test the robustness of your assessment on multiple sample sessions before going live.

As a trainee engaging in a simulation, you can now generate immediate feedback directly after you have completed your simulation.

This Assessment report includes:

A summary of the trainee’s overall performance and improvement areas

A detailed review of the trainee’s specific strengths and feedback for each criterion

An individual score for each criterion (1-5) and a mean score for the cohort

A downloadable version of the report, including the original logs of the discussion the AI-powered assessment is based on.
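Since each report assigns a 1-5 score per criterion, the cohort mean is the average of a criterion’s individual scores across all generated reports. As an illustration only (the criterion names and scores below are made up; Wonda computes this for you), the calculation looks like:

```python
# Illustrative sketch: per-criterion scores (1-5) from three
# hypothetical reports in the same simulation cohort.
reports = [
    {"Active listening": 4, "Framing offers": 3},
    {"Active listening": 5, "Framing offers": 2},
    {"Active listening": 3, "Framing offers": 4},
]

def cohort_mean(reports, criterion):
    """Average one criterion's individual scores across all reports."""
    scores = [r[criterion] for r in reports]
    return sum(scores) / len(scores)

print(cohort_mean(reports, "Active listening"))  # → 4.0
print(cohort_mean(reports, "Framing offers"))    # → 3.0
```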

Image 1: A report generated based on a participant’s discussion with a negotiation simulation

The feature works with existing AI-powered simulations and is currently in Beta.

Upcoming improvements include:

  • Option to generate reports beyond the current limit of 20 reports per simulation

  • Option for learners to share (or not share) the report and the full logs of their discussion with their instructor.

TABLE OF CONTENTS

In this article, you will discover:

1. How to access the AI Assessment Wizard
2. Defining Assessment Criteria
3. Understanding the Assessment Reports

1. How to access the AI Assessment Wizard

The AI Assessment Wizard is currently in private beta.

Once activated, you can find it here:

[Experience > Analytics page > ✨ AI Assessment (beta) tab]

Gif 1: Accessing the AI assessment feature

2. Defining Assessment Criteria

To generate reports, you will need to provide criteria that will serve as the basis for the generative model to create a personalized assessment.

You can provide these criteria via the AI Assessment Wizard, accessible by clicking on [Create a New Assessment], and then on [Edit].

Using the assessment wizard, you can edit:

  • The title and description for your assessment

  • Between 3 and 6 criteria, each with a title and a description paragraph explaining the assessment methodology and focus of your choice.

Image 2: Filling out the criteria

Note: The assessment is performed by the gpt-4o-2024-05-13 model.
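Wonda’s actual prompting is not exposed, but conceptually, criterion-based assessment means assembling your criteria titles and descriptions, together with the discussion transcript, into grading instructions for the model. A minimal sketch of that assembly, with entirely hypothetical criterion names and wording (not Wonda’s real prompt):

```python
# Hypothetical sketch: combining assessment criteria and a discussion
# transcript into one grading instruction for a model such as
# gpt-4o-2024-05-13. All names and text here are illustrative.
criteria = [
    {"title": "Rapport building",
     "description": "Opens warmly and acknowledges the counterpart's position."},
    {"title": "Concession strategy",
     "description": "Trades concessions rather than giving them away."},
]

transcript = "Trainee: Hello...\nAI Character: Hi, let's talk terms..."

def build_assessment_prompt(criteria, transcript):
    """Render criteria and the discussion log into a single grading prompt."""
    lines = ["Score the trainee from 1 to 5 on each criterion below,",
             "with a short justification per criterion.", ""]
    for i, c in enumerate(criteria, start=1):
        lines.append(f"{i}. {c['title']}: {c['description']}")
    lines += ["", "Discussion log:", transcript]
    return "\n".join(lines)

print(build_assessment_prompt(criteria, transcript))
```

The richer each criterion’s description, the more specific the model’s per-criterion feedback can be, which is why the wizard asks for a full description paragraph rather than just a title.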

Once the criteria have been entered, you can save them by clicking the [SAVE ASSESSMENT] button.

Image 3: Save your criteria

3. Understanding the Assessment Reports

To generate reports, simply click the [GENERATE THE LATEST REPORTS] button.

This will generate reports for the 20 most recent discussions with the AI Character.

Image 4: Reports generated by the AI Assessments

Simply click on a card to open the detailed report.

Image 5: A report generated based on Terrance’s discussion in the negotiation simulation


You now have all the basics to use the AI Assessment feature.

As this feature is new and currently in beta, we would greatly appreciate your feedback on what you like and how we might improve it. To do so, simply use the feedback option provided in the interface or contact us via the messenger.

You may also be interested in exploring Analytics Dashboards to learn about other engagement metrics available.
