This feature is currently in an Early Access phase.
If you're interested in learning more, contact Gladly Support.
To use the Simulator for test scenarios, you’ll need to understand how to configure each field, run tests, and review results.
Create a scenario
The Simulator panel opens from within any Gladly Agent page. How you create a new scenario depends on whether you've created one before.
Create your first scenario
When you open the Simulator panel for the first time, the Getting Started with Scenarios view provides an overview of key fields: Customer Goal, Additional details, and Success Criteria.
1. From a Gladly Agent page, click Simulator in the top navigation.
2. In the Simulator panel, click Create Manually.
Create additional scenarios
Once you've created your first scenario, opening the Simulator panel will display any existing scenarios for that Gladly Agent. Click any scenario to open and edit it, or create a new one.
1. From a Gladly Agent page, click Simulator in the top navigation.
2. In the Simulator panel, click New Scenario.

The scenario form has two sections: Customer Setup and Test Configuration.
Customer Setup
Scenario title (optional)
Use this section to provide a short label to identify this scenario in the list. If left blank, the Customer goal is shown in the Scenarios list instead.

Customer goal (required)
Use this section to describe what the simulated Customer is trying to accomplish. Write this in first person, starting with "I want to..."
The Customer goal guides the simulated Customer's behavior throughout the Conversation.

Initial message from customer (required)
Use this section to enter the Customer's first message to Gladly. Write it the way a real Customer would engage: short and natural.

Additional details from the customer (optional)
Use this section to include any facts the simulated Customer knows and can share if Gladly asks. Think of this as details the Customer might have available to them, but wouldn’t volunteer without being prompted.

The simulated Customer shares this information only when Gladly asks for it, unless you enable Allow proactive pushback.
Allow proactive pushback (optional)
When selected, the simulated Customer can volunteer one relevant fact from the Additional details section if Gladly declines a request. This is useful for testing scenarios where a Customer might push back on a policy.

Test Configuration
Customer data available to Gladly (optional)
Use this section to include any other facts about the Customer or your organization that are true in this scenario, such as order status, account history, or product availability. This differs from Additional details, which covers information the Customer knows and can share.

Success criteria (required)
Use this section to add a list of yes/no questions that define what a successful interaction looks like. Every criterion must pass for the test to pass overall.
Write each criterion from the perspective of an objective observer watching the Conversation. Use third person and focus on what's observable in the transcript.

Click Add Criterion to add multiple criteria.
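As a rough illustration, criteria written from an observer's point of view might look like this (the wording and the heuristic check below are hypothetical, not from Gladly's documentation):

```python
# Hypothetical examples of success criteria (illustrative only). Each is a
# yes/no question an objective observer could answer from the transcript alone.
good_criteria = [
    "Did the Agent confirm the order number?",
    "Did the Agent offer a refund or a replacement?",
]

# Phrasing to avoid: first person, or claims not observable in the transcript.
bad_criteria = [
    "I got my refund",              # first person, not the observer's view
    "The Customer felt satisfied",  # a feeling isn't visible in the transcript
]

def observer_phrased(criterion):
    """Rough heuristic: observer-style criteria are yes/no questions
    (end with '?') and aren't written in the first person."""
    return criterion.endswith("?") and not criterion.startswith("I ")
```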
Expect handoff (optional)
By default, the Simulator treats any handoff to a human Agent as a test failure. Check this box when a scenario is specifically meant to verify that Gladly AI correctly escalates to a team member.
When selected, the test keeps running until the handoff occurs (or the Conversation reaches its turn limit). The test passes only when both the success criteria are met and the handoff happens.
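Taken together, the fields above can be pictured as a simple record. This is an illustrative sketch only; the field names are hypothetical and not Gladly's actual schema:

```python
# Illustrative sketch of a scenario record (hypothetical field names,
# not Gladly's actual schema).
scenario = {
    "title": "Refund request for damaged item",   # optional label for the list
    "customer_goal": "I want a refund for my damaged blender.",
    "initial_message": "Hi, my blender arrived broken. Can I get a refund?",
    "additional_details": ["Order number is 12345", "Purchased 10 days ago"],
    "allow_proactive_pushback": False,
    "customer_data": {"order_status": "delivered"},   # facts available to Gladly
    "success_criteria": [
        "Did the Agent acknowledge the damaged item?",
        "Did the Agent initiate a refund?",
    ],
    "expect_handoff": False,
}

def validate(s):
    """Return the required fields (per the form) that are missing or empty."""
    required = ("customer_goal", "initial_message", "success_criteria")
    return [f for f in required if not s.get(f)]

print(validate(scenario))  # -> []
```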

Save a scenario
Click Save to save the scenario. A "Scenario saved" confirmation appears when the save is successful.
Run tests
Run a single scenario
Click any scenario in the Scenarios list to open it, then click Run to run it individually.

Run all scenarios
Click Run All from the Scenarios list to test every scenario for a Gladly Agent at once.

What happens during a run
When you start a test, the Simulator:
Creates a simulated Customer session.
Sends the initial message to Gladly.
Waits for Gladly to respond.
Evaluates whether the success criteria are met.
If criteria aren't met, generates a natural follow-up Customer message and continues the Conversation.
Repeats until all criteria pass, a handoff occurs, or the Conversation reaches 20 turns.
The 20-turn limit prevents tests from running indefinitely. If the Gladly Agent can't satisfy the success criteria within 20 exchanges, the test fails.
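The run loop described above can be sketched as follows. This is an assumed simplification, not Gladly's actual implementation; `agent`, `evaluate`, and `next_customer_message` are hypothetical stand-ins for the real components:

```python
# Sketch of the Simulator's run loop (assumed behavior, not the actual
# implementation). The callables passed in are hypothetical stand-ins.
MAX_TURNS = 20

def run_scenario(scenario, agent, evaluate, next_customer_message):
    # Start with the scenario's initial Customer message.
    transcript = [("customer", scenario["initial_message"])]
    for _ in range(MAX_TURNS):
        reply, handed_off = agent(transcript)      # wait for Gladly's response
        transcript.append(("agent", reply))
        criteria_ok = evaluate(scenario["success_criteria"], transcript)
        if handed_off:
            # A handoff passes only when it was expected AND criteria are met;
            # an unexpected handoff fails the test.
            ok = scenario["expect_handoff"] and criteria_ok
            return "Pass" if ok else "Fail"
        if criteria_ok and not scenario["expect_handoff"]:
            return "Pass"
        # Criteria not yet met: generate a natural follow-up and continue.
        transcript.append(("customer", next_customer_message(transcript)))
    return "Fail"  # 20-turn limit reached without a verdict
```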
Cancel a run
You can cancel a running test at any time. The cancellation takes effect after the current step completes.
Review results
Each completed test shows one of the following outcomes:
| Result | Description |
|---|---|
| Pass | All success criteria were met (and handoff expectations were satisfied, if applicable) |
| Fail | Success criteria were not met after 20 turns, or an unexpected handoff occurred |
| Error | Something went wrong during the test (e.g., a technical issue with mock data generation) |
| Timeout | Gladly didn't respond within the expected time window |
| Canceled | You canceled the test before it completed |
Pass and Fail are normal test outcomes. Error, Timeout, and Canceled indicate that something interrupted the test before a verdict was reached.
Reading the results
For each completed run, you can view:
Pass/fail status — The overall test result
Conversation transcript — The full back-and-forth between the simulated Customer and Gladly
Per-criterion evaluation — Each success criterion shown individually with its pass/fail status and an explanation
The per-criterion breakdown is especially useful for debugging failed tests. It tells you exactly which criteria weren't met and why.
View conversation
Click View conversation on any test result to open the full Conversation within Gladly Team, where you can review how Gladly AI responded turn by turn. Click Resume in tester to reopen the Conversation in the Simulator test view.
Run history
Each scenario stores a history of past runs. This makes it easy to track whether Guide changes improved or regressed test results over time.