
Test Gladly Agent Behavior with the Simulator


This feature is currently in an Early Access phase

If you're interested in learning more, contact Gladly Support.

The Simulator lets you create automated test scenarios to verify that your Gladly Agent handles Customer Conversations as expected. Rather than manually testing every Guide change, you can write test cases that describe a Customer situation, run them with one click, and get clear results.

Use the Simulator to validate your Gladly Agent's behavior whenever you update a Guide, add a new workflow, or adjust a configuration.

Gladly platform interface showing customer interaction simulation and configuration options.

How it works

The Simulator follows a three-step process:

Create a scenario

Describe a simulated Customer, including what they know and what they're trying to accomplish. Then define success criteria: specific yes/no questions that determine whether the test passed.

Run the test

The Simulator runs a real Conversation against your Gladly Agent. The simulated Customer sends messages, your Gladly Agent responds using its actual Guide logic, and the system evaluates the Conversation against your success criteria.

Review results

Each test produces a pass/fail result along with the full Conversation transcript and a per-criterion breakdown showing what passed and what didn't.
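Taken together, the three steps amount to a loop: the simulated Customer and the Gladly Agent exchange messages, then every success criterion is checked against the transcript. The Python sketch below is purely illustrative; none of these names come from Gladly's actual API.

```python
# Illustrative sketch of a test run; all names are hypothetical,
# not Gladly's actual API.

def run_test(customer_turns, agent_reply, criteria):
    """Exchange messages, then grade each criterion against the transcript."""
    transcript = []
    for message in customer_turns:                          # simulated Customer speaks
        transcript.append(("customer", message))
        transcript.append(("agent", agent_reply(message)))  # Guide-driven reply
    results = {name: check(transcript) for name, check in criteria.items()}
    return {"passed": all(results.values()),                # no partial credit
            "results": results,
            "transcript": transcript}

# Toy stand-ins for the simulated Customer and the Gladly Agent.
turns = ["Where is my order?", "It's order 1234."]
def agent_reply(msg):
    return "Order 1234 is processing." if "1234" in msg else "Which order is it?"

criteria = {
    "asked a clarifying question": lambda t: any("Which order" in m for _, m in t),
    "reported the order status": lambda t: any("processing" in m for _, m in t),
}
print(run_test(turns, agent_reply, criteria)["passed"])  # True
```

Note that a single failing criterion flips the whole result to fail, which mirrors the all-or-nothing grading described under Key concepts below.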

Key concepts

Scenario – A test case describing a simulated Customer interaction. Each scenario includes a Customer goal, an initial message, details the Customer can share, and success criteria for grading the result.

Test run – One execution of a scenario. The Simulator sends messages, waits for your Gladly Agent to respond, and records everything that happens during the Conversation.

Success criteria – A set of yes/no questions that define what success looks like for a given scenario. Every criterion must pass for the test to pass. There is no partial credit.

Customer data available to Gladly – Facts about external systems that should be true during the test (for example, order status, account history, or product availability). This ensures simulated data is realistic and accurate.

Expect handoff – A setting that tells the Simulator whether the correct outcome for this scenario is a handoff to a human Agent. By default, any handoff causes the test to fail. Enable this when you're specifically testing that your Gladly Agent escalates a scenario correctly.
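As a concrete illustration, a scenario built from the concepts above could be modeled as a small data structure plus an all-or-nothing grading check. The field names and the grade function in this Python sketch are hypothetical, not Gladly's actual schema.

```python
# Hypothetical scenario structure; field names are illustrative,
# not Gladly's actual schema.
scenario = {
    "goal": "Cancel order 1234 before it ships",
    "initial_message": "Hi, I need to cancel an order I placed yesterday.",
    "customer_details": {"order_number": "1234"},          # what the Customer can share
    "customer_data": {"order_1234_status": "processing"},  # simulated external facts
    "expect_handoff": False,   # by default, any handoff to a human fails the test
    "success_criteria": [
        "Did the Gladly Agent confirm the order number?",
        "Did the Gladly Agent confirm the cancellation?",
    ],
}

def grade(criterion_results, handoff_occurred, expect_handoff):
    """All criteria must pass, and the handoff outcome must match expectations."""
    if handoff_occurred != expect_handoff:
        return False
    return all(criterion_results.values())                 # no partial credit

# One failed criterion fails the whole test.
print(grade({"confirmed order": True, "confirmed cancel": False},
            handoff_occurred=False, expect_handoff=False))  # False
```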

What the Simulator tests

The Simulator tests your Gladly Agent's actual conversational logic. When a simulated Customer sends a message, Gladly processes it through your actual Guide configuration, using the same decision-making as in production. This allows the Simulator to validate your Gladly Agent's ability to navigate Guides, ask clarifying questions, and formulate appropriate responses.

External service calls (such as order lookups, shipping queries, or refund processing) are replaced with simulated data during tests. This keeps tests safe and consistent: no real orders are canceled, no real refunds are issued, and no real Customer data is accessed. The Simulator automatically generates realistic mock data based on your scenario's preconditions.
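Conceptually, this substitution works like swapping a live lookup for a stub that answers from the scenario's preconditions. The following is a minimal sketch of that idea with entirely invented names; it is not Gladly's implementation.

```python
# Hypothetical illustration of replacing an external call with simulated data.
# All names are invented for this sketch; this is not Gladly's implementation.

def make_simulated_lookup(preconditions):
    """Build a stub that answers from the scenario's preconditions
    instead of calling a live order system."""
    def lookup(order_id):
        return preconditions.get(order_id, {"status": "not_found"})
    return lookup

# During a test run, a stub like this stands in for the real order lookup,
# so nothing is canceled, refunded, or read from production data.
lookup = make_simulated_lookup({"1234": {"status": "processing"}})
print(lookup("1234"))  # {'status': 'processing'}
```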

Knowledge base searches run against real content (Public Answers and configured URLs) during tests, since these are read-only operations that don't modify any data.

Where to find the Simulator

The Simulator panel is accessible within Guides. Click Simulator in the top navigation of any Gladly Agent. The panel opens on the right side of the screen and shows your list of existing scenarios.