REQUIRED USER ROLE: Administrator | PERMISSION OVERVIEW: View permissions by role
A/B testing allows you to split incoming Conversation traffic across two different Gladly Agent configurations. You can compare two different Gladly Agents or test two versions of the same agent to evaluate which setup performs better before committing to a change.
A/B testing is configured per Entry Point, giving you control over which specific Channels and Entry Points are part of a test. You can run tests at any traffic split you choose, and end the test at any time.

Before you begin
A/B testing is set up from the Connect Entry Points page and requires at least one Entry Point already connected to a Gladly Agent. If you haven't done this yet, see Connect or Disconnect Gladly Agents from an Entry Point.
Set up an A/B test
From the Guides page, click the Gladly Agent name dropdown at the top of the page, then select Connect Entry Points.

On the Connect Entry Points page, locate the Entry Point you want to test.
Click the menu icon on the Entry Point row.
Select Add A/B Testing.

In the Add A/B Testing modal, configure the two legs of your test. For each row, select the Agent and the Version you want to include, then set the traffic percentage.

Agent: Select any live Gladly Agent. Both legs can use the same agent or different Gladly Agents.
Version: Select any saved version of the chosen Gladly Agent. Versions are listed by timestamp, with the most recently saved version at the top. The currently deployed version is labeled Default (deployed).
%: Set the traffic split between the two legs. The two percentages must add up to 100. Adjusting one value automatically updates the other.

Click Add.
The Entry Point row updates to show A/B Testing in the Gladly Agents column and the configured traffic split in the Traffic Volume column, for example, "60% Retalè Home / 40% Retalè Outdoors."

Click Save to apply your changes.
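Conceptually, the traffic split you configure works like weighted random assignment of incoming Conversations between the two legs, with the two percentages always summing to 100. The sketch below is illustrative only, not Gladly's implementation; the leg names, versions, and percentages are example values.

```python
import random

# Example A/B test configuration for one Entry Point.
# Agent names, versions, and percentages are illustrative values.
legs = [
    {"agent": "Retalè Home", "version": "Default (deployed)", "percent": 60},
    {"agent": "Retalè Outdoors", "version": "Default (deployed)", "percent": 40},
]

def complement(percent_a):
    """The two legs must sum to 100, so setting one percentage
    determines the other (as the modal does automatically)."""
    return 100 - percent_a

def assign_leg(legs):
    """Pick a leg for an incoming Conversation, weighted by percent."""
    roll = random.uniform(0, 100)
    return legs[0] if roll < legs[0]["percent"] else legs[1]

# Over many sessions, roughly 60% of traffic lands on the first leg.
counts = {"Retalè Home": 0, "Retalè Outdoors": 0}
for _ in range(10_000):
    counts[assign_leg(legs)["agent"]] += 1
```

This is why adjusting one percentage in the modal automatically updates the other: there are only two legs, and together they must cover all traffic for the Entry Point.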
Edit an active A/B test
To adjust the Gladly Agents, versions, or traffic split for an active test:
On the Connect Entry Points page, locate the Entry Point with an active A/B test.
Click the menu icon on the Entry Point row.
Select Edit A/B Testing.
Update the Gladly Agent, version, or percentage values as needed.
Click Add to confirm, then click Save.
End an A/B test
To stop an A/B test and return the Entry Point to a single Gladly Agent version at 100% traffic:
On the Connect Entry Points page, locate the Entry Point with an active A/B test.
Click the menu icon on the Entry Point row.
Select Remove.
Click Save.
The Entry Point returns to its previous single Gladly Agent assignment at 100% traffic volume.
Review A/B test results
Once an A/B test is running, results are available in two places:
Gladly Improvement Opportunities dashboard
The Gladly Improvement Opportunities dashboard includes two filters specifically for A/B testing: Entry Point and Experiment Traffic Split.
Use the Entry Point filter to narrow the dashboard to a specific Entry Point where a test is running. Use the Experiment Traffic Split filter to select a specific test.
Once filtered, the existing dashboard tiles update to reflect only the sessions from that experiment. Each Gladly Agent version in the test appears as a separate row across tiles, making it possible to compare resolution rates, handoff rates, response rates, and recontact rates side by side.
The Gladly AI Agent Outcomes tile and the Handoff Opportunities Prioritization table are particularly useful for assessing which configuration is performing better.
Historical data is not available prior to creating a test
Entry Point and Experiment Traffic Split data is only available for sessions created after the A/B testing feature was enabled. Historical session data will not include these fields.
Gladly AI Sessions report
The Gladly AI Sessions report, accessible via Reporting in Gladly Team, also includes Entry Point and Traffic Split as filterable fields. This report provides session-level data and can be exported as a CSV for more granular analysis outside of the dashboard.
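Once exported, the session-level CSV can be compared per leg with a few lines of pandas. The column names below (agent_version, resolved, handed_off) are assumptions for illustration, not the actual export schema; check the headers of your own report before adapting this.

```python
import pandas as pd

# Illustrative session-level data; in practice this would come from
# pd.read_csv("gladly_ai_sessions.csv"). Column names are assumed,
# not the actual Gladly export schema -- check your CSV headers.
df = pd.DataFrame({
    "agent_version": ["A", "A", "A", "B", "B", "B"],
    "resolved":      [1,   0,   1,   1,   1,   1],
    "handed_off":    [0,   1,   0,   0,   0,   0],
})

# Resolution and handoff rates per leg, side by side --
# the same comparison the dashboard tiles show per version.
summary = df.groupby("agent_version")[["resolved", "handed_off"]].mean()
```

Grouping on the version column reproduces the side-by-side comparison from the dashboard, but on raw session rows, so you can slice further (by date, by Channel, and so on) as needed.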