We're updating help docs to reflect our new product naming. Gladly Sidekick (AI) is now called just Gladly, and Gladly Hero (the Platform) is now Gladly Team. Some articles may display outdated names while we update everything. Thank you for your patience!

Set Up A/B Testing

REQUIRED USER ROLE 
Administrator
PERMISSION OVERVIEW
View permissions by role

A/B testing allows you to split incoming Conversation traffic across two different Gladly Agent configurations. You can compare two different Gladly Agents or test two versions of the same agent to evaluate which setup performs better before committing to a change.

A/B testing is configured per Entry Point, giving you control over which specific Channels and Entry Points are part of a test. You can run tests at any traffic split you choose, and end the test at any time.

Connect Gladly Agents to various entry points including email and SMS options.

Before you begin

A/B testing is set up from the Connect Entry Points page and requires at least one Entry Point already connected to a Gladly Agent. If you haven't done this yet, see Connect or Disconnect Gladly Agents from an Entry Point.

Set up an A/B test

  1. From the Guides page, click the Gladly Agent name dropdown at the top of the page, then select Connect Entry Points.
    Menu options for Retale Outdoors with highlighted 'Connect Entry Points' feature.

  2. On the Connect Entry Points page, locate the Entry Point you want to test.

  3. Click on the Entry Point row.

  4. Select Add A/B Testing.
    Connect Gladly agents to various entry points, including A/B testing options.

  5. In the Add A/B Testing modal, configure the two legs of your test. For each row, select the Agent and the Version you want to include, then set the traffic percentage.
    Interface for adding A/B testing with version selection and percentage allocation options.

    • Agent: Select any live Gladly Agent. The two legs can use the same Gladly Agent or two different ones.

    • Version: Select any saved version of the chosen Gladly Agent. Versions are listed by timestamp, with the most recently saved version at the top. The currently deployed version is labeled Default (deployed).

    • %: Set the traffic split between the two legs. The two percentages must add up to 100. Adjusting one value automatically updates the other.
      A/B testing setup showing agent versions and percentage distribution for analysis.

  6. Click Add.

The Entry Point row updates to show A/B Testing in the Gladly Agents column and the configured traffic split in the Traffic Volume column, for example, "60% Retalè Home / 40% Retalè Outdoors."

Overview of entry points and traffic volume for Retale agents and A/B testing.

Click Save to apply your changes.

Edit an active A/B test

To adjust the Gladly Agents, versions, or traffic split for an active test:

  1. On the Connect Entry Points page, locate the Entry Point with an active A/B test.

  2. Click on the Entry Point row.

  3. Select Edit A/B Testing.

  4. Update the Gladly Agent, version, or percentage values as needed.

  5. Click Add to confirm, then click Save.

End an A/B test

To stop an A/B test and return the Entry Point to a single Gladly Agent version at 100% traffic:

  1. On the Connect Entry Points page, locate the Entry Point with an active A/B test.

  2. Click on the Entry Point row.

  3. Select Remove.

  4. Click Save.

The Entry Point returns to its previous single-Gladly Agent assignment at 100% traffic volume.

Review A/B test results

Once an A/B test is running, results are available in two places:

Gladly Improvement Opportunities dashboard

The Gladly Improvement Opportunities dashboard includes two filters specifically for A/B testing: Entry Point and Experiment Traffic Split.

Use the Entry Point filter to narrow the dashboard to a specific Entry Point where a test is running. Use the Experiment Traffic Split filter to select a specific test.

Once filtered, the existing dashboard tiles update to reflect only the sessions from that experiment. Each Gladly Agent version in the test appears as a separate row across tiles, making it possible to compare resolution rates, handoff rates, response rates, and recontact rates side by side.

The Gladly AI Agent Outcomes tile and the Handoff Opportunities Prioritization table are particularly useful for assessing which configuration is performing better.

Historical data is not available prior to enabling A/B testing

Entry Point and Experiment Traffic Split data is only available for sessions created after the A/B testing feature was enabled. Historical session data will not include these fields.

Gladly AI Sessions report

The Gladly AI Sessions report, accessible via Reporting in Gladly Team, also includes Entry Point and Traffic Split as filterable fields. This report provides session-level data and can be exported as a CSV for more granular analysis outside of the dashboard.