We're updating help docs to reflect our new product naming. Gladly Sidekick (AI) is now called just Gladly, and Gladly Hero (the Platform) is now Gladly Team. Some articles may display outdated names while we update everything. Thank you for your patience!

Review and Inspect AI Responses


The Review and Inspect views within the AI Conversation Review panel give Admins and Team Managers a behind-the-scenes view of how Gladly AI processes Customer messages, makes decisions, and generates responses. Use these tools to understand why a specific response was generated, why a Conversation was handed off to an Agent, and to provide feedback that helps your team continuously improve AI performance.

The Conversation Review panel is accessible within Gladly AI Conversations and visible only to users with select permissions

Review and Inspect views are visible to users with Admin or Team Manager permissions. They are not visible to Agents.

[Screenshot: a chat conversation about a wool allergy and suitable coat recommendations for the user]

Conversation Review panel overview

Open a Conversation within the Gladly AI Conversations page and click on any AI-generated response to open the Conversation Review panel and inspect response details. The panel comprises two key tabs:

| Tab | What it shows | When it appears |
| --- | --- | --- |
| Review | Feedback forms and feedback history for the Conversation. | Visible by default, before you click a specific AI response. |
| Inspect | The Summary and Debug views, which clarify the steps AI took and its decision-making. The Summary view details the overarching path AI followed; the Debug view offers an in-depth technical look at why AI made certain decisions. | Opens by default when you click any AI-generated response within the Conversation. |

Note: The Inspect tab is grayed out until you select an AI response.

For Conversations that took place before February 24, 2026, the Inspect tab will not appear because those sessions predate the algorithm components data format.

Access the Conversation Review panel

  1. Click .

  2. Select Gladly AI Conversations.

  3. Click on a Conversation where Gladly AI has engaged.

  4. The Review tab within the Conversation Review panel will be open by default.

  5. Click on any AI response within the Conversation. The Conversation Review panel will automatically switch to the Inspect tab, which displays a blue dot when active.

    The Inspect tab is grayed out before clicking on a response

    Click on a specific AI-generated response to activate the Inspect tab and populate details about the selected reply.

  6. Within the Inspect tab, toggle between two views: Summary and Debug.

Use the Summary view

Summary is the default view within the Inspect tab. It presents a clear, step-by-step walkthrough of how Gladly AI arrived at its response — which Guide it followed, what actions it took, and why it responded the way it did.

What the Summary view shows

The Summary view is organized into the following labeled sections:

  • Profile — The active Gladly Agent name, displayed as a clickable link to the configuration page.

  • Guide — The Guide that AI used to reply to the Customer, displayed as a clickable link to the Guide editor.

  • Sections — A visual breadcrumb showing the path AI took as it moved through the Guide, section by section.

    • For example: Customer is asking a question → Look up the customer type before answering customer question → Answer the customer question with Renter Portal Answers.

    • Section navigation: Each section that Gladly AI visited is displayed as an expandable row. Expand a section to see:

      • Action — The action that ran within the section, displayed as a clickable link. For example: "Knowledge sources."

      • Sources — The specific Public Answers retrieved during knowledge lookups, listed individually.

        Sources is present only for Questions & Recommendations Guides or other Guides where the Knowledge Sources action has been configured  

        For Guides that do not use this action, the Sources component will not appear.

      • Evaluated Rules — Rules that were evaluated within the section and which ones executed.  

      • Guidance Used — Instructions or advice that Gladly AI followed based on content set up within How to speak to customers.

      • What Happened Next — This portion of the Summary view consists of three potential outputs:

        • Gladly Responded — The AI-generated response text sent to the Customer for that section.

        • Navigated To — The section or Guide that AI jumped to based on an evaluated rule or set of guidance.

        • Handed Off — AI handed the Conversation off to an Agent due to a configured transfer condition, or a failed quality check.

  • Quality Check — An overall pass or fail badge, followed by a detailed table listing each individual quality check. The table includes three columns: Quality Check, Status, and Rationale. Each check shows one of the following statuses:

    • Pass — The response met this quality standard. A brief rationale explains why.

    • Disabled — This check was not active for the current configuration.

    • Fail — The response did not meet this quality standard. The rationale explains what was flagged.

[Screenshot: Summary view of a retail Guide with highlighted sections and recommendations for customers]
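As a mental model for the statuses above, the overall Quality Check badge can be thought of as derived from the individual checks: Disabled checks are skipped, and any active failing check fails the response. This is a minimal illustrative sketch; the check names and the `QualityCheck` structure are assumptions for illustration, not Gladly's actual schema.

```python
# Hypothetical sketch of deriving an overall Quality Check badge from
# individual check results. Check names are invented for illustration.
from dataclasses import dataclass

@dataclass
class QualityCheck:
    name: str
    status: str      # "Pass", "Fail", or "Disabled"
    rationale: str

def overall_badge(checks: list[QualityCheck]) -> str:
    # Disabled checks are ignored; one failing active check fails the response.
    active = [c for c in checks if c.status != "Disabled"]
    return "Fail" if any(c.status == "Fail" for c in active) else "Pass"

checks = [
    QualityCheck("Tone", "Pass", "Response matches the configured voice."),
    QualityCheck("Accuracy", "Disabled", "Check not active for this configuration."),
    QualityCheck("Relevance", "Fail", "Response did not address the question."),
]
print(overall_badge(checks))  # a single Fail yields an overall Fail badge
```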

How section traversal works

Gladly AI processes a Customer message by navigating through sections within a Guide. Each section can run an action — such as looking up order status or searching the knowledge base — evaluate rules to decide what to do next, and generate a response based on the section instructions and gathered context.

The Summary view shows this traversal in order, so you can follow the AI decision path from start to finish. System-level steps like "Go Back," which returns to a parent Guide, and "Handoff," which transfers the Conversation to an Agent, are also displayed.
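The traversal described above can be sketched as a loop: each section runs its action (if any), evaluates its rules, and either navigates onward, hands off, or responds. This is an illustrative sketch only; the section and rule shapes are assumptions, not Gladly internals.

```python
# Illustrative sketch of Guide traversal: run an action, evaluate rules,
# then respond, navigate, or hand off. Data shapes are hypothetical.

def traverse(guide, start_section, context):
    """Walk a Guide section by section, recording the path the Summary view shows."""
    path = []
    section = guide[start_section]
    while section is not None:
        path.append(section["name"])
        if "action" in section:
            # e.g. a knowledge lookup that adds retrieved sources to context
            context.update(section["action"](context))
        for rule in section.get("rules", []):
            if rule["condition"](context):          # first matching rule wins
                if rule["next"] == "HANDOFF":
                    path.append("Handed Off")       # transfer to a human Agent
                    return path
                section = guide[rule["next"]]       # "Navigated To" outcome
                break
        else:
            path.append("Gladly Responded")         # no rule fired: respond here
            return path
    return path

guide = {
    "ask": {"name": "Customer is asking a question",
            "rules": [{"condition": lambda ctx: True, "next": "answer"}]},
    "answer": {"name": "Answer the customer question with Renter Portal Answers",
               "rules": []},
}
print(traverse(guide, "ask", {}))
```

The printed path mirrors the breadcrumb shown in the Sections component, ending in one of the three "What Happened Next" outputs.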

Use the Debug view

The Debug view within the Inspect tab provides the complete, ordered trace of every algorithm component that fired as AI composed a reply to the Customer or took a different action (e.g., handing off or navigating to another section or Guide). It is designed for in-depth technical investigation when you need to see raw inputs, outputs, and timestamps for each step.

How to access the Debug view

  1. From the Gladly AI Conversations page, click on a Conversation where AI has engaged.

  2. Click on any AI response within the Conversation. The Conversation Review panel will open to the Inspect tab.

  3. Select Debug at the top of the Inspect tab.

The Conversation Review panel resets to the Summary view whenever you select a different AI response.

Simply click Debug on the Inspect tab to display the technical specifications for each AI response.

What the Debug view shows

The Debug view is organized into the following labeled sections:

  • Trace Summary — Shows the Gladly Agent name (displayed under “Profile”), Guide name, and associated Customer messages, along with badges for all components referenced during the Customer exchange.

    The “Algorithm Components Referenced” portion of this section contains clickable badges

    Quality checks are consolidated into a single badge. Clicking a badge scrolls the component card list to the selected component and auto-expands its input and output sections, making it easy to jump directly to a specific step.

  • Contextual Labels — Between component cards, color-coded labels mark important transitions in the AI processing pipeline:

    | Label | Color | Description |
    | --- | --- | --- |
    | New run triggered by [Customer message] | Green | A new Customer message arrived and triggered a fresh processing run |
    | Navigated to Guide: [Guide name] | Purple | AI switched to a different Guide |
    | Navigated to Section: [Section name] | Blue | AI moved to a new section within the current Guide |
    | Go Back / Handoff | Orange | A system-level transition back to the main Guide or a transfer to a human Agent |

  • Component Cards — One card per algorithm step, rendered in execution order with contextual labels between them.

Component cards

These are the components the algorithm uses to generate responses and take actions. Components fall into two categories — quality checks that evaluate the AI response before it reaches the Customer, and components that handle actions and general logic that occurs throughout AI’s decision-making process.

The Trace Summary section includes a clickable badge for each algorithm component involved in generating the response. Click a badge to jump to its component card, and hover over the card name to view its description.

Each card displays:

  • Execution sequence number — The step's position in Gladly AI’s processing order.

  • Component type — The component name with a tooltip explaining what the component does.

  • Pass/fail badge — Assigned to individual checks for the quality check component.

  • Timestamp — When the component executed.

  • Collapsible Input section — The data that was fed into the component.

  • Collapsible Output section — The result the component produced.

  • Edit links — For certain configurable fields, a direct link to the relevant setting in the Gladly Agent or Guide editor so you can make adjustments immediately.

Understand turns and runs

Each algorithm trace is associated with a turn — a Customer message and the AI response to it. Within a single turn, there can be multiple runs:

  • Run 0 is the initial processing of the Customer message.

  • Run 1, 2, ... occur if additional Customer messages arrive while the AI is still processing a previous one.

  • Both Summary and Debug views clearly label run boundaries so you can see exactly which Customer message triggered each processing pass.

Review Gladly AI performance

The Review tab is always visible in the Conversation Review panel and is selected by default when you open a Conversation. It provides a structured way for Admins and Team Managers to evaluate how Gladly AI performed and leave feedback that can inform ongoing quality improvements.

Review an AI-generated response

  1. Open a Conversation on the Gladly AI Conversations page.

  2. With the Review tab selected in the Conversation Review panel, you will see the feedback form for the Conversation.

  3. Use the thumbs up or thumbs down buttons to indicate whether Gladly AI performed well or needs improvement.

  4. Add optional comments to provide specific context — for example, what the AI did well, where it fell short, or what should be changed in the Guide configuration.

  5. Previously submitted feedback and comments are visible from the Reviews page, selectable from the main menu, so your team can track review patterns over time.

View all reviews in the Reviews log

All reviews submitted through the Review tab are collected in a centralized Reviews page. To access it:

  1. Click .

  2. Select Reviews.

The Reviews page displays a table with the following columns:

| Column | Description |
| --- | --- |
| Rating | The thumbs up or thumbs down rating for the review |
| Comment | The feedback comment left by the reviewer |
| Last Updated | The date and time the review was submitted or last modified |
| By | The name and email of the reviewer, with a link to open the original Conversation |

The table is paginated, so you can use the Previous and Next buttons to browse through all submitted reviews.

The Reviews log gives Admins and Team Managers a single place to see every review that has been submitted across all Gladly AI Conversations. Use it to spot recurring themes, track how AI performance evolves over time, and maintain a running record of quality assessments. For example, if multiple reviewers flag similar issues with a particular Guide or response pattern, the Reviews log makes those trends easy to identify so you can prioritize configuration updates.

The Reviews log is designed as an internal tool for your team — it is not sent to or acted on by Gladly

Think of it as your organization's own quality journal for Gladly AI, helping Admins and Team Managers stay aligned on what is working well and where there is room to improve.

Tips for using the Review tab

Consider providing feedback for Gladly AI in the following scenarios:

  • After reviewing a resolved Conversation — Confirm that Gladly AI correctly interpreted the Customer inquiry, followed the right Guide, and generated an appropriate response.

  • After reviewing a handoff — Document whether the handoff was justified or whether the Guide or quality check configuration should be adjusted.

  • During routine quality assurance — Use the Review tab as part of your regular QA workflow to capture patterns across multiple Conversations and drive continuous improvement.

Use Review and Inspect tabs together for a complete AI Conversation review

The Review and Inspect tabs work together to form a complete quality assurance workflow. Consider this sample flow to use the Conversation Review panel holistically:

  1. Start with Review — Open a Conversation and use the Review tab to assess the overall outcome. Did Gladly AI resolve the inquiry, or did it hand off?

  2. Dig deeper with Inspect — Click an AI response to open the Inspect tab. Use the Summary view to follow the decision path, and switch to the Debug view if you need to investigate a specific component.

  3. Leave feedback — Return to the Review tab to provide a thumbs up or thumbs down rating and add comments that capture what you found.

  4. Take action — Use the direct links in the Inspect tab to navigate to the relevant Gladly Agent, Guide, or general AI settings and make configuration changes based on your findings.