| REQUIRED USER ROLE Administrator, Team Manager, or Analyst | PERMISSION OVERVIEW View permissions by role |
“Sidekick” in Gladly AI data
Please note that some Gladly AI reports may use “Sidekick”, the former name for Gladly AI, in certain data fields and report names.
The Sidekick (Gladly AI) Guides and Answers Performance Dashboard complements the summary information that you’ll find within Journeys and the Answer Performance page by providing more visibility into how Gladly AI is using the instructions authored in Guides and the information available as Public Answers to engage with Customers. You can improve Gladly AI’s performance by augmenting Answer content, adding new Answers, or creating Guides to handle additional workflows.
Adding information surfaced by Suggested Answers as Public Answers increases Gladly AI’s ability to respond to and resolve a greater number of Customer questions. Use the Sidekick Guides and Answers Performance Dashboard to measure the impact of your changes and see how the new Answers you have added are improving Gladly AI’s Resolution rates.
Before you start
We recommend you review the following before using any OOTB Dashboards.
Review Overview of OOTB Dashboards.
Get familiar with Time Anchors. Every report tile in the dashboard uses a specific time anchor to aggregate data, so it’s important to understand how time anchors are applied.
Metrics noted on this page are also available through Insight Builder.
Core Concepts
Review the foundational definitions below to better understand the metrics used in the dashboard:
Journeys – Journeys provides teams with clear, actionable insight into how Gladly AI is performing. It surfaces patterns from real customer Conversations, identifies automation gaps, and highlights opportunities to improve your knowledge base and AI workflows.
Answer Performance – The Answer Performance page is a view within Journeys that shows the Public Answers Gladly AI referenced to generate a response to the Customer.
Suggested Answers – The Suggested Answers page displays a set of Answers that Gladly AI has automatically generated based on Agent replies to Customer inquiries that it was not able to fully resolve.
Gladly Agent – Multi-brand organizations may use different advice, terminology, and brand tone that is specific to each sub-brand. Those instructions are captured in the Gladly Agent, which then applies to multiple Guides.
Access the Sidekick Guides and Answers Performance Dashboard
Click the menu icon on the top left corner of the screen.
Click Reports.
Under the Gladly AI category, click Sidekick Guides and Answers Performance Dashboard.
Set the dashboard filters.
Click the refresh icon on the top right corner of the dashboard to refresh dashboard data bounded by the filter. Note – After the initial dashboard load, the refresh icon changes appearance. If you change the filter, click it again to reload the dashboard.
Sidekick Guides and Answers Performance dashboard tiles
Sidekick Guide Performance over time

The first tile in the dashboard shows the count of Conversations where Gladly AI attempted to respond to the Customer using a Guide.
The green line shows the proportion of those Conversations where Gladly AI Resolved the Customer inquiry without directly handing off to an Agent.
The purple line shows the proportion of those Conversations where Gladly AI started to help the Customer, sent at least one AI-generated response, but then handed the Conversation off to an Agent.
The purple line represents “Assisted Conversations”
Conversations where Gladly AI provided at least one response to the Customer, but then handed off to an Agent are referred to as “Assisted Conversations.”
As you add information surfaced in Suggested Answers as Public Answers and configure new Guides, you will typically see the proportion of Gladly AI Resolutions (denoted by the green line) increase over time.
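To make the tile’s two lines concrete, here is a minimal sketch of how the Resolved and Assisted proportions could be computed from a set of Conversations. The field names (`ai_responded`, `handed_off`) are purely illustrative and are not actual Gladly data fields.

```python
# Illustrative sketch only: field names are hypothetical, not Gladly fields.
def guide_performance(conversations):
    """Return (resolved_rate, assisted_rate) for Conversations where
    Gladly AI attempted to respond to the Customer using a Guide."""
    total = len(conversations)
    if total == 0:
        return 0.0, 0.0
    # Green line: AI responded and never handed off to an Agent.
    resolved = sum(1 for c in conversations
                   if c["ai_responded"] and not c["handed_off"])
    # Purple line: AI sent at least one response, then handed off.
    assisted = sum(1 for c in conversations
                   if c["ai_responded"] and c["handed_off"])
    return resolved / total, assisted / total

sample = [
    {"ai_responded": True, "handed_off": False},   # Resolved
    {"ai_responded": True, "handed_off": True},    # Assisted
    {"ai_responded": True, "handed_off": False},   # Resolved
    {"ai_responded": False, "handed_off": True},   # neither
]
print(guide_performance(sample))  # (0.5, 0.25)
```

Under this sketch, adding new Answers or Guides that let Gladly AI finish more Conversations on its own shifts Conversations from the assisted bucket into the resolved bucket, raising the green line over time.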
Sidekick Profile (Gladly Agent) Performance

The second chart shows usage and performance for a given Gladly Agent that has been configured. It is based on the count of Conversations and is filtered for the date range selected in the top-level dashboard filter.
Each bar represents the count of unique Conversations where Gladly AI has referenced a Guide within a Gladly Agent when attempting to respond to the Customer. Each bar within the chart displays the following:
Assisted Conversations
Conversations where Gladly AI sent at least one AI-generated response before handing off to an Agent.
Resolved Conversations
Conversations where Gladly AI resolved the Customer’s inquiry without handing off to an Agent.
Conversations Not Resolved or Assisted
Conversations where Gladly AI referenced a Guide within the Gladly Agent but neither Assisted nor Resolved the Customer’s inquiry.
Values within the ‘Conversations Not Resolved or Assisted’ column do not count as Billable
Conversations that were neither Assisted nor Resolved are not considered Billable Conversations.
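The three segments of each bar can be sketched as a simple bucketing rule. This is an illustration of the definitions above, not Gladly’s internal logic, and the fields (`ai_responses`, `handed_off`) are hypothetical.

```python
# Hypothetical sketch of the three buckets shown in each bar.
from collections import Counter

def classify(conversation):
    """Bucket a Conversation that referenced a Guide within a Gladly Agent."""
    if conversation["ai_responses"] > 0:
        if conversation["handed_off"]:
            return "Assisted"                  # Billable
        return "Resolved"                      # Billable
    return "Not Resolved or Assisted"          # not Billable

conversations = [
    {"ai_responses": 2, "handed_off": False},  # Resolved
    {"ai_responses": 1, "handed_off": True},   # Assisted
    {"ai_responses": 0, "handed_off": True},   # neither
    {"ai_responses": 3, "handed_off": False},  # Resolved
]
segments = Counter(classify(c) for c in conversations)
# segments tallies the height of each bar segment for this Gladly Agent
```

Note that only the Resolved and Assisted buckets count toward Billable Conversations in this sketch, matching the callout above.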
Guide Performance - Week over Week

The third tile in the dashboard shows week-over-week performance over the last three weeks for each Guide that has been configured. Use the information in this table to understand how frequently Gladly AI references each Guide, what proportion of those Conversations Gladly AI resolved without needing to hand off to an Agent, and what proportion of those Conversations Gladly AI helped with but needed to hand off to an Agent to complete the interaction.
Answer Performance - Week over Week

The final tile in the dashboard shows week-over-week performance for each Answer in your company’s Public Answers repository that Gladly AI has referenced within the last three weeks.
Use the information in this table to see how frequently Gladly AI leverages each Answer, what proportion of those Conversations Gladly AI has resolved without needing to directly hand off to an Agent, and what proportion of those Conversations resulted in a hand off to an Agent.
FAQ
What does “No answers” mean, and how could Gladly AI be resolving any Customer inquiries without using information contained in any of my company’s Answers?
The row “No answers” includes Conversations where Gladly AI did not find any relevant Public Answer to use when generating a response to the Customer. Gladly AI may have responded using information contained in the How to speak to customers tile.
For example, Gladly AI would respond to a basic question such as “What is your name?” based on the generic guidance provided, without needing to reference more specific information contained in your Public Answers.
My team and I recently added new Answers to our Public Answer repository. How can I see how those Answers are performing?
Every time you add an Answer to your Public Answers repository, you will see the Answer title, along with the count of times that Gladly AI referenced that Answer when attempting to respond to a Customer, and the proportion of those Conversations that Gladly AI Assisted (handed off) and Resolved without directly handing off to an Agent.