Analytics reports provide comprehensive insights into your platform usage, including metrics on conversations, messages, ratings, contacts, and agent activities across customizable time periods.

Reports are powerful analytics tools that help you understand how your conversational AI applications are performing. They provide detailed metrics and insights across various aspects of your platform usage, from user engagement to bot performance.

Generating Reports

To generate a report, you need to make a POST request to the report endpoint with the specific report ID you want to retrieve. Each report has unique input requirements and generates structured output data tailored to the metric being analyzed.

Reports are identified by their unique report ID, which you can discover by listing all available reports. Each report processes your request based on the input parameters you provide and returns comprehensive analytics data including current values, historical comparisons, and time-series breakdowns where applicable.

Most reports accept a periodDays parameter that allows you to specify the time window for the analysis. This parameter defaults to 30 days if not provided, giving you flexibility to analyze trends over different time periods such as weekly (7 days), monthly (30 days), quarterly (90 days), or custom durations.
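As a minimal sketch, a period-based report input is just an object carrying periodDays; the helper below is illustrative, not part of the platform SDK.

```python
# Sketch: building the input object for a period-based report.
# The 30-day default mirrors the documented behavior.

def build_report_input(period_days: int = 30) -> dict:
    """Return the input object for a period-based report."""
    if period_days <= 0:
        raise ValueError("periodDays must be a positive integer")
    return {"periodDays": period_days}

weekly = build_report_input(7)    # {'periodDays': 7}
default = build_report_input()    # {'periodDays': 30}
```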

Understanding Report Output

Report responses typically include several key components that help you understand both current performance and trends over time. The value field represents the current metric for the specified period, while the change field shows the difference compared to the previous equivalent period, helping you identify growth or decline patterns.

Many reports also include a breakdown array that provides day-by-day data points within your specified period. This granular data enables you to create visualizations, identify patterns, and understand how metrics fluctuate over time rather than just seeing aggregate totals.
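A response with value, change, and breakdown can be summarized client-side; the sample payload below is invented for illustration, but follows the field shape described above.

```python
# Sketch: interpreting a report response with value, change, and an
# optional day-by-day breakdown. Sample data is invented.

def summarize(report: dict) -> str:
    direction = "up" if report["change"] > 0 else "down" if report["change"] < 0 else "flat"
    peak = max(report.get("breakdown", []), key=lambda d: d["total"], default=None)
    note = f", peak {peak['total']} on {peak['date']}" if peak else ""
    return f"{report['value']} ({direction} {abs(report['change'])}){note}"

sample = {
    "value": 120,
    "change": -15,
    "breakdown": [{"date": "2024-05-01", "total": 70}, {"date": "2024-05-02", "total": 50}],
}
print(summarize(sample))  # 120 (down 15), peak 70 on 2024-05-01
```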

Report Categories

The platform offers several categories of reports to help you analyze different aspects of your conversational AI system:

Engagement Reports track user interactions including total conversations, active contacts, and message volumes. These metrics help you understand how users are engaging with your bots and whether engagement is growing over time.

Performance Reports provide insights into bot behavior, including bot response counts, agent actions taken, and average messages per conversation. These metrics help you optimize your bot's configuration and understand its operational efficiency.

Quality Reports focus on user satisfaction through rating metrics, including total ratings received and breakdowns of positive versus negative feedback. These reports are essential for understanding user sentiment and identifying areas for improvement.

Contact Reports help you track your user base growth and activity levels, showing both total unique contacts and active contacts within specified time periods.

Use Cases

Reports can be integrated into dashboards, automated monitoring systems, or business intelligence tools. For example, you might query engagement reports daily to monitor platform health, track quality reports to identify declining satisfaction scores, or analyze performance reports to optimize bot configurations.

By combining multiple reports, you can build a comprehensive understanding of your platform's performance and make data-driven decisions about improvements, scaling, and feature development.

Important: Report data is calculated in real-time based on your current database state, so metrics reflect the most up-to-date information available at the time of the request.

Reports are structured analytics queries that process your platform activity data and return metrics with period comparisons, breakdowns, and ranked lists. Each report is identified by a stable ID and accepts typed input parameters described below.

Generating Reports

Use the batch generate endpoint to run one or more reports in a single call. The request body is a map of report ID → input object. All reports in a single request execute in parallel.

Responses use the same map structure. Each key resolves either to the report data or to an error object if that specific report failed.
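The batch body can be sketched as a plain map of report ID to input. The report IDs below are taken from the reference section of this page; the botId value is a placeholder.

```python
# Sketch of a batch request body: report ID -> input object.
# IDs are from this page's report reference; "bot_123" is a placeholder.
import json

batch_body = {
    "clr3m5n8k000508jq2j9k0l6f": {"periodDays": 30},                      # Total Conversations
    "clr3m5n8k000e08jqbs0t1u5o": {"botId": "bot_123", "periodDays": 7},   # Bot Stats
}
payload = json.dumps(batch_body)  # serialized request body
```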

Common Input Parameters

Most reports accept a single periodDays field that sets the look-back window. The previous equivalent window is computed automatically and used to calculate the change field in the output.

Parameter    Type     Default  Description
periodDays   integer  30       Number of days to analyze

Bot-specific reports additionally require a botId string.
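The "previous equivalent window" is simply the same number of days immediately before the current window. Exact boundary handling is server-side; this sketch only illustrates the idea.

```python
# Sketch: the current window and the previous equivalent window used
# to compute the change field. Boundary semantics are illustrative.
from datetime import date, timedelta

def period_windows(end: date, period_days: int):
    """Return (current_start, previous_start) for back-to-back windows."""
    current_start = end - timedelta(days=period_days)
    previous_start = current_start - timedelta(days=period_days)
    return current_start, previous_start

cur, prev = period_windows(date(2024, 6, 30), 30)
print(cur, prev)  # 2024-05-31 2024-05-01
```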

Common Output Fields

Field      Type    Description
value      number  Metric total for the current period
change     number  Difference versus the previous equivalent period
period     string  Human-readable label, e.g. "last 30 days"
breakdown  array   Optional day-by-day { date, total } entries

Reports are powerful analytical tools that transform your platform's raw activity data into meaningful insights. Each report is designed to answer specific questions about your usage patterns, performance metrics, or resource consumption, helping you understand trends, optimize operations, and demonstrate value.

Generating Reports

The report generation endpoint allows you to request multiple reports in a single API call. Each report is identified by a unique report ID, and you provide the necessary input parameters for that specific report type. The system processes all requested reports in parallel and returns results for each.

To generate reports, send a POST request with an object where each key is a report ID (obtained from the platform/report/list endpoint) and each value contains the input parameters required by that specific report. The endpoint returns a similarly structured response with results or errors for each requested report.

The response contains an object with the same report IDs as keys. Each value is either the successfully generated report data or an error object indicating why that specific report failed to generate. This design allows partial success - you'll receive data for all reports that completed successfully, even if some failed.

Discovering Available Reports

Before generating reports, use the /api/v1/platform/report/list endpoint to discover which reports are available on the platform. Each report has a unique identifier (ID), a descriptive name, and documentation about what input parameters it requires and what data it returns.

Different reports may require different input parameters. Common parameters include time period specifications (like periodDays for the number of days to analyze), resource filters (like botId to analyze a specific bot), or analysis options (like granularity for data aggregation level).
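When building batch requests, it helps to index the list response by ID. The catalog entries below are invented samples that follow the shape the list endpoint returns.

```python
# Sketch: index the report catalog by ID for lookup when assembling
# batch requests. Sample entries are invented.

catalog = [
    {"id": "clr3m5n8k000508jq2j9k0l6f", "name": "Total Conversations"},
    {"id": "clr3m5n8k000e08jqbs0t1u5o", "name": "Bot Stats"},
]
by_id = {report["id"]: report for report in catalog}
print(by_id["clr3m5n8k000e08jqbs0t1u5o"]["name"])  # Bot Stats
```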

Report Input Parameters

Each report type accepts specific input parameters that control what data is analyzed and how it's aggregated. While parameter requirements vary by report, common patterns include:

  • Time Periods: Most reports accept periodDays (number of days to analyze) or explicit startDate/endDate parameters
  • Resource Filters: Many reports allow filtering by specific resources like botId, datasetId, or integrationId
  • Aggregation Options: Some reports support different aggregation levels like hourly, daily, weekly, or monthly

Consult the report list endpoint's response to understand the specific input schema for each report type. Reports validate their inputs and return clear error messages if required parameters are missing or invalid.

Batch Report Generation

Requesting multiple reports in a single API call is more efficient than making separate requests. The endpoint processes reports in parallel, reducing overall latency and minimizing the number of API calls needed to gather comprehensive analytics.

When batching reports, consider grouping related reports that cover the same time period or analyze the same resources. This approach provides a cohesive view of your platform activity and ensures consistent data across different analytical dimensions.

Error Handling and Partial Success

The batch report generation endpoint uses a partial success model. If one report encounters an error (due to invalid parameters, insufficient data, or processing issues), other reports in the same request continue to execute independently. Each report in the response indicates either success (with data) or failure (with an error message).

This design ensures you can always retrieve available data, even if some specific reports can't be generated. Review each report's response to identify which succeeded and which require attention or parameter adjustments.

Report Processing Performance

Report generation involves analyzing platform activity data, which can take several seconds depending on the time period, data volume, and complexity of the analysis. Longer time periods and accounts with high activity will generally require more processing time.

For optimal performance:

  • Request only the reports you need rather than generating all available reports
  • Use appropriate time periods - shorter periods generate faster while still providing useful insights
  • Cache generated reports for reuse rather than regenerating the same analysis repeatedly
  • Schedule resource-intensive reports during off-peak hours if running them regularly
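The caching advice above can be sketched with a small TTL cache; the fetcher callable stands in for a real API call.

```python
# Minimal TTL cache sketch for generated reports, so the same analysis
# is not regenerated repeatedly. The fetch callable is a stand-in for
# an actual report request.
import time

class ReportCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, data)

    def get(self, key, fetch):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]          # still fresh: serve cached data
        data = fetch()               # expired or missing: regenerate
        self._store[key] = (now + self.ttl, data)
        return data

calls = []
cache = ReportCache(ttl_seconds=300)
fetch = lambda: calls.append(1) or {"value": 7}
a = cache.get("conversations:30d", fetch)
b = cache.get("conversations:30d", fetch)  # served from cache
print(a == b, len(calls))  # True 1
```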

Best Practices

  • Validate Report IDs: Use the list endpoint to discover valid report IDs before attempting to generate reports
  • Understand Input Requirements: Review each report's input schema to ensure you provide all required parameters correctly
  • Monitor for Errors: Check response objects for error fields and handle failures gracefully in automated workflows
  • Regular Analysis: Generate reports on a consistent schedule to track trends and identify patterns over time
  • Store Historical Data: Archive report results for long-term trend analysis and compliance requirements

Important: Report data reflects platform activity as recorded in the analytics systems. There may be a brief delay (typically a few minutes) between events occurring and appearing in generated reports due to data processing pipelines.

Discovering Available Reports

Before generating a report, you need to know which reports are available on the platform. The list endpoint provides a complete catalog of all report types you can access, including their identifiers, names, and descriptions.

Each report in the registry has a unique identifier (ID) that you use when fetching the actual report data. The list endpoint returns metadata about each report without executing any analytics queries, making it a lightweight operation suitable for building user interfaces or documentation.

The response includes an array of report objects, each containing the report ID, a human-readable name, a description of what the report measures, and timestamp information indicating when the report type was created and last updated.

Use the id field from the list response when making requests to the fetch endpoint to generate specific reports. The descriptive information helps you understand what each report measures and choose the appropriate reports for your analytics needs.

This endpoint is particularly useful when building dynamic dashboards or administrative interfaces where users need to select from available report types. You can cache the list of reports since new report types are only added during platform updates.

Dataset Records Report

ID: cm7k3m5n8k000008jq7h9e5b1a

Returns the total number of records stored across one or more datasets, with a per-dataset breakdown. Useful for monitoring knowledge-base size and validating import pipelines.

Input

Parameter   Type      Description
datasetIds  string[]  IDs of datasets to include in the count

Output

Field         Type    Description
totalRecords  number  Aggregate record count
breakdown     array   Per-dataset { datasetId, records } entries
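When validating import pipelines, it is reasonable to check that the per-dataset breakdown sums to the aggregate count; that invariant is an assumption worth verifying, and the sample data is invented.

```python
# Sketch: consistency check that the per-dataset breakdown sums to
# totalRecords. Sample data is invented.

def check_dataset_report(report: dict) -> bool:
    return report["totalRecords"] == sum(e["records"] for e in report["breakdown"])

report = {
    "totalRecords": 130,
    "breakdown": [{"datasetId": "ds_a", "records": 100}, {"datasetId": "ds_b", "records": 30}],
}
print(check_dataset_report(report))  # True
```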

Total Ratings Report

ID: clr3m5n8k000008jq7h9e5b1a

Counts all ratings (thumbs up + thumbs down) received within the period, with a signed change versus the previous period and a daily breakdown.

Input: periodDays

Output

Field       Type    Description
value       number  Total ratings
change      number  Change vs previous period
thumbsUp    number  Count of positive ratings
thumbsDown  number  Count of negative ratings
breakdown   array   Daily { date, total } entries
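A common derived metric is the positive-feedback rate, computed from the thumbsUp and thumbsDown fields of this report; the sample values are invented.

```python
# Sketch: positive-feedback rate derived from the ratings output.
# Sample values are invented.

def positive_rate(report: dict) -> float:
    total = report["thumbsUp"] + report["thumbsDown"]
    return round(100 * report["thumbsUp"] / total, 1) if total else 0.0

print(positive_rate({"thumbsUp": 45, "thumbsDown": 5}))  # 90.0
```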

Thumbs Up Report

ID: clr3m5n8k000108jq3c4d7f2b

Counts only positive ratings with period-over-period change.

Input: periodDays

Output: Standard metric fields (value, change, period).

Thumbs Down Report

ID: clr3m5n8k000208jq8e5f6g3c

Counts only negative ratings with period-over-period change.

Input: periodDays

Output: Standard metric fields (value, change, period).

Total Contacts Report

ID: clr3m5n8k000308jq1h7i8j4d

Returns the all-time count of unique contacts on the account. Does not accept a time window because the metric is cumulative.

Input: (none required)

Output

Field   Type    Description
value   number  Total unique contacts
period  string  Always "all time"

Active Contacts Report

ID: clr3m5n8k000408jq9i8j9k5e

Counts contacts that initiated at least one conversation within the period, with a daily breakdown and period-over-period change.

Input: periodDays

Output: Standard metric fields with breakdown.

Total Conversations Report

ID: clr3m5n8k000508jq2j9k0l6f

Counts conversations started within the period with daily granularity and period-over-period change.

Input: periodDays

Output: Standard metric fields with breakdown.

Total Messages Report

ID: clr3m5n8k000608jq3k0l1m7g

Counts all messages (user + bot + activity) within the period.

Input: periodDays

Output: Standard metric fields with breakdown.

User Messages Report

ID: clr3m5n8k000708jq4l1m2n8h

Counts only user-originated messages (type user).

Input: periodDays

Output: Standard metric fields with breakdown.

Bot Messages Report

ID: clr3m5n8k000808jq5m2n3o9i

Counts only agent/bot responses (type bot).

Input: periodDays

Output: Standard metric fields with breakdown.

Activity Messages Report

ID: clr3m5n8k000908jq6n3o4p0j

Counts agent actions and tool-call events (type activity).

Input: periodDays

Output: Standard metric fields with breakdown.

Average User Messages per Conversation Report

ID: clr3m5n8k000a08jq7o4p5q1k

Mean number of user messages across all conversations in the period. Useful as a proxy for conversation depth and engagement quality.

Input: periodDays

Output

Field   Type    Description
value   number  Average user messages per conversation
period  string  Analyzed time window
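Since this metric is a mean over conversations in the window, it can be cross-checked against the User Messages and Total Conversations reports for the same period; whether the platform computes it exactly this way is an assumption, and the values below are invented.

```python
# Sketch: cross-check the average from two other reports covering the
# same window. Sample values are invented.

def avg_user_messages(total_user_messages: int, total_conversations: int) -> float:
    if total_conversations == 0:
        return 0.0
    return round(total_user_messages / total_conversations, 2)

print(avg_user_messages(350, 100))  # 3.5
```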

Average Bot Messages per Conversation Report

ID: clr3m5n8k000b08jq8p5q6r2l

Mean number of bot responses per conversation in the period.

Input: periodDays

Output

Field   Type    Description
value   number  Average bot messages per conversation
period  string  Analyzed time window

Average Actions per Conversation Report

ID: clr3m5n8k000c08jq9q6r7s3m

Mean number of agent actions (tool calls, ability invocations) per conversation in the period.

Input: periodDays

Output

Field   Type    Description
value   number  Average actions per conversation
period  string  Analyzed time window

Comprehensive Overview Report

ID: clr3m5n8k000d08jqar7s8t4n

Combines ratings, contacts, conversations, and messages into a single response. Each entry in the data array includes an optional details object with a metric summary, a chart (line series), and a ranked list of related contacts or actions.

Input: periodDays

Output

Field  Type   Description
data   array  Array of { title, description, value, change, period, details } items

Metrics included: Total Ratings, Thumbs Up, Thumbs Down, Total Users, Active Users, Total Conversations, Total Messages, Total User Requests, Total Agent Responses, Total Agent Actions, and the three averages.
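For dashboard use, the data array can be flattened into a title-to-value map; the entries below are invented but follow the documented item shape.

```python
# Sketch: flatten the overview's data array into a title -> value map.
# Entries are invented samples following the documented shape.

overview = {"data": [
    {"title": "Total Conversations", "value": 120, "change": 10, "period": "last 30 days"},
    {"title": "Thumbs Up", "value": 45, "change": -2, "period": "last 30 days"},
]}
metrics = {item["title"]: item["value"] for item in overview["data"]}
print(metrics["Thumbs Up"])  # 45
```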

Bot Stats Report

ID: clr3m5n8k000e08jqbs0t1u5o

Core performance snapshot for a single bot covering conversations, messages, token consumption, ratings, and overall sentiment signal.

Input

Parameter   Type     Default  Description
botId       string            ID of the bot to analyze
periodDays  integer  30       Look-back window

Output

Field               Type    Description
totalConversations  number  Conversations in period
totalMessages       number  Messages in period
totalTokens         number  Tokens consumed in period
totalRatings        number  Ratings received
thumbsUp            number  Positive rating count
thumbsDown          number  Negative rating count
sentimentSignal     string  positive, negative, neutral, or unknown
period              string  Analyzed time window
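The platform's exact rule for sentimentSignal is not documented here; this hypothetical sketch shows one plausible classification from the rating counts, purely for illustration.

```python
# Hypothetical sketch: derive a sentiment signal from rating counts.
# The platform's real thresholds are not documented; this is an
# assumption for illustration only.

def sentiment_signal(thumbs_up: int, thumbs_down: int) -> str:
    total = thumbs_up + thumbs_down
    if total == 0:
        return "unknown"   # no ratings in the period
    if thumbs_up > thumbs_down:
        return "positive"
    if thumbs_down > thumbs_up:
        return "negative"
    return "neutral"

print(sentiment_signal(10, 2), sentiment_signal(0, 0))  # positive unknown
```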

Alerts Report

ID: clr3m5n8k000f08jqcs1u2v6p

Account-level alert system that monitors usage spikes (tokens, conversations, messages), database resource limits (datasets, records, skillsets, abilities, files), overall sentiment degradation, and significant activity increases.

Input: periodDays

Output

Field    Type    Description
alerts   array   { type, severity, title, message, metric } entries
summary  object  { totalAlerts, criticalCount, warningCount, infoCount }
period   string  Analyzed time window

Alert types: usageSpike, limit, sentiment, activity, negativeFeedback. Severity levels: info (20%+ spike), warning (50%+ spike or 80%+ limit), critical (100%+ spike or limit reached).
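The documented spike thresholds (info at 20%+, warning at 50%+, critical at 100%+) map directly to a classifier like the sketch below.

```python
# Sketch of the documented usage-spike severity thresholds:
# info at 20%+, warning at 50%+, critical at 100%+.

def spike_severity(increase_pct: float):
    if increase_pct >= 100:
        return "critical"
    if increase_pct >= 50:
        return "warning"
    if increase_pct >= 20:
        return "info"
    return None  # below the alerting threshold

print(spike_severity(25), spike_severity(60), spike_severity(150))
# info warning critical
```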

Bot Performance Report

ID: clr3m5n8k000g08jqdt1u2v7q

Period-over-period comparison for a bot across conversations, messages, tokens, and ratings, each with daily breakdown charts.

Input: botId, periodDays

Output

FieldTypeDescription
conversationsobject{ current, previous, change, breakdown }
messagesobject{ current, previous, change, breakdown }
tokensobject{ current, previous, change, breakdown }
ratingsobject{ thumbsUp, thumbsDown, total, change, sentimentSignal, breakdown }
periodstringAnalysed time window

Bot Conversation Quality Report

ID: clr3m5n8k000h08jqeu2v3w8r

Analyses conversation depth distribution, abandonment rate (single-turn conversations), token efficiency, and the most-used action types.

Input: botId, periodDays

Output

Field                       Type    Description
avgMessagesPerConversation  object  { user, bot, activity } averages
conversationDepth           object  Buckets: singleTurn, short (2-3), medium (4-10), long (10+)
totalConversations          number  Total conversations analyzed
abandonmentRate             number  Percentage of single-turn conversations
avgTokensPerConversation    number  Mean tokens per conversation
avgTokensPerMessage         number  Mean tokens per message
topActions                  array   Top { type, name, count } action entries
period                      string  Analyzed time window
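The depth buckets and abandonment rate can be sketched over per-conversation message counts. Bucket edges follow the table above, assuming "long (10+)" means more than 10 messages; the sample counts are invented.

```python
# Sketch of the documented depth buckets and abandonment rate, applied
# to per-conversation message counts. Assumes "long" means > 10.

def depth_buckets(message_counts):
    buckets = {"singleTurn": 0, "short": 0, "medium": 0, "long": 0}
    for n in message_counts:
        if n <= 1:
            buckets["singleTurn"] += 1
        elif n <= 3:
            buckets["short"] += 1
        elif n <= 10:
            buckets["medium"] += 1
        else:
            buckets["long"] += 1
    return buckets

counts = [1, 1, 2, 5, 12]
buckets = depth_buckets(counts)
abandonment = round(100 * buckets["singleTurn"] / len(counts), 1)
print(buckets, abandonment)
# {'singleTurn': 2, 'short': 1, 'medium': 1, 'long': 1} 40.0
```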

Bot Alerts Report

ID: clr3m5n8k000i08jqfv3w4x9s

Detects four categories of bot-specific anomalies: high negative feedback, token usage spikes, conversation volume drops, and abandonment rate increases. Default window is 7 days for more responsive alerting.

Input

Parameter   Type     Default  Description
botId       string            ID of the bot to check
periodDays  integer  7        Look-back window

Output

Field    Type    Description
alerts   array   { type, severity, title, message, metric } entries
summary  object  { totalAlerts, criticalCount, warningCount, infoCount }
period   string  Analyzed time window

Alert types: negativeFeedback, tokenSpike, conversationDrop, abandonmentSpike. Severity levels: info, warning, critical.

Bot Negative Feedback Report

ID: clr3m5n8k000j08jqgw4x5y0t

Lists individual negative ratings for a bot including user-provided reasons, linked conversation and message IDs, and contact information for direct follow-up.

Input

Parameter   Type     Default  Description
botId       string            ID of the bot
periodDays  integer  30       Look-back window
limit       integer  10       Maximum negative ratings to return (max 50)

Output

Field       Type    Description
items       array   Negative rating entries with id, value, reason, conversationId, messageId, contactId, contactName, createdAt
total       number  Total ratings in period
thumbsDown  number  Negative rating count
thumbsUp    number  Positive rating count
period      string  Analyzed time window

Comprehensive Analytics Report

ID: gpv2an25fuhe2k6v6ckv85v8

Extends the Overview Report with token consumption metrics including total tokens consumed, daily token breakdown, and ranked lists of top bots and contacts by token usage.

Input: periodDays

Output: Same structure as the Overview Report, with an additional Total Tokens entry in data that contains bot and contact token consumption lists.