Usage Statistics
Understanding your usage patterns is essential for managing costs, optimizing resource allocation, and ensuring you stay within your plan limits. Usage statistics provide real-time insights into how you're using the platform, helping you make informed decisions about scaling, budgeting, and resource management.
The platform tracks multiple usage dimensions including token consumption from AI model interactions, conversation and message volumes, and counts of database resources like datasets, records, skillsets, and files. All metrics reset at the beginning of each billing period, providing a clear view of current-period consumption.
Fetching Current Usage
Retrieve comprehensive usage statistics for the current billing period to monitor consumption across all platform features and resources. The endpoint provides a complete snapshot of your usage in a single request.
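The document does not specify the endpoint path or response field names, so the sketch below is an assumption: a hypothetical `GET /v1/usage` call with bearer-token authentication, returning the metrics described in this section under illustrative keys (`tokens`, `conversations`, `messages`).

```python
import json
from urllib import request

API_BASE = "https://api.example.com/v1"  # hypothetical base URL


def fetch_usage(api_key: str) -> dict:
    """Fetch current billing-period usage (endpoint path is an assumption)."""
    req = request.Request(
        f"{API_BASE}/usage",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def summarize_usage(usage: dict) -> str:
    """Render a one-line summary from a usage payload."""
    return (f"tokens={usage['tokens']} conversations={usage['conversations']} "
            f"messages={usage['messages']}")


# Sample payload shaped like the metrics described above (field names assumed):
sample = {"tokens": 125000, "conversations": 340, "messages": 2100}
print(summarize_usage(sample))
```

A single request like this returns the full snapshot, so one call per dashboard refresh is typically enough.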
Response Breakdown
The usage response includes several key metrics:
Token Usage: Total number of tokens consumed by AI model interactions during the current billing period. Tokens represent the computational currency for language model operations including chat completions, content generation, and other AI-powered features. Higher token counts indicate more extensive AI usage.
Conversations: Number of conversation instances created. Each conversation represents a distinct interaction session with bots or agents. This metric helps track user engagement and conversation volumes across your applications.
Messages: Total message count across all conversations. This includes both user inputs and bot responses, providing insight into interaction depth and engagement levels. High message counts relative to conversation counts indicate longer, more detailed interactions.
Database Resources: Counts of various database entities:
- Datasets: Number of knowledge base collections created
- Records: Total number of records across all datasets
- Skillsets: Number of ability collections defined
- Abilities: Total number of custom abilities created
- Files: Number of files uploaded and stored
- Users: Number of sub-users or team members created
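The database-resource counts listed above can be pulled out of the usage payload as a group. The field names below mirror the list but are assumptions; check the actual API reference for the exact keys.

```python
# Keys assumed to match the resource list above; verify against the real API.
DATABASE_RESOURCES = ["datasets", "records", "skillsets", "abilities", "files", "users"]


def database_counts(usage: dict) -> dict:
    """Extract only the database-resource counts, defaulting to 0 if absent."""
    return {name: int(usage.get(name, 0)) for name in DATABASE_RESOURCES}


sample = {"datasets": 12, "records": 4800, "skillsets": 3,
          "abilities": 27, "files": 150, "users": 8, "tokens": 125000}
counts = database_counts(sample)
print(counts)
```

Separating these counts from the consumption metrics makes it easier to report on stored resources and usage independently.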
Usage Monitoring Best Practices
Regular usage monitoring helps you:
- Track Consumption Trends: Identify usage patterns and growth over time
- Optimize Costs: Understand which features drive costs and optimize accordingly
- Prevent Overages: Monitor approaching limits before hitting billing thresholds
- Plan Capacity: Make informed decisions about plan upgrades or scaling
- Optimize Resources: Identify unused or underutilized resources
Consider integrating usage statistics into your application dashboards or monitoring systems to maintain continuous visibility into platform consumption. Automated alerts based on usage thresholds can help prevent unexpected overages.
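The threshold-alert idea above can be sketched as a small check that flags any metric past a configurable fraction of its plan limit. The limits themselves depend on your plan and are illustrative here.

```python
def check_thresholds(usage: dict, limits: dict, warn_ratio: float = 0.8) -> list:
    """Return (metric, used, limit) tuples for metrics past warn_ratio of their limit."""
    alerts = []
    for metric, limit in limits.items():
        used = usage.get(metric, 0)
        if limit and used / limit >= warn_ratio:
            alerts.append((metric, used, limit))
    return alerts


# Illustrative numbers; real limits come from your plan.
usage = {"tokens": 850000, "messages": 4000}
limits = {"tokens": 1000000, "messages": 10000}
print(check_thresholds(usage, limits))  # tokens is at 85% of its limit
```

Running a check like this on a schedule and wiring the result into an alerting channel gives early warning well before an overage occurs.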
Note: Usage metrics are calculated in real-time but may include slight delays due to caching optimizations. For high-precision billing calculations, refer to your detailed billing statements which provide complete accuracy.
Fetching Usage Time Series Data
Retrieve detailed time-series usage data spanning the last 90 days to analyze consumption trends, identify patterns, and track platform activity over time. Unlike the snapshot endpoint that provides current billing period totals, the series endpoint delivers daily data points enabling granular trend analysis, forecasting, and historical comparisons.
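Each data point in the series pairs a millisecond Unix timestamp with a daily value, as described below. Converting those timestamps into calendar dates is a common first step; the pair format here follows that description, and everything else is illustrative.

```python
import datetime


def point_to_date(point) -> tuple:
    """Convert a [unix_ms_timestamp, value] pair to (ISO date string, value)."""
    ts_ms, value = point
    day = datetime.datetime.fromtimestamp(ts_ms / 1000, tz=datetime.timezone.utc)
    return day.date().isoformat(), value


# One assumed data point: midnight UTC on 2024-01-15 with 42,000 tokens consumed.
print(point_to_date([1705276800000, 42000]))
```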
Time-series usage data is essential for understanding how your platform usage evolves. It reveals daily and weekly patterns, helps identify consumption spikes that might indicate viral growth or issues, supports accurate resource planning and capacity forecasting, enables comparison of usage across different time periods, and provides data for building custom usage dashboards and analytics visualizations.
Response Structure and Data Points
The endpoint returns three parallel time-series arrays covering the last 90 days, each containing daily aggregated totals. Every data point pairs a Unix timestamp (in milliseconds) marking midnight UTC for that day with the aggregated total for that metric on that day.
Token Series: Daily token consumption across all AI model interactions. Tokens represent the computational cost of language model operations including chat completions, content generation, embeddings, and other AI-powered features. Sharp increases in token usage indicate either growing user engagement or the deployment of more token-intensive features. Use this data to project future costs and optimize expensive operations.
Conversation Series: Number of new conversation instances created each day. Each conversation represents a distinct interaction session with bots or agents. This metric tracks user engagement frequency and provides insight into how many separate interactions occur daily. Conversation counts help measure user adoption, identify high-traffic periods, and assess the effectiveness of new features or marketing campaigns.
Message Series: Total messages exchanged daily across all conversations, including both user inputs and bot responses. Message volume relative to conversation count indicates interaction depth and engagement quality. High message-to-conversation ratios suggest users are having longer, more detailed exchanges, while low ratios might indicate quick, transactional interactions or potential user experience issues.
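The message-to-conversation ratio described above can be computed directly from the two parallel series. The `[timestamp, value]` pair format follows the response structure described earlier; the numbers are illustrative.

```python
def engagement_ratio(message_series: list, conversation_series: list) -> list:
    """Daily messages-per-conversation from two parallel [ts_ms, value] series."""
    ratios = []
    for (ts, msgs), (_, convs) in zip(message_series, conversation_series):
        ratios.append((ts, msgs / convs if convs else 0.0))
    return ratios


messages = [[1705276800000, 300], [1705363200000, 500]]
conversations = [[1705276800000, 60], [1705363200000, 50]]
print(engagement_ratio(messages, conversations))
```

A rising ratio day over day suggests deepening engagement; a falling one may be worth investigating as a possible user experience issue.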
Analysis and Visualization Patterns
Trend Detection: Plot the time-series data to identify growth trajectories, seasonal patterns, or usage anomalies. Sustained upward trends indicate healthy growth, while sudden drops may signal technical issues or user experience problems requiring investigation.
Comparative Analysis: Compare usage across different periods to assess the impact of new features, marketing campaigns, or pricing changes. For example, compare the 30 days before and after launching a new bot to measure adoption and engagement impact.
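The before/after comparison described above can be sketched as a windowed average around a launch date. The series format and window size follow the example in the text; the data is synthetic.

```python
def average(values: list) -> float:
    return sum(values) / len(values) if values else 0.0


def before_after_impact(series: list, launch_ts: int, window: int = 30) -> tuple:
    """Mean daily usage in the `window` days before vs. after launch_ts."""
    before = [v for ts, v in series if ts < launch_ts][-window:]
    after = [v for ts, v in series if ts >= launch_ts][:window]
    return average(before), average(after)


# Synthetic series: flat at 100/day, jumping to 150/day after a launch at t=30.
series = [(i, 100) for i in range(30)] + [(i, 150) for i in range(30, 60)]
print(before_after_impact(series, launch_ts=30))
```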
Peak Period Identification: Identify days or periods with unusually high usage to understand when your infrastructure experiences peak load. This information guides capacity planning and helps schedule maintenance during low-usage periods.
Cost Forecasting: Use historical token consumption trends to project future costs and plan budgets. Linear regression or time-series forecasting models can predict upcoming usage based on historical patterns, enabling proactive resource allocation.
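The linear-regression approach mentioned above can be sketched with a plain least-squares fit over the daily values, extrapolated past the end of the series. The input data here is synthetic.

```python
def linear_forecast(values: list, days_ahead: int) -> float:
    """Fit a least-squares trend line to daily values and extrapolate."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + days_ahead)


# Synthetic tokens growing by ~1000/day; forecast 7 days past the series end.
tokens = [10000 + 1000 * i for i in range(30)]
print(linear_forecast(tokens, 7))
```

For real usage data with weekly seasonality, a proper time-series model will forecast better than a straight line, but a linear fit is usually a reasonable first budget estimate.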
Dashboard Integration: Integrate time-series data into operational dashboards to provide real-time visibility into platform health and usage patterns. Consider building visualizations showing 7-day moving averages, week-over-week growth, or month-over-month comparisons.
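The 7-day moving average and week-over-week growth mentioned above can be computed from a plain list of daily values; everything here is illustrative.

```python
def moving_average(values: list, window: int = 7) -> list:
    """Trailing moving average; early days average over the available prefix."""
    return [sum(values[max(0, i - window + 1): i + 1]) / min(i + 1, window)
            for i in range(len(values))]


def week_over_week(values: list) -> float:
    """Percent change of the last 7 days vs. the 7 days before them."""
    last, prev = sum(values[-7:]), sum(values[-14:-7])
    return 100.0 * (last - prev) / prev if prev else 0.0


daily = [100] * 7 + [120] * 7
print(round(week_over_week(daily), 1))
```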
Usage Pattern Examples
Daily Monitoring Workflow:
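A daily pass might diff today's usage snapshot against yesterday's to surface sudden changes; a minimal sketch, with all names and numbers illustrative:

```python
def day_over_day(today: dict, yesterday: dict) -> dict:
    """Delta for each metric between two daily usage snapshots."""
    return {k: today[k] - yesterday.get(k, 0) for k in today}


# In practice both snapshots would come from the usage endpoint, one day apart.
today = {"tokens": 130000, "messages": 2200}
yesterday = {"tokens": 125000, "messages": 2100}
print(day_over_day(today, yesterday))
```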
Weekly Trend Analysis:
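A weekly analysis might classify each metric's direction by comparing the last seven daily points against the seven before them; a sketch under the same assumed series format, with an arbitrary 5% flat band:

```python
def weekly_trend(series: list, band: float = 0.05) -> str:
    """Classify last week vs. the prior week as 'up', 'down', or 'flat'."""
    values = [v for _, v in series]
    last, prev = sum(values[-7:]), sum(values[-14:-7])
    if prev == 0:
        return "flat"
    change = (last - prev) / prev
    if change > band:
        return "up"
    if change < -band:
        return "down"
    return "flat"


# Synthetic two weeks: 100/day followed by 130/day.
series = [(i, 100) for i in range(7)] + [(i + 7, 130) for i in range(7)]
print(weekly_trend(series))
```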
Data Interpretation Guidelines
Normal Fluctuations: Daily usage naturally varies based on user activity patterns, weekday vs. weekend differences, and time zones. Week-over-week comparisons often provide more meaningful insights than day-to-day changes.
Seasonal Patterns: Many applications exhibit weekly or monthly patterns. Business-focused bots might see higher weekday usage, while consumer applications might peak on weekends. Identify your application's natural rhythms to distinguish normal patterns from anomalies.
Growth Assessment: Healthy growth typically shows consistent upward trends with manageable day-to-day variation. Exponential growth curves might indicate viral adoption but could also strain infrastructure and budgets.
Important Note: The series endpoint returns up to 90 days of historical data. For longer retention periods or more granular (hourly) data, consider implementing your own logging and storage solution that captures usage metrics in real-time as they occur.