The following metrics are automatically generated for each trace from the logs sent to Adaline by your AI agent / workflow (a sketch of how they can be derived from a trace's spans follows this list):
  • Latency: The time it takes for a trace to complete.
  • Input tokens: The sum of input tokens for all the spans that are of type ‘Model’ (LLM calls) in a trace.
  • Output tokens: The sum of output tokens for all the spans that are of type ‘Model’ (LLM calls) in a trace.
  • Cost: The cost, in US Dollars, of all the spans that are of type ‘Model’ (LLM calls) in a trace.
  • Evaluation score: The evaluation score across all the spans of type ‘Model’ (LLM calls) in a trace. Each failed evaluation gets a score of 0, while each successfully completed evaluation gets a score of 1.
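As a rough illustration only (not Adaline's actual API), the sketch below shows how these per-trace metrics could be computed from a trace's spans. The `Span` and `Trace` shapes, their field names, and the choice to average evaluation scores across a trace are all assumptions made for this example.

```typescript
// Minimal sketch, assuming hypothetical Span/Trace shapes; field names are
// illustrative, not Adaline's schema.
type Span = {
  type: "Model" | "Tool" | "Other";    // only "Model" spans (LLM calls) count toward token/cost/eval metrics
  inputTokens?: number;
  outputTokens?: number;
  costUsd?: number;
  evaluations?: { passed: boolean }[]; // each evaluation scores 1 (pass) or 0 (fail)
};

type Trace = {
  startedAt: number; // epoch ms
  endedAt: number;   // epoch ms
  spans: Span[];
};

function traceMetrics(trace: Trace) {
  const modelSpans = trace.spans.filter((s) => s.type === "Model");
  const evals = modelSpans.flatMap((s) => s.evaluations ?? []);
  return {
    latencyMs: trace.endedAt - trace.startedAt,
    inputTokens: modelSpans.reduce((sum, s) => sum + (s.inputTokens ?? 0), 0),
    outputTokens: modelSpans.reduce((sum, s) => sum + (s.outputTokens ?? 0), 0),
    costUsd: modelSpans.reduce((sum, s) => sum + (s.costUsd ?? 0), 0),
    // Averaging pass/fail scores across the trace's evaluations is an assumption.
    evalScore: evals.length
      ? evals.filter((e) => e.passed).length / evals.length
      : null,
  };
}
```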
Using these metrics, you can visualize the following charts (an aggregation sketch follows this list):
  • Logs: Sum of the number of traces per unit of time.
  • Avg latency: Average trace latency per unit of time.
  • Avg input tokens: Average trace input tokens per unit of time.
  • Avg cost: Average trace cost, in US Dollars, per unit of time.
  • Avg output tokens: Average trace output tokens per unit of time.
  • Avg eval score: Average trace evaluation score per unit of time.
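Continuing with the same hypothetical types, the sketch below illustrates the kind of aggregation behind the "Avg" charts: traces are bucketed by the chosen time window and the per-trace metric is averaged within each bucket. The Logs chart corresponds to the per-bucket trace count.

```typescript
// Minimal sketch: bucket traces by a time window and average a per-trace metric
// within each bucket. Uses the hypothetical Trace type and traceMetrics() above.
function bucketAverages(
  traces: Trace[],
  windowMs: number,                    // e.g. 60 * 60 * 1000 for hourly granularity
  metric: (t: Trace) => number | null, // e.g. (t) => traceMetrics(t).latencyMs
): Map<number, number> {
  const buckets = new Map<number, { sum: number; count: number }>();
  for (const trace of traces) {
    const value = metric(trace);
    if (value === null) continue;      // skip traces with no value (e.g. no evaluations)
    const bucketStart = Math.floor(trace.startedAt / windowMs) * windowMs;
    const bucket = buckets.get(bucketStart) ?? { sum: 0, count: 0 };
    bucket.sum += value;
    bucket.count += 1;                 // bucket.count per window would drive a "Logs"-style chart
    buckets.set(bucketStart, bucket);
  }
  const averages = new Map<number, number>();
  for (const [start, { sum, count }] of buckets) {
    averages.set(start, sum / count);
  }
  return averages;
}
```

For example, `bucketAverages(traces, 60 * 60 * 1000, (t) => traceMetrics(t).latencyMs)` would yield hourly average latency under these assumptions.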
Charts can be analyzed over different time windows. Choose among the available time windows:
[Image: Monitor charts granularity in Adaline]
Hover over a data point to see the value of the metric at that precise time:
[Image: Monitor charts data points values in Adaline]