Monitoring & Traceability
Assistant-Engine provides extensive monitoring and traceability features based on your subscription plan. You can track and analyze all AI interaction history, including messages, instructions, context, tool usage, and detailed API call (timing) information. The platform ensures full visibility into the execution of tasks and assistants, allowing you to debug and optimize performance efficiently. Additionally, Assistant-Engine helps monitor and manage LLM usage, offering insights into potential cost drivers and enabling more effective resource allocation.
Monitoring Metrics
You can monitor various metrics and information such as:
- Timings (only in Pro Plan): Track how long each step of a run takes to execute. See run steps and run duration, as well as tokens-per-second counts and statuses (`incomplete`, `completed`).
- Tool Calls: Monitor the tools called in every single run and their request and response details.
- LLM Configuration: For every message sent you can view historic information about Instructions, Context, Tools, User Message, Chat History & LLM Response used as well as the total Raw Data sent to and received from the LLM.
- Prompt Tokens and Completion Tokens: Token counts used in each request to and response from the LLM.
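As a rough illustration of how these metrics relate, here is a minimal sketch in Python. The record shape and field names are our own assumptions for illustration, not the Assistant-Engine API:

```python
# Hypothetical per-run record; field names are illustrative only.
# Token counts and duration correspond to the monitoring views above.
run = {
    "prompt_tokens": 842,      # tokens sent to the LLM
    "completion_tokens": 156,  # tokens returned by the LLM
    "duration_s": 3.9,         # run duration in seconds (Pro Plan "Timings")
    "status": "completed",
}

# Total token count is simply prompt plus completion tokens.
total_tokens = run["prompt_tokens"] + run["completion_tokens"]

# Tokens per second measures LLM output speed over the run duration.
tokens_per_second = run["completion_tokens"] / run["duration_s"]

print(total_tokens)                  # 998
print(round(tokens_per_second, 1))   # 40.0
```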
Task Execution Monitoring
- Go to the Tasks section within your project.
- Select a task and click on the History tab to view its execution history.
- You will see a list of all the times the Task was executed, along with initial details about:
- Date and Time of conversation
- Status (`complete`, `incomplete`, `error`, `initializing`)
- Steps: Number of steps executed for the task
- Tools: Number of Tool Calls during task execution
- Duration: Total time of task execution (empty if not `complete`)
- Click any table entry to get further information on its specific conversation-id including:
- Entire conversation(-id) history
- Input and output message
- Total Prompt Tokens: Aggregated outgoing token count to the LLM for the selected task
- Total Completion Tokens: Aggregated incoming token count from the LLM for the selected task
- Total Token Count: `Total Prompt Tokens` + `Total Completion Tokens`
- Last Run Status: Status of the last run in the conversation (`complete`, `incomplete`, `error`, `initializing`)
- Instruction Context or error message
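The conversation-level totals above amount to a simple aggregation over the runs in a conversation. A minimal sketch, assuming a hypothetical list-of-runs shape (not the platform's actual schema):

```python
# Hypothetical runs belonging to one conversation; fields are illustrative.
runs = [
    {"prompt_tokens": 500, "completion_tokens": 120, "status": "complete"},
    {"prompt_tokens": 640, "completion_tokens": 95,  "status": "complete"},
    {"prompt_tokens": 710, "completion_tokens": 0,   "status": "error"},
]

# Aggregated outgoing and incoming token counts for the conversation.
total_prompt = sum(r["prompt_tokens"] for r in runs)
total_completion = sum(r["completion_tokens"] for r in runs)
total_tokens = total_prompt + total_completion

# "Last Run Status" is the status of the most recent run.
last_run_status = runs[-1]["status"]

print(total_prompt, total_completion, total_tokens, last_run_status)
# 1850 215 2065 error
```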
Click the input or output message to get additional information on the specific run for the task including:
- Run Configuration: Details about Model, Token limits and Truncation Strategy
- Run Results: Detailed run information about Status, Duration and Token counts
- LLM Data: All information processed for the specific run
- Instruction: Instruction valid at the time of run
- Context: Context as defined at the time of run
- Tools: Tools assigned to and used by Task at the time of run
- Chat: User Message used in run as well as Response
- Raw: The entire LLM input data assembled from the aforementioned details, as well as the raw LLM output data.
Pro Plan only
- Click the `Steps` tab to get detailed information broken down by the steps taken during task execution:
- Step Type: `Message Creation`, `Tool Calls`
- Processing Status: `complete`, `incomplete`, `error`, `initializing`
- Step Duration: in seconds
- Tokens per second: LLM output/execution speed
- Token counts: Prompt Tokens, Completion Tokens and Total Tokens for this step of the run
- Tool Calls: Request and Response information for the individual tool call
Pro Plan only
- Click the `Timing` tab to get visual insights into detailed runtimes for each action.
Assistant Execution Monitoring
- Go to the Assistants section within your project.
- Select an Assistant and click on the History tab to view its execution history.
- You will see a list of all the times the Assistant was executed, along with details such as:
- Date and Time of conversation
- Conversation id
- User
- Status (`complete`, `incomplete`, `error`, `initializing`)
- Message
- Click any table entry to get further information on its specific conversation-id including:
- Entire conversation(-id) history
- Context and User Information
- Conversation status: `active`, `inactive`
- Total Prompt Tokens: Aggregated outgoing token count to the LLM for the selected conversation
- Total Completion Tokens: Aggregated incoming token count from the LLM for the selected conversation
- Total Token Count: `Total Prompt Tokens` + `Total Completion Tokens`
- Last Run Status: Status of the last run in the conversation (`complete`, `incomplete`, `error`, `initializing`)
- Instruction Context or error message
- Select a message to get additional information on the specific run for that message including:
- Run Configuration: Details about Model, Token limits and Truncation Strategy
- Run Results: Detailed run information about Status, Duration and Token counts
- LLM Data: All information processed for the specific run
- Instruction: Instruction valid at the time of run
- Context: Context as defined at the time of run
- Tools: Tools assigned to and used by Assistant at the time of run
- Chat: Chat History and User Message used in run as well as Response
- Raw: The entire LLM input data assembled from the aforementioned details, as well as the raw LLM output data.
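To make the relationship between these tabs concrete: the Instruction, Context, Tools and Chat pieces together form what is sent to the LLM. The sketch below uses a generic chat-completion shape for illustration; the actual raw payload depends on the LLM configured for the Assistant, and all names here are our assumptions:

```python
# Illustrative pieces, mirroring the "LLM Data" tabs described above.
instruction = "You are a support assistant."   # Instruction tab
context = "Customer tier: premium."            # Context tab
chat_history = [                               # Chat tab: history
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]
user_message = "Where is my order?"            # Chat tab: user message

# A generic chat-completion-style request, assembled from the tabs.
# The real raw format shown under "Raw" depends on the configured LLM.
raw_input = {
    "messages": (
        [{"role": "system", "content": f"{instruction}\n\n{context}"}]
        + chat_history
        + [{"role": "user", "content": user_message}]
    ),
    "tools": [],  # Tools tab: tool definitions assigned at the time of the run
}

print(len(raw_input["messages"]))  # 4
```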
Pro Plan only
- Click the `Steps` tab to get detailed information broken down into the steps taken:
- Step Type: `Message Creation`, `Tool Calls`
- Processing Status: `complete`, `incomplete`, `error`, `initializing`
- Step Duration: in seconds
- Tokens per second: LLM output/execution speed
- Token counts: Prompt Tokens, Completion Tokens and Total Tokens for this step of the run
- Tool Calls: Request and Response information for the individual tool call
Pro Plan only
- Click the `Timing` tab to get visual insights into detailed runtimes for each action.
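Since each conversation in the history carries a status and a last-run status, the list can be scanned for active conversations whose most recent run failed. A minimal sketch over illustrative records (field names are ours, not the platform's):

```python
# Hypothetical conversation-history rows; fields are illustrative only.
conversations = [
    {"id": "conv-1", "status": "active",   "last_run_status": "complete"},
    {"id": "conv-2", "status": "inactive", "last_run_status": "error"},
    {"id": "conv-3", "status": "active",   "last_run_status": "error"},
]

# Active conversations whose last run errored are likely debugging targets.
needs_attention = [
    c["id"] for c in conversations
    if c["status"] == "active" and c["last_run_status"] == "error"
]

print(needs_attention)  # ['conv-3']
```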
Tool Execution Monitoring
- Go to the Tools section within your project.
- Select a Tool and click on the Tool Calls tab to view its execution history.
- You will see a list of all the times the Tool was called, along with details such as:
- Task or Assistant that triggered tool call
- Conversation id
- Response status
- Request function
- Date and Time of call
- Click any table entry to get further information on the specific tool execution.
- The `API-Request` tab provides additional information.
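The columns listed above can be mirrored in a simple record per tool call, which makes it easy to scan the history for failed calls. The shape and field names below are illustrative assumptions, not the Assistant-Engine schema:

```python
# Hypothetical tool-call history entries, mirroring the columns above.
tool_calls = [
    {"triggered_by": "Task: order-lookup", "conversation_id": "conv-1",
     "response_status": 200, "request_function": "get_order",
     "timestamp": "2024-05-01T10:00:00Z"},
    {"triggered_by": "Assistant: support", "conversation_id": "conv-2",
     "response_status": 500, "request_function": "get_order",
     "timestamp": "2024-05-01T10:05:00Z"},
]

# Filter for calls whose response status indicates a failure.
failures = [c for c in tool_calls if c["response_status"] >= 400]

print(len(failures), failures[0]["conversation_id"])  # 1 conv-2
```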