Tracing
Basic concept
Tracing is the process of logging interactions with an LLM together with a variety of metadata. Tracing in GuardOps is based on OpenTelemetry. Interactions are traced automatically in the Playground (with some restrictions) for further use in GuardOps, and tracing can be added to your existing Python projects using the Python Library.
Tracing inside the Playground
The Playground has two versions: chat based and prompt based. Tracing works differently depending on which version you are using.
Prompt version
Tracing in the prompt version depends on whether a project is selected in the top right corner. If a project is selected, every interaction is automatically traced and stored in that project. If no project is selected, no traces are stored.
Chat version
Tracing in the chat version never happens automatically. The user always has to explicitly decide whether to store the chat in a project, by selecting a project and clicking the save button on any open chat window. Note that clicking the save button always stores all parallel chats, not just the one it belongs to.
Tracing using the Python Library
The open source Python Library provides an easy-to-set-up tracing component. In essence, it works by connecting your GuardOps IDs with a config and creating a tracer that can be attached to your LLM calls using callbacks. Refer to the How-To-Guide for detailed examples for the two common frameworks LangChain and LlamaIndex.
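The general pattern described above can be sketched as follows. Note that all class, function, and parameter names in this sketch are hypothetical illustrations of the pattern (a config holding your GuardOps IDs, a tracer built from it, a callback attached to the LLM call), not the actual GuardOps Python Library API; see the How-To-Guide for the real setup.

```python
# Illustrative sketch only: names below are hypothetical and do NOT
# reflect the actual GuardOps Python Library API.
from dataclasses import dataclass, field


@dataclass
class GuardOpsConfig:  # hypothetical stand-in for the real config object
    project_id: str
    api_key: str


@dataclass
class Tracer:  # hypothetical stand-in for the real tracer
    config: GuardOpsConfig
    traces: list = field(default_factory=list)

    def on_llm_call(self, prompt: str, response: str) -> None:
        """Callback invoked after each LLM call; records one trace."""
        self.traces.append(
            {"project": self.config.project_id,
             "prompt": prompt,
             "response": response}
        )


def call_llm(prompt: str, callbacks=()) -> str:
    """Placeholder LLM call that notifies all registered callbacks."""
    response = f"echo: {prompt}"  # stands in for a real model response
    for cb in callbacks:
        cb(prompt, response)
    return response


# Usage: build the tracer from the config and attach it as a callback.
config = GuardOpsConfig(project_id="my-project", api_key="...")
tracer = Tracer(config)
call_llm("Hello", callbacks=[tracer.on_llm_call])
```

In the real library, the callback object would be passed to your LangChain or LlamaIndex calls instead of the placeholder `call_llm` shown here.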
Trace Retention
Retention is a feature that allows the user to define a per project rule that determines whether traces should be automatically deleted after a certain number of days. This is optional; if no value is provided, traces are stored indefinitely. The technical solution to this is explained in the introduction.
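The retention rule can be illustrated with a short sketch. This is not the actual GuardOps implementation; it only shows the semantics described above: a retention value of `None` means traces are kept indefinitely, while an integer means traces older than that many days are deleted.

```python
# Illustration of the per-project retention rule; not the actual
# GuardOps implementation.
from datetime import datetime, timedelta, timezone


def apply_retention(traces, retention_days):
    """Return only the traces that survive the project's retention rule."""
    if retention_days is None:  # no rule configured: keep everything
        return traces
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return [t for t in traces if t["created_at"] >= cutoff]


now = datetime.now(timezone.utc)
traces = [
    {"id": 1, "created_at": now - timedelta(days=40)},  # older than 30 days
    {"id": 2, "created_at": now - timedelta(days=5)},
]

kept = apply_retention(traces, 30)        # drops trace 1
kept_all = apply_retention(traces, None)  # keeps both indefinitely
```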