Tracing
Basic concept
Tracing is the process of logging interactions with an LLM, including a variety of metadata. Tracing in GuardOps is based on OpenTelemetry. In the Playground, interactions are traced automatically (with some restrictions); to your existing Python projects, tracing can be added using the Python Library. This section provides detailed examples of how to trace your existing Python project.
Tracing using the Python Library
The open source Python Library provides a tracing component that is easy to set up. In essence, it works by combining your GuardOps IDs into a config and creating a tracer that can be attached to your
LLM calls using callbacks. Below are examples for the two common frameworks Langchain and Llama Index.
To get the parameters for the config, or for any other component of the framework that connects to backend data, click the 🗲 lightning symbol on the corresponding card and copy the data from there.
Langchain
Langchain tracing is straightforward and only requires a few additional lines of code. Follow these steps to integrate tracing into your project.
Create a Project
Go to the Projects or Home page and create a new project in which the traces should be stored.
Create a new Tracing key
Go to Account → API-Management, create a new secret key and copy it.
Copy and save the key immediately! You will not be able to restore the key if you forget it, since the frontend only shows a hint.
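Since the key cannot be recovered later, it is good practice not to hardcode it in your source. A minimal sketch, assuming the key is stored in an environment variable named GUARDOPS_TRACING_KEY (the variable name is illustrative, not prescribed by the library):

import os
from guardops.config.config import Config

# Read the tracing key from the environment instead of hardcoding it;
# the variable name GUARDOPS_TRACING_KEY is an example
Config.set_tracing_key(os.environ["GUARDOPS_TRACING_KEY"])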
Retrieve the IDs
After you have created your project, click the 🗲 lightning symbol to open a popup. The popup contains dummy code with the required IDs; copy them into the placeholders below.
Complete your code
The code below shows a minimal example with Langchain. In essence, you create your config, create a new tracer from that config, and attach the tracer to the callbacks when creating your LLM instance.
from guardops.tracing.tracer import Tracer
from guardops.config.config import Config
from langchain_openai import ChatOpenAI
# Set up the config
Config.set_project_id("<your_project_id>")
Config.set_tracing_key("<your_tracing_key>")
Config.set_user_id("<your_user_id>")
# Create the tracer with the config
tracer = Tracer(config=Config)
# Create your LLM as usual, with the tracer attached as a callback
llm = ChatOpenAI(model="<your_desired_model>", api_key="<your_api_key>", callbacks=[tracer])
# Call it as usual; the tracer records the interaction
llm.invoke("Reply with the word Tracing2")