Quickstart
Get your first log in the BEval dashboard in five minutes.
1. Install
```bash
pip install bolder-ai
```

Requires Python 3.9+.
2. Get an API key
Open your BEval dashboard → Settings → API Keys → Create key. Copy the key (starts with bv_...).
An API key is scoped to a tenant and optionally a project. You can create multiple keys — one per service is a good default.
3. Set environment variables
```bash
export BEVAL_API_KEY=bv_...
export BEVAL_API_URL=https://ai-gateway.bolder.services  # default, optional
export BEVAL_PROJECT_ID=...                              # optional
```

4. Initialize and log
```python
import beval

beval.init()  # reads env vars

beval.log(
    kind="llm",
    model_id="gpt-4o-mini",
    input="What is the capital of France?",
    output="Paris.",
    latency_ms=312,
    tokens_in=7,
    tokens_out=2,
)

beval.flush()  # optional — waits for the background queue to drain
```

Run it. Open the dashboard. The log shows up within a second.
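Because logs are shipped from a background queue, `beval.log` returns immediately and `flush()` is what guarantees everything is sent before a short-lived script exits. A minimal sketch of that queue-and-flush pattern (illustrative only, not the SDK's actual internals; `LogQueue` and `shipped` are invented here):

```python
import queue
import threading

class LogQueue:
    """Toy version of the non-blocking log queue behind log()/flush()."""

    def __init__(self):
        self._q = queue.Queue()
        self.shipped = []  # stands in for "sent over the network"
        threading.Thread(target=self._drain, daemon=True).start()

    def log(self, event: dict) -> None:
        self._q.put(event)  # returns immediately; the caller never blocks

    def _drain(self) -> None:
        while True:
            event = self._q.get()
            self.shipped.append(event)  # a real SDK would POST a batch here
            self._q.task_done()

    def flush(self) -> None:
        self._q.join()  # block until every queued event has been processed

logq = LogQueue()
logq.log({"kind": "llm", "model_id": "gpt-4o-mini"})
logq.flush()  # after this returns, the event has been shipped
```

This is why skipping `flush()` is fine in a long-running server (the queue drains continuously) but risky in a one-shot script.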
5. Auto-wrap your existing client
If you already use OpenAI or Anthropic, skip manual logging entirely:
```python
import beval
from openai import OpenAI

beval.init()
client = beval.wrap(OpenAI())

client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello in one word."}],
)
```

Every `chat.completions.create` call is now logged. Input messages, output, model, token counts, latency, errors — all captured automatically. Image parts are detected and logged as `kind="vlm"`.
6. Trace agent functions
```python
@beval.trace
def run_agent(query: str) -> str:
    # ...
    return answer

@beval.trace(name="tool:search", kind="agent")
async def search(q): ...
```

Captures arguments, return value, latency, and exceptions.
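Conceptually, a decorator like this wraps the function, records a trace record in a `finally` block so latency and errors are captured even when the call raises, and supports both the bare and the parameterized form. A sketch under those assumptions (`trace` and `traces` below are illustrative stand-ins, not the SDK's code; async support is omitted for brevity):

```python
import functools
import time

traces = []  # stand-in sink for trace records

def trace(fn=None, *, name=None, kind="agent"):
    """Toy tracer supporting both @trace and @trace(name=..., kind=...)."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            record = {"name": name or func.__name__, "kind": kind,
                      "args": args, "kwargs": kwargs}
            start = time.perf_counter()
            try:
                record["output"] = func(*args, **kwargs)
                return record["output"]
            except Exception as exc:
                record["error"] = repr(exc)  # exceptions are recorded, then re-raised
                raise
            finally:
                record["latency_ms"] = (time.perf_counter() - start) * 1000
                traces.append(record)
        return wrapper
    # bare @trace passes the function directly; @trace(...) returns the decorator
    return decorate(fn) if fn is not None else decorate

@trace
def run_agent(query):
    return query.upper()

run_agent("hello")
```

The `finally` placement is the key design choice: a failed call still produces a trace with its latency and error attached, rather than disappearing.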
You’re done
Everything else in these docs is optional. Common next steps:
- Redaction — strip PII before logs are sent
- Images — attach screenshots or photos to VLM logs
- Custom project ID per call — override the default project