# Traces
The Traces tab shows every API call routed through your proxy endpoints. Each trace captures the full round-trip: what you sent, what Claude returned, how many tokens were used, and what it cost.
## Traces list

Navigate to Traces in the dashboard to see all API calls, newest first.
Each row shows:
| Column | Description |
|---|---|
| Timestamp | When the request was made (click to view detail) |
| Model | The model that handled the request |
| Tokens | Input / output token counts |
| Cost | Calculated cost based on model pricing |
| Duration | Round-trip time to Anthropic |
| Status | HTTP status code (200 = success) |
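The Cost column is derived from the token counts and per-model pricing. A minimal sketch of that calculation, using illustrative per-million-token rates (placeholders, not Anthropic's actual pricing):

```python
# Illustrative per-million-token rates in USD. Real pricing differs by model
# and changes over time -- these numbers are placeholders only.
PRICING = {
    "claude-sonnet": {"input": 3.00, "output": 15.00},
    "claude-haiku": {"input": 0.80, "output": 4.00},
}

def trace_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one trace: tokens / 1M * rate, summed over input and output."""
    rates = PRICING[model]
    return (input_tokens / 1_000_000 * rates["input"]
            + output_tokens / 1_000_000 * rates["output"])

print(f"${trace_cost('claude-sonnet', 1_200, 300):.4f}")  # $0.0081
```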
## Filtering

Use the filter bar at the top to narrow results:
- Model — filter by model name (e.g. “opus”, “sonnet”, “haiku”)
- From / To — date range filter
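The same narrowing can be expressed over exported trace data. A sketch assuming a hypothetical list of trace records shaped like the table columns above:

```python
from datetime import datetime, timezone

# Hypothetical trace records, mirroring the traces list columns.
traces = [
    {"timestamp": datetime(2024, 5, 1, tzinfo=timezone.utc), "model": "claude-3-opus"},
    {"timestamp": datetime(2024, 5, 3, tzinfo=timezone.utc), "model": "claude-3-haiku"},
    {"timestamp": datetime(2024, 5, 7, tzinfo=timezone.utc), "model": "claude-3-opus"},
]

def filter_traces(traces, model=None, start=None, end=None):
    """Apply the filter bar's logic: model substring match plus date range."""
    out = []
    for t in traces:
        if model and model not in t["model"]:
            continue
        if start and t["timestamp"] < start:
            continue
        if end and t["timestamp"] > end:
            continue
        out.append(t)
    return out

hits = filter_traces(traces, model="opus", end=datetime(2024, 5, 5, tzinfo=timezone.utc))
print(len(hits))  # 1: only the May 1 opus trace matches both filters
```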
## Trace detail

Click any trace to see the full request and response.
### Token breakdown

The token card shows:
- Input tokens — tokens in the prompt
- Output tokens — tokens in the response
- Cache created — tokens used for prompt cache creation
- Cache read — tokens read from prompt cache
- Total cost — calculated from model pricing
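With caching in play, total cost combines all four counts at different rates. A sketch with illustrative placeholder rates (cache writes are usually billed above base input, cache reads well below it):

```python
# Placeholder rates in USD per million tokens -- not official pricing.
RATES = {"input": 3.00, "output": 15.00, "cache_write": 3.75, "cache_read": 0.30}

def breakdown_cost(input_t, output_t, cache_created_t, cache_read_t):
    """Sum the four token-card line items at their respective rates."""
    def per_m(n, key):
        return n / 1_000_000 * RATES[key]
    return (per_m(input_t, "input") + per_m(output_t, "output")
            + per_m(cache_created_t, "cache_write") + per_m(cache_read_t, "cache_read"))

# Warm cache: most of the prompt is read back cheaply instead of re-billed as input.
print(round(breakdown_cost(200, 500, 0, 10_000), 6))  # 0.0111
```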
### Request panel

The left panel parses your request JSON and shows:
- Model and parameters — model name, max_tokens, thinking config
- System prompt — collapsible, so long system prompts don’t dominate the view
- Tools — list of tool definitions sent to the model
- Messages — the conversation, with role badges (user/assistant)
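The grouping the panel performs can be sketched as a small parser over a Messages API request body (the `summarize_request` helper below is hypothetical, not part of the product):

```python
import json

# A trimmed-down Messages API request body, as the request panel might receive it.
raw = json.dumps({
    "model": "claude-sonnet-4",
    "max_tokens": 1024,
    "system": "You are a terse assistant.",
    "tools": [{"name": "get_weather", "description": "Look up weather",
               "input_schema": {"type": "object"}}],
    "messages": [{"role": "user", "content": "Hi"}],
})

def summarize_request(body: str) -> dict:
    """Pull out the fields the panel groups: params, system, tools, messages."""
    req = json.loads(body)
    return {
        "model": req.get("model"),
        "max_tokens": req.get("max_tokens"),
        "system": req.get("system", ""),
        "tool_names": [t["name"] for t in req.get("tools", [])],
        "roles": [m["role"] for m in req.get("messages", [])],
    }

print(summarize_request(raw)["tool_names"])  # ['get_weather']
```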
### Response panel

The right panel shows what Claude returned:
- Streaming responses — SSE events are parsed and reconstructed into the final message, showing text output, thinking blocks, and tool use calls
- Non-streaming responses — JSON is formatted and displayed
- Errors — red banner with the error body for failed requests (4xx/5xx)
- Raw toggle — click “Show raw SSE” to see the original SSE event stream
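Reconstructing a streamed response amounts to concatenating the deltas carried in the `data:` lines. A minimal sketch over a tiny transcript in the Messages API streaming format (text deltas only; real streams also carry message_start/stop, thinking, and tool-use events):

```python
import json

# A tiny SSE transcript with two text deltas and a stop event.
sse = """\
event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "Hel"}}

event: content_block_delta
data: {"type": "content_block_delta", "index": 0, "delta": {"type": "text_delta", "text": "lo!"}}

event: message_stop
data: {"type": "message_stop"}
"""

def reconstruct_text(stream: str) -> str:
    """Concatenate text_delta payloads from data: lines into the final message."""
    parts = []
    for line in stream.splitlines():
        if not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        delta = event.get("delta", {})
        if delta.get("type") == "text_delta":
            parts.append(delta["text"])
    return "".join(parts)

print(reconstruct_text(sse))  # Hello!
```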
## Common patterns

### Debugging a failed request

Filter traces by status code. Click a failed trace to see the error body in the response panel. The request panel shows exactly what was sent, so you can reproduce the issue.
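Over exported trace data, isolating failures is a one-liner on the Status column (the record shape here is hypothetical):

```python
# Hypothetical trace rows; status is the HTTP code from the Status column.
traces = [
    {"id": "t1", "status": 200},
    {"id": "t2", "status": 429},
    {"id": "t3", "status": 500},
]

# 4xx/5xx are the failed requests worth opening in the trace detail view.
failed = [t for t in traces if t["status"] >= 400]
print([t["id"] for t in failed])  # ['t2', 't3']
```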
### Understanding cache behavior

Look at the token breakdown. High cache_read tokens with low cache_creation tokens means your prompt cache is warm, so subsequent requests are cheaper. If you see high cache_creation on every request, your prompts may be changing too frequently.
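That warm/cold distinction can be expressed as a simple heuristic over the two cache counts (the 90% threshold below is an arbitrary illustration, not a product rule):

```python
def cache_state(cache_created: int, cache_read: int) -> str:
    """Rough heuristic from the token breakdown: warm cache = mostly reads."""
    total = cache_created + cache_read
    if total == 0:
        return "no caching"
    if cache_read / total >= 0.9:
        return "warm"  # cheap: the prompt prefix is being reused
    return "cold"      # expensive: the cache is rebuilt, prompt may be changing

print(cache_state(50, 9_950))   # warm
print(cache_state(8_000, 100))  # cold
```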
### Tracking cost by project

Create separate proxy endpoints for each project or environment (e.g. “production”, “staging”, “dev”). Each endpoint’s usage is tracked independently and visible in the traces filter.
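Per-endpoint totals then fall out of a simple aggregation over the traces (again assuming a hypothetical exported record shape with an endpoint tag):

```python
from collections import defaultdict

# Hypothetical traces tagged with the proxy endpoint that handled them.
traces = [
    {"endpoint": "production", "cost": 0.012},
    {"endpoint": "staging", "cost": 0.003},
    {"endpoint": "production", "cost": 0.020},
]

# Sum cost per endpoint to get a per-project/environment spend breakdown.
totals = defaultdict(float)
for t in traces:
    totals[t["endpoint"]] += t["cost"]

print({k: round(v, 6) for k, v in totals.items()})
```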