TROPS Documentation
Complete guide to integrating and using the TROPS LLM observability platform
Quickstart
Get started with TROPS in under 10 minutes. Install the SDK and make your first request.
API Reference
Complete API documentation with request/response examples and error codes.
API Explorer
Interactive OpenAPI documentation with embedded explorer.
Authentication
Learn about API keys, security best practices, and rate limiting.
Analytics
Track usage, monitor performance, and analyze your LLM requests.
Quickstart
Installation
Install the OpenAI SDK in your project:
# Python
pip install openai
# Node.js
npm install openai
Get Your API Key
- Sign in to TROPS Console
- Create a new project
- Navigate to Settings → API Keys
- Generate a new API key
Initialize the Client
Configure the OpenAI client to use TROPS as a proxy by setting the base_url:
Python:
from openai import OpenAI
client = OpenAI(
    api_key="your_trops_api_key_here",
    base_url="https://api.trops.dev/v1"
)
Node.js:
import OpenAI from 'openai';
const client = new OpenAI({
  apiKey: 'your_trops_api_key_here',
  baseURL: 'https://api.trops.dev/v1'
});
Make Your First Request
Once configured, use the OpenAI SDK as normal. All requests will be routed through TROPS for observability:
Python:
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Hello, world!"}
    ]
)
print(response.choices[0].message.content)
Node.js:
const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [
    { role: 'user', content: 'Hello, world!' }
  ]
});
console.log(response.choices[0].message.content);
Note: Your TROPS API key is used for authentication with TROPS. You'll need to configure your actual LLM provider API keys (OpenAI, Anthropic, etc.) in the TROPS Console under Settings → Provider Keys.
API Reference
Base URL
Production: https://api.trops.dev
Chat Completions
POST /v1/chat/completions
Request Headers:
Content-Type: application/json
Authorization: Bearer your_api_key_here
Request Body:
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 150,
  "stream": false
}
Response:
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 8,
    "total_tokens": 28
  }
}
Streaming Responses
Set "stream": true to receive responses as Server-Sent Events.
Python Example:
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
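If you need the full message once the stream ends, the incremental delta fields can be accumulated as they arrive. A minimal sketch, treating chunks as plain dicts for illustration (the real SDK yields chunk objects with the same shape):

```python
def join_stream(chunks):
    """Concatenate the delta.content pieces of a streamed completion."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        # The first chunk usually carries only the role, and the final
        # chunk an empty delta with a finish_reason; both are skipped.
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Hand-written chunks standing in for a real stream:
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Once upon"}}]},
    {"choices": [{"delta": {"content": " a time."}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(join_stream(chunks))  # Once upon a time.
```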
Analytics API
GET /v1/analytics/requests
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| page | integer | Page number (default: 1) |
| page_size | integer | Items per page (default: 50, max: 100) |
| status | string | Filter by status: "ok" or "error" |
| model | string | Filter by model name |
| start_date | ISO 8601 | Start date filter |
| end_date | ISO 8601 | End date filter |
Response Example:
{
  "items": [
    {
      "id": "req_xxx",
      "request_id": "chatcmpl-xxx",
      "model": "gpt-4o-mini",
      "status": "ok",
      "latency_ms": 234,
      "prompt_tokens": 20,
      "completion_tokens": 45,
      "total_tokens": 65,
      "cost_cents": 0.0013,
      "created_at": "2025-01-16T12:00:00Z"
    }
  ],
  "total": 1250,
  "page": 1,
  "page_size": 50
}
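Because results are paginated, fetching a complete export means walking the page parameter until all total items have been seen. A sketch of that loop, with the HTTP call abstracted behind a fetch_page callable (a hypothetical helper, not part of the API) so the paging logic stands alone:

```python
def iter_all_requests(fetch_page, page_size=100):
    """Yield every item from a paginated endpoint.

    fetch_page(page, page_size) must return a dict shaped like the
    /v1/analytics/requests response: {"items": [...], "total": N, ...}.
    """
    page = 1
    while True:
        data = fetch_page(page, page_size)
        yield from data["items"]
        # Stop once we've walked past the last page.
        if page * page_size >= data["total"]:
            break
        page += 1

# Demo with a stub standing in for the real HTTP call:
def fake_fetch(page, page_size):
    items = [{"id": f"req_{i}"} for i in range(120)]
    start = (page - 1) * page_size
    return {"items": items[start:start + page_size], "total": len(items)}

print(sum(1 for _ in iter_all_requests(fake_fetch)))  # 120
```

In real use, fetch_page would wrap requests.get with the Authorization header and the page/page_size query parameters from the table above.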
Analytics Overview
GET /v1/analytics/overview
Query Parameters:
- window (string): Time window - "1h", "24h", "7d", "30d"
- bucket (string): Bucket size - "minute", "hour", "day"
Response Example:
{
  "buckets": [
    {
      "timestamp": "2025-01-16T12:00:00Z",
      "request_count": 145,
      "error_count": 2,
      "error_rate": 0.014,
      "avg_latency_ms": 234,
      "p95_latency_ms": 456,
      "total_cost_cents": 12.34,
      "prompt_tokens": 2340,
      "completion_tokens": 3450,
      "cache_hit_rate": 0.23
    }
  ]
}
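The per-bucket fields can also be rolled up client-side, for example to compute an error rate across the whole window rather than per bucket. A small sketch over the bucket shape shown above:

```python
def overall_error_rate(buckets):
    """Window-wide error rate from per-bucket request/error counts."""
    requests = sum(b["request_count"] for b in buckets)
    errors = sum(b["error_count"] for b in buckets)
    return errors / requests if requests else 0.0

# Two buckets: 200 requests total, 4 errors -> 2% overall.
buckets = [
    {"request_count": 145, "error_count": 2},
    {"request_count": 55, "error_count": 2},
]
print(overall_error_rate(buckets))  # 0.02
```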
Authentication
API Keys
TROPS uses API keys to authenticate requests. Each project has its own set of API keys.
Using API Keys
Include your API key in the Authorization header:
Authorization: Bearer trops_sk_xxx
cURL Example:
curl https://api.trops.dev/v1/chat/completions \
  -H "Authorization: Bearer trops_sk_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
Security Best Practices
- Never commit API keys to version control
- Use environment variables to store keys
- Use different keys for different environments (dev, staging, prod)
- Rotate keys regularly (every 90 days recommended)
- Restrict key permissions to only what's needed
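Following the practices above, a typical client reads the key from the environment rather than hard-coding it. A minimal sketch (TROPS_API_KEY is an illustrative variable name, not one the platform requires):

```python
import os

def auth_headers(env_var="TROPS_API_KEY"):
    """Build the Authorization header from an environment variable,
    failing fast if the key isn't configured."""
    api_key = os.environ.get(env_var)
    if not api_key:
        raise RuntimeError(f"{env_var} is not set")
    return {"Authorization": f"Bearer {api_key}"}

# Usage (after e.g. `export TROPS_API_KEY=trops_sk_xxx` in the shell;
# setdefault here only keeps the demo self-contained):
os.environ.setdefault("TROPS_API_KEY", "trops_sk_demo")
print(auth_headers())
```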
Rate Limiting
API keys are subject to rate limits based on your plan:
| Plan | Rate Limit |
|---|---|
| Free Tier | 60 requests/minute |
| Pro | 600 requests/minute |
| Enterprise | Custom limits |
Rate limit headers are included in responses:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1234567890
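When X-RateLimit-Remaining hits zero, X-RateLimit-Reset (a Unix timestamp) tells you when the window reopens, so a client can wait before retrying instead of burning requests on 429s. A sketch of the wait computation, assuming the header values parse as plain integers:

```python
import time

def seconds_until_reset(headers, now=None):
    """How long to wait before the next request, per rate-limit headers."""
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    if remaining > 0:
        return 0  # budget left; no need to wait
    reset_at = int(headers["X-RateLimit-Reset"])
    return max(0, reset_at - now)

headers = {"X-RateLimit-Remaining": "0", "X-RateLimit-Reset": "1234567890"}
print(seconds_until_reset(headers, now=1234567880))  # 10
```

A real client would call time.sleep() on the returned value and retry the request.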
Analytics & Monitoring
Tracked Metrics
- Request Count - Total number of requests
- Error Rate - Percentage of failed requests
- Latency - Response time (avg, p75, p90, p95, p99)
- Token Usage - Prompt, completion, and total tokens
- Cost - Total and per-request costs
- Cache Hit Rate - Percentage of cached responses
Viewing Analytics
Access analytics through:
- Console Dashboard - Visual charts and graphs at app.trops.dev
- Analytics API - Programmatic access to metrics
Accessing Analytics via API
Use standard HTTP requests to query analytics data:
Python Example:
import requests
headers = {
"Authorization": "Bearer trops_sk_xxx"
}
# Get overview for last 24 hours
response = requests.get(
"https://api.trops.dev/v1/analytics/overview",
headers=headers,
params={"window": "24h", "bucket": "hour"}
)
data = response.json()
for bucket in data["buckets"]:
print(f"{bucket['timestamp']}: {bucket['request_count']} requests")
Error Handling
All errors follow this format:
{
  "error": {
    "type": "invalid_request_error",
    "message": "Invalid API key provided",
    "code": "invalid_api_key"
  }
}
Common Error Codes:
- invalid_api_key - API key is missing or invalid
- rate_limit_exceeded - Too many requests
- quota_exceeded - Monthly quota limit reached
- invalid_request_error - Malformed request
- model_not_found - Requested model doesn't exist
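Client code can branch on the code field to decide whether to retry or surface the failure. A minimal sketch over the error format above (the retry classification is an assumption for illustration, not platform guidance):

```python
# Codes worth retrying after a backoff; everything else needs a fix
# on the caller's side (bad key, bad request, exhausted quota).
RETRYABLE = {"rate_limit_exceeded"}

def classify_error(body):
    """Return ("retry" | "fail", message) for an error response body."""
    err = body["error"]
    action = "retry" if err["code"] in RETRYABLE else "fail"
    return action, err["message"]

body = {
    "error": {
        "type": "invalid_request_error",
        "message": "Invalid API key provided",
        "code": "invalid_api_key",
    }
}
print(classify_error(body))  # ('fail', 'Invalid API key provided')
```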