Request Logs
Request logs provide detailed visibility into every API request made through LangMart. Use them to debug issues, analyze usage patterns, and understand costs.
Viewing Request Logs
Dashboard View
- Navigate to Analytics > Request Logs in the sidebar
- View the list of recent requests with summary information
- Click on any request to see full details
API Access
```bash
# List request logs
curl -X GET "https://api.langmart.ai/api/account/request-logs?limit=50" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get enhanced logs with token usage
curl -X GET "https://api.langmart.ai/api/account/request-logs/enhanced" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get specific log details
curl -X GET "https://api.langmart.ai/api/account/request-logs/{log_id}/details" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Log Details
Basic Information
Each log entry contains:
| Field | Description |
|---|---|
| `id` | Unique request identifier |
| `created_at` | Timestamp of the request |
| `method` | HTTP method (POST, GET) |
| `endpoint` | API endpoint called |
| `response_status` | HTTP status code |
| `ip_address` | Client IP address |
| `user_agent` | Client user agent |
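For programmatic access, these fields can be read straight from the list endpoint's JSON. Below is a minimal sketch in Python (using the `requests` library) that prints a one-line summary per request; it assumes the `success`/`data` envelope shown in the pagination example later on this page.
```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

# Fetch the 50 most recent request logs
resp = requests.get(
    f"{BASE_URL}/api/account/request-logs",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"limit": 50},
)
resp.raise_for_status()
logs = resp.json().get("data", [])

# Print a one-line summary per request using the basic fields above
for log in logs:
    print(
        f'{log.get("created_at")}  {log.get("method")} {log.get("endpoint")} '
        f'-> {log.get("response_status")}  (id={log.get("id")})'
    )
```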
Token Usage
Enhanced logs include detailed token metrics:
| Field | Description |
|---|---|
| `input_tokens` | Number of tokens in the request |
| `output_tokens` | Number of tokens in the response |
| `total_tokens` | Total token count |
| `total_cost` | Total cost in USD |
| `platform_fee` | LangMart platform fee |
| `provider_revenue` | Amount paid to provider |
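One common use of these fields is aggregating spend per model. The sketch below is a minimal example, assuming each enhanced log entry carries `model_name`, `total_tokens`, and `total_cost` as described on this page; treat it as a starting point rather than a definitive report.
```python
from collections import defaultdict

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

resp = requests.get(
    f"{BASE_URL}/api/account/request-logs/enhanced",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"limit": 200},
)
resp.raise_for_status()

# Aggregate token usage and spend per model from the enhanced fields
spend = defaultdict(lambda: {"requests": 0, "tokens": 0, "cost": 0.0})
for log in resp.json().get("data", []):
    model = log.get("model_name") or "unknown"
    spend[model]["requests"] += 1
    spend[model]["tokens"] += log.get("total_tokens") or 0
    spend[model]["cost"] += log.get("total_cost") or 0.0

# Most expensive models first
for model, totals in sorted(spend.items(), key=lambda kv: kv[1]["cost"], reverse=True):
    print(f'{model}: {totals["requests"]} requests, {totals["tokens"]} tokens, ${totals["cost"]:.4f}')
```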
Latency Metrics
| Field | Description |
|---|---|
| `latency_ms` | Total request latency in milliseconds |
| `time_to_first_token_ms` | Time until first token (streaming) |
Model Information
| Field | Description |
|---|---|
| `model_name` | Model identifier (e.g., gpt-4o) |
| `model_display_name` | Human-readable model name |
| `provider_name` | Provider name (e.g., OpenAI) |
| `provider_key` | Provider identifier |
Enhanced Logs with Resolution Journey
When requests involve fallbacks or retries, the `resolution_attempts` field provides a complete picture:
```json
{
  "resolution_attempts": {
    "summary": {
      "totalCandidatesTried": 2,
      "successfulCandidate": "anthropic/claude-3-sonnet",
      "totalDuration": 1250
    },
    "attempts": [
      {
        "candidate": "openai/gpt-4o",
        "status": "failed",
        "error": "rate_limit_exceeded",
        "duration": 450
      },
      {
        "candidate": "anthropic/claude-3-sonnet",
        "status": "success",
        "duration": 800
      }
    ]
  }
}
```
Resolution Journey Fields
| Field | Description |
|---|---|
| `totalCandidatesTried` | Number of models attempted |
| `successfulCandidate` | Model that handled the request |
| `totalDuration` | Total time including retries |
| `attempts` | Array of individual attempts |
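To understand why a request ended up on a particular model, you can walk the `attempts` array. A minimal sketch, assuming `resolution_attempts` follows the structure shown above and may be absent when the first candidate succeeds:
```python
def summarize_resolution(log: dict) -> None:
    """Print each attempted candidate and how the request was finally served."""
    journey = log.get("resolution_attempts")
    if not journey:
        print("No fallbacks or retries recorded for this request.")
        return

    for attempt in journey.get("attempts", []):
        status = attempt.get("status")
        line = f'{attempt.get("candidate")}: {status} in {attempt.get("duration")} ms'
        if status == "failed":
            line += f' ({attempt.get("error")})'
        print(line)

    summary = journey.get("summary", {})
    print(
        f'Served by {summary.get("successfulCandidate")} after '
        f'{summary.get("totalCandidatesTried")} candidate(s), '
        f'{summary.get("totalDuration")} ms total'
    )
```
Call it with any entry fetched from the enhanced logs endpoint.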
Filtering and Searching
Available Filters
```bash
# Filter by response status
curl "https://api.langmart.ai/api/account/request-logs?status=200"

# Filter by model
curl "https://api.langmart.ai/api/account/request-logs/enhanced?model=gpt-4o"

# Filter by provider
curl "https://api.langmart.ai/api/account/request-logs/enhanced?provider=openai"

# Filter by date range
curl "https://api.langmart.ai/api/account/request-logs?start_date=2025-01-01&end_date=2025-01-31"

# Filter by HTTP method
curl "https://api.langmart.ai/api/account/request-logs?method=POST"

# Filter by endpoint
curl "https://api.langmart.ai/api/account/request-logs?endpoint=/v1/chat/completions"

# Filter by cost range
curl "https://api.langmart.ai/api/account/request-logs/enhanced?min_cost=0.01&max_cost=1.00"

# Filter by latency
curl "https://api.langmart.ai/api/account/request-logs/enhanced?min_latency=1000"

# Filter for errors only
curl "https://api.langmart.ai/api/account/request-logs/enhanced?has_error=true"

# Filter for requests with retries
curl "https://api.langmart.ai/api/account/request-logs/enhanced?has_retries=true"
```
Pagination
```bash
# Get first 50 logs
curl "https://api.langmart.ai/api/account/request-logs?limit=50&offset=0"

# Get next 50 logs
curl "https://api.langmart.ai/api/account/request-logs?limit=50&offset=50"
```
Response includes pagination metadata:
```json
{
  "success": true,
  "data": [...],
  "pagination": {
    "total": 1250,
    "limit": 50,
    "offset": 0,
    "returned": 50,
    "has_more": true,
    "next_offset": 50
  }
}
```
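To retrieve more than one page, follow `has_more` and `next_offset` from the pagination metadata. A minimal sketch:
```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def fetch_all_logs(limit: int = 50) -> list[dict]:
    """Follow has_more / next_offset until every page has been fetched."""
    logs, offset = [], 0
    while True:
        resp = requests.get(
            f"{BASE_URL}/api/account/request-logs",
            headers=HEADERS,
            params={"limit": limit, "offset": offset},
        )
        resp.raise_for_status()
        body = resp.json()
        logs.extend(body.get("data", []))

        pagination = body.get("pagination", {})
        if not pagination.get("has_more"):
            return logs
        offset = pagination.get("next_offset", offset + limit)
```
Trusting `next_offset` rather than computing the offset yourself keeps the loop correct even if the server adjusts page sizes.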
Log Statistics
Get aggregated statistics about your requests:
curl -X GET "https://api.langmart.ai/api/account/request-logs/stats?start_date=2025-01-01" \
-H "Authorization: Bearer YOUR_API_KEY"Response includes:
```json
{
  "success": true,
  "data": {
    "period": {
      "start_date": "2025-01-01T00:00:00Z",
      "end_date": "2025-01-31T23:59:59Z"
    },
    "summary": {
      "total_requests": 15420,
      "days_with_requests": 31,
      "successful_requests": 15180,
      "client_errors": 180,
      "server_errors": 60,
      "unique_models_used": 8,
      "unique_endpoints": 3,
      "success_rate": "98.44%"
    },
    "top_models": [
      {"model_name": "gpt-4o", "request_count": 8500},
      {"model_name": "claude-3-sonnet", "request_count": 4200}
    ],
    "top_endpoints": [
      {"endpoint": "/v1/chat/completions", "request_count": 14800}
    ],
    "daily_counts": [
      {"date": "2025-01-31", "request_count": 520, "successful_count": 512, "error_count": 8}
    ]
  }
}
```
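One way to use these statistics is a lightweight health check that flags an elevated error rate. A minimal sketch, assuming the response shape shown above; the 2% threshold is an arbitrary example, not a recommended value.
```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

resp = requests.get(
    f"{BASE_URL}/api/account/request-logs/stats",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"start_date": "2025-01-01"},
)
resp.raise_for_status()
summary = resp.json()["data"]["summary"]

total = summary["total_requests"]
errors = summary["client_errors"] + summary["server_errors"]
error_rate = errors / total if total else 0.0

# Arbitrary example threshold: warn if more than 2% of requests failed
if error_rate > 0.02:
    print(f"WARNING: error rate {error_rate:.2%} ({errors}/{total} requests)")
else:
    print(f"Error rate OK: {error_rate:.2%}")
```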
Viewing Full Request/Response
For debugging, you can view the complete request and response:
```bash
curl -X GET "https://api.langmart.ai/api/account/request-logs/{log_id}/details" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Note: Sensitive headers (Authorization, API keys) are automatically redacted in the response.
Example Response
```json
{
  "success": true,
  "data": {
    "id": "abc123",
    "request_body": {
      "model": "gpt-4o",
      "messages": [{"role": "user", "content": "Hello"}]
    },
    "response_body": {
      "choices": [{"message": {"content": "Hi there!"}}],
      "usage": {"prompt_tokens": 10, "completion_tokens": 5}
    },
    "request_headers": {
      "content-type": "application/json",
      "authorization": "***REDACTED***"
    },
    "input_tokens": 10,
    "output_tokens": 5,
    "total_cost": 0.0001,
    "latency_ms": 450
  }
}
```
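When debugging, fetching the stored bodies for a single request is often enough to spot a bad parameter. A minimal sketch, using the example id `abc123` as a stand-in for a real log id:
```python
import json

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"
LOG_ID = "abc123"  # example id; take this from the request-log list

resp = requests.get(
    f"{BASE_URL}/api/account/request-logs/{LOG_ID}/details",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
details = resp.json()["data"]

# Sensitive headers arrive already redacted, so printing them is safe
print("Request headers:", json.dumps(details.get("request_headers"), indent=2))
print("Request body:", json.dumps(details.get("request_body"), indent=2))
print("Response body:", json.dumps(details.get("response_body"), indent=2))
```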
Connection Pool Information
For requests using connection pools, logs include pool details:
| Field | Description |
|---|---|
| `connection_pool_id` | Pool identifier |
| `connection_pool_name` | Pool display name |
| `used_connection_id` | Specific connection used |
| `used_connection_name` | Connection display name |
| `pool_strategy` | Load balancing strategy |
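To check that a pool's load-balancing strategy is actually spreading traffic, you can count requests per connection. A minimal sketch, assuming these pool fields appear on enhanced log entries for pooled requests and are absent otherwise:
```python
from collections import Counter

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

resp = requests.get(
    f"{BASE_URL}/api/account/request-logs/enhanced",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"limit": 200},
)
resp.raise_for_status()

# Count how many pooled requests each connection actually served
usage = Counter(
    log.get("used_connection_name")
    for log in resp.json().get("data", [])
    if log.get("connection_pool_id")
)
for connection, count in usage.most_common():
    print(f"{connection}: {count} requests")
```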
Best Practices
Debugging Failed Requests
- Filter for error responses with `has_error=true` (see the sketch after this list)
- Check the `error_code`, `error_type`, and `error_message` fields
- Review the request body for invalid parameters
- Check resolution attempts if fallbacks were configured
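A minimal sketch of that workflow, assuming failed enhanced log entries expose the `error_code`, `error_type`, and `error_message` fields mentioned above:
```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

# Pull the most recent failed requests only
resp = requests.get(
    f"{BASE_URL}/api/account/request-logs/enhanced",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"has_error": "true", "limit": 50},
)
resp.raise_for_status()

for log in resp.json().get("data", []):
    print(
        f'{log.get("created_at")} {log.get("model_name")} '
        f'[{log.get("error_code")}/{log.get("error_type")}]: {log.get("error_message")}'
    )
```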
Monitoring Costs
- Use the enhanced logs endpoint for cost data
- Filter by date range for billing periods
- Group by model to identify expensive operations
- Set up cost threshold alerts
Performance Analysis
- Filter by `min_latency` to find slow requests (see the sketch after this list)
- Check `time_to_first_token_ms` for streaming performance
- Identify patterns in latency by model or time of day
- Review resolution journey for retry impact
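A minimal sketch of pulling slow requests and summarizing their latency, assuming `latency_ms` is present on enhanced entries and `time_to_first_token_ms` may be null for non-streaming requests:
```python
import statistics

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

resp = requests.get(
    f"{BASE_URL}/api/account/request-logs/enhanced",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"min_latency": 1000, "limit": 200},
)
resp.raise_for_status()
logs = resp.json().get("data", [])

latencies = [log["latency_ms"] for log in logs if log.get("latency_ms") is not None]
ttfts = [log["time_to_first_token_ms"] for log in logs if log.get("time_to_first_token_ms")]

if len(latencies) >= 2:
    print(f"Slow requests sampled: {len(latencies)}")
    print(f"Median latency: {statistics.median(latencies):.0f} ms")
    # quantiles(n=20) returns 19 cut points; the last one approximates p95
    print(f"p95 latency:    {statistics.quantiles(latencies, n=20)[-1]:.0f} ms")
if ttfts:
    print(f"Median time to first token: {statistics.median(ttfts):.0f} ms")
```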
Related Documentation
- Usage Analytics - Aggregated usage trends
- Error Tracking - Error analysis and debugging
- API Errors - Error code reference