Request Logs API

The Request Logs API allows you to view detailed logs of all API requests, analyze usage patterns, and debug issues.

Endpoints Overview

| Endpoint | Method | Description |
|----------|--------|-------------|
| /api/account/request-logs | GET | List request logs with filters |
| /api/account/request-logs/{log_id} | GET | Get single request log details |
| /api/account/request-logs/stats | GET | Get aggregated statistics |
| /api/account/request-logs/enhanced | GET | Get logs with cost and token data |
| /api/account/request-logs/{log_id}/details | GET | Get full request/response details |

List Request Logs

Get a paginated list of API request logs.

Endpoint

GET /api/account/request-logs

Query Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| limit | integer | Results per page (max 100, default 50) |
| offset | integer | Skip the first N results |
| model | string | Filter by model ID |
| provider | string | Filter by provider |
| status | string | Filter by status: success, error |
| start_date | string | Start date (ISO 8601) |
| end_date | string | End date (ISO 8601) |
| min_latency | integer | Minimum latency in ms |
| max_latency | integer | Maximum latency in ms |
| api_key_id | string | Filter by API key |

Example Request

curl "https://api.langmart.ai/api/account/request-logs?limit=10&status=success" \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "logs": [
    {
      "id": "log_abc123",
      "request_id": "req_xyz789",
      "model": "openai/gpt-4o",
      "provider": "openai",
      "status": "success",
      "status_code": 200,
      "latency_ms": 1234,
      "input_tokens": 150,
      "output_tokens": 75,
      "total_tokens": 225,
      "cost": 0.00125,
      "created_at": "2025-01-10T12:00:00Z",
      "api_key_prefix": "sk-abc1..."
    }
  ],
  "pagination": {
    "total": 1250,
    "limit": 10,
    "offset": 0,
    "hasMore": true
  }
}
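
The pagination object reports the total number of matching logs and whether more pages remain. Below is a minimal sketch of walking every page with limit and offset, using only the fields shown above; the fetch_all_logs helper name is our own.

import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.langmart.ai"

def fetch_all_logs(**filters):
    """Yield every matching log by paging with limit/offset until hasMore is false."""
    offset = 0
    while True:
        params = {"limit": 100, "offset": offset, **filters}
        response = requests.get(
            f"{BASE_URL}/api/account/request-logs",
            headers={"Authorization": f"Bearer {API_KEY}"},
            params=params,
        )
        response.raise_for_status()
        data = response.json()
        yield from data["logs"]
        if not data["pagination"]["hasMore"]:
            break
        offset += len(data["logs"])

# Example: count successful gpt-4o requests
successes = sum(1 for _ in fetch_all_logs(status="success", model="openai/gpt-4o"))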

Get Request Details

Get metadata for a specific request, including latency, token usage, cost, and routing details. To retrieve the stored prompt and completion, use the /details endpoint described below.

Endpoint

GET /api/account/request-logs/{log_id}

Example Request

curl https://api.langmart.ai/api/account/request-logs/log_abc123 \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "id": "log_abc123",
  "request_id": "req_xyz789",
  "model": "openai/gpt-4o",
  "provider": "openai",
  "endpoint": "/v1/chat/completions",
  "method": "POST",
  "status": "success",
  "status_code": 200,
  "latency_ms": 1234,
  "time_to_first_token_ms": 245,
  "input_tokens": 150,
  "output_tokens": 75,
  "total_tokens": 225,
  "cost": 0.00125,
  "streaming": true,
  "created_at": "2025-01-10T12:00:00Z",
  "completed_at": "2025-01-10T12:00:01Z",
  "api_key_id": "key_abc123",
  "api_key_prefix": "sk-abc1...",
  "user_agent": "Python/3.11 openai/1.0.0",
  "ip_address": "192.168.1.100",
  "resolution": {
    "gateway_type": 2,
    "gateway_id": "gw_abc123",
    "connection_id": "conn_xyz789"
  }
}
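
For streaming requests, latency_ms and time_to_first_token_ms together give a rough generation rate. A small sketch, assuming the field names from the example response above (the helper is illustrative):

def generation_rate(log):
    """Approximate output tokens per second after the first token arrived."""
    generation_ms = log["latency_ms"] - log.get("time_to_first_token_ms", 0)
    if generation_ms <= 0:
        return None
    return log["output_tokens"] / (generation_ms / 1000)

# For the example above: 75 tokens over ~0.989 s, roughly 76 tokens/sec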

Get Full Request/Response Details

Get full prompt and completion content for debugging.

Endpoint

GET /api/account/request-logs/{log_id}/details

Example Request

curl https://api.langmart.ai/api/account/request-logs/log_abc123/details \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "id": "log_abc123",
  "request": {
    "model": "openai/gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 0.7,
    "max_tokens": 100
  },
  "response": {
    "id": "chatcmpl-abc123",
    "choices": [
      {
        "message": {
          "role": "assistant",
          "content": "The capital of France is Paris."
        },
        "finish_reason": "stop"
      }
    ],
    "usage": {
      "prompt_tokens": 25,
      "completion_tokens": 8,
      "total_tokens": 33
    }
  },
  "metadata": {
    "latency_ms": 1234,
    "cost": 0.00125,
    "created_at": "2025-01-10T12:00:00Z"
  }
}
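
When debugging, it is usually enough to print the stored conversation next to the model's reply. A minimal sketch built on the response shape above; print_conversation is an illustrative helper, not part of the API.

import requests

def print_conversation(log_id, api_key, base_url="https://api.langmart.ai"):
    """Fetch a log's /details record and print the prompt and completion."""
    response = requests.get(
        f"{base_url}/api/account/request-logs/{log_id}/details",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    response.raise_for_status()
    details = response.json()
    for message in details["request"]["messages"]:
        print(f"[{message['role']}] {message['content']}")
    for choice in details["response"]["choices"]:
        reply = choice["message"]
        print(f"[{reply['role']}] {reply['content']} (finish: {choice['finish_reason']})")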

Get Request Statistics

Get aggregated statistics for your requests.

Endpoint

GET /api/account/request-logs/stats

Query Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| start_date | string | Start date (ISO 8601) |
| end_date | string | End date (ISO 8601) |
| group_by | string | Grouping: hour, day, week, model, provider |

Example Request

curl "https://api.langmart.ai/api/account/request-logs/stats?group_by=day&start_date=2025-01-01" \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "stats": {
    "total_requests": 15420,
    "successful_requests": 15100,
    "failed_requests": 320,
    "success_rate": 97.9,
    "total_tokens": 2500000,
    "input_tokens": 1500000,
    "output_tokens": 1000000,
    "total_cost": 125.50,
    "avg_latency_ms": 1250,
    "p50_latency_ms": 980,
    "p95_latency_ms": 2500,
    "p99_latency_ms": 5000
  },
  "by_day": [
    {
      "date": "2025-01-10",
      "requests": 1542,
      "tokens": 250000,
      "cost": 12.55
    }
  ],
  "by_model": [
    {
      "model": "openai/gpt-4o",
      "requests": 8500,
      "tokens": 1500000,
      "cost": 95.00
    },
    {
      "model": "groq/llama-3.3-70b-versatile",
      "requests": 5000,
      "tokens": 800000,
      "cost": 8.00
    }
  ]
}
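
The by_model breakdown shows where spend concentrates. A short sketch that turns the stats payload above into each model's share of total cost:

def cost_share_by_model(stats):
    """Return each model's fraction of total cost from a stats response."""
    total = stats["stats"]["total_cost"]
    return {entry["model"]: entry["cost"] / total for entry in stats["by_model"]}

# For the example above: gpt-4o is about 76% of spend, llama-3.3-70b about 6%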

Enhanced Logs with Cost Data

Get logs with detailed token usage and cost information.

Endpoint

GET /api/account/request-logs/enhanced

Query Parameters

Same as /api/account/request-logs plus:

| Parameter | Type | Description |
|-----------|------|-------------|
| min_cost | number | Minimum cost filter |
| max_cost | number | Maximum cost filter |
| sort | string | Sort by: created_at, cost, latency, tokens |
| order | string | Sort order: asc, desc |

Example Request

curl "https://api.langmart.ai/api/account/request-logs/enhanced?sort=cost&order=desc&limit=10" \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "logs": [
    {
      "id": "log_abc123",
      "model": "openai/gpt-4o",
      "status": "success",
      "latency_ms": 5234,
      "tokens": {
        "input": 5000,
        "output": 2500,
        "total": 7500
      },
      "cost": {
        "input": 0.025,
        "output": 0.0375,
        "total": 0.0625
      },
      "pricing": {
        "input_per_million": 5.00,
        "output_per_million": 15.00
      },
      "created_at": "2025-01-10T12:00:00Z"
    }
  ],
  "summary": {
    "total_cost": 125.50,
    "total_tokens": 2500000,
    "avg_cost_per_request": 0.0081
  },
  "pagination": {
    "total": 15420,
    "limit": 10,
    "offset": 0,
    "hasMore": true
  }
}
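
Because each enhanced log carries both token counts and a cost breakdown, you can compute an effective blended price per request. A minimal sketch, assuming the fields shown above:

def cost_per_1k_tokens(log):
    """Blended cost per 1,000 tokens for one enhanced log entry."""
    return log["cost"]["total"] / log["tokens"]["total"] * 1000

# For the example above: 0.0625 / 7500 * 1000, roughly $0.0083 per 1K tokens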

Error Logs

Filter for failed requests only:

curl "https://api.langmart.ai/api/account/request-logs?status=error" \
  -H "Authorization: Bearer YOUR_API_KEY"

Error Log Response

{
  "logs": [
    {
      "id": "log_err123",
      "model": "openai/gpt-4o",
      "status": "error",
      "status_code": 429,
      "error": {
        "type": "rate_limit_error",
        "code": "rate_limit_exceeded",
        "message": "Rate limit exceeded. Please retry after 60 seconds."
      },
      "latency_ms": 150,
      "created_at": "2025-01-10T12:00:00Z"
    }
  ]
}
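
When triaging failures, grouping error logs by error type quickly shows whether you are hitting rate limits, provider errors, or bad requests. A small sketch, assuming you have already fetched logs with status=error as above:

from collections import Counter

def error_breakdown(error_logs):
    """Count error logs by their error type, e.g. rate_limit_error."""
    return Counter(log["error"]["type"] for log in error_logs)

# e.g. Counter({'rate_limit_error': 12, ...})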

Usage Examples

Get Today's Requests

curl "https://api.langmart.ai/api/account/request-logs?start_date=$(date -I)" \
  -H "Authorization: Bearer YOUR_API_KEY"

Find Slow Requests

curl "https://api.langmart.ai/api/account/request-logs?min_latency=5000" \
  -H "Authorization: Bearer YOUR_API_KEY"

Get Usage by Model

curl "https://api.langmart.ai/api/account/request-logs/stats?group_by=model" \
  -H "Authorization: Bearer YOUR_API_KEY"

Python Example

import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.langmart.ai"

def get_request_logs(limit=50, status=None, model=None):
    """List request logs, optionally filtered by status and model."""
    params = {"limit": limit}
    if status:
        params["status"] = status
    if model:
        params["model"] = model

    response = requests.get(
        f"{BASE_URL}/api/account/request-logs",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params=params
    )
    return response.json()

def get_request_details(log_id):
    """Fetch the full request/response content for a single log."""
    response = requests.get(
        f"{BASE_URL}/api/account/request-logs/{log_id}/details",
        headers={"Authorization": f"Bearer {API_KEY}"}
    )
    return response.json()

def get_usage_stats(start_date, end_date=None, group_by="day"):
    """Fetch aggregated statistics, grouped by day, model, etc."""
    params = {
        "start_date": start_date,
        "group_by": group_by
    }
    if end_date:
        params["end_date"] = end_date

    response = requests.get(
        f"{BASE_URL}/api/account/request-logs/stats",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params=params
    )
    return response.json()

# Example usage: print recent errors
logs = get_request_logs(limit=10, status="error")
for log in logs["logs"]:
    print(f"{log['created_at']}: {log['model']} - {log['error']['message']}")

# Example usage: summarize spend by model since January 1
stats = get_usage_stats("2025-01-01", group_by="model")
for model_stats in stats["by_model"]:
    print(f"{model_stats['model']}: {model_stats['requests']} requests, ${model_stats['cost']:.2f}")

Rate Limits

| Endpoint | Rate Limit |
|----------|------------|
| List logs | 60 requests/minute |
| Get details | 120 requests/minute |
| Get stats | 30 requests/minute |
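
If you poll these endpoints from scripts or dashboards, stay under the limits above and back off when you receive HTTP 429. A minimal retry sketch; the wait times are illustrative, not documented behaviour.

import time
import requests

def get_with_backoff(url, api_key, params=None, max_retries=5):
    """GET with simple exponential backoff on 429 responses."""
    for attempt in range(max_retries):
        response = requests.get(
            url, headers={"Authorization": f"Bearer {api_key}"}, params=params
        )
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    raise RuntimeError("Still rate limited after retries")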

Data Retention

  • Request logs: 90 days
  • Request/response content: 30 days
  • Aggregated statistics: retained indefinitely
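
Since full request/response content expires after 30 days, archive anything you may need later. A rough sketch that saves each log's /details payload to a JSONL file; the file name and helper are our own.

import json
import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.langmart.ai"

def archive_details(log_ids, path="request_archive.jsonl"):
    """Append full request/response details for the given log IDs to a JSONL file."""
    with open(path, "a") as out:
        for log_id in log_ids:
            response = requests.get(
                f"{BASE_URL}/api/account/request-logs/{log_id}/details",
                headers={"Authorization": f"Bearer {API_KEY}"},
            )
            response.raise_for_status()
            out.write(json.dumps(response.json()) + "\n")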