LangMart API Reference

API Base URL: https://api.langmart.ai

LangMart provides a unified API for accessing AI models from multiple providers. The API is designed to be OpenAI-compatible, making it easy to integrate with existing applications.

| Documentation | Description |
|---|---|
| OpenAI-Compatible API | Chat completions, embeddings, models |
| Authentication | API keys, OAuth, guest access |
| Request Logs | View and analyze your API requests |
| Models | Browse and manage models |
| Connections | Provider connections and API keys |
| Alerts | Usage alerts and notifications |
| Organizations | Team and organization management |
| Errors | Error codes and troubleshooting |

API Overview

OpenAI-Compatible Endpoints (/v1/*)

These endpoints follow the OpenAI API specification, making LangMart a drop-in replacement for OpenAI:

| Endpoint | Method | Description |
|---|---|---|
| /v1/chat/completions | POST | Chat completions (streaming supported) |
| /v1/completions | POST | Text completions (legacy) |
| /v1/embeddings | POST | Generate text embeddings |
| /v1/models | GET | List available models |
| /v1/models/{model} | GET | Get model details |

User API Endpoints (/api/*)

Manage your account, view analytics, and configure settings:

| Category | Endpoints | Description |
|---|---|---|
| Request Logs | /api/account/request-logs/* | View request history and analytics |
| Models | /api/models/* | Browse and manage models |
| Connections | /api/connections/* | Manage provider connections |
| Favorites | /api/favorites/* | Manage favorite models |
| Alerts | /api/account/alerts/* | Configure usage alerts |
| Organizations | /api/organizations/* | Manage teams |
| Billing | /api/billing/* | View usage and billing |

Public Endpoints (/api/public/*)

No authentication required:

| Endpoint | Method | Description |
|---|---|---|
| /api/public/providers | GET | List available providers |
| /api/public/config | GET | Platform configuration |
| /api/public/organizations | GET | List public organizations |

Authentication

All API requests (except public endpoints) require authentication using an API key:

```bash
curl https://api.langmart.ai/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

Getting an API Key

  1. Log in to https://langmart.ai
  2. Go to Settings
  3. Create a new API key
  4. Copy and store the key securely (it won't be shown again)

See Authentication for more details.

Quick Start

1. Make Your First Request

```bash
curl https://api.langmart.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "groq/llama-3.3-70b-versatile",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
```

2. Using with OpenAI SDK

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_LANGMART_API_KEY",
    base_url="https://api.langmart.ai/v1"
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

3. Multi-Provider Access

Use models from any provider with the same API:

```python
# OpenAI
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)

# Anthropic
response = client.chat.completions.create(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello"}]
)

# Groq (ultra-fast)
response = client.chat.completions.create(
    model="groq/llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Hello"}]
)
```

Model Naming Convention

LangMart uses a provider/model naming convention:

| Provider | Example Models |
|---|---|
| OpenAI | openai/gpt-4o, openai/gpt-4o-mini |
| Anthropic | anthropic/claude-3-5-sonnet-20241022 |
| Google | google/gemini-1.5-pro |
| Groq | groq/llama-3.3-70b-versatile |
| Mistral | mistral/mistral-large-latest |
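Because every model ID follows the provider/model pattern, the provider prefix can be split off client-side, for example to route logging or pricing by provider. A minimal sketch (the helper name is ours, not part of the API):

```python
def split_model(model_id: str) -> tuple[str, str]:
    """Split a LangMart model ID into (provider, model)."""
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

print(split_model("openai/gpt-4o"))  # ('openai', 'gpt-4o')
```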

Response Format

Success Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 15,
    "total_tokens": 25
  }
}
```
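A minimal sketch of reading the fields shown above on the client side (pure parsing, no network call; the payload is the sample success response):

```python
import json

# The sample success response, as it would arrive over the wire.
payload = json.loads("""{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "openai/gpt-4o",
  "choices": [{"index": 0,
               "message": {"role": "assistant",
                           "content": "Hello! How can I help you today?"},
               "finish_reason": "stop"}],
  "usage": {"prompt_tokens": 10, "completion_tokens": 15, "total_tokens": 25}
}""")

# The assistant text lives under choices[0].message.content;
# token accounting lives under usage.
reply = payload["choices"][0]["message"]["content"]
used = payload["usage"]["total_tokens"]
print(f"{reply} ({used} tokens)")
```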

Error Response

```json
{
  "error": {
    "type": "authentication_error",
    "code": "invalid_api_key",
    "message": "Invalid API key provided",
    "details": {}
  }
}
```
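Client code can fold the error shape above into a single log line. A hedged sketch (the helper name and formatting are ours; only the type/code/message/details fields come from the spec above):

```python
def describe_error(body: dict) -> str:
    """Format a LangMart error body (shape shown above) for logging."""
    err = body.get("error", {})
    etype = err.get("type", "unknown")
    code = err.get("code", "unknown")
    return f"{etype}/{code}: {err.get('message', '')}"

sample = {"error": {"type": "authentication_error",
                    "code": "invalid_api_key",
                    "message": "Invalid API key provided",
                    "details": {}}}
print(describe_error(sample))  # authentication_error/invalid_api_key: Invalid API key provided
```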

Rate Limiting

API requests are rate limited based on your account tier:

| Tier | Requests/min | Tokens/min |
|---|---|---|
| Free | 10 | 10,000 |
| Standard | 60 | 100,000 |
| Pro | 300 | 1,000,000 |
| Enterprise | Custom | Custom |

Rate limit headers are included in responses:

```text
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 55
X-RateLimit-Reset: 1704067260
```
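These headers are enough to implement simple client-side backoff: when no requests remain, wait until the reset timestamp. A minimal sketch, assuming `X-RateLimit-Reset` is a Unix epoch in seconds (consistent with the sample values above):

```python
def seconds_until_reset(headers: dict, now: int) -> int:
    """Return how long to wait before the next request, in seconds.

    Zero if requests remain; otherwise the gap to X-RateLimit-Reset.
    """
    if int(headers.get("X-RateLimit-Remaining", 1)) > 0:
        return 0
    return max(0, int(headers["X-RateLimit-Reset"]) - now)

hdrs = {"X-RateLimit-Limit": "60",
        "X-RateLimit-Remaining": "0",
        "X-RateLimit-Reset": "1704067260"}
print(seconds_until_reset(hdrs, now=1704067200))  # 60
```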

Pagination

List endpoints support pagination:

```text
GET /api/models?limit=50&offset=0
```

Response includes pagination info:

```json
{
  "data": [...],
  "pagination": {
    "total": 150,
    "limit": 50,
    "offset": 0,
    "hasMore": true
  }
}
```
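The `hasMore` flag drives a standard drain loop: keep bumping `offset` by `limit` until it turns false. A sketch with a stub in place of the real HTTP call (the function names are ours):

```python
def fetch_all(fetch_page, limit=50):
    """Collect every item from a paginated list endpoint.

    `fetch_page(limit=..., offset=...)` must return the
    {"data": [...], "pagination": {...}} shape shown above.
    """
    items, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        items.extend(page["data"])
        if not page["pagination"]["hasMore"]:
            return items
        offset += limit

# Stub standing in for GET /api/models: 120 fake items.
def fake_fetch(limit, offset):
    data = list(range(120))[offset:offset + limit]
    return {"data": data,
            "pagination": {"total": 120, "limit": limit, "offset": offset,
                           "hasMore": offset + limit < 120}}

print(len(fetch_all(fake_fetch)))  # 120
```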

Streaming

Chat completions support streaming via Server-Sent Events (SSE):

```bash
curl https://api.langmart.ai/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": true
  }'
```

Stream events:

```text
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"Hello"}}]}
data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"!"}}]}
data: [DONE]
```
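Each `data:` line carries a JSON chunk whose `choices[0].delta.content` is the next text fragment; the literal `[DONE]` sentinel ends the stream. A minimal parser sketch over the sample lines above (most SDKs do this for you):

```python
import json

def collect_stream(lines):
    """Assemble the assistant text from SSE `data:` lines."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # SSE comments/blank keep-alives
        chunk = line[len("data: "):]
        if chunk == "[DONE]":
            break  # end-of-stream sentinel
        delta = json.loads(chunk)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

events = [
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"id":"chatcmpl-abc","choices":[{"delta":{"content":"!"}}]}',
    "data: [DONE]",
]
print(collect_stream(events))  # Hello!
```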

SDK Support

LangMart works with any OpenAI-compatible SDK:

| Language | SDK | Configuration |
|---|---|---|
| Python | openai | `base_url="https://api.langmart.ai/v1"` |
| JavaScript | openai | `baseURL: "https://api.langmart.ai/v1"` |
| Go | go-openai | `config.BaseURL = "https://api.langmart.ai/v1"` |
| Ruby | ruby-openai | `OpenAI.configure { \|c\| c.uri_base = "https://api.langmart.ai/v1" }` |

Next Steps