Models API

The Models API allows you to discover available AI models, view their capabilities and pricing, and manage your favorites.

Endpoints Overview

Endpoint Method Description
/v1/models GET List models (OpenAI format)
/v1/models/{model_id} GET Get model details (OpenAI format)
/api/models GET List models with full details
/api/models/{model_id} GET Get model with pricing and capabilities
/api/favorites GET List favorite models
/api/favorites POST Add model to favorites
/api/favorites/{model_id} DELETE Remove from favorites

List Models (OpenAI Format)

Get models in OpenAI-compatible format.

Endpoint

GET /v1/models

Example Request

curl https://api.langmart.ai/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "object": "list",
  "data": [
    {
      "id": "openai/gpt-4o",
      "object": "model",
      "created": 1704067200,
      "owned_by": "openai"
    },
    {
      "id": "anthropic/claude-3-5-sonnet-20241022",
      "object": "model",
      "created": 1704067200,
      "owned_by": "anthropic"
    }
  ]
}
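
Because this endpoint follows the OpenAI format, existing OpenAI SDKs can list models once they are pointed at the /v1 base URL. A minimal sketch using the official openai Python package, assuming the gateway accepts your LangMart API key as the SDK's api_key:

from openai import OpenAI

# Point the OpenAI SDK at the /v1 base URL (assumed compatible setup)
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.langmart.ai/v1",
)

# models.list() calls GET /v1/models and returns the "data" array shown above
for model in client.models.list().data:
    print(model.id, model.owned_by)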

List Models (Full Details)

Get models with complete details, including pricing and capabilities.

Endpoint

GET /api/models

Query Parameters

Parameter Type Description
provider string Filter by provider
capability string Filter by capability: vision, tools, reasoning
billing_mode string Filter by billing mode: org_paid or member_paid
search string Search by name or ID
limit integer Results per page (default 50)
offset integer Skip first N results

Example Request

curl "https://api.langmart.ai/api/models?provider=openai&capability=vision" \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "models": [
    {
      "id": "openai/gpt-4o",
      "name": "GPT-4o",
      "provider": "openai",
      "description": "Most capable GPT-4 model with vision",
      "context_length": 128000,
      "max_output_tokens": 16384,
      "capabilities": {
        "vision": true,
        "tools": true,
        "streaming": true,
        "json_mode": true
      },
      "pricing": {
        "input": 5.00,
        "output": 15.00,
        "unit": "per_million_tokens"
      },
      "billing_mode": "org_paid",
      "is_available": true,
      "is_favorite": false,
      "created_at": "2024-05-13T00:00:00Z"
    }
  ],
  "pagination": {
    "total": 150,
    "limit": 50,
    "offset": 0,
    "hasMore": true
  }
}
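
The pagination object can be used to walk the full catalog. A minimal sketch with the requests library, assuming the limit/offset parameters and the hasMore flag behave as in the response above:

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

def list_all_models(**filters):
    """Collect every model by paging /api/models with limit/offset."""
    models, offset, limit = [], 0, 50
    while True:
        response = requests.get(
            f"{BASE_URL}/api/models",
            headers={"Authorization": f"Bearer {API_KEY}"},
            params={**filters, "limit": limit, "offset": offset},
        )
        response.raise_for_status()
        page = response.json()
        models.extend(page["models"])
        if not page["pagination"]["hasMore"]:
            return models
        offset += limit

# Example: every vision-capable model across all pages
all_vision = list_all_models(capability="vision")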

Get Model Details

Get detailed information about a specific model.

Endpoint (OpenAI Format)

GET /v1/models/{model_id}

Endpoint (Full Details)

GET /api/models/{model_id}

Example Request

curl https://api.langmart.ai/api/models/openai/gpt-4o \
  -H "Authorization: Bearer YOUR_API_KEY"

Response

{
  "id": "openai/gpt-4o",
  "name": "GPT-4o",
  "provider": "openai",
  "provider_model_id": "gpt-4o",
  "description": "GPT-4o is OpenAI's most advanced model, featuring vision capabilities, tool use, and superior reasoning.",
  "family": "gpt-4",
  "context_length": 128000,
  "max_output_tokens": 16384,
  "capabilities": {
    "vision": true,
    "tools": true,
    "function_calling": true,
    "streaming": true,
    "json_mode": true,
    "system_prompt": true,
    "reasoning": false,
    "audio": false,
    "image_generation": false
  },
  "pricing": {
    "input": 5.00,
    "output": 15.00,
    "unit": "per_million_tokens",
    "currency": "USD"
  },
  "billing_mode": "org_paid",
  "rate_limits": {
    "requests_per_minute": 500,
    "tokens_per_minute": 150000
  },
  "training_data_cutoff": "2023-12-01",
  "is_available": true,
  "is_favorite": false,
  "organization_id": "org_abc123",
  "connection_id": "conn_xyz789"
}
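
Before routing a request to a model, the details response can be checked for availability and the capability you need. A short sketch using the field names shown above:

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"

def supports(model, capability):
    """True if the model is available and advertises the given capability."""
    return model.get("is_available", False) and bool(model.get("capabilities", {}).get(capability))

model = requests.get(
    f"{BASE_URL}/api/models/openai/gpt-4o",
    headers={"Authorization": f"Bearer {API_KEY}"},
).json()

if supports(model, "vision"):
    print(f"{model['id']} accepts image input ({model['context_length']}-token context)")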

Model Capabilities

Capability Description Example Models
vision Can process images GPT-4o, Claude 3.5, Gemini Pro
tools Can use tools/functions GPT-4o, Claude 3.5, Gemini
streaming Supports streaming Most models
json_mode Structured JSON output GPT-4o, Claude 3.5
reasoning Extended reasoning o1, o1-mini
audio Can process audio GPT-4o Audio
image_generation Can generate images DALL-E 3
embedding Text embeddings text-embedding-3

Filter by Capability

# Get models with vision
curl "https://api.langmart.ai/api/models?capability=vision" \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get models with tool use
curl "https://api.langmart.ai/api/models?capability=tools" \
  -H "Authorization: Bearer YOUR_API_KEY"

Favorites

List Favorites

GET /api/favorites

Response

{
  "favorites": [
    {
      "model_id": "openai/gpt-4o",
      "added_at": "2025-01-10T00:00:00Z"
    },
    {
      "model_id": "anthropic/claude-3-5-sonnet-20241022",
      "added_at": "2025-01-09T00:00:00Z"
    }
  ]
}

Add to Favorites

POST /api/favorites

Request Body:

{
  "model_id": "openai/gpt-4o"
}

Response:

{
  "success": true,
  "model_id": "openai/gpt-4o",
  "added_at": "2025-01-10T00:00:00Z"
}

Remove from Favorites

DELETE /api/favorites/{model_id}

Example:

curl -X DELETE https://api.langmart.ai/api/favorites/openai%2Fgpt-4o \
  -H "Authorization: Bearer YOUR_API_KEY"

Model Collections

Organize models into custom collections.

List Collections

GET /api/user/model-collections

Response:

{
  "collections": [
    {
      "id": "coll_abc123",
      "name": "Production Models",
      "description": "Models approved for production use",
      "models": [
        {"model_id": "openai/gpt-4o"},
        {"model_id": "anthropic/claude-3-5-sonnet-20241022"}
      ],
      "created_at": "2025-01-01T00:00:00Z"
    }
  ]
}

Create Collection

POST /api/user/model-collections

Request Body:

{
  "name": "Fast Models",
  "description": "Models optimized for speed",
  "models": [
    {"model_id": "groq/llama-3.3-70b-versatile"},
    {"model_id": "google/gemini-1.5-flash"}
  ]
}

Update Collection

PUT /api/user/model-collections/{collection_id}

Delete Collection

DELETE /api/user/model-collections/{collection_id}

Add Model to Collection

POST /api/user/model-collections/{collection_id}/members

Request Body:

{
  "model_id": "openai/gpt-4o-mini"
}
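
Putting the collection endpoints together: a requests sketch that creates a collection and then adds one more model to it. The payload shapes follow the examples above; the id field in the create response is an assumption based on the list response.

import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://api.langmart.ai"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Create the collection with an initial set of models
created = requests.post(
    f"{BASE_URL}/api/user/model-collections",
    headers=HEADERS,
    json={
        "name": "Fast Models",
        "description": "Models optimized for speed",
        "models": [{"model_id": "groq/llama-3.3-70b-versatile"}],
    },
).json()

# Add another model to the collection that was just created
# (assumes the create response includes the new collection's "id")
requests.post(
    f"{BASE_URL}/api/user/model-collections/{created['id']}/members",
    headers=HEADERS,
    json={"model_id": "openai/gpt-4o-mini"},
)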

Supported Providers

Provider Model Count Features
OpenAI 15+ Chat, Embeddings, Images, Audio
Anthropic 5+ Chat, Vision, Tools
Google 10+ Chat, Vision, Embeddings
Groq 8+ Ultra-fast inference
Mistral 5+ Chat, Embeddings
Meta (via providers) 10+ Llama models
Cohere 5+ Chat, Embeddings, RAG

Model Pricing

All pricing is in USD per million tokens.

Example Pricing Tiers

Tier Input Output Examples
Economy $0.05-0.20 $0.10-0.40 Llama 3.3, Gemma
Standard $0.25-1.00 $0.50-2.00 GPT-4o-mini, Claude Haiku
Premium $2.50-5.00 $7.50-15.00 GPT-4o, Claude Sonnet
Frontier $10-60 $30-60 GPT-4, Claude Opus, o1

Get Pricing for Model

curl https://api.langmart.ai/api/models/openai/gpt-4o \
  -H "Authorization: Bearer YOUR_API_KEY" | jq '.pricing'

Usage Examples

Python

import requests

API_KEY = "your-api-key"
BASE_URL = "https://api.langmart.ai"

def list_models(provider=None, capability=None):
    params = {}
    if provider:
        params["provider"] = provider
    if capability:
        params["capability"] = capability

    response = requests.get(
        f"{BASE_URL}/api/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params=params
    )
    return response.json()

def get_model(model_id):
    response = requests.get(
        f"{BASE_URL}/api/models/{model_id}",
        headers={"Authorization": f"Bearer {API_KEY}"}
    )
    return response.json()

def add_favorite(model_id):
    response = requests.post(
        f"{BASE_URL}/api/favorites",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model_id": model_id}
    )
    return response.json()

# List vision-capable models
vision_models = list_models(capability="vision")
for model in vision_models["models"]:
    print(f"{model['id']}: ${model['pricing']['input']}/M input")

# Get specific model
gpt4o = get_model("openai/gpt-4o")
print(f"Context: {gpt4o['context_length']} tokens")

Find Best Model for Use Case

def find_models_for_task(task):
    """Find suitable models based on task requirements, returning a list of models."""
    if task == "vision":
        return list_models(capability="vision")["models"]
    elif task == "fast":
        return list_models(provider="groq")["models"]
    elif task == "cheap":
        # Sort the catalog by input price and keep the 10 cheapest
        all_models = list_models()
        return sorted(
            all_models["models"],
            key=lambda m: m["pricing"]["input"]
        )[:10]
    elif task == "long_context":
        # Keep models with at least a 100k-token context window
        all_models = list_models()
        return [m for m in all_models["models"] if m["context_length"] >= 100000]
    return []

# Find cheapest models
cheap_models = find_models_for_task("cheap")
for model in cheap_models:
    print(f"{model['id']}: ${model['pricing']['input']}/M")

Quick Links

Feature Direct Link
Browse Models https://langmart.ai/models
Model Collections https://langmart.ai/model-collections
Chat with Models https://langmart.ai/chat
Model Evaluation https://langmart.ai/model-evaluation