Adding Connections

This guide walks you through adding a new connection to LangMart, from obtaining your provider API key to testing the connection.

Prerequisites

  • A LangMart account
  • An account with at least one AI provider (OpenAI, Anthropic, etc.)
  • Your provider's API key

Step 1: Get Your Provider API Key

OpenAI

  1. Go to platform.openai.com
  2. Sign in or create an account
  3. Navigate to API Keys in the left sidebar
  4. Click Create new secret key
  5. Give it a name (e.g., "LangMart Connection")
  6. Copy the key immediately - it won't be shown again!

Important: OpenAI keys start with sk- followed by random characters.

Anthropic

  1. Go to console.anthropic.com
  2. Sign in or create an account
  3. Navigate to API Keys
  4. Click Create Key
  5. Name your key and select permissions
  6. Copy the key immediately

Important: Anthropic keys start with sk-ant-.

Groq

  1. Go to console.groq.com
  2. Sign in or create an account
  3. Navigate to API Keys
  4. Click Create API Key
  5. Copy the generated key

Important: Groq keys start with gsk_.
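
The key prefixes noted above can be sanity-checked before you paste a key into LangMart. A minimal sketch (the prefix table comes from this guide; the function name is illustrative):

```python
# Known key prefixes for the providers covered so far (from this guide).
# Note: "sk-ant-" keys also start with "sk-", so check Anthropic before
# OpenAI if you are guessing the provider from the key alone.
KEY_PREFIXES = {
    "anthropic": "sk-ant-",
    "openai": "sk-",
    "groq": "gsk_",
}

def looks_like_provider_key(provider: str, key: str) -> bool:
    """Cheap sanity check: does the key start with the provider's known prefix?

    This only catches copy/paste mistakes; it does not prove the key is valid.
    """
    prefix = KEY_PREFIXES.get(provider)
    return prefix is not None and key.startswith(prefix)

print(looks_like_provider_key("anthropic", "sk-ant-abc123"))  # True
print(looks_like_provider_key("openai", "gsk_abc123"))        # False
```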

Google (Gemini)

  1. Go to aistudio.google.com
  2. Sign in with your Google account
  3. Click Get API Key
  4. Create a new project or select an existing one

  5. Copy the generated API key

Together AI

  1. Go to api.together.xyz
  2. Sign in or create an account
  3. Navigate to Settings > API Keys
  4. Create a new key and copy it

Mistral AI

  1. Go to console.mistral.ai
  2. Sign in or create an account
  3. Navigate to API Keys
  4. Create a new key and copy it

Step 2: Add the Connection in LangMart

Using the Web Interface

  1. Log in to LangMart at langmart.ai
  2. Navigate to Connections in the sidebar
  3. Click Add Connection
  4. Fill in the form:
     Field            Description
     Provider         Select from the dropdown (e.g., OpenAI, Anthropic)
     API Key          Paste your provider's API key
     Connection Type  Choose "Cloud Gateway" (Type 2) or "Self-Hosted" (Type 3)
     Scope            "Individual" (personal) or "Organization" (shared)

  5. Click Create Connection

Using the API

curl -X POST https://api.langmart.ai/api/connections \
  -H "Authorization: Bearer YOUR_LANGMART_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "provider_id": "PROVIDER_UUID",
    "endpoint_url": "https://api.openai.com/v1",
    "gateway_type": 2,
    "api_key": "sk-your-openai-key",
    "scope": "individual"
  }'
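
The same request can be issued from Python. A minimal sketch using only the standard library (the endpoint and field names mirror the curl call above; `build_connection_payload` and `create_connection` are illustrative helper names, not part of any SDK):

```python
import json
import urllib.request

def build_connection_payload(provider_id, endpoint_url, api_key,
                             gateway_type=2, scope="individual"):
    """Assemble the request body shown in the curl example above."""
    return {
        "provider_id": provider_id,
        "endpoint_url": endpoint_url,
        "gateway_type": gateway_type,
        "api_key": api_key,
        "scope": scope,
    }

def create_connection(langmart_key, payload):
    """POST the payload to the connections endpoint (makes a network call)."""
    req = urllib.request.Request(
        "https://api.langmart.ai/api/connections",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {langmart_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_connection_payload(
    "PROVIDER_UUID", "https://api.openai.com/v1", "sk-your-openai-key")
# create_connection("YOUR_LANGMART_API_KEY", payload)
```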

Response:

{
  "message": "Connection created successfully",
  "connection": {
    "id": "conn-abc123",
    "provider": {
      "id": "...",
      "key": "openai",
      "name": "OpenAI"
    },
    "endpoint_url": "https://api.openai.com/v1",
    "gateway_type": 2,
    "status": "active",
    "created_at": "2025-01-08T10:30:00Z"
  }
}

Step 3: Test the Connection

After adding a connection, always test it to ensure your API key is valid.

Using the Web Interface

  1. In the Connections list, find your new connection
  2. Click the Test button (checkmark icon)
  3. Wait for the test to complete
  4. A green checkmark indicates success; a red X indicates failure

Using the API

curl -X POST https://api.langmart.ai/api/connections/CONN_ID/test \
  -H "Authorization: Bearer YOUR_LANGMART_API_KEY"

Successful Response:

{
  "success": true,
  "message": "Connection test successful",
  "provider": "OpenAI",
  "gateway_type": 2,
  "models_available": 42,
  "details": "Successfully discovered 42 models"
}

Failed Response:

{
  "success": false,
  "error": "Authentication failed - API key may be invalid",
  "code": "auth_failed",
  "status": 401,
  "provider": "OpenAI"
}
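
A client can branch on the success flag and the code field to decide what to surface. A minimal sketch (field names are taken from the sample responses above; the summary strings are illustrative):

```python
def describe_test_result(result: dict) -> str:
    """Turn a connection-test response into a one-line summary."""
    if result.get("success"):
        return f"OK: {result.get('models_available', 0)} models available"
    code = result.get("code", "unknown")
    if code == "auth_failed":
        return "Auth failed: re-check the API key on this connection"
    return f"Test failed ({code}): {result.get('error', 'no details')}"

ok = {"success": True, "models_available": 42}
bad = {"success": False, "code": "auth_failed",
       "error": "Authentication failed - API key may be invalid"}
print(describe_test_result(ok))   # OK: 42 models available
print(describe_test_result(bad))  # Auth failed: re-check the API key ...
```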

Step 4: Discover Models

Once your connection is active, discover available models:

Using the Web Interface

  1. Click the Discover Models button on your connection
  2. LangMart will fetch all available models from the provider
  3. Models will appear in your Models list

Using the API

curl -X POST https://api.langmart.ai/api/models/discover \
  -H "Authorization: Bearer YOUR_LANGMART_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "connection_id": "CONN_ID"
  }'

Connection Settings

Rate Limits

Configure rate limits to prevent hitting provider quotas:

Setting  Description          Default
RPM      Requests per minute  60
TPM      Tokens per minute    40,000
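
The RPM setting maps naturally onto a sliding-window limiter. A minimal local sketch of the idea (this is not LangMart's implementation, just an illustration of how an RPM cap behaves):

```python
import time
from collections import deque
from typing import Optional

class RequestsPerMinuteLimiter:
    """Sliding-window limiter: allow at most `rpm` requests in any 60 s span."""

    def __init__(self, rpm: int = 60):
        self.rpm = rpm
        self.sent = deque()  # timestamps of requests inside the window

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the 60-second window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) < self.rpm:
            self.sent.append(now)
            return True
        return False

limiter = RequestsPerMinuteLimiter(rpm=3)
print([limiter.allow(now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
print(limiter.allow(now=61))  # True - the earliest requests have aged out
```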

Priority Settings

Control how connections are selected during model resolution:

Setting         Description                             Default
Prefer Score    Higher scores are tried first (1-1000)  100
Allow Fallback  If this connection fails, try others    true
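
The interaction between prefer score and fallback can be sketched as a simple resolution loop. The field names below are illustrative, not LangMart's actual schema:

```python
def order_connections(connections):
    """Sort candidate connections by prefer score, highest first."""
    return sorted(connections, key=lambda c: c["prefer_score"], reverse=True)

def resolve(connections, try_connection):
    """Try connections in priority order; stop early if fallback is disallowed."""
    for conn in order_connections(connections):
        if try_connection(conn):
            return conn["name"]
        if not conn["allow_fallback"]:
            break  # this connection failed and forbids trying others
    return None

conns = [
    {"name": "primary", "prefer_score": 900, "allow_fallback": True},
    {"name": "backup", "prefer_score": 100, "allow_fallback": True},
]

# Simulate the primary connection failing: resolution falls back to backup.
print(resolve(conns, lambda c: c["name"] != "primary"))  # backup
```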

Organization Connections

To create a connection shared with your team, select Organization scope when creating it. Note that:

  • Only organization admins can create org connections
  • All members can use org connections
  • Billing goes to the organization account

Provider-Specific Notes

OpenAI

  • Endpoint URL: https://api.openai.com/v1
  • Supports: GPT-4o, GPT-4, GPT-3.5, DALL-E, Whisper, Embeddings
  • Rate limits vary by tier (free, pay-as-you-go, enterprise)

Anthropic

  • Endpoint URL: https://api.anthropic.com/v1
  • Supports: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
  • Max tokens vary by model (some support up to 200K context)

Groq

  • Endpoint URL: https://api.groq.com/openai/v1
  • Supports: Llama 3, Mixtral, Gemma (extremely fast inference)
  • Very high rate limits for most users

Google Gemini

  • Endpoint URL: https://generativelanguage.googleapis.com/v1beta
  • Supports: Gemini Pro, Gemini Flash
  • Free tier available with generous limits

Together AI

  • Endpoint URL: https://api.together.xyz/v1
  • Supports: Many open-source models (Llama, Mistral, etc.)
  • Cost-effective for open-source model inference
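
When creating connections via the API, the default endpoint URLs above can be kept in a single lookup table. A small sketch (the dictionary keys are illustrative; the URLs are the ones listed in this guide):

```python
# Default endpoint URLs from the provider notes above, keyed by provider.
PROVIDER_ENDPOINTS = {
    "openai": "https://api.openai.com/v1",
    "anthropic": "https://api.anthropic.com/v1",
    "groq": "https://api.groq.com/openai/v1",
    "google": "https://generativelanguage.googleapis.com/v1beta",
    "together": "https://api.together.xyz/v1",
}

def endpoint_for(provider_key: str) -> str:
    """Look up the default endpoint, failing loudly on unknown providers."""
    try:
        return PROVIDER_ENDPOINTS[provider_key]
    except KeyError:
        raise ValueError(f"No default endpoint known for {provider_key!r}")

print(endpoint_for("groq"))  # https://api.groq.com/openai/v1
```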

Troubleshooting

"Authentication failed"

  • Double-check your API key is correct
  • Ensure the key hasn't expired or been revoked
  • Verify you have billing set up with the provider

"Connection refused"

  • Check the endpoint URL is correct
  • Ensure no firewall is blocking the connection
  • For self-hosted, verify your gateway is running

"Rate limit exceeded"

  • Wait a few minutes and try again
  • Consider upgrading your provider tier
  • Adjust your rate limit settings in LangMart

"No models discovered"

  • Some providers require specific permissions to list models
  • Try creating a test request instead of discovering models
  • Check provider documentation for API access requirements

Next Steps