Connection Issues

This guide helps you diagnose and resolve connection problems with LangMart and provider APIs.

Invalid API Key Errors

Symptoms

{
  "error": {
    "code": "GATEWAY1_AUTH_001",
    "message": "Invalid API key",
    "type": "authentication_error",
    "status": 401
  }
}

Causes

  1. Incorrect API Key: Key is misspelled or incomplete
  2. Expired API Key: Key has been rotated or deleted
  3. Wrong Environment: Using production key in test environment or vice versa
  4. Missing Bearer Prefix: Authorization header format is incorrect

Solutions

Check API Key Format

Ensure your authorization header is formatted correctly:

# Correct format
-H "Authorization: Bearer sk-langmart-your-key-here"

# Common mistakes
-H "Authorization: sk-langmart-your-key-here"  # Missing "Bearer"
-H "Bearer sk-langmart-your-key-here"  # Missing "Authorization:"

Verify Your API Key

  1. Go to Settings > API Keys in the dashboard
  2. Check that your key exists and is active
  3. Try generating a new key if issues persist
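
Before regenerating, a quick programmatic sanity check can catch copy/paste damage such as stray whitespace or truncation. The pattern below is inferred from the key examples in this guide; your actual key format may differ:

```python
import re

# Prefix taken from the examples in this guide; adjust to your key format.
KEY_PATTERN = re.compile(r"^sk-langmart-[A-Za-z0-9-]+$")

def looks_like_langmart_key(key):
    """Catch obvious copy/paste problems (whitespace, truncation, wrong prefix)."""
    return bool(KEY_PATTERN.match(key.strip()))

print(looks_like_langmart_key("sk-langmart-your-key-here"))  # → True
```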

Test Authentication

curl -X GET "https://api.langmart.ai/v1/auth/validate" \
  -H "Authorization: Bearer YOUR_API_KEY"

Expected response:

{
  "valid": true,
  "user": {
    "id": "user_123",
    "email": "[email protected]"
  }
}

Provider Connection Failures

Symptoms

{
  "error": {
    "code": "PROVIDER_OPENAI_401",
    "message": "Provider authentication failed",
    "type": "authentication_error",
    "status": 401
  }
}

Causes

  1. Invalid Provider API Key: The key configured for the provider is incorrect
  2. Expired Provider Key: The provider key has expired or been revoked
  3. Insufficient Provider Permissions: Key lacks required scopes
  4. Provider Account Issues: Account suspended or billing issues

Solutions

Update Provider API Key

  1. Navigate to Connections in the dashboard
  2. Find the failing connection
  3. Click Edit or the key icon
  4. Enter your new provider API key
  5. Click Test to verify

Verify Provider Key Directly

Test your provider key directly to isolate the issue:

# Test OpenAI key
curl "https://api.openai.com/v1/models" \
  -H "Authorization: Bearer YOUR_OPENAI_KEY"

# Test Anthropic key
curl "https://api.anthropic.com/v1/messages" \
  -H "x-api-key: YOUR_ANTHROPIC_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-sonnet-20240229", "max_tokens": 10, "messages": [{"role": "user", "content": "Hi"}]}'

# Test Groq key
curl "https://api.groq.com/openai/v1/models" \
  -H "Authorization: Bearer YOUR_GROQ_KEY"

Check Provider Status

If your provider key works when tested directly, check the provider's status page (for example, status.openai.com or status.anthropic.com) for ongoing incidents or degraded service.

Rate Limiting

Symptoms

{
  "error": {
    "code": "PROVIDER_OPENAI_429",
    "message": "Rate limit exceeded",
    "type": "rate_limit_error",
    "status": 429
  }
}

Response headers:

Retry-After: 60
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 2025-01-15T14:30:00Z
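
These headers tell you how long to wait before retrying. A minimal sketch for picking a delay from them, assuming `Retry-After` is given in seconds and `X-RateLimit-Reset` is an ISO-8601 timestamp as in the example above:

```python
from datetime import datetime, timezone

def retry_delay(headers, default=60):
    """Pick a wait time (seconds) from rate-limit headers, preferring Retry-After."""
    if "Retry-After" in headers:
        return int(headers["Retry-After"])
    reset = headers.get("X-RateLimit-Reset")
    if reset:
        # Header carries an ISO-8601 timestamp; wait until that moment.
        reset_at = datetime.fromisoformat(reset.replace("Z", "+00:00"))
        return max(0, (reset_at - datetime.now(timezone.utc)).total_seconds())
    return default

print(retry_delay({"Retry-After": "60"}))  # → 60
```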

Causes

  1. Too Many Requests: Exceeding requests per minute
  2. Token Limits: Exceeding tokens per minute
  3. Concurrent Connections: Too many simultaneous requests
  4. Provider Tier Limits: On a lower tier with strict limits

Solutions

Implement Exponential Backoff

import time
import random

def make_request_with_backoff(request_func, max_retries=5):
    for attempt in range(max_retries):
        try:
            response = request_func()
            if response.status_code == 429:
                # Honor the Retry-After header when present; default to 60s
                retry_after = int(response.headers.get('Retry-After', 60))
                # Add jitter to prevent a thundering herd of synchronized retries
                jitter = random.uniform(0, 0.1 * retry_after)
                time.sleep(retry_after + jitter)
                continue
            return response
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff with jitter between failed attempts
            wait_time = (2 ** attempt) + random.uniform(0, 1)
            time.sleep(wait_time)
    raise RuntimeError("Rate limited on every attempt; giving up")

Use Request Queuing

from queue import Queue
from threading import Thread
import time

class RateLimitedQueue:
    def __init__(self, requests_per_minute=60):
        self.queue = Queue()
        # Minimum spacing between requests, in seconds
        self.interval = 60.0 / requests_per_minute

    def add_request(self, request):
        self.queue.put(request)

    def process_queue(self):
        while True:
            request = self.queue.get()
            request()
            time.sleep(self.interval)

# Drain the queue on a background thread so callers never block
limiter = RateLimitedQueue(requests_per_minute=60)
Thread(target=limiter.process_queue, daemon=True).start()

Spread Load Across Providers

Configure multiple connections and use load balancing:

  1. Add connections to multiple providers (OpenAI, Anthropic, Groq)
  2. Create a connection pool
  3. Enable round-robin or weighted load balancing
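
Client-side, the same idea can be sketched as a simple round-robin over equivalent models. The pool contents and helper below are illustrative stand-ins for the gateway-side pooling described above:

```python
import itertools

# Hypothetical pool of roughly interchangeable models on different providers.
MODEL_POOL = itertools.cycle([
    "gpt-4o-mini",
    "claude-3-haiku",
    "groq/llama-3.3-70b-versatile",
])

def next_model():
    """Return the next model in round-robin order."""
    return next(MODEL_POOL)

print(next_model())  # → gpt-4o-mini
```

Each request then uses `next_model()` for its `model` field, spreading traffic evenly so no single provider's rate limit is exhausted.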

Upgrade Provider Tier

Contact your provider to increase rate limits. Most providers tie limits to your account tier, so upgrading your plan or requesting a limit increase usually resolves persistent 429s.

Timeout Errors

Symptoms

{
  "error": {
    "code": "GATEWAY_TIMEOUT_001",
    "message": "Gateway did not respond within 60s",
    "type": "gateway_timeout",
    "status": 504
  }
}

Causes

  1. Large Requests: Very long prompts or large outputs
  2. Slow Models: Some models are slower than others
  3. Provider Overload: Provider is experiencing high load
  4. Network Issues: Connectivity problems between services

Solutions

Reduce Request Size

# Split large prompts into smaller chunks (character count is a rough
# proxy for tokens: about 4 characters per token for English text)
def chunk_prompt(prompt, max_chars=4000):
    words = prompt.split()
    chunks = []
    current_chunk = []
    current_length = 0

    for word in words:
        if current_length + len(word) + 1 > max_chars:
            chunks.append(' '.join(current_chunk))
            current_chunk = [word]
            current_length = len(word)
        else:
            current_chunk.append(word)
            current_length += len(word) + 1

    if current_chunk:
        chunks.append(' '.join(current_chunk))

    return chunks

Set Appropriate Timeouts

import requests

response = requests.post(
    "https://api.langmart.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={"model": "gpt-4o", "messages": [...]},
    timeout=120  # Increase timeout for large requests
)

Use Streaming

Enable streaming to get partial responses faster:

response = requests.post(
    "https://api.langmart.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={
        "model": "gpt-4o",
        "messages": [...],
        "stream": True  # Enable streaming
    },
    stream=True
)

for line in response.iter_lines():
    if not line:
        continue
    decoded = line.decode()
    # Chunks arrive as server-sent events: "data: {json}" lines
    if decoded.startswith("data: "):
        payload = decoded[len("data: "):]
        if payload == "[DONE]":
            break
        print(payload)

Choose Faster Models

If speed is critical, consider:

Model                          Typical Latency
groq/llama-3.3-70b-versatile   Very fast
gpt-4o-mini                    Fast
claude-3-haiku                 Fast
gpt-4o                         Medium
claude-3-opus                  Slower

Gateway Unavailable

Symptoms

{
  "error": {
    "code": "GATEWAY3_CONN_001",
    "message": "Gateway temporarily unavailable",
    "type": "service_unavailable",
    "status": 503
  }
}

Causes

  1. Gateway Maintenance: Scheduled maintenance window
  2. Gateway Overload: High traffic overwhelming the gateway
  3. No Available Gateways: All gateway instances are offline
  4. Network Partition: Network issues between services

Solutions

Check System Status

  1. Visit the LangMart status page
  2. Check for any announced maintenance windows
  3. Look for recent incident reports

Implement Retry Logic

import time

def request_with_retry(request_func, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = request_func()
            if response.status_code == 503:
                time.sleep(5 * (attempt + 1))  # Linear backoff: 5s, 10s, 15s
                continue
            return response
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            time.sleep(5)
    raise RuntimeError("Gateway still unavailable after all retries")

Use Alternative Providers

Configure fallback connections to use when the primary gateway is unavailable.
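
A minimal client-side fallback sketch. The callables here are stand-ins for requests to individual provider connections, each returning a `(status_code, body)` pair; the retry policy is illustrative, not LangMart's built-in fallback behavior:

```python
def call_with_fallback(primary, fallbacks):
    """Try the primary callable; on 503 or connection failure, try fallbacks in order."""
    last_error = None
    for attempt in [primary, *fallbacks]:
        try:
            status, body = attempt()
            if status != 503:
                return status, body
            last_error = RuntimeError("503 Service Unavailable")
        except ConnectionError as exc:
            last_error = exc
    raise last_error

# Example with stand-in callables: primary is down, fallback answers.
status, body = call_with_fallback(
    lambda: (503, None),
    [lambda: (200, "ok from fallback")],
)
print(status, body)  # → 200 ok from fallback
```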

Network Connectivity Issues

Diagnosis

# Check DNS resolution
nslookup api.langmart.ai

# Check connectivity
curl -v "https://api.langmart.ai/health"

# Check SSL certificate
openssl s_client -connect api.langmart.ai:443 -servername api.langmart.ai </dev/null

Common Network Issues

Issue               Symptoms                        Solution
DNS failure         Cannot resolve hostname         Check DNS settings, try 8.8.8.8
SSL error           Certificate validation failed   Check system time, update CA certs
Firewall blocking   Connection refused              Check firewall rules for port 443
Proxy issues        Connection timeout              Configure proxy settings

Configure Proxy

import requests

proxies = {
    'http': 'http://proxy.example.com:8080',
    'https': 'http://proxy.example.com:8080',
}

response = requests.post(
    "https://api.langmart.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={...},
    proxies=proxies
)

Still Having Issues?

If you've tried the solutions above and still have problems:

  1. Check Error Analytics: Review recent errors in the dashboard
  2. Gather Information: Collect request ID, timestamp, and full error
  3. Contact Support: Submit a ticket with the gathered information