# Connection Issues
This guide helps you diagnose and resolve connection problems with LangMart and provider APIs.
## Invalid API Key Errors

### Symptoms
```json
{
  "error": {
    "code": "GATEWAY1_AUTH_001",
    "message": "Invalid API key",
    "type": "authentication_error",
    "status": 401
  }
}
```

### Causes
- **Incorrect API Key**: The key is misspelled or incomplete
- **Expired API Key**: The key has been rotated or deleted
- **Wrong Environment**: Using a production key in a test environment, or vice versa
- **Missing Bearer Prefix**: The Authorization header format is incorrect
### Solutions

#### Check API Key Format
Ensure your authorization header is formatted correctly:
```bash
# Correct format
-H "Authorization: Bearer sk-langmart-your-key-here"

# Common mistakes
-H "Authorization: sk-langmart-your-key-here"   # Missing "Bearer"
-H "Bearer sk-langmart-your-key-here"           # Missing "Authorization:"
```

#### Verify Your API Key
1. Go to **Settings > API Keys** in the dashboard
2. Check that your key exists and is active
3. Try generating a new key if issues persist
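To rule out copy/paste mistakes, it can help to export the key once and build the header from the variable. The variable name below is just a convention, not something LangMart requires:

```shell
# Store the key once in the environment (variable name is a convention)
export LANGMART_API_KEY="sk-langmart-your-key-here"

# Build the header from the variable so typos cannot creep in
AUTH_HEADER="Authorization: Bearer ${LANGMART_API_KEY}"
echo "$AUTH_HEADER"

# Pass it to any request, e.g.:
#   curl -s "https://api.langmart.ai/v1/auth/validate" -H "$AUTH_HEADER"
```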
#### Test Authentication
```bash
curl -X GET "https://api.langmart.ai/v1/auth/validate" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

Expected response:
```json
{
  "valid": true,
  "user": {
    "id": "user_123",
    "email": "[email protected]"
  }
}
```

## Provider Connection Failures
### Symptoms
```json
{
  "error": {
    "code": "PROVIDER_OPENAI_401",
    "message": "Provider authentication failed",
    "type": "authentication_error",
    "status": 401
  }
}
```

### Causes
- **Invalid Provider API Key**: The key configured for the provider is incorrect
- **Expired Provider Key**: The provider key has expired or been revoked
- **Insufficient Provider Permissions**: The key lacks required scopes
- **Provider Account Issues**: The account is suspended or has billing problems
### Solutions

#### Update Provider API Key
1. Navigate to **Connections** in the dashboard
2. Find the failing connection
3. Click **Edit** or the key icon
4. Enter your new provider API key
5. Click **Test** to verify
#### Verify Provider Key Directly
Test your provider key directly to isolate the issue:
```bash
# Test an OpenAI key
curl "https://api.openai.com/v1/models" \
  -H "Authorization: Bearer YOUR_OPENAI_KEY"

# Test an Anthropic key
curl "https://api.anthropic.com/v1/messages" \
  -H "x-api-key: YOUR_ANTHROPIC_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-sonnet-20240229", "max_tokens": 10, "messages": [{"role": "user", "content": "Hi"}]}'

# Test a Groq key
curl "https://api.groq.com/openai/v1/models" \
  -H "Authorization: Bearer YOUR_GROQ_KEY"
```

#### Check Provider Status
- OpenAI: https://status.openai.com
- Anthropic: https://status.anthropic.com
- Google AI: https://status.cloud.google.com
## Rate Limiting

### Symptoms
```json
{
  "error": {
    "code": "PROVIDER_OPENAI_429",
    "message": "Rate limit exceeded",
    "type": "rate_limit_error",
    "status": 429
  }
}
```

Response headers:
```
Retry-After: 60
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 2025-01-15T14:30:00Z
```

### Causes
- **Too Many Requests**: Exceeding the requests-per-minute limit
- **Token Limits**: Exceeding the tokens-per-minute limit
- **Concurrent Connections**: Too many simultaneous requests
- **Provider Tier Limits**: Your provider account is on a lower tier with strict limits
### Solutions

#### Implement Exponential Backoff
```python
import time
import random

def make_request_with_backoff(request_func, max_retries=5):
    for attempt in range(max_retries):
        try:
            response = request_func()
            if response.status_code == 429:
                # Get the retry delay from the header, defaulting to 60s
                retry_after = int(response.headers.get('Retry-After', 60))
                # Add jitter to prevent a thundering herd
                jitter = random.uniform(0, 0.1 * retry_after)
                time.sleep(retry_after + jitter)
                continue
            return response
        except Exception:
            if attempt == max_retries - 1:
                raise
            wait_time = (2 ** attempt) + random.uniform(0, 1)
            time.sleep(wait_time)
    raise RuntimeError("Rate limited on every attempt; giving up")
```

#### Use Request Queuing
```python
from queue import Queue
from threading import Thread
import time

class RateLimitedQueue:
    def __init__(self, requests_per_minute=60):
        self.queue = Queue()
        self.interval = 60.0 / requests_per_minute
        # Drain the queue on a background thread at the allowed rate
        self.worker = Thread(target=self.process_queue, daemon=True)
        self.worker.start()

    def add_request(self, request):
        self.queue.put(request)

    def process_queue(self):
        while True:
            request = self.queue.get()
            request()
            time.sleep(self.interval)

# Usage: queue = RateLimitedQueue(requests_per_minute=60)
#        queue.add_request(lambda: make_api_call(...))
```

#### Spread Load Across Providers
Configure multiple connections and use load balancing:

1. Add connections to multiple providers (OpenAI, Anthropic, Groq)
2. Create a connection pool
3. Enable round-robin or weighted load balancing
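If you manage load balancing in client code rather than through the dashboard, a minimal round-robin rotation over roughly equivalent models could look like this sketch (the model pool and its ordering are illustrative, not a required configuration):

```python
import itertools

# Hypothetical pool of roughly equivalent models on different providers
MODEL_POOL = [
    "gpt-4o-mini",
    "claude-3-haiku",
    "groq/llama-3.3-70b-versatile",
]

# itertools.cycle yields the pool entries in round-robin order forever
_rotation = itertools.cycle(MODEL_POOL)

def next_model():
    """Return the next model in the rotation."""
    return next(_rotation)
```

Each request then calls `next_model()` to pick its target, spreading load evenly across providers; a weighted scheme can be approximated by repeating heavier-weighted models in the pool.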
#### Upgrade Provider Tier
Contact your provider to increase rate limits:
- OpenAI: https://platform.openai.com/account/limits
- Anthropic: https://console.anthropic.com
- Google AI: https://console.cloud.google.com/apis/quotas
## Timeout Errors

### Symptoms
```json
{
  "error": {
    "code": "GATEWAY_TIMEOUT_001",
    "message": "Gateway did not respond within 60s",
    "type": "gateway_timeout",
    "status": 504
  }
}
```

### Causes
- **Large Requests**: Very long prompts or large outputs
- **Slow Models**: Some models are slower than others
- **Provider Overload**: The provider is experiencing high load
- **Network Issues**: Connectivity problems between services
### Solutions

#### Reduce Request Size
```python
# Split large prompts into smaller chunks
def chunk_prompt(prompt, max_chars=4000):
    """Split a prompt into chunks of at most roughly max_chars characters.

    Character count is used as a cheap proxy for token count
    (roughly 4 characters per token for English text).
    """
    words = prompt.split()
    chunks = []
    current_chunk = []
    current_length = 0
    for word in words:
        if current_length + len(word) > max_chars:
            chunks.append(' '.join(current_chunk))
            current_chunk = [word]
            current_length = len(word)
        else:
            current_chunk.append(word)
            current_length += len(word) + 1  # +1 for the joining space
    if current_chunk:
        chunks.append(' '.join(current_chunk))
    return chunks
```

#### Set Appropriate Timeouts
```python
import requests

response = requests.post(
    "https://api.langmart.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={"model": "gpt-4o", "messages": [...]},
    timeout=120  # Increase the timeout for large requests
)
```

#### Use Streaming
Enable streaming to get partial responses faster:
```python
import requests

response = requests.post(
    "https://api.langmart.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={
        "model": "gpt-4o",
        "messages": [...],
        "stream": True  # Enable streaming
    },
    stream=True
)

for line in response.iter_lines():
    if line:
        print(line.decode())
```

#### Choose Faster Models
If speed is critical, consider:
| Model | Typical Latency |
|---|---|
| `groq/llama-3.3-70b-versatile` | Very fast |
| `gpt-4o-mini` | Fast |
| `claude-3-haiku` | Fast |
| `gpt-4o` | Medium |
| `claude-3-opus` | Slower |
## Gateway Unavailable

### Symptoms
```json
{
  "error": {
    "code": "GATEWAY3_CONN_001",
    "message": "Gateway temporarily unavailable",
    "type": "service_unavailable",
    "status": 503
  }
}
```

### Causes
- **Gateway Maintenance**: A scheduled maintenance window
- **Gateway Overload**: High traffic overwhelming the gateway
- **No Available Gateways**: All gateway instances are offline
- **Network Partition**: Network issues between services
### Solutions

#### Check System Status
1. Visit the LangMart status page
2. Check for any announced maintenance windows
3. Look for recent incident reports
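You can also probe the gateway's health endpoint directly from the command line; a quick liveness check might look like this:

```shell
# -f exits non-zero on HTTP errors; -s silences progress output.
# --max-time bounds the probe so a hung connection fails fast.
if curl -fs --max-time 10 "https://api.langmart.ai/health" > /dev/null; then
  echo "gateway reachable"
else
  echo "gateway unreachable"
fi
```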
#### Implement Retry Logic
```python
import time

def request_with_retry(request_func, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = request_func()
            if response.status_code == 503:
                time.sleep(5 * (attempt + 1))  # Linear backoff
                continue
            return response
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            time.sleep(5)
    raise RuntimeError("Service unavailable on every attempt")
```

#### Use Alternative Providers
Configure fallback connections to use if primary gateway is unavailable.
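In client code, a simple failover loop that tries each configured model in order until one succeeds might look like the sketch below. The model names, the 502/503/504 check, and the error handling are illustrative assumptions, not a prescribed configuration:

```python
import requests

# Ordered preference list; later entries serve as fallbacks (illustrative)
FALLBACK_MODELS = ["gpt-4o", "claude-3-haiku", "groq/llama-3.3-70b-versatile"]

def chat_with_failover(messages, api_key, models=FALLBACK_MODELS,
                       base_url="https://api.langmart.ai"):
    """Try each model in order, returning the first successful response."""
    last_error = None
    for model in models:
        try:
            response = requests.post(
                f"{base_url}/v1/chat/completions",
                headers={"Authorization": f"Bearer {api_key}"},
                json={"model": model, "messages": messages},
                timeout=60,
            )
            # Move on to the next model when the gateway itself is down
            if response.status_code in (502, 503, 504):
                continue
            return response
        except (requests.ConnectionError, requests.Timeout) as e:
            last_error = e
    raise RuntimeError(f"All models failed; last error: {last_error}")
```

Note that a non-availability error such as a 401 is returned immediately rather than retried, so a credentials problem is not masked by the failover.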
## Network Connectivity Issues

### Diagnosis
```bash
# Check DNS resolution
nslookup api.langmart.ai

# Check connectivity
curl -v "https://api.langmart.ai/health"

# Check the SSL certificate
openssl s_client -connect api.langmart.ai:443
```

### Common Network Issues
| Issue | Symptoms | Solution |
|---|---|---|
| DNS failure | Cannot resolve hostname | Check DNS settings, try 8.8.8.8 |
| SSL error | Certificate validation failed | Check system time, update CA certs |
| Firewall blocking | Connection refused | Check firewall rules for port 443 |
| Proxy issues | Connection timeout | Configure proxy settings |
### Configure Proxy
```python
import requests

proxies = {
    'http': 'http://proxy.example.com:8080',
    'https': 'http://proxy.example.com:8080',
}

response = requests.post(
    "https://api.langmart.ai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={...},
    proxies=proxies
)
```

## Still Having Issues?
If you've tried the solutions above and still have problems:
1. **Check Error Analytics**: Review recent errors in the dashboard
2. **Gather Information**: Collect the request ID, timestamp, and full error response
3. **Contact Support**: Submit a ticket with the gathered information
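When filing a ticket, it can save a round trip to capture those fields programmatically from the failed response. The header name below is a common convention, not a documented LangMart header; inspect your actual response headers for the exact name:

```python
from datetime import datetime, timezone

def build_support_report(response):
    """Collect the fields support usually asks for from a failed response."""
    return {
        # Header name is an assumption; check response.headers for the
        # request-ID header your responses actually carry
        "request_id": response.headers.get("X-Request-ID", "unknown"),
        "status": response.status_code,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "body": response.text,
    }
```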
## Related Documentation
- **API Errors Reference** - Complete error code reference
- **Billing Issues** - Payment and credit problems
- **Error Tracking** - Monitor error trends