Understanding Credits

Credits are the foundation of LangMart's billing system. This guide explains how credits work, how costs are calculated, and how different models affect your spending.

Credit Basics

  • 1 credit = $1 USD
  • Credits never expire
  • Unused credits remain in your account
  • Credits are non-refundable once purchased

How Costs Are Calculated

The number of credits each API request costs depends on:

  1. The model you use - Different models have different prices
  2. Input tokens - The text you send to the model
  3. Output tokens - The text the model generates
  4. Special token types - Some models have additional token categories

Token-to-Credit Conversion

The basic formula for calculating request cost is:

Cost = (Input Tokens / 1,000 x Input Price per 1K) + (Output Tokens / 1,000 x Output Price per 1K)

For example, if you send 1,000 input tokens and receive 500 output tokens using a model priced at $0.001 per 1K input tokens and $0.002 per 1K output tokens:

Cost = (1,000 x $0.001/1K) + (500 x $0.002/1K)
     = $0.001 + $0.001
     = $0.002 (0.002 credits)
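
The same calculation in code, as a minimal sketch. The function name and the per-1K prices are illustrative, not part of the LangMart API:

def request_cost(input_tokens, output_tokens, input_price_per_1k, output_price_per_1k):
    """Return the cost of a request in credits (1 credit = $1 USD)."""
    return (input_tokens / 1000) * input_price_per_1k + (output_tokens / 1000) * output_price_per_1k

# 1,000 input tokens and 500 output tokens at $0.001/1K input and $0.002/1K output:
print(request_cost(1000, 500, 0.001, 0.002))  # 0.002 credits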

Model Pricing Differences

Different models have dramatically different pricing. Here's a general guide to the tiers, with a cost comparison sketch after them:

Economy Models

Low-cost models suitable for simple tasks:

  • Cost range: $0.0001 - $0.001 per 1K tokens
  • Best for: Simple queries, classification, formatting
  • Examples: Smaller Llama models, Mistral 7B

Standard Models

Balanced cost and capability:

  • Cost range: $0.001 - $0.01 per 1K tokens
  • Best for: General conversation, content generation, code assistance
  • Examples: GPT-3.5-turbo, Claude Instant, Llama 70B

Premium Models

High-capability models for complex tasks:

  • Cost range: $0.01 - $0.15 per 1K tokens
  • Best for: Complex reasoning, code generation, analysis
  • Examples: GPT-4, Claude Opus, Gemini Pro

Frontier Models

Cutting-edge models with the highest capabilities:

  • Cost range: $0.15+ per 1K tokens
  • Best for: Advanced research, complex multi-step reasoning
  • Examples: GPT-4 Turbo, Claude 3 Opus
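
To see how tier choice drives spending, the sketch below prices the same 1,000-input / 500-output request at a representative per-1K rate from each range above. The rates are illustrative assumptions, not actual LangMart prices, and real models price input and output tokens separately:

# Illustrative per-1K rates picked from within each tier's range
tier_price_per_1k = {
    "economy": 0.0005,
    "standard": 0.005,
    "premium": 0.05,
    "frontier": 0.15,
}

input_tokens, output_tokens = 1000, 500

for tier, price in tier_price_per_1k.items():
    cost = (input_tokens + output_tokens) / 1000 * price
    print(f"{tier:>8}: {cost:.4f} credits")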

Special Token Types

Some models include additional token categories that may affect pricing (a combined cost sketch follows the lists below):

Cached Input Tokens

  • Tokens from repeated prompts that can be cached
  • Often priced lower than regular input tokens
  • Reduces costs for repetitive tasks

Reasoning Tokens

  • Internal tokens used by models for complex reasoning
  • Some models charge separately for these
  • Used by models with chain-of-thought capabilities
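
Putting the categories together, here is a hedged sketch of an extended cost formula. Whether a given model bills cached input or reasoning tokens separately, and at what rate, varies by model; the category names and prices below are assumptions for illustration:

def request_cost_extended(usage, prices_per_1k):
    """Sum per-category costs; categories a model doesn't report default to zero."""
    cost = 0.0
    for category in ("input", "cached_input", "output", "reasoning"):
        tokens = usage.get(category, 0)
        price = prices_per_1k.get(category, 0.0)
        cost += tokens / 1000 * price
    return cost

# Hypothetical model: cached input billed at a quarter of the input rate,
# reasoning tokens billed at the output rate.
prices = {"input": 0.004, "cached_input": 0.001, "output": 0.012, "reasoning": 0.012}
usage = {"input": 400, "cached_input": 600, "output": 300, "reasoning": 800}
print(request_cost_extended(usage, prices))  # about 0.0154 credits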

Self-Funded vs. Organization-Funded

Your funding type affects how credits are managed:

Self-Funded Models

When using self-funded models:

  • Credits are deducted from your personal balance
  • You must maintain a minimum balance ($0.10 by default); a pre-flight check is sketched below
  • You get full visibility into your spending
  • You control when to add credits
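
One way to respect the minimum balance is a pre-flight check before each request. This is a sketch under assumptions: it treats the minimum as a floor the balance should not drop below, and it leaves open how you fetch the current balance (dashboard, billing page, or response headers):

MINIMUM_BALANCE = 0.10  # default minimum for self-funded models, in credits

def can_afford(balance, estimated_cost, minimum=MINIMUM_BALANCE):
    """Only send the request if it won't push the balance below the minimum."""
    return balance - estimated_cost >= minimum

if not can_afford(balance=0.25, estimated_cost=0.18):
    print("Top up credits before sending this request.")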

Organization-Funded Models

When using organization-funded models:

  • Credits come from the organization's pool
  • Organization admins set spending limits
  • Usage is tracked per member
  • No personal credit purchase is needed

Checking Your Balance

You can check your credit balance in several ways:

  1. Dashboard - View balance in the top navigation bar
  2. Settings page - Detailed balance information
  3. API response headers - Remaining credits are returned with each request (see the sketch below)
  4. Billing page - Full transaction history
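
For example, reading the remaining balance from response headers. The endpoint URL, request body, and header name below are placeholders for illustration; check the API reference for the actual values:

import requests

response = requests.post(
    "https://api.langmart.example/v1/chat/completions",  # placeholder URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"model": "your-model", "messages": [{"role": "user", "content": "Hi"}]},
)

# Hypothetical header name -- consult the API reference for the real one.
remaining = response.headers.get("X-Credits-Remaining")
print(f"Credits remaining: {remaining}")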

Cost Optimization Tips

Choose the Right Model

  • Use economy models for simple tasks
  • Reserve premium models for complex needs
  • Test with cheaper models first

Optimize Token Usage

  • Be concise in your prompts
  • Limit output length when possible (as in the sketch below)
  • Use system prompts efficiently
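
For instance, capping completion length keeps output-token costs bounded. The parameter name varies by endpoint and model (max_tokens is a common convention), so treat this request body as an assumption rather than the exact LangMart schema:

# A tight system prompt plus an output cap keeps both token counts small.
payload = {
    "model": "your-model",
    "messages": [
        {"role": "system", "content": "Answer in one short paragraph."},
        {"role": "user", "content": "Summarize the release notes above."},
    ],
    "max_tokens": 150,  # hard cap on output tokens; name may differ per provider
}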

Monitor Usage

  • Set up cost alerts (a client-side check is sketched below)
  • Review usage analytics regularly
  • Identify expensive patterns
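
If the platform's built-in alerts don't cover a case, a client-side check is easy to add. The sketch below compares a running daily spend against a threshold; where the spend figure comes from (usage analytics, your own request logging) is left open:

DAILY_ALERT_THRESHOLD = 5.0  # credits per day; pick a value that fits your budget

def check_spend(spent_today, threshold=DAILY_ALERT_THRESHOLD):
    """Print a warning once daily spend crosses the threshold."""
    if spent_today >= threshold:
        print(f"Alert: {spent_today:.2f} credits spent today (threshold {threshold:.2f}).")

check_spend(spent_today=6.4)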

Use Caching

  • Cache responses for repeated queries (sketched below)
  • Use cached input tokens when available
  • Implement request deduplication
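
A minimal response cache keyed on the request payload covers the first and third points. It assumes deterministic settings (for example, temperature 0) so that identical payloads really are repeats; provider-side cached input tokens are separate and handled by the model provider:

import hashlib
import json

_cache = {}

def cached_completion(payload, send_request):
    """Return a stored response for an identical payload instead of paying for it again.

    send_request is whatever function actually calls the API.
    """
    key = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = send_request(payload)
    return _cache[key]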