Models Overview

Quick Access: Browse Models | Model Collections

LangMart provides access to hundreds of AI models from multiple providers through a unified interface. Browse, compare, and use models from OpenAI, Anthropic, Google, Meta, Mistral, and many more.

The Models Page

The Models page is your central hub for discovering and managing AI models.

Accessing Models

Navigate to Models from the sidebar to:

  • Browse all available models
  • Search and filter by capabilities
  • View model details and pricing
  • Manage favorites and collections
  • Access models for chat

Page Tabs

The Models page is organized into tabs:

Tab          Description
Models       All models available to your organization
My Models    Your personal model connections
Favorites    Models you've starred
Collections  Custom model groups you've created

Browsing Models

The Model Grid

Models are displayed in a grid showing key information:

Model Card Contents:

  • Model name and provider
  • Capability badges (vision, reasoning, tools, etc.)
  • Pricing information
  • Context window size
  • Billing mode indicator
  • Favorite button

Provider Tabs

Filter models by provider using the provider tabs:

  • All - Show all models
  • OpenAI - GPT models
  • Anthropic - Claude models
  • Google - Gemini models
  • Meta - Llama models
  • Mistral - Mistral/Mixtral models
  • Others - Additional providers

Click a provider tab to filter the view.

View Modes

Toggle between display modes:

  • Tabs View - Grouped by provider with tabs
  • List View - Single scrollable list

Search and Filter

Type in the search box to filter models:

  • Search by model name
  • Search by provider name
  • Search by model ID
  • Multiple terms work (space-separated)

Capability Filters

Filter by model capabilities:

Capability  Description
Vision      Can process images
Reasoning   Enhanced reasoning/thinking
Tool Use    Can call functions/tools
Image Gen   Can generate images
Audio       Can process audio
Embedding   Creates text embeddings

Click capability badges to toggle filters. Multiple filters combine (AND logic).
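The combined behavior of search terms and capability badges can be sketched as follows. This is an illustrative model of the filtering logic only, not LangMart's actual code or schema; it assumes multiple search terms combine with AND, as the capability filters do.

```python
def matches(model, terms, required_caps):
    """True if the model matches every search term (AND logic)
    and carries every selected capability badge."""
    haystack = " ".join([model["name"], model["provider"], model["id"]]).lower()
    terms_ok = all(term.lower() in haystack for term in terms)
    caps_ok = required_caps.issubset(model["capabilities"])
    return terms_ok and caps_ok

# Illustrative model records (field names are assumptions, not LangMart's schema)
models = [
    {"id": "gpt-4o", "name": "GPT-4o", "provider": "OpenAI",
     "capabilities": {"vision", "tools"}},
    {"id": "gpt-4o-mini", "name": "GPT-4o mini", "provider": "OpenAI",
     "capabilities": {"tools"}},
]

# Space-separated search terms plus a Vision capability filter, all ANDed:
hits = [m["id"] for m in models if matches(m, "openai 4o".split(), {"vision"})]
print(hits)  # ['gpt-4o']
```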

Billing Filters

Filter by billing mode:

  • All - Show all models
  • Org-Paid - Organization covers costs
  • Self-Paid - User pays from their credits

Organization Filter

If you belong to multiple organizations:

  • Filter models by source organization
  • See model counts per organization
  • Combined view shows all accessible models

Model Families

Understanding Model Families

Models are grouped into families:

Example: GPT-4 Family

  • gpt-4o
  • gpt-4o-mini
  • gpt-4-turbo
  • gpt-4

Family grouping helps you:

  • Compare variants easily
  • Understand model evolution
  • Choose the right version

Family Information

Each family shows:

  • Family name and description
  • Number of variants
  • Capability range
  • Pricing range

Provider Overview

Supported Providers

LangMart integrates with major AI providers:

Provider   Model Examples            Strengths
OpenAI     GPT-4o, GPT-4 Turbo       Versatile, widely used
Anthropic  Claude 3.5, Claude 3      Long context, safety
Google     Gemini Pro, Gemini Flash  Multimodal, speed
Meta       Llama 3.1, Llama 3.2      Open weights, customizable
Mistral    Mistral Large, Mixtral    Efficiency, open models
Groq       Hosted Llama, Mixtral     Ultra-fast inference
Cohere     Command R+                Enterprise, RAG

Provider Connections

Models are available through connections:

  • Organization connections - Shared with your team
  • Personal connections - Your private API keys

Model Information

Model Details View

Click any model to see detailed information:

Basic Information:

  • Full model name and ID
  • Provider and version
  • Release date
  • Description

Capabilities:

  • Feature list (vision, tools, etc.)
  • Context window size
  • Maximum output tokens
  • Supported formats

Pricing:

  • Input token cost (per million)
  • Output token cost (per million)
  • Billing mode

Understanding Pricing

Model pricing is shown per million tokens:

Input: $0.50 / 1M tokens
Output: $1.50 / 1M tokens

Estimating Costs:

  • 1 token ~ 4 characters (English)
  • 1,000 tokens ~ 750 words
  • A page of text ~ 500 tokens
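Using the example rates above, a per-request cost estimate is simple arithmetic. The sketch below is generic (the function name and figures are illustrative, not a LangMart API):

```python
def estimate_cost(input_tokens, output_tokens,
                  input_rate_per_m, output_rate_per_m):
    """Estimate a request's cost in dollars from per-million-token rates."""
    return (input_tokens * input_rate_per_m
            + output_tokens * output_rate_per_m) / 1_000_000

# Rates from the example above: $0.50 input / $1.50 output per 1M tokens.
# A ~750-word prompt (~1,000 tokens) with a ~2,000-token reply:
cost = estimate_cost(1_000, 2_000, 0.50, 1.50)
print(f"${cost:.4f}")  # $0.0035
```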

Context Windows

A model's context window indicates how much text it can process at once:

Size        Tokens  Approximate Content
Small       4K      Short conversations
Medium      32K     Long documents
Large       128K    Multiple documents
Very Large  200K+   Books, codebases
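The characters-per-token heuristic above gives a quick way to check whether a document will fit a given window. A rough sketch (the 4-characters-per-token ratio is only an approximation for English text, and the reserved-output figure is an assumption):

```python
def rough_token_count(text):
    """Very rough token estimate: ~4 characters per token for English."""
    return len(text) // 4

def fits_in_context(text, context_window, reserved_output=1_000):
    """Check whether text fits, leaving room for the model's reply."""
    return rough_token_count(text) + reserved_output <= context_window

doc = "x" * 500_000          # ~125K tokens of text
print(fits_in_context(doc, 32_000))   # False: too big for a 32K window
print(fits_in_context(doc, 200_000))  # True: fits a 200K+ window
```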

Using Models

From the Models Page

Click Use in Chat on any model card to:

  1. Open the chat interface with that model pre-selected
  2. Start a new conversation immediately

From the Chat Interface

In chat, click the model selector:

  1. Browse available models
  2. Use favorites for quick access
  3. Select and start chatting

Using Collections

Select a collection as your "model":

  1. Create a collection with multiple models
  2. Use the collection ID in chat
  3. Requests route based on collection strategy
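As an illustration of the idea only (not LangMart's actual routing code or API), a collection can be thought of as a model group plus a strategy that picks a model for each request. A minimal sketch with invented names, showing a round-robin strategy:

```python
import itertools

class Collection:
    """Sketch of collection-based routing: a named group of models
    plus a strategy that selects which model serves each request."""

    def __init__(self, collection_id, models, strategy="round-robin"):
        self.id = collection_id
        self.models = models
        self.strategy = strategy
        self._cycle = itertools.cycle(models)

    def pick_model(self):
        if self.strategy == "round-robin":
            return next(self._cycle)      # rotate through the group
        return self.models[0]             # e.g. a primary-first fallback

coll = Collection("coll_fast_chat", ["gpt-4o-mini", "claude-3-5-haiku"])
print([coll.pick_model() for _ in range(3)])
# ['gpt-4o-mini', 'claude-3-5-haiku', 'gpt-4o-mini']
```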

Learn more about Collections

Tips for Choosing Models

By Task Type

Task                 Recommended Models
Chat/Conversation    GPT-4o-mini, Claude 3.5 Haiku
Analysis/Reasoning   GPT-4o, Claude 3.5 Sonnet, Gemini Pro
Code Generation      Claude 3.5 Sonnet, GPT-4 Turbo
Image Understanding  GPT-4o, Claude 3.5 Sonnet, Gemini Pro
Long Documents       Claude 3.5 (200K), Gemini 1.5 Pro (1M)
Fast Responses       Groq Llama, Gemini Flash
Cost-Sensitive       GPT-4o-mini, Claude 3.5 Haiku

By Budget

Budget-Friendly:

  • GPT-4o-mini
  • Claude 3.5 Haiku
  • Llama 3.1 (via Groq)

Premium Quality:

  • GPT-4o
  • Claude 3.5 Sonnet
  • Gemini 1.5 Pro

By Speed

Fastest:

  • Groq-hosted models
  • Gemini Flash
  • GPT-4o-mini

Balanced:

  • GPT-4o
  • Claude 3.5 Sonnet
  • Gemini Pro

Next Steps

Feature            Direct Link
Browse All Models  https://langmart.ai/models
Model Collections  https://langmart.ai/model-collections
Model Templates    https://langmart.ai/model-templates
Model Evaluation   https://langmart.ai/model-evaluation
Benchmark Prompts  https://langmart.ai/benchmark-prompts
Chat with Models   https://langmart.ai/chat