Model Discovery Guide
LangMart makes it easy to discover the perfect AI model for your needs. This guide covers featured models, browsing by provider, comparing models, and understanding model capabilities.
Featured Models
What Are Featured Models?
Featured models are curated selections highlighted for:
- Performance - Best-in-class results
- Value - Great quality for the price
- Novelty - New and noteworthy releases
- Popularity - Most-used by the community
Accessing Featured Models
Featured models appear in several places:
- Homepage - Quick access to top models
- Models Page - Featured tab (when available)
- Chat Selector - Promoted in model picker
Featured Categories
Models may be featured in one of several categories:
| Category | Criteria |
|---|---|
| Top Picks | Editor's choice for quality |
| Best Value | Quality vs. price ratio |
| Fastest | Lowest latency models |
| Newest | Recent releases |
| Most Popular | Highest usage |
Browsing by Provider
Provider Tabs
The Models page organizes models by provider:
Navigation:
- Go to the Models page
- Click a provider tab to filter
- All models from that provider appear
- Use additional filters within the provider view
Provider-Specific Features
Each provider offers unique capabilities:
OpenAI:
- GPT-4 series with vision
- Function calling support
- JSON mode output (see the sketch after this list)
- Structured outputs
Anthropic:
- Claude 3.5 with 200K context
- Strong reasoning abilities
- Built-in safety features
- Excellent code generation
Google:
- Gemini with multimodal input
- Up to 1M token context
- Fast inference options
- Native Google integration
Meta (Llama):
- Open weights models
- Available via multiple hosts
- Customizable deployments
- Cost-effective options
Mistral:
- Efficient architecture
- Open and commercial options
- Strong multilingual support
- Mixture of Experts models
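As a concrete illustration of JSON mode (mentioned in the OpenAI list above), here is a minimal sketch. It assumes a hypothetical OpenAI-compatible chat completions endpoint; the base URL, API key, and model ID below are placeholders, not confirmed LangMart API details:

```python
from openai import OpenAI

# Hypothetical endpoint and key; substitute your actual values.
client = OpenAI(base_url="https://api.langmart.example/v1", api_key="YOUR_KEY")

# JSON mode constrains the model to emit a single valid JSON object.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model ID
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": "List three fruits with their colors."},
    ],
)
print(response.choices[0].message.content)  # e.g. {"fruits": [...]}
```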
Provider Badges
Model cards show provider-specific badges:
- Provider logo/icon
- Model family indicator
- Version information
- Special features
Comparing Models
Side-by-Side Comparison
Compare models to find the best fit:
- Open the details page for a model
- Click Compare or add to comparison
- Select additional models
- View comparison table
Comparison Criteria
Compare models across key dimensions:
| Dimension | What to Compare |
|---|---|
| Pricing | Input/output costs per token |
| Context | Maximum context window |
| Speed | Typical response latency |
| Capabilities | Features supported |
| Quality | Output quality ratings |
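If you want to compare candidates quantitatively, a small script over the specs you collect from the comparison table can help. This is a generic sketch; the names and prices below are made-up placeholders, not real LangMart data:

```python
# Rank candidate models by blended cost for a typical request.
# All numbers are illustrative placeholders, not actual prices.
candidates = [
    {"name": "model-a", "in_per_m": 2.50, "out_per_m": 10.00, "context": 128_000},
    {"name": "model-b", "in_per_m": 0.15, "out_per_m": 0.60,  "context": 128_000},
    {"name": "model-c", "in_per_m": 3.00, "out_per_m": 15.00, "context": 200_000},
]

def request_cost(m, input_tokens=2_000, output_tokens=500):
    """Cost in dollars for one request, given per-million-token prices."""
    return (input_tokens * m["in_per_m"] + output_tokens * m["out_per_m"]) / 1_000_000

for m in sorted(candidates, key=request_cost):
    print(f'{m["name"]}: ${request_cost(m):.4f} per request')
```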
Making Trade-offs
Consider these trade-offs:
Quality vs. Cost:
- Premium models cost more but deliver better results
- Mini/Flash variants offer good quality at lower cost
- Use premium for important tasks, budget models for simple ones
Speed vs. Quality:
- Faster models may sacrifice some quality
- Groq and Flash models prioritize speed
- Full models take longer but may be more thorough
Context vs. Cost:
- Larger context windows cost more per request
- Only use large context when needed
- Consider summarization for very long inputs
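To see why the context trade-off matters, a quick back-of-the-envelope calculation (with a made-up price of $3 per million input tokens) shows what summarization can save:

```python
# Illustrative arithmetic only; substitute a model's real input price.
PRICE_PER_M_INPUT = 3.00  # dollars per million input tokens (placeholder)

full_document = 150_000   # tokens: sending the whole document every turn
summary = 2_000           # tokens: sending a summary instead

for label, tokens in [("full document", full_document), ("summary", summary)]:
    print(f"{label}: ${tokens * PRICE_PER_M_INPUT / 1_000_000:.4f} per request")
# full document: $0.4500 per request; summary: $0.0060 per request
```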
Benchmark Comparisons
Some models include benchmark scores:
- MMLU - Multi-task language understanding
- HumanEval - Code generation accuracy
- Math - Mathematical reasoning
- Reasoning - Logical problem solving
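When several benchmark scores are available, one simple way to combine them is a weighted average that reflects your priorities. A minimal sketch, with invented scores and weights:

```python
# Combine benchmark scores (0-100) with weights matching your use case.
# Scores and weights below are invented for illustration.
scores = {"MMLU": 86.0, "HumanEval": 74.0, "Math": 68.0, "Reasoning": 81.0}
weights = {"MMLU": 0.2, "HumanEval": 0.4, "Math": 0.1, "Reasoning": 0.3}

weighted = sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
print(f"Weighted score: {weighted:.1f}")  # emphasizes code generation
```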
Model Capabilities
Understanding Capability Badges
Model cards display capability badges:
| Badge | Meaning |
|---|---|
| Vision | Can process images |
| Reasoning | Enhanced thinking/analysis |
| Tool Use | Can call functions |
| Image Gen | Creates images |
| Audio | Processes audio input |
| Embedding | Creates vector embeddings |
| JSON Mode | Structured JSON output |
| Streaming | Real-time token streaming |
Vision Capability
Models with vision can:
- Analyze images you upload
- Read text from images (OCR)
- Describe visual content
- Answer questions about images
Using Vision:
- Select a vision-capable model
- Attach an image to your message
- Ask about the image content
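Outside the chat UI, the same flow looks like this over an API. A hedged sketch, assuming a hypothetical OpenAI-compatible endpoint and a vision-capable model (both placeholders):

```python
import base64
from openai import OpenAI

client = OpenAI(base_url="https://api.langmart.example/v1", api_key="YOUR_KEY")

# Encode a local image as a data URL so it can travel in the message.
with open("chart.png", "rb") as f:  # path to your image
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }],
)
print(response.choices[0].message.content)
```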
Reasoning Capability
Reasoning models excel at:
- Complex problem solving
- Multi-step analysis
- Logical deduction
- Mathematical reasoning
Best for:
- Technical problems
- Data analysis
- Strategic planning
- Code debugging
Tool Use Capability
Tool-capable models can:
- Call functions you define
- Use built-in tools (in Remote Chat)
- Chain multiple tool calls
- Handle complex workflows
Requirement: Enable tools in Remote Chat mode
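For API-driven workflows, function calling typically looks like the following. This is a sketch assuming an OpenAI-compatible tools interface; the endpoint, model ID, and get_weather function are all hypothetical:

```python
import json
from openai import OpenAI

client = OpenAI(base_url="https://api.langmart.example/v1", api_key="YOUR_KEY")

# Describe a function the model is allowed to call (hypothetical example).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any tool-capable model
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)

# If the model chose to call the tool, inspect its arguments.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```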
Context Windows
Context window determines input capacity:
Small (4K-8K tokens):
- Short conversations
- Simple queries
- Quick responses
Medium (32K-64K tokens):
- Document analysis
- Extended conversations
- Code review
Large (128K-200K tokens):
- Long documents
- Multi-file analysis
- Book-length content
Very Large (1M+ tokens):
- Entire codebases
- Research papers
- Comprehensive analysis
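A rough rule of thumb is about 4 characters per token for English text. The helper below uses that heuristic to estimate which tier a given input needs; the tier boundaries mirror the list above:

```python
# Estimate tokens from text length (~4 chars/token is a rough heuristic
# for English; real tokenizers vary) and map to the tiers above.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def context_tier(tokens: int) -> str:
    if tokens <= 8_000:
        return "Small (4K-8K)"
    if tokens <= 64_000:
        return "Medium (32K-64K)"
    if tokens <= 200_000:
        return "Large (128K-200K)"
    return "Very Large (1M+)"

sample = "word " * 20_000  # ~100K characters of sample text
print(context_tier(estimate_tokens(sample)))  # Medium (32K-64K)
```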
Streaming Support
Most models support streaming:
- Text appears word by word
- Faster perceived response time
- Can stop generation early
- Better user experience
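Over an API, streaming usually means iterating over chunks as they arrive. A sketch assuming a hypothetical OpenAI-compatible endpoint (placeholder values throughout):

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.langmart.example/v1", api_key="YOUR_KEY")

# stream=True yields partial chunks instead of one final response.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model ID
    messages=[{"role": "user", "content": "Explain streaming in one line."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # the final chunk may carry no text
        print(delta, end="", flush=True)
print()
```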
Advanced Discovery
Search Strategies
By Name:
- Search: "gpt-4" "claude" "llama"
By Capability:
- Filter: Vision + Reasoning
By Provider + Capability:
- Tab: OpenAI
- Filter: Tool Use
Finding Specific Models
Latest Models:
- Sort by release date
- Check "Newest" featured section
- Follow provider announcements
Budget Models:
- Filter by billing (Self-paid)
- Sort by price low-to-high
- Look for "mini" or "flash" variants
Enterprise Models:
- Filter by billing (Org-paid)
- Check security features
- Verify compliance certifications
Model Versioning
Models come in versions:
Naming Patterns:
- gpt-4-0613 - Date-versioned
- claude-3-sonnet - Named version
- llama-3.1-70b - Size and version
Version Considerations:
- Newer isn't always better for your use case
- Date-pinned versions give stable, reproducible behavior
- Latest aliases pick up improvements but can also change behavior
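If you script against model IDs, a small check can tell date-pinned versions apart from named aliases. The pattern below matches the example IDs above and is only a heuristic, since naming conventions vary by provider:

```python
import re

# Hypothetical helper: detect a trailing 4-digit date stamp like -0613.
# Matches the examples above (gpt-4-0613 yes; claude-3-sonnet,
# llama-3.1-70b no). Naming conventions vary by provider.
DATE_SUFFIX = re.compile(r"^(?P<family>.+)-(?P<date>\d{4})$")

def is_date_pinned(model_id: str) -> bool:
    return DATE_SUFFIX.match(model_id) is not None

for model_id in ["gpt-4-0613", "claude-3-sonnet", "llama-3.1-70b"]:
    print(model_id, "date-pinned" if is_date_pinned(model_id) else "named/sized")
```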
Model Selection Workflow
Step 1: Define Requirements
Ask yourself:
- What task am I doing?
- What capabilities do I need?
- What's my budget?
- How important is speed?
- How much context do I need?
Step 2: Filter Candidates
Use filters to narrow down:
- Set capability requirements
- Filter by billing mode
- Consider context needs
- Note pricing range
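The same filtering can be scripted if you export or maintain a model list. A generic sketch over hypothetical metadata; the field names and entries are placeholders, not LangMart's actual schema:

```python
# Filter a model catalog by required capabilities and a price ceiling.
# Field names and entries are hypothetical placeholders.
catalog = [
    {"name": "model-a", "caps": {"vision", "tools"}, "in_per_m": 2.50},
    {"name": "model-b", "caps": {"tools"},           "in_per_m": 0.15},
    {"name": "model-c", "caps": {"vision"},          "in_per_m": 3.00},
]

required = {"tools"}
max_input_price = 1.00  # dollars per million input tokens

shortlist = [
    m for m in catalog
    if required <= m["caps"] and m["in_per_m"] <= max_input_price
]
print([m["name"] for m in shortlist])  # ['model-b']
```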
Step 3: Compare Options
For top candidates:
- Review detailed specifications
- Compare pricing
- Check benchmark scores
- Read descriptions
Step 4: Test and Iterate
Try models in practice:
- Use each for sample tasks
- Evaluate response quality
- Note response speed
- Check cost tracking
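A lightweight way to test candidates is to send the same prompt to each and record latency alongside the answer. A sketch assuming a hypothetical OpenAI-compatible endpoint; the model IDs are placeholders:

```python
import time
from openai import OpenAI

client = OpenAI(base_url="https://api.langmart.example/v1", api_key="YOUR_KEY")
prompt = "Summarize the plot of Hamlet in two sentences."

for model in ["model-a", "model-b"]:  # placeholder model IDs
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    elapsed = time.perf_counter() - start
    print(f"{model}: {elapsed:.2f}s")
    print(response.choices[0].message.content[:120], "...")
```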
Step 5: Organize Favorites
Once you find winners:
- Add to favorites
- Create collections for different tasks
- Set up quick access
Tips for Discovery
Stay Updated
- Check featured models regularly
- New models are added frequently
- Pricing and capabilities change
- Follow LangMart announcements
Use Multiple Models
- Different models for different tasks
- Collections for A/B testing
- Fallback options for availability
Consider Total Cost
- Input and output pricing differ
- Context window affects cost
- Batch small requests
- Monitor usage in analytics
Next Steps
- Favorites and Collections - Organize discovered models
- Models Overview - Return to basics
- Chat Guide - Use models in conversations