LlamaIndex Integration
LlamaIndex is a data framework for building RAG (Retrieval-Augmented Generation) applications. Because LangMart exposes an OpenAI-compatible API, LlamaIndex can route its LLM calls through LangMart with no special adapter.
Python
Installation
```bash
pip install llama-index llama-index-llms-openai-like
```

Basic Usage
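The examples below pass the API key inline for brevity; in practice it is safer to read it from the environment. A minimal helper sketch, assuming the variable name `LANGMART_API_KEY` (pick whatever name your deployment uses):

```python
import os

def langmart_api_key() -> str:
    """Read the LangMart API key from the environment.

    LANGMART_API_KEY is an assumed variable name, not one the
    platform mandates -- any name works as long as it matches
    your deployment configuration.
    """
    key = os.environ.get("LANGMART_API_KEY")
    if not key:
        raise RuntimeError("Set LANGMART_API_KEY before creating the client")
    return key
```

The returned value can then be passed as `api_key=langmart_api_key()` wherever the snippets below use a literal key.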
```python
from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="openai/gpt-5.2",
    api_key="your-langmart-api-key",
    api_base="https://api.langmart.ai/v1",
)

response = llm.complete("What is the capital of France?")
print(response.text)
```

Chat Interface
```python
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Explain quantum computing in simple terms."),
]

response = llm.chat(messages)
print(response.message.content)
```

Using with an Index
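Note that `Settings.llm` configures only the LLM; building a `VectorStoreIndex` also requires an embedding model, which LlamaIndex defaults to OpenAI's embeddings (and a separate `OPENAI_API_KEY`). One way around this is a local embedding model — a sketch assuming the optional `llama-index-embeddings-huggingface` package:

```python
# Assumes: pip install llama-index-embeddings-huggingface
# Any embedding integration works here; the point is that
# Settings.llm does not cover embeddings.
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```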
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core import Settings

Settings.llm = OpenAILike(
    model="anthropic/claude-opus-4.5",
    api_key="your-langmart-api-key",
    api_base="https://api.langmart.ai/v1",
)

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What is the main topic?")
print(response)
```

Streaming
```python
response = llm.stream_complete("Write a poem about coding")
for chunk in response:
    print(chunk.delta, end="", flush=True)
```

TypeScript
Installation
```bash
npm install llamaindex
```

Usage
```typescript
import { OpenAI } from "llamaindex";

const llm = new OpenAI({
  model: "openai/gpt-5.2",
  apiKey: "your-langmart-api-key",
  additionalSessionOptions: {
    baseURL: "https://api.langmart.ai/v1",
  },
});

const response = await llm.complete({ prompt: "Hello, world!" });
console.log(response.text);