Supported Providers
OfficeLLM supports multiple LLM providers. Each agent can use a different provider.
OpenAI
provider: {
  type: 'openai' as const,
  apiKey: process.env.OPENAI_API_KEY!,
  model: 'gpt-4',
  temperature: 0.7,    // Optional: 0-1, default 0.7
  maxTokens: 2000,     // Optional: max response length
  topP: 0.9,           // Optional: nucleus sampling
  frequencyPenalty: 0, // Optional: -2 to 2
  presencePenalty: 0,  // Optional: -2 to 2
}
Supported Models:
gpt-4, gpt-4-turbo, gpt-4-turbo-preview
gpt-3.5-turbo, gpt-3.5-turbo-16k
Anthropic
provider: {
  type: 'anthropic' as const,
  apiKey: process.env.ANTHROPIC_API_KEY!,
  model: 'claude-3-sonnet-20240229',
  temperature: 0.7, // Optional: 0-1, default 0.7
  maxTokens: 4096,  // Optional: max response length
  topP: 0.9,        // Optional: nucleus sampling
  topK: 40,         // Optional: top-k sampling
}
Supported Models:
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-haiku-20240307
claude-3-5-sonnet-20240620
Google Gemini
provider: {
  type: 'gemini' as const,
  apiKey: process.env.GEMINI_API_KEY!,
  model: 'gemini-2.5-pro',
  temperature: 0.7, // Optional: 0-1, default 0.7
  maxTokens: 2048,  // Optional: max response length
  topP: 0.8,        // Optional: nucleus sampling
  topK: 10,         // Optional: top-k sampling
}
Supported Models:
gemini-2.5-pro, gemini-2.5-flash
gemini-pro, gemini-pro-vision
OpenRouter
provider: {
  type: 'openrouter' as const,
  apiKey: process.env.OPENROUTER_API_KEY!,
  model: 'anthropic/claude-3-sonnet',
  temperature: 0.7, // Optional: 0-1, default 0.7
  maxTokens: 4096,  // Optional: max response length
}
Popular Models:
openai/gpt-4, openai/gpt-3.5-turbo
anthropic/claude-3-opus, anthropic/claude-3-sonnet
google/gemini-pro
meta-llama/llama-2-70b-chat
Environment Variables
Create a .env file:
# OpenAI
OPENAI_API_KEY=sk-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Google Gemini
GEMINI_API_KEY=...
# OpenRouter
OPENROUTER_API_KEY=sk-or-v1-...
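Since provider configs use non-null assertions (`process.env.OPENAI_API_KEY!`), a missing key only surfaces at the first request. A small helper (not part of OfficeLLM, shown here as a sketch) can validate keys at startup instead:

```typescript
// Hypothetical helper: fail fast at startup if a provider's API key
// is missing, instead of discovering it on the first LLM request.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Resolve each key once before constructing agents, e.g.:
// const openaiKey = requireEnv('OPENAI_API_KEY');
```

Pass the returned string to the provider's `apiKey` field and the `!` assertions become unnecessary.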
Provider Selection Guide
By Use Case
Creative/Writing Tasks
claude-3-opus or claude-3-sonnet (excellent reasoning)
gpt-4 (versatile)
Analytical/Math Tasks
gpt-4 (precise)
claude-3-sonnet (good reasoning)
Fast/Cost-Effective
claude-3-haiku (fastest, cheapest)
gpt-3.5-turbo (good balance)
gemini-2.5-flash (fast, cheap)
Maximum Quality
claude-3-opus (best reasoning)
gpt-4-turbo (very capable)
gemini-2.5-pro (strong performance)
By Provider
OpenAI
- Wide model selection
- Excellent for code generation
- Reliable function calling
Anthropic
- Strong reasoning capabilities
- Good for complex tasks
- Excellent safety features
Google Gemini
- Cost-effective
- Fast inference
- Good for high-volume tasks
OpenRouter
- Access to multiple providers
- One API for many models
- Flexible pricing
Mixed Provider Setup
Each agent can use a different provider:
const office = new OfficeLLM({
  manager: {
    name: 'Manager',
    provider: {
      type: 'anthropic',
      apiKey: process.env.ANTHROPIC_API_KEY!,
      model: 'claude-3-sonnet-20240229',
      temperature: 0.7,
    },
    // ...
  },
  workers: [
    {
      name: 'Researcher',
      provider: {
        type: 'openai',
        apiKey: process.env.OPENAI_API_KEY!,
        model: 'gpt-4',
        temperature: 0.3, // Lower for research
      },
      // ...
    },
    {
      name: 'Writer',
      provider: {
        type: 'gemini',
        apiKey: process.env.GEMINI_API_KEY!,
        model: 'gemini-2.5-pro',
        temperature: 0.9, // Higher for creativity
      },
      // ...
    },
  ],
});
Cost Optimization
Use Cheaper Models Where Appropriate
// Manager: use a stronger model for coordination
manager: {
  provider: { type: 'anthropic', model: 'claude-3-opus-20240229' }, // Expensive
},

// Workers: use cheaper models for specific tasks
workers: [
  {
    name: 'simple_task_worker',
    provider: { type: 'anthropic', model: 'claude-3-haiku-20240307' }, // Cheap
  },
  {
    name: 'complex_task_worker',
    provider: { type: 'anthropic', model: 'claude-3-sonnet-20240229' }, // Medium
  },
]
Adjust Temperature
// Low temperature (0.1-0.3) for deterministic tasks
provider: { temperature: 0.1 } // Math, data analysis, code
// Medium temperature (0.5-0.7) for balanced tasks
provider: { temperature: 0.7 } // General purpose
// High temperature (0.8-1.0) for creative tasks
provider: { temperature: 0.9 } // Creative writing, brainstorming
Set Token Limits
provider: {
  maxTokens: 500, // Limit response length to reduce costs
}
Error Handling
All providers include built-in error handling:
try {
  const result = await office.executeTask(task);
} catch (error) {
  // Common errors:
  // - Invalid API key
  // - Rate limit exceeded
  // - Model not found
  // - Network errors
  // In TypeScript, a caught value is `unknown`, so narrow before reading `.message`.
  console.error('Task failed:', error instanceof Error ? error.message : error);
}
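Transient failures such as rate limits are often worth retrying. OfficeLLM's docs don't describe a built-in retry, so here is a minimal generic wrapper you could apply around any task call (a sketch; `withRetry` and its parameters are not part of the library):

```typescript
// Retry a failing async operation with exponential backoff.
// Delays grow as baseDelayMs, 2x, 4x, ... up to maxAttempts tries.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        // Wait before the next attempt: 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt)
        );
      }
    }
  }
  throw lastError;
}

// Usage (assuming an `office` instance and a `task`):
// const result = await withRetry(() => office.executeTask(task));
```

A production version would typically retry only retryable errors (rate limits, network failures) and rethrow the rest immediately.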
Adding Custom Providers
See Extending Providers for how to add support for new LLM providers.
Provider Interface
All providers implement the same interface:
interface IProvider {
  readonly type: ProviderType;
  readonly config: BaseProviderConfig;

  chat(
    messages: ProviderMessage[],
    tools?: ToolDefinition[]
  ): Promise<ProviderResponse>;

  isAvailable(): Promise<boolean>;
  getSupportedModels(): string[];
}
This ensures consistency across all providers.
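To make the interface concrete, here is a sketch of a trivial custom provider. The type declarations below are simplified stand-ins for the library's real exports (which may carry more fields); see the Extending Providers guide for the actual contract:

```typescript
// Simplified stand-ins for the library's exported types (assumptions).
type ProviderType = string;
interface BaseProviderConfig { apiKey: string; model: string; }
interface ProviderMessage { role: 'system' | 'user' | 'assistant'; content: string; }
interface ToolDefinition { name: string; description: string; }
interface ProviderResponse { content: string; }

interface IProvider {
  readonly type: ProviderType;
  readonly config: BaseProviderConfig;
  chat(messages: ProviderMessage[], tools?: ToolDefinition[]): Promise<ProviderResponse>;
  isAvailable(): Promise<boolean>;
  getSupportedModels(): string[];
}

// A toy provider that echoes the last message back.
// A real provider would call its HTTP API inside chat().
class EchoProvider implements IProvider {
  readonly type = 'echo';
  constructor(readonly config: BaseProviderConfig) {}

  async chat(messages: ProviderMessage[]): Promise<ProviderResponse> {
    const last = messages[messages.length - 1];
    return { content: `echo: ${last?.content ?? ''}` };
  }

  async isAvailable(): Promise<boolean> {
    // A real provider might ping the API here.
    return this.config.apiKey.length > 0;
  }

  getSupportedModels(): string[] {
    return ['echo-1'];
  }
}
```

Because every provider satisfies the same interface, agents can swap providers without any other configuration changes.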