Documentation Index
Fetch the complete documentation index at: https://officellm.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Overview
OfficeLLM uses a manager-worker pattern where a manager agent coordinates specialized worker agents through function calling.
User Task → Manager → Worker(s) → Manager → Final Result
Components
OfficeLLM Class
The main entry point that initializes and manages agents.
const office = new OfficeLLM({
  manager: managerConfig,
  workers: [worker1, worker2],
});
Manager Agent
Role: Coordinates tasks and delegates to workers
The manager:
- Receives user tasks
- Analyzes requirements
- Calls appropriate workers (workers are registered as tools)
- Synthesizes results
- Returns final output
const manager = {
  name: 'project_manager',
  description: 'Coordinates AI workers',
  provider: { type: 'gemini', apiKey: '...', model: 'gemini-2.5-pro' },
  systemPrompt: `You coordinate AI agents...
Workflow:
1. Analyze the task
2. Call appropriate workers
3. Review results
4. Respond WITHOUT calling workers when done`,
};
Worker Agent
Role: Executes specialized tasks using tools
Workers:
- Receive delegated tasks from manager
- Use their specialized tools
- Return results to manager
const worker = {
  name: 'research_agent',
  description: 'Searches for information',
  provider: { type: 'gemini', apiKey: '...', model: 'gemini-2.5-flash' },
  systemPrompt: `You search for information...
Workflow:
1. Use your tools
2. Review results
3. Respond WITHOUT calling tools when done`,
  tools: [/* tool definitions */],
  toolImplementations: {/* your functions */},
};
Execution Flow
1. Task Submission
const result = await office.executeTask({
  title: 'Create report',
  description: 'Research topic X and write a report',
  priority: 'high',
});
2. Manager Processing
The manager’s LLM decides which workers to call:
// Manager internally calls:
{
  toolCalls: [
    {
      function: {
        name: 'research_agent',
        arguments: '{"task":"Research topic X","priority":"high"}'
      }
    }
  ]
}
3. Worker Execution
The research worker executes:
// Worker uses its tools:
{
  toolCalls: [
    {
      function: {
        name: 'web_search',
        arguments: '{"query":"topic X","limit":5}'
      }
    }
  ]
}
// Tool implementation runs:
toolImplementations.web_search({ query: "topic X", limit: 5 })
// Returns: "Found 5 results..."
// Worker sees the result and responds:
"Based on my research, topic X involves..."
4. Manager Synthesis
Manager receives worker result and may:
- Call more workers if needed
- Respond with final result when complete
Continuous Execution
Agents execute continuously until they signal completion by responding without making tool/worker calls.
Manager completes when:
- It responds without calling any workers
Worker completes when:
- It responds without calling any tools
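That completion rule can be sketched as a loop. This is illustrative only: `callModel`, the message format, and the tool-result role are assumptions, not OfficeLLM's actual internals.

```javascript
// Illustrative sketch of the continuous-execution loop. callModel and the
// message shapes are placeholders, not the library's real implementation.
const MAX_ITERATIONS = 15; // the worker limit listed under Safety Features

async function runAgentLoop(callModel, toolImplementations) {
  const messages = [];
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const response = await callModel(messages);
    // No tool calls = the agent has signaled completion.
    if (!response.toolCalls || response.toolCalls.length === 0) {
      return response.content;
    }
    // Otherwise run each requested tool and feed the result back.
    for (const call of response.toolCalls) {
      const impl = toolImplementations[call.function.name];
      if (!impl) {
        throw new Error(`Tool "${call.function.name}" has no implementation provided.`);
      }
      const result = await impl(JSON.parse(call.function.arguments));
      messages.push({ role: 'tool', name: call.function.name, content: result });
    }
  }
  throw new Error(`Exceeded ${MAX_ITERATIONS} iterations`);
}
```

The same loop describes the manager, with workers in place of tools and a limit of 20.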
Example Flow
1. User → Manager: "Create a blog post about AI"
2. Manager calls research_agent:
   task="Research AI trends"
3. research_agent calls web_search tool:
   query="AI trends 2024"
4. web_search returns results
5. research_agent responds with findings
   (no more tool calls = complete)
6. Manager calls writer_agent:
   task="Write blog post"
   content="[research findings]"
7. writer_agent calls generate_content tool
8. Tool returns formatted blog post
9. writer_agent calls write_file tool
10. Tool saves file
11. writer_agent responds with confirmation
    (no more tool calls = complete)
12. Manager responds with summary
    (no more worker calls = complete)
13. User receives final result
Provider Independence
Each agent can use a different LLM provider:
const office = new OfficeLLM({
  manager: {
    name: 'Manager',
    provider: { type: 'anthropic', model: 'claude-3-opus' }, // Anthropic
  },
  workers: [
    {
      name: 'Researcher',
      provider: { type: 'openai', model: 'gpt-4' }, // OpenAI
    },
    {
      name: 'Writer',
      provider: { type: 'gemini', model: 'gemini-2.5-pro' }, // Google
    },
  ],
});
Worker Registration
Workers are registered as “tools” that the manager can call:
// Internally, workers become tool definitions:
{
  name: 'research_agent',
  description: 'Searches for information',
  parameters: {
    task: string,
    priority: 'low' | 'medium' | 'high',
  }
}
// Manager's LLM sees this and can call it like a function
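In a function-calling API, that shape would typically be expressed as a JSON Schema. A sketch of what the manager's model might receive; the exact schema OfficeLLM emits may differ:

```javascript
// Sketch: the research_agent exposed as a JSON-Schema tool definition,
// mirroring common function-calling formats (assumed, not OfficeLLM's exact output).
const researchAgentTool = {
  name: 'research_agent',
  description: 'Searches for information',
  parameters: {
    type: 'object',
    properties: {
      task: { type: 'string', description: 'The delegated task' },
      priority: { type: 'string', enum: ['low', 'medium', 'high'] },
    },
    required: ['task'],
  },
};
```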
Safety Features
Iteration Limits
- Manager: 20 iterations max
- Workers: 15 iterations max
- Prevents infinite loops
Error Handling
- Graceful failures at all levels
- Clear error messages
- Usage tracking even on errors
Missing Implementations
// If a tool has no implementation:
toolImplementations: {
  // web_search missing!
}
// Clear error:
Error: Tool "web_search" has no implementation provided.
Between User and Manager
// Input
{
  title: string,
  description: string,
  priority?: 'low' | 'medium' | 'high'
}
// Output
{
  success: boolean,
  content: string,
  usage: { promptTokens, completionTokens, totalTokens },
  error?: string
}
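A caller might consume that output shape like this. The stub below stands in for a configured OfficeLLM instance; only the output fields documented above are assumed.

```javascript
// Stub standing in for a real OfficeLLM instance, returning the
// documented output shape.
const office = {
  async executeTask(task) {
    return {
      success: true,
      content: `Report on: ${task.description}`,
      usage: { promptTokens: 120, completionTokens: 80, totalTokens: 200 },
    };
  },
};

// Check success, surface the error field on failure, and log token usage.
async function runAndReport(task) {
  const result = await office.executeTask(task);
  if (!result.success) {
    throw new Error(result.error ?? 'Task failed');
  }
  console.log(`Used ${result.usage.totalTokens} tokens`);
  return result.content;
}
```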
Between Manager and Workers
// Manager calls worker with parameters matching the worker's schema
{
  task: 'Research topic X',
  priority: 'high'
}
// Worker returns plain text result
"Here are my findings: ..."
// Worker calls tool
{
  function: {
    name: 'web_search',
    arguments: '{"query":"AI","limit":5}'
  }
}
// Tool implementation returns string
async web_search(args) {
  return "Found 5 results: ...";
}
Best Practices
Manager System Prompts
systemPrompt: `You are a [role].
Available workers:
- worker_1: [what it does]
- worker_2: [what it does]
Workflow:
1. Analyze task
2. Call appropriate workers
3. Review results
4. Call more workers if needed
5. Respond WITHOUT calling workers when complete
IMPORTANT: Signal completion by responding without tool calls.`
Worker System Prompts
systemPrompt: `You are a [role].
Your tools:
- tool_1: [what it does]
- tool_2: [what it does]
Workflow:
1. Understand the task
2. Use tools as needed
3. Review tool results (they are complete - don't repeat calls)
4. Respond WITHOUT calling tools when done
IMPORTANT:
- Tool results contain all data
- Don't make the same tool call twice
- Signal completion by responding without tool calls`
Tools should return:
- Complete information in the response
- Formatted strings that are easy to read
- Error messages when something fails
toolImplementations: {
  web_search: async (args) => {
    try {
      const results = await search(args.query);
      return `Found ${results.length} results:\n${formatResults(results)}`;
    } catch (error) {
      return `Error searching: ${error.message}`;
    }
  },
}
Direct Worker Access
You can call workers directly, bypassing the manager:
const result = await office.callWorker('research_agent', {
  task: 'Research topic X',
  priority: 'high',
});
This is useful for:
- Testing individual workers
- Building custom workflows
- Debugging worker behavior
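For example, a quick smoke test for a single worker. This is a sketch: the stub stands in for a configured OfficeLLM instance, and only the callWorker signature shown above is assumed.

```javascript
// Stub mimicking callWorker's documented behavior: returns plain text.
const officeStub = {
  async callWorker(name, params) {
    return `(${name}) findings for: ${params.task}`;
  },
};

// Verify a worker returns a non-empty plain-text result.
async function smokeTestWorker(office, workerName) {
  const result = await office.callWorker(workerName, {
    task: 'Research topic X',
    priority: 'high',
  });
  if (typeof result !== 'string' || result.length === 0) {
    throw new Error(`${workerName} returned an empty result`);
  }
  return result;
}
```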