Key Concepts
Model Types
elizaOS supports many types of AI operations. The most common ones are:

- TEXT_GENERATION (TEXT_SMALL, TEXT_LARGE) - Having conversations and generating responses
- TEXT_EMBEDDING - Converting text into numbers for memory and search
- OBJECT_GENERATION (OBJECT_SMALL, OBJECT_LARGE) - Creating structured data like JSON

In plain terms:

- Text Generation = Having a conversation
- Embeddings = Creating a “fingerprint” of text for finding similar things later
- Object Generation = Filling out forms with specific information
Plugin Capabilities
Not all LLM plugins support all model types. Here’s what each can do:

| Plugin | Text Chat | Embeddings | Structured Output | Runs Offline |
| --- | --- | --- | --- | --- |
| OpenAI | ✅ | ✅ | ✅ | ❌ |
| Anthropic | ✅ | ❌ | ✅ | ❌ |
| Google GenAI | ✅ | ✅ | ✅ | ❌ |
| Ollama | ✅ | ✅ | ✅ | ✅ |
| OpenRouter | ✅ | ❌ | ✅ | ❌ |
- 🌟 OpenAI & Google GenAI = Do everything (jack of all trades)
- 💬 Anthropic & OpenRouter = Amazing at chat, need help with embeddings
- 🏠 Ollama = Your local hero - does almost everything, no internet needed!
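The capability matrix above can also be encoded as plain data, which is handy for sanity-checking a configuration. This is an illustrative sketch only - the names here mirror the table, not actual elizaOS exports:

```typescript
// Capability matrix from the table above, encoded as plain data.
// (Illustrative - not an actual elizaOS export.)
type Capabilities = {
  textChat: boolean;
  embeddings: boolean;
  structuredOutput: boolean;
  offline: boolean;
};

const PLUGIN_CAPABILITIES: Record<string, Capabilities> = {
  openai:      { textChat: true, embeddings: true,  structuredOutput: true, offline: false },
  anthropic:   { textChat: true, embeddings: false, structuredOutput: true, offline: false },
  googleGenai: { textChat: true, embeddings: true,  structuredOutput: true, offline: false },
  ollama:      { textChat: true, embeddings: true,  structuredOutput: true, offline: true  },
  openrouter:  { textChat: true, embeddings: false, structuredOutput: true, offline: false },
};

// Which of the configured plugins could serve embedding requests?
function embeddingProviders(plugins: string[]): string[] {
  return plugins.filter((p) => PLUGIN_CAPABILITIES[p]?.embeddings);
}

console.log(embeddingProviders(["anthropic", "openai", "ollama"])); // ["openai", "ollama"]
```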
Plugin Loading Order
The order in which plugins are loaded matters significantly; the default character configuration is arranged with this in mind.

Understanding the Order
Think of it like choosing team players - you pick specialists first, then all-rounders:

- Anthropic & OpenRouter go first - They’re specialists! They’re great at text generation but can’t do embeddings. By loading them first, they get priority for text tasks.
- OpenAI & Google GenAI come next - These are the all-rounders! They can do everything: text generation, embeddings, and structured output. They act as fallbacks for what the specialists can’t do.
- Ollama comes last - This is your local backup player! It supports almost everything (text, embeddings, objects) and runs on your computer. Perfect when cloud services aren’t available.
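In a character file, that ordering looks something like this. The package names below follow the `@elizaos/plugin-*` convention and are an assumption - check your actual default character for the exact list:

```typescript
// Hedged sketch of a character's plugin list, in priority order:
// specialists first, all-rounders next, local fallback last.
export const character = {
  name: "Eliza",
  plugins: [
    "@elizaos/plugin-anthropic",    // specialist: text only
    "@elizaos/plugin-openrouter",   // specialist: text only
    "@elizaos/plugin-openai",       // all-rounder: text + embeddings + objects
    "@elizaos/plugin-google-genai", // all-rounder
    "@elizaos/plugin-ollama",       // local fallback, runs offline
  ],
};
```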
Why This Order Matters
When you ask elizaOS to do something, it looks for the best model in order:

- Generate text? → Anthropic gets first shot (if loaded)
- Create embeddings? → Anthropic can’t, so OpenAI steps in
- No cloud API keys? → Ollama handles everything locally

This ordering means:

- You get the best specialized models for each task
- You always have fallbacks for missing capabilities
- You can run fully offline with Ollama if needed
Real Example: How It Works
Let’s say you have Anthropic + OpenAI configured.

Model Registration
When plugins load, they “register” what they can do - it’s like signing up for different jobs.

How elizaOS Picks the Right Model
When you ask elizaOS to do something, it:

- Checks what type of work it is (text? embeddings? objects?)
- Looks at who signed up for that job
- Picks based on priority (higher number goes first)
- If tied, the first plugin registered wins

For a text-generation request:

- Anthropic registered with priority 100 ✅ (wins!)
- OpenAI registered with priority 50
- Ollama registered with priority 10

For an embedding request:

- Anthropic didn’t register ❌ (can’t do it)
- OpenAI registered with priority 50 ✅ (wins!)
- Ollama registered with priority 10
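The registration-and-selection behavior described above can be sketched as a tiny registry. This is illustrative only - not the actual elizaOS implementation:

```typescript
// Minimal sketch of priority-based model registration and selection.
type Handler = { plugin: string; priority: number; order: number };

const registry = new Map<string, Handler[]>();
let registrationCounter = 0;

function registerModel(type: string, plugin: string, priority: number): void {
  const handlers = registry.get(type) ?? [];
  handlers.push({ plugin, priority, order: registrationCounter++ });
  registry.set(type, handlers);
}

// Highest priority wins; ties go to the plugin that registered first.
function selectModel(type: string): string | undefined {
  const handlers = registry.get(type);
  if (!handlers || handlers.length === 0) return undefined;
  return [...handlers].sort(
    (a, b) => b.priority - a.priority || a.order - b.order
  )[0].plugin;
}

// Registrations matching the example above:
registerModel("TEXT_GENERATION", "anthropic", 100);
registerModel("TEXT_GENERATION", "openai", 50);
registerModel("TEXT_GENERATION", "ollama", 10);
registerModel("TEXT_EMBEDDING", "openai", 50); // Anthropic never registers embeddings
registerModel("TEXT_EMBEDDING", "ollama", 10);

console.log(selectModel("TEXT_GENERATION")); // anthropic
console.log(selectModel("TEXT_EMBEDDING"));  // openai
```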
Embedding Fallback Strategy
Remember: not all plugins can create embeddings! Here’s how elizaOS handles this.

The Problem:

- You’re using Anthropic (great at chat, can’t do embeddings)
- But elizaOS needs embeddings for memory and search

The Solution: load an embedding-capable plugin (such as OpenAI or Ollama) later in the chain, and embedding calls fall back to it automatically.
Common Patterns
Anthropic + OpenAI Fallback
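A sketch of this pattern - package names assumed from the `@elizaos/plugin-*` convention:

```typescript
// Anthropic handles chat; OpenAI, listed after it, fills in embeddings
// and structured output.
export const character = {
  name: "Eliza",
  plugins: [
    "@elizaos/plugin-anthropic", // text generation (no embeddings)
    "@elizaos/plugin-openai",    // fallback: embeddings + structured output
  ],
};
```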
OpenRouter + Local Embeddings
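And the local-embeddings variant, again with assumed package names:

```typescript
// OpenRouter handles text generation; Ollama, running locally,
// provides the embeddings OpenRouter can't.
export const character = {
  name: "Eliza",
  plugins: [
    "@elizaos/plugin-openrouter", // cloud text generation (no embeddings)
    "@elizaos/plugin-ollama",     // local embeddings fallback
  ],
};
```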
Configuration
Environment Variables
Each plugin requires specific environment variables - for example, OPENAI_API_KEY for OpenAI and ANTHROPIC_API_KEY for Anthropic.

Available Plugins
Cloud Providers
- OpenAI Plugin - Full-featured with all model types
- Anthropic Plugin - Claude models for text generation
- Google GenAI Plugin - Gemini models
- OpenRouter Plugin - Access to multiple providers
Local/Self-Hosted
- Ollama Plugin - Run models locally with Ollama
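The per-plugin environment variables mentioned above can be checked programmatically before startup. In this sketch, OPENAI_API_KEY and ANTHROPIC_API_KEY follow common convention, while the Google GenAI and OpenRouter variable names are assumptions - confirm them against each plugin’s README:

```typescript
// Hedged sketch: verify each configured plugin's API key is present.
// Variable names for Google GenAI and OpenRouter are assumptions.
const REQUIRED_KEYS: Record<string, string> = {
  "@elizaos/plugin-openai": "OPENAI_API_KEY",
  "@elizaos/plugin-anthropic": "ANTHROPIC_API_KEY",
  "@elizaos/plugin-google-genai": "GOOGLE_GENERATIVE_AI_API_KEY",
  "@elizaos/plugin-openrouter": "OPENROUTER_API_KEY",
  // "@elizaos/plugin-ollama" runs locally and needs no API key.
};

function missingKeys(
  plugins: string[],
  env: Record<string, string | undefined>,
): string[] {
  return plugins
    .map((p) => REQUIRED_KEYS[p])
    .filter((k): k is string => k !== undefined && !env[k]);
}

console.log(
  missingKeys(["@elizaos/plugin-anthropic", "@elizaos/plugin-ollama"], {}),
); // ["ANTHROPIC_API_KEY"]
```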
Best Practices
1. Always Configure Embeddings
Even if your primary model doesn’t support embeddings, always include a fallback (see the Common Patterns above).

2. Order Matters

Place your preferred providers first, but ensure embedding capability somewhere in the chain.

3. Test Your Configuration

Verify that all model types work - text generation, embeddings, and object generation.

4. Monitor Costs
Different providers have different pricing. Consider:

- Using local models (Ollama) for development
- Mixing providers (e.g., OpenRouter for text, local for embeddings)
- Setting up usage alerts with your providers
Troubleshooting
“No model found for type EMBEDDING”
Your configured plugins don’t support embeddings. Add an embedding-capable plugin such as OpenAI, Google GenAI, or Ollama.

“Missing API Key”
Ensure your environment variables are set for every configured plugin.

Models Not Loading
Check plugin initialization in the startup logs for errors.

Migration from v0.x
In elizaOS v0.x, models were configured directly in character files via a modelProvider field. That field is now ignored - all model configuration happens through plugins.
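Illustratively, the change looks like this. The modelProvider field name comes from the text above; the exact v0.x character shape and the plugin package name are assumptions:

```typescript
// v0.x (no longer works): the model provider named on the character itself.
const oldCharacter = {
  name: "Eliza",
  modelProvider: "openai", // ignored in v1.x
};

// v1.x: the same choice expressed as a plugin.
const newCharacter = {
  name: "Eliza",
  plugins: ["@elizaos/plugin-openai"],
};
```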