## Features
- Full model support - Text, embeddings, and objects
- Multiple models - GPT-4, GPT-3.5, and embedding models
- Streaming support - Real-time response generation
- Function calling - Structured output generation
## Installation
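A typical install, assuming the package is published as `@elizaos/plugin-openai` (the exact package name is not stated in this section):

```bash
npm install @elizaos/plugin-openai
```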
## Configuration
### Environment Variables
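At minimum the plugin needs an OpenAI API key. A minimal `.env` sketch follows; `OPENAI_API_KEY` is the standard OpenAI variable name, and any additional variables the plugin reads are not listed in this section:

```bash
# Required: API key used for all OpenAI requests
OPENAI_API_KEY=sk-your-key-here

# Other variables (custom base URL, model overrides) may also be supported;
# see the Model Configuration section and the plugin's configuration reference.
```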
### Character Configuration
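The plugin is typically enabled through the character's plugin list. A sketch, assuming the standard ElizaOS character JSON shape and the package name used above; the `settings.secrets` block is optional if the key is already set in the environment:

```json
{
  "name": "MyAgent",
  "plugins": ["@elizaos/plugin-openai"],
  "settings": {
    "secrets": {
      "OPENAI_API_KEY": "sk-your-key-here"
    }
  }
}
```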
## Supported Operations
| Operation | Models | Notes |
|---|---|---|
| TEXT_GENERATION | Any GPT model (gpt-4o, gpt-4, gpt-3.5-turbo, etc.) | Conversational AI |
| EMBEDDING | Any embedding model (text-embedding-3-small/large, text-embedding-ada-002) | Vector embeddings |
| OBJECT_GENERATION | Any GPT model | JSON/structured output |
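As an illustration of the EMBEDDING operation, the hypothetical sketch below requests a vector through the agent runtime. `runtime.useModel` and `ModelType.TEXT_EMBEDDING` are assumed names from `@elizaos/core` and may differ between ElizaOS versions, so check the core docs for the exact API:

```typescript
// Hypothetical sketch only: runtime.useModel and ModelType.TEXT_EMBEDDING are
// assumed from @elizaos/core and may differ between ElizaOS versions.
import { type IAgentRuntime, ModelType } from "@elizaos/core";

// Request an embedding vector for a piece of text; the OpenAI plugin serves
// the call once it is registered with the runtime.
export async function embedText(
  runtime: IAgentRuntime,
  text: string,
): Promise<number[]> {
  return runtime.useModel(ModelType.TEXT_EMBEDDING, { text });
}
```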
## Model Configuration
The plugin uses two model categories (see the sketch below):

- SMALL_MODEL: Used for simpler tasks and faster responses
- LARGE_MODEL: Used for complex reasoning and higher-quality output
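One way to pin each category to a specific model is through environment variables. The variable names and example values below are assumptions, not taken from this section; check the plugin's configuration reference for the exact keys and defaults:

```bash
# Assumed variable names and illustrative values
OPENAI_SMALL_MODEL=gpt-4o-mini   # simpler tasks, faster responses
OPENAI_LARGE_MODEL=gpt-4o        # complex reasoning, higher quality
```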
## Usage Example
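A minimal sketch of calling text generation through the runtime once the plugin is enabled. `runtime.useModel` and `ModelType.TEXT_LARGE` are assumptions drawn from the ElizaOS core API, not from this section:

```typescript
// Hypothetical sketch only: runtime.useModel and ModelType.TEXT_LARGE are
// assumed from @elizaos/core and may differ between ElizaOS versions.
import { type IAgentRuntime, ModelType } from "@elizaos/core";

// Generate a reply through whichever large model the plugin is configured with.
export async function summarize(runtime: IAgentRuntime, notes: string) {
  const text = await runtime.useModel(ModelType.TEXT_LARGE, {
    prompt: `Summarize the following notes in three bullet points:\n${notes}`,
  });
  return text;
}
```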
The plugin registers its model handlers with the runtime automatically, so no wiring beyond the configuration above is required.

## Cost Considerations
- GPT-4 is more expensive than GPT-3.5
- Use `text-embedding-3-small` for cheaper embeddings
- Monitor usage via the OpenAI dashboard