Overview
Provider prefixes enable explicit routing to specific AI providers. Use the `provider:model` format to control which backend handles your request.
Syntax
- `geminicli:gemini-2.5-pro` - Routes to Gemini CLI
- `ollama:llama3.2` - Routes to Ollama
- `claudecli:claude-sonnet-4` - Routes to Claude CLI
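As a minimal sketch of how a client (or the server) might split a prefixed model string, here is one possible parser. The set of known prefixes and the exact parsing rules are assumptions, not switchAILocal's actual implementation:

```python
# Prefixes assumed from the provider list in this document; the real
# router may recognize a different set.
KNOWN_PREFIXES = {
    "geminicli", "claudecli", "codex", "vibecli", "opencode",
    "ollama", "lmstudio", "switchai", "gemini", "claude", "openai",
}

def split_provider_model(model: str):
    """Split 'provider:model' into (provider, model).

    Returns (None, model) when there is no recognized prefix,
    which corresponds to auto-routing.
    """
    prefix, sep, rest = model.partition(":")
    if sep and prefix in KNOWN_PREFIXES:
        return prefix, rest
    return None, model
```

Note that checking against a known-prefix set keeps model names that themselves contain a colon (e.g. Ollama tags like `llama3.2:latest`) from being misread as a provider prefix.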
Available Providers
CLI Providers
Use your paid CLI subscriptions:
- Gemini CLI
- Claude CLI
- Codex
- Vibe CLI
- OpenCode
Prefix: `geminicli:`
Routes to the Google Gemini CLI tool. Requires the `gemini` CLI installed and authenticated.
Available Models:
- `geminicli:gemini-2.5-pro`
- `geminicli:gemini-2.5-flash`
- `geminicli:gemini-3-pro-preview`
Features:
- ✅ File attachments via `extra_body.cli`
- ✅ Folder attachments
- ✅ Session management
- ✅ Sandbox mode
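For the attachment feature, a request sketch might look like the following. The `files` key inside `extra_body.cli` is an assumption based on the feature list above, not a documented field name; check the switchAILocal attachment docs for the real schema:

```python
# Builds kwargs in the style of an OpenAI-SDK chat.completions.create call.
# "extra_body" contents are merged into the request body by the SDK.
def attachment_kwargs(model: str, prompt: str, files: list) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical attachment payload under extra_body.cli
        "extra_body": {"cli": {"files": files}},
    }
```

With the OpenAI Python SDK pointed at the local server, this would be used as `client.chat.completions.create(**attachment_kwargs(...))`.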
Local Providers
Run models on your machine:
- Ollama
- LM Studio
Prefix: `ollama:`
Routes to a local Ollama server. Requires Ollama running on `localhost:11434`.
Available Models: Any model you’ve pulled with `ollama pull`
Features:
- ✅ Fully local (no internet required)
- ✅ Privacy-preserving
- ✅ Custom models
- ✅ Embeddings support
Cloud API Providers
Use cloud APIs directly:
- switchAI
- Gemini API
- Claude API
- OpenAI API
- OpenAI Compatible
Prefix: `switchai:`
Routes to the Traylinx switchAI unified gateway. Requires `switchai.api-key` configured.
Special Models:
- `switchai:auto` - Intelligent model selection
- `switchai:deepseek-reasoner` - Reasoning model
- `switchai:openai/gpt-oss-120b` - OSS models
Features:
- ✅ Access to 40+ models
- ✅ Automatic best model selection
- ✅ Built-in failover
List Available Providers
Response Format
Filter Providers
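The endpoint path and response schema for listing providers are not reproduced here. Assuming a hypothetical providers payload shaped like `sample_response` below, client-side filtering might look like this:

```python
# Hypothetical response shape; adapt to the payload your server returns.
sample_response = {
    "providers": [
        {"name": "geminicli", "type": "cli", "healthy": True},
        {"name": "ollama", "type": "local", "healthy": True},
        {"name": "switchai", "type": "api", "healthy": False},
    ]
}

def filter_providers(response: dict, provider_type: str) -> list:
    """Return names of healthy providers of the given type."""
    return [
        p["name"]
        for p in response["providers"]
        if p["type"] == provider_type and p["healthy"]
    ]
```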
Auto-Routing vs Explicit Routing
Auto-Routing (No Prefix)
Omit the prefix to let switchAILocal choose the best available provider:
- Checks if the model is available from any provider
- Prefers CLI providers (use your subscriptions)
- Falls back to API providers
- Uses intelligent routing based on provider health
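The steps above can be sketched as a selection function. This is a simplified illustration of the preference order (CLI, then local, then API), not switchAILocal's actual routing code, which also weighs provider health:

```python
# Preference order assumed from the auto-routing description above.
PROVIDER_PREFERENCE = ["cli", "local", "api"]

def auto_route(model: str, providers: list):
    """Return the name of the first healthy provider serving `model`,
    preferring CLI providers, then local, then API."""
    candidates = [
        p for p in providers
        if model in p["models"] and p["healthy"]
    ]
    for kind in PROVIDER_PREFERENCE:
        for p in candidates:
            if p["type"] == kind:
                return p["name"]
    return None  # no provider can serve this model
```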
Explicit Routing (With Prefix)
Specify the exact provider when:
- You need a specific provider feature (e.g., CLI attachments)
- Testing a particular provider
- Provider-specific behavior required
- Cost optimization (prefer local/CLI)
Provider Configuration
CLI Provider Setup
CLI providers work automatically if the CLI tool is installed and authenticated.
API Provider Setup
Configure API keys in `config.yaml`:
config.yaml
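The original config sample is not shown above. As a hypothetical sketch (key names assumed, except `switchai.api-key`, which is mentioned earlier in this page):

```yaml
# Hypothetical schema - check the switchAILocal config reference for exact keys
switchai:
  api-key: "your-switchai-key"
gemini:
  api-key: "your-gemini-api-key"
openai:
  api-key: "your-openai-api-key"
```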
Local Provider Setup
Ollama
- Install Ollama and start the server (it listens on `localhost:11434` by default)
- Pull a model, e.g. `ollama pull llama3.2`
- Request it with the `ollama:` prefix, e.g. `ollama:llama3.2`
LM Studio
- Download and install LM Studio
- Load a model
- Start local server in LM Studio
- Configure endpoint in `config.yaml`:
config.yaml
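The original snippet is not shown above. A hypothetical sketch, assuming the key names below and LM Studio's default local server port (1234):

```yaml
# Hypothetical schema - key names are assumptions
lmstudio:
  endpoint: http://localhost:1234/v1
```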
Advanced Features
Load Balancing
Configure multiple accounts for round-robin load balancing:
config.yaml
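The original config sample is not included above; a hypothetical sketch of round-robin accounts (all key names assumed, not taken from switchAILocal docs):

```yaml
# Hypothetical schema - key names are assumptions
geminicli:
  strategy: round-robin
  accounts:
    - profile: work
    - profile: personal
```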
Failover
Automatic failover to backup providers:
config.yaml
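The original config sample is not included above; a hypothetical sketch of a failover chain (key names assumed):

```yaml
# Hypothetical schema - key names are assumptions
routing:
  failover:
    - geminicli
    - ollama
    - switchai
```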
Provider Priorities
Set priority order for auto-routing:
config.yaml
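The original config sample is not included above; a hypothetical sketch mirroring the auto-routing preference (CLI, then local, then API), with key names assumed:

```yaml
# Hypothetical schema - key names are assumptions
routing:
  priority:
    - cli
    - local
    - api
```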
Provider Comparison
| Feature | CLI | Local | API |
|---|---|---|---|
| Cost | Free (with subscription) | Free | Pay per token |
| Privacy | Local execution | Fully local | Data sent to cloud |
| Speed | Medium | Fast | Varies |
| Attachments | ✅ | ❌ | ❌ |
| Offline | ❌ | ✅ | ❌ |
| Setup | CLI install | Model download | API key |
Examples
Prefer CLI Providers
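The original example is not shown here. As a sketch, pinning a CLI provider explicitly (to use your subscription rather than a paid API) is just a matter of prefixing the model in an OpenAI-style request body:

```python
def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Explicit prefix pins the Gemini CLI provider instead of auto-routing.
request = chat_request("geminicli:gemini-2.5-pro", "Explain provider prefixes")
```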
Route by Capability
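The original example is not shown here. One way to sketch capability-based routing, using a small map derived from the comparison table above (the capability names and provider choices are illustrative assumptions):

```python
# Illustrative capability map based on the comparison table:
# CLI providers support attachments, local providers work offline,
# and the switchAI gateway exposes the widest model catalog.
CAPABILITIES = {
    "attachments": "geminicli",
    "offline": "ollama",
    "model-breadth": "switchai",
}

def route_by_capability(capability: str, model: str) -> str:
    """Return a prefixed model string for the provider that has
    the requested capability."""
    provider = CAPABILITIES[capability]
    return f"{provider}:{model}"
```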
Troubleshooting
Provider Not Found
Error: `Provider 'geminicli' not available`
Solutions:
- Verify CLI tool is installed: `which gemini`
- Check authentication: `gemini auth status`
- Test CLI directly: `gemini chat "hello"`
- Check server logs for errors
Model Not Found
Error: `Model 'geminicli:invalid-model' not found`
Solutions:
- List available models: `GET /v1/models`
- Check model name spelling
- Verify provider supports the model
- Try without prefix for auto-routing
Provider Timeout
Error: `Provider 'ollama' timed out`
Solutions:
- Verify provider is running: `curl http://localhost:11434`
- Increase the timeout in `config.yaml`
- Check provider logs