Overview
switchAILocal supports multiple AI providers, each with its own configuration format. Providers are organized into:

- Cloud Providers: OpenAI, Anthropic, Google Gemini, Traylinx SwitchAI
- Local Providers: Ollama, LM Studio, OpenCode
- Compatible Providers: OpenRouter, Groq, Together AI, and others via OpenAI compatibility
Traylinx SwitchAI Cloud
Unified access to 100+ cloud models through a single API. Configure it in config.yaml:
- Your SwitchAI API key. Get one at switchai.traylinx.com
- SwitchAI API endpoint
- Optional prefix to namespace models (e.g., "teamA/deepseek")
- Model name mappings and aliases
- Override global proxy for this credential
- Additional HTTP headers for requests
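A sketch of what a SwitchAI credential entry might look like. Apart from prefix, proxy-url, headers, and models (named elsewhere on this page), the key names and URL below are illustrative assumptions:

```yaml
# Hypothetical layout; check the switchAILocal reference for exact key names.
switchai:
  api-key: "sk-..."                 # your SwitchAI API key from switchai.traylinx.com
  base-url: "https://switchai.traylinx.com/v1"   # assumed endpoint
  prefix: "teamA"                   # optional namespace, e.g. teamA/deepseek
  models:
    deepseek: "deepseek-chat"       # alias -> upstream model name
  proxy-url: "http://proxy.internal:8080"        # overrides the global proxy
  headers:
    X-Team: "platform"              # extra HTTP headers sent with requests
```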
Google Gemini API
Configure Google Gemini API access in config.yaml:
- Google Gemini API key from Google AI Studio
- Namespace models (e.g., "google/gemini-pro")
- Override Gemini API endpoint (optional)
- Model aliases for custom routing
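A minimal sketch of a Gemini entry; the key names are assumptions, and the base-url shown is Google's standard Gemini API host:

```yaml
# Illustrative only; key names may differ in the actual schema.
gemini:
  api-key: "AIza..."                # from Google AI Studio
  prefix: "google"                  # models appear as google/gemini-pro
  base-url: "https://generativelanguage.googleapis.com"  # optional override
  models:
    flash: "gemini-2.0-flash-exp"   # alias for custom routing
```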
Anthropic Claude API
Configure Claude API credentials in config.yaml:
- Anthropic API key from Anthropic Console
- Override Claude API endpoint (for Claude-compatible services)
- Model name mappings and aliases
- Namespace models for this credential
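A sketch of an Anthropic entry under assumed key names; the base-url shown is Anthropic's standard API host, which the docs say can be overridden for Claude-compatible services:

```yaml
# Illustrative; key names are assumptions.
claude:
  api-key: "sk-ant-..."             # from the Anthropic Console
  base-url: "https://api.anthropic.com"  # override for Claude-compatible services
  prefix: "anthropic"               # namespace models for this credential
  models:
    sonnet: "claude-sonnet-latest"  # hypothetical name mapping / alias
```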
OpenAI / Codex API
Configure OpenAI and compatible services in config.yaml:
- OpenAI API key from OpenAI Platform
- OpenAI API endpoint
- Model aliases (optional)
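One possible reading of this entry, using the documented codex-api-key name; the nesting and the other key names are assumptions:

```yaml
# codex-api-key is the documented (historical) entry name; the rest is illustrative.
codex-api-key:
  api-key: "sk-..."                    # from the OpenAI Platform
  base-url: "https://api.openai.com/v1"  # OpenAI API endpoint
  models:
    gpt4: "gpt-4o"                     # optional alias
```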
The codex-api-key name is historical. This provider works with all OpenAI models, not just Codex.

Ollama (Local)
Configure local Ollama server integration in config.yaml:
- Enable Ollama provider registration
- Ollama API endpoint
- Automatically fetch available models from Ollama on startup
- Model IDs to exclude from discovery
- Manual model alias definitions
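A sketch of an Ollama entry; the key names are assumptions, while the base-url shown is Ollama's standard default address:

```yaml
# Key names are illustrative assumptions.
ollama:
  enabled: true                        # register the Ollama provider
  base-url: "http://localhost:11434"   # Ollama's default API endpoint
  auto-discover: true                  # fetch available models on startup
  excluded-models:
    - "*-embed*"                       # model IDs to hide from discovery
  models:
    llama: "llama3.1:8b"               # manual alias definition
```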
OpenCode (Local)
Integrate with a local OpenCode server via config.yaml:
- Enable OpenCode provider integration
- OpenCode API endpoint
- Default agent to use when no specific model is requested
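A sketch of an OpenCode entry; the key names, port, and agent name are all assumptions:

```yaml
# Illustrative sketch; key names and values are guesses.
opencode:
  enabled: true                      # enable the OpenCode integration
  base-url: "http://localhost:4096"  # OpenCode API endpoint (port is an assumption)
  default-agent: "build"             # used when no specific model is requested
```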
LM Studio (Local)
Configure LM Studio integration in config.yaml:
- Enable LM Studio provider registration
- LM Studio API endpoint
- Automatically fetch models from LM Studio on startup
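A sketch of an LM Studio entry; key names are assumptions, while the base-url shown is LM Studio's standard local server default:

```yaml
# Key names are assumptions.
lmstudio:
  enabled: true                          # register the LM Studio provider
  base-url: "http://localhost:1234/v1"   # LM Studio's default server address
  auto-discover: true                    # fetch loaded models on startup
```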
OpenAI Compatibility
Configure third-party providers that support the OpenAI API format in config.yaml:
- Provider identifier (used in logs and metrics)
- Provider’s OpenAI-compatible API endpoint
- Namespace models (e.g., "groq/llama-3.1-70b")
- List of API keys for this provider
- Model name mappings and aliases
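A sketch of one compatible-provider entry, using Groq as the example; the key names are assumptions, the base-url is Groq's real OpenAI-compatible endpoint:

```yaml
# Illustrative; key names are assumptions.
openai-compatible:
  - name: "groq"                               # identifier used in logs and metrics
    base-url: "https://api.groq.com/openai/v1" # OpenAI-compatible endpoint
    prefix: "groq"                             # models appear as groq/llama-3.1-70b
    api-keys:                                  # list of keys for this provider
      - "gsk-key-1"
      - "gsk-key-2"
    models:
      llama70b: "llama-3.1-70b-versatile"      # name mapping / alias
```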
- Groq: https://api.groq.com/openai/v1
- OpenRouter: https://openrouter.ai/api/v1
- Together AI: https://api.together.xyz/v1
- Fireworks AI: https://api.fireworks.ai/inference/v1
- DeepSeek: https://api.deepseek.com/v1
- Any OpenAI-compatible service
Vertex AI Compatibility
For third-party services that use Vertex AI-style protocols with API key auth, configure entries in config.yaml:
- API key for Vertex-compatible service
- Base URL for Vertex-compatible endpoint
- Model configurations with aliases
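A sketch of a Vertex-compatible entry; the key names, URL, and model names are all illustrative assumptions:

```yaml
# Sketch only; every name here is a placeholder.
vertex-compatible:
  api-key: "vk-..."                          # simple API key auth (not OAuth)
  base-url: "https://vertex.example.com/v1"  # hypothetical endpoint
  models:
    pro: "gemini-1.5-pro"                    # model configuration with alias
```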
Vertex compatibility is for third-party services that mimic Google’s Vertex AI endpoint structure but use simple API key authentication instead of OAuth.
Global Model Exclusions
Exclude models globally for OAuth/file-backed auth entries in config.yaml:
Map of provider names to excluded model patterns (supports wildcards)
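A sketch of such a map; the top-level key name is an assumption, while wildcard patterns are documented above:

```yaml
# Top-level key name is an assumption; patterns support wildcards.
excluded-models:
  gemini:
    - "gemini-1.0-*"    # exclude all 1.0-series models
  claude:
    - "*-legacy"        # exclude legacy variants
```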
Per-Provider Settings
All cloud providers support these common settings:

- prefix: Namespace models (e.g., team-a/model-name)
- proxy-url: Override global proxy for this provider
- models-url: Override model discovery endpoint
- headers: Add custom HTTP headers
- excluded-models: List of model patterns to exclude
- models: Manual model name/alias mappings
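The common settings might look like this on a single provider entry; the setting names (prefix, proxy-url, models-url, headers, excluded-models, models) are documented above, while the provider name and values are illustrative:

```yaml
gemini:
  prefix: "team-a"                           # models exposed as team-a/<name>
  proxy-url: "http://proxy.internal:8080"    # per-provider proxy override
  models-url: "https://example.com/models"   # custom model discovery endpoint
  headers:
    X-Request-Source: "switchAILocal"        # custom HTTP header
  excluded-models:
    - "*-preview"                            # patterns to exclude
  models:
    flash: "gemini-2.0-flash-exp"            # manual name/alias mapping
```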
Model Aliases
Create friendly aliases for model names, e.g. a short alias flash that routes to gemini-2.0-flash-exp.
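As a sketch, the flash alias from the example above might be declared like this (the placement under a models map is an assumption):

```yaml
# Placement is an assumption; the alias itself is from this page.
gemini:
  models:
    flash: "gemini-2.0-flash-exp"   # requests for "flash" route here
```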
Multiple Credentials
Configure multiple API keys for load balancing and failover. With routing.strategy: "round-robin", requests are distributed evenly across the configured keys.
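A sketch of multiple credentials with round-robin routing; routing.strategy and its "round-robin" value are documented above, the other key names are assumptions:

```yaml
# Key names other than routing.strategy are assumptions.
openai-compatible:
  - name: "groq"
    api-keys:                 # several keys for the same provider
      - "gsk-key-1"
      - "gsk-key-2"
routing:
  strategy: "round-robin"     # distribute requests evenly across keys
```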
Complete Example
config.yaml
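A minimal end-to-end config.yaml combining the sections above; beyond the names documented on this page (codex-api-key, prefix, models, excluded-models, routing.strategy), every key and value is an illustrative assumption:

```yaml
# Hedged reconstruction; verify key names against the switchAILocal reference.
codex-api-key:
  api-key: "sk-..."                     # OpenAI
gemini:
  api-key: "AIza..."                    # Google Gemini
  prefix: "google"
  models:
    flash: "gemini-2.0-flash-exp"
ollama:
  enabled: true                         # local Ollama
  base-url: "http://localhost:11434"
  auto-discover: true
openai-compatible:
  - name: "groq"                        # OpenAI-compatible third party
    base-url: "https://api.groq.com/openai/v1"
    prefix: "groq"
    api-keys:
      - "gsk-..."
routing:
  strategy: "round-robin"               # spread load across keys
```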