Overview

switchAILocal supports multiple AI providers, each with its own configuration format. Providers fall into three groups:
  • Cloud Providers: OpenAI, Anthropic, Google Gemini, Traylinx SwitchAI
  • Local Providers: Ollama, LM Studio, OpenCode
  • Compatible Providers: OpenRouter, Groq, Together AI, and others via OpenAI compatibility

Traylinx SwitchAI Cloud

Unified access to 100+ cloud models through a single API.
config.yaml
switchai-api-key:
  - api-key: "sk-lf-..."
    base-url: "https://switchai.traylinx.com/v1"
    models:
      - name: "openai/gpt-oss-120b"
        alias: "switchai-fast"
      - name: "deepseek-reasoner"
        alias: "switchai-reasoner"
  • switchai-api-key[].api-key (string, required): Your SwitchAI API key. Get one at switchai.traylinx.com.
  • switchai-api-key[].base-url (string, default: "https://switchai.traylinx.com/v1"): SwitchAI API endpoint.
  • switchai-api-key[].prefix (string): Optional prefix to namespace models (e.g., "teamA/deepseek").
  • switchai-api-key[].models (array): Model name mappings and aliases.
  • switchai-api-key[].proxy-url (string): Override the global proxy for this credential.
  • switchai-api-key[].headers (object): Additional HTTP headers for requests.
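Example combining the optional prefix, proxy-url, and headers fields (the prefix, proxy address, and header values shown are illustrative):
switchai-api-key:
  - api-key: "sk-lf-..."
    prefix: "teamA"
    proxy-url: "http://proxy.internal:3128"
    headers:
      X-Request-Source: "switchailocal"
    models:
      - name: "deepseek-reasoner"
        alias: "reasoner"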

Google Gemini API

Configure Google Gemini API access:
config.yaml
gemini-api-key:
  - api-key: "AIzaSy..."
    prefix: "google"
    base-url: "https://generativelanguage.googleapis.com"
  • gemini-api-key[].api-key (string, required): Google Gemini API key from Google AI Studio.
  • gemini-api-key[].prefix (string): Namespace models (e.g., "google/gemini-pro").
  • gemini-api-key[].base-url (string): Override the Gemini API endpoint (optional).
  • gemini-api-key[].models (array): Model aliases for custom routing.
Example with model exclusions:
gemini-api-key:
  - api-key: "AIzaSy..."
    prefix: "google"
    excluded-models:
      - "*-preview"  # Exclude all preview models
      - "gemini-1.0-*"  # Exclude Gemini 1.0 models

Anthropic Claude API

Configure Claude API credentials:
config.yaml
claude-api-key:
  - api-key: "sk-ant-..."
    models:
      - name: "claude-3-5-sonnet-20241022"
        alias: "sonnet"
  • claude-api-key[].api-key (string, required): Anthropic API key from the Anthropic Console.
  • claude-api-key[].base-url (string): Override the Claude API endpoint (for Claude-compatible services).
  • claude-api-key[].models (array): Model name mappings and aliases.
  • claude-api-key[].prefix (string): Namespace models for this credential.
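Example pointing base-url at a Claude-compatible service (the endpoint and prefix shown are illustrative):
claude-api-key:
  - api-key: "sk-ant-..."
    base-url: "https://claude-proxy.example.com"
    prefix: "proxy"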

OpenAI / Codex API

Configure OpenAI and compatible services:
config.yaml
codex-api-key:
  - api-key: "sk-..."
    base-url: "https://api.openai.com/v1"
  • codex-api-key[].api-key (string, required): OpenAI API key from the OpenAI Platform.
  • codex-api-key[].base-url (string, required): OpenAI API endpoint.
  • codex-api-key[].models (array): Model aliases (optional).
The codex-api-key name is historical. This provider works with all OpenAI models, not just Codex.
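Example with a model alias (the alias name is illustrative):
codex-api-key:
  - api-key: "sk-..."
    base-url: "https://api.openai.com/v1"
    models:
      - name: "gpt-4o"
        alias: "4o"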

Ollama (Local)

Configure local Ollama server integration:
config.yaml
ollama:
  enabled: true
  base-url: "http://localhost:11434"
  auto-discover: true
  • ollama.enabled (boolean, default: false): Enable Ollama provider registration.
  • ollama.base-url (string, default: "http://localhost:11434"): Ollama API endpoint.
  • ollama.auto-discover (boolean, default: true): Automatically fetch available models from Ollama on startup.
  • ollama.excluded-models (array): Model IDs to exclude from discovery.
  • ollama.models (array): Manual model alias definitions.
Example with custom models:
ollama:
  enabled: true
  base-url: "http://localhost:11434"
  auto-discover: true
  excluded-models:
    - "llama2"  # Exclude old llama2
  models:
    - name: "llama3.2:latest"
      alias: "llama"

OpenCode (Local)

Integrate with local OpenCode server:
config.yaml
opencode:
  enabled: true
  base-url: "http://localhost:4096"
  default-agent: "build"
  • opencode.enabled (boolean, default: false): Enable OpenCode provider integration.
  • opencode.base-url (string, default: "http://localhost:4096"): OpenCode API endpoint.
  • opencode.default-agent (string, default: "build"): Default agent to use when no specific model is requested.

LM Studio (Local)

Configure LM Studio integration:
config.yaml
lmstudio:
  enabled: false
  base-url: "http://localhost:1234/v1"
  auto-discover: true
  • lmstudio.enabled (boolean, default: false): Enable LM Studio provider registration.
  • lmstudio.base-url (string, default: "http://localhost:1234/v1"): LM Studio API endpoint.
  • lmstudio.auto-discover (boolean, default: true): Automatically fetch models from LM Studio on startup.

OpenAI Compatibility

Configure third-party providers that support OpenAI API format:
config.yaml
openai-compatibility:
  - name: "groq"
    prefix: "groq"
    base-url: "https://api.groq.com/openai/v1"
    api-key-entries:
      - api-key: "gsk_..."
  - name: "openrouter"
    prefix: "or"
    base-url: "https://openrouter.ai/api/v1"
    api-key-entries:
      - api-key: "sk-or-v1-..."
  • openai-compatibility[].name (string, required): Provider identifier (used in logs and metrics).
  • openai-compatibility[].base-url (string, required): Provider’s OpenAI-compatible API endpoint.
  • openai-compatibility[].prefix (string): Namespace models (e.g., "groq/llama-3.1-70b").
  • openai-compatibility[].api-key-entries (array): List of API keys for this provider.
  • openai-compatibility[].models (array): Model name mappings and aliases.
Supported providers:
  • Groq: https://api.groq.com/openai/v1
  • OpenRouter: https://openrouter.ai/api/v1
  • Together AI: https://api.together.xyz/v1
  • Fireworks AI: https://api.fireworks.ai/inference/v1
  • DeepSeek: https://api.deepseek.com/v1
  • Any OpenAI-compatible service
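The models field works for these providers too; a sketch for a Together AI entry (the model name and alias are illustrative):
openai-compatibility:
  - name: "together"
    prefix: "tg"
    base-url: "https://api.together.xyz/v1"
    api-key-entries:
      - api-key: "..."
    models:
      - name: "meta-llama/Llama-3.3-70B-Instruct-Turbo"
        alias: "llama-70b"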

Vertex AI Compatibility

For third-party services using Vertex AI-style protocols with API key auth:
config.yaml
vertex-api-key:
  - api-key: "vk-..."
    base-url: "https://api.example.com"
    models:
      - name: "gemini-2.0-flash"
        alias: "flash"
  • vertex-api-key[].api-key (string, required): API key for the Vertex-compatible service.
  • vertex-api-key[].base-url (string, required): Base URL for the Vertex-compatible endpoint.
  • vertex-api-key[].models (array): Model configurations with aliases.
Vertex compatibility is for third-party services that mimic Google’s Vertex AI endpoint structure but use simple API key authentication instead of OAuth.

Global Model Exclusions

Exclude models globally for OAuth/file-backed auth entries:
config.yaml
oauth-excluded-models:
  geminicli:
    - "*-preview"
  ollama:
    - "llama2"
  • oauth-excluded-models (object): Map of provider names to excluded model patterns (supports wildcards).

Per-Provider Settings

All cloud providers support these common settings:
  • prefix: Namespace models (e.g., team-a/model-name)
  • proxy-url: Override global proxy for this provider
  • models-url: Override model discovery endpoint
  • headers: Add custom HTTP headers
  • excluded-models: List of model patterns to exclude
  • models: Manual model name/alias mappings
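A single credential entry combining these common settings might look like this (all values are illustrative):
gemini-api-key:
  - api-key: "AIzaSy..."
    prefix: "team-a"
    proxy-url: "http://proxy.internal:3128"
    models-url: "https://models.internal/v1/models"
    headers:
      X-Team: "team-a"
    excluded-models:
      - "*-preview"
    models:
      - name: "gemini-2.0-flash-exp"
        alias: "flash"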

Model Aliases

Create friendly aliases for model names:
gemini-api-key:
  - api-key: "AIzaSy..."
    models:
      - name: "gemini-2.0-flash-exp"
        alias: "flash"
      - name: "gemini-2.0-pro-exp"
        alias: "pro"
Requests for "flash" now route to gemini-2.0-flash-exp.

Multiple Credentials

Configure multiple API keys for load balancing and failover:
gemini-api-key:
  - api-key: "AIzaSy...key1"
    prefix: "team-a"
  - api-key: "AIzaSy...key2"
    prefix: "team-b"
  - api-key: "AIzaSy...shared"
    # No prefix - shared across all teams
With routing.strategy: "round-robin", requests are distributed evenly.
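The dotted path suggests routing is a top-level section of config.yaml; a sketch under that assumption:
routing:
  strategy: "round-robin"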

Complete Example

config.yaml
# Cloud providers
switchai-api-key:
  - api-key: "sk-lf-..."
    models:
      - name: "deepseek-reasoner"
        alias: "reasoner"

gemini-api-key:
  - api-key: "AIzaSy..."
    prefix: "google"

claude-api-key:
  - api-key: "sk-ant-..."
    models:
      - name: "claude-3-5-sonnet-20241022"
        alias: "sonnet"

codex-api-key:
  - api-key: "sk-..."
    base-url: "https://api.openai.com/v1"

# Local providers
ollama:
  enabled: true
  base-url: "http://localhost:11434"
  auto-discover: true

opencode:
  enabled: true
  base-url: "http://localhost:4096"
  default-agent: "build"

# OpenAI-compatible providers
openai-compatibility:
  - name: "groq"
    prefix: "groq"
    base-url: "https://api.groq.com/openai/v1"
    api-key-entries:
      - api-key: "gsk_..."
  - name: "openrouter"
    prefix: "or"
    base-url: "https://openrouter.ai/api/v1"
    api-key-entries:
      - api-key: "sk-or-v1-..."

Next Steps