List Models
Returns all available models from all connected providers. Compatible with the OpenAI Models API.
Request
No parameters required. Include authentication header.
```bash
curl http://localhost:18080/v1/models \
  -H "Authorization: Bearer sk-test-123"
```
Returns an array of model objects. Each model object has the following properties:

- `id`: Model identifier (e.g., `gemini-2.5-pro`, `geminicli:gemini-2.5-pro`)
- `created`: Unix timestamp of model registration
- `owned_by`: Provider name (e.g., `google`, `anthropic`, `ollama`)
Response Example
```json
{
  "object": "list",
  "data": [
    {
      "id": "gemini-2.5-pro",
      "object": "model",
      "created": 1709251200,
      "owned_by": "google"
    },
    {
      "id": "geminicli:gemini-2.5-pro",
      "object": "model",
      "created": 1709251200,
      "owned_by": "google-cli"
    },
    {
      "id": "claude-sonnet-4",
      "object": "model",
      "created": 1709251200,
      "owned_by": "anthropic"
    },
    {
      "id": "ollama:llama3.2",
      "object": "model",
      "created": 1709251200,
      "owned_by": "ollama"
    }
  ]
}
```
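The listing can be post-processed client-side; for example, grouping model IDs by provider. A minimal Python sketch over the sample payload above (the helper name is illustrative, not part of the API):

```python
import json
from collections import defaultdict

# Sample payload copied from the response example above.
RESPONSE = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gemini-2.5-pro", "object": "model", "created": 1709251200, "owned_by": "google"},
    {"id": "geminicli:gemini-2.5-pro", "object": "model", "created": 1709251200, "owned_by": "google-cli"},
    {"id": "claude-sonnet-4", "object": "model", "created": 1709251200, "owned_by": "anthropic"},
    {"id": "ollama:llama3.2", "object": "model", "created": 1709251200, "owned_by": "ollama"}
  ]
}
""")

def models_by_provider(payload):
    """Group model IDs from a /v1/models response by their owned_by field."""
    grouped = defaultdict(list)
    for model in payload["data"]:
        grouped[model["owned_by"]].append(model["id"])
    return dict(grouped)

print(models_by_provider(RESPONSE)["ollama"])  # ['ollama:llama3.2']
```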
Model Naming Convention
With Provider Prefix

Models with provider prefixes explicitly route to that provider:

- `geminicli:gemini-2.5-pro` → Gemini CLI
- `claudecli:claude-sonnet-4` → Claude CLI
- `ollama:llama3.2` → Ollama
- `switchai:auto` → switchAI with auto-routing

Without Provider Prefix

Models without prefixes allow auto-routing:

- `gemini-2.5-pro` → Any Gemini provider (CLI, API, or switchAI)
- `claude-sonnet-4` → Any Claude provider
- `llama3.2` → Auto-detect local provider
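Clients that need to distinguish explicit routing from auto-routing can split the ID on its first colon. A minimal Python sketch (the function is illustrative, not part of switchAILocal):

```python
def split_model_id(model_id: str):
    """Split a model ID into (provider_prefix, bare_id).

    A missing prefix returns (None, model_id), signalling auto-routing.
    """
    prefix, sep, rest = model_id.partition(":")
    return (prefix, rest) if sep else (None, model_id)

print(split_model_id("geminicli:gemini-2.5-pro"))  # ('geminicli', 'gemini-2.5-pro')
print(split_model_id("claude-sonnet-4"))           # (None, 'claude-sonnet-4')
```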
See Provider Prefixes for details.
Refresh Models
Trigger model re-discovery from all providers. Useful after adding new providers or models.
Request

Optional: refresh only a specific provider (e.g., `ollama`, `geminicli`). With no provider specified, all providers are refreshed.
Examples
```bash
curl -X POST http://localhost:18080/v1/models/refresh \
  -H "Authorization: Bearer sk-test-123"
```
Response
```json
{
  "message": "Model refresh completed",
  "provider": "ollama"
}
```
Gemini Native API

For Gemini-specific clients, the native models endpoint returns models in Gemini format with `supportedGenerationMethods`:
```json
{
  "models": [
    {
      "name": "models/gemini-2.5-pro",
      "displayName": "Gemini 2.5 Pro",
      "description": "Stable release of Gemini 2.5 Pro",
      "inputTokenLimit": 1048576,
      "outputTokenLimit": 65536,
      "supportedGenerationMethods": [
        "generateContent",
        "countTokens"
      ]
    }
  ]
}
```
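A Gemini-native client can then filter this payload down to models that support a given generation method. A sketch over the sample response above (the helper is illustrative):

```python
import json

# Sample payload copied from the Gemini-format response example above.
GEMINI_RESPONSE = json.loads("""
{
  "models": [
    {
      "name": "models/gemini-2.5-pro",
      "displayName": "Gemini 2.5 Pro",
      "inputTokenLimit": 1048576,
      "outputTokenLimit": 65536,
      "supportedGenerationMethods": ["generateContent", "countTokens"]
    }
  ]
}
""")

def supporting(payload, method):
    """Return names of models whose supportedGenerationMethods include `method`."""
    return [m["name"] for m in payload["models"]
            if method in m.get("supportedGenerationMethods", [])]

print(supporting(GEMINI_RESPONSE, "generateContent"))  # ['models/gemini-2.5-pro']
print(supporting(GEMINI_RESPONSE, "embedContent"))     # []
```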
Filter by Provider Type
Use the provider status endpoint to filter models:
```http
GET /v1/providers?filter=active
```
See Provider Prefixes for filtering options.
Model Capabilities
Different models support different features:
| Capability | Gemini | Claude | Ollama | switchAI |
|---|---|---|---|---|
| Chat Completions | ✅ | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ✅ |
| Function Calling | ✅ | ✅ | ⚠️ | ✅ |
| Vision | ✅ | ✅ | ⚠️ | ✅ |
| JSON Mode | ✅ | ✅ | ✅ | ✅ |
| Embeddings | ✅ | ❌ | ✅ | ✅ |
⚠️ = Model-dependent
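To gate features programmatically, the table can be mirrored as data. A sketch (the key names below are illustrative; `None` encodes the model-dependent ⚠️ entries):

```python
# True = supported, False = not supported, None = model-dependent (⚠️).
# This map simply mirrors the capability table above.
CAPABILITIES = {
    "gemini":   {"chat": True, "streaming": True, "functions": True, "vision": True, "json_mode": True, "embeddings": True},
    "claude":   {"chat": True, "streaming": True, "functions": True, "vision": True, "json_mode": True, "embeddings": False},
    "ollama":   {"chat": True, "streaming": True, "functions": None, "vision": None, "json_mode": True, "embeddings": True},
    "switchai": {"chat": True, "streaming": True, "functions": True, "vision": True, "json_mode": True, "embeddings": True},
}

def supports(provider: str, capability: str):
    """Look up a capability; None means 'check the specific model'."""
    return CAPABILITIES[provider][capability]

print(supports("claude", "embeddings"))  # False
print(supports("ollama", "vision"))      # None (model-dependent)
```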
Model Discovery
switchAILocal automatically discovers models from:

- Configuration File: Models defined in `config.yaml`
- CLI Providers: Models detected from installed CLI tools
- Local Servers: Models from Ollama, LM Studio
- API Providers: Models from authenticated API providers
- Dynamic Registration: Models registered at runtime
Discovery Sources

CLI Discovery

```bash
# Gemini CLI models discovered from:
gemini models list

# Claude CLI models discovered from:
claude models

# Results cached for performance
```

Ollama Discovery

```bash
# Ollama models queried from:
curl http://localhost:11434/api/tags

# Auto-prefixed with 'ollama:'
```

API Discovery

```text
# API provider models fetched from:
# - Google AI Studio: /v1beta/models
# - Anthropic: Static model list
# - OpenAI: /v1/models
```

Manual Config

```yaml
# Manually define custom models
models:
  - id: custom-model
    provider: openai-compat
    endpoint: https://api.example.com/v1
```
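The Ollama step amounts to prefixing each discovered tag with `ollama:`. A sketch over an illustrative `/api/tags` payload (the model names are examples, not guaranteed to exist locally):

```python
import json

# Illustrative /api/tags response from a local Ollama server.
TAGS = json.loads('{"models": [{"name": "llama3.2"}, {"name": "qwen2.5-coder"}]}')

def prefixed_ollama_models(tags_payload):
    """Prefix each Ollama tag with 'ollama:' so it routes explicitly."""
    return [f"ollama:{m['name']}" for m in tags_payload["models"]]

print(prefixed_ollama_models(TAGS))  # ['ollama:llama3.2', 'ollama:qwen2.5-coder']
```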
Troubleshooting
No Models Returned
Cause: No providers configured or authenticated
Solution:
- Verify provider setup: Check `config.yaml` for API keys
- Test CLI tools: Run `gemini --version`, `claude --version`
- Check logs: Look for provider initialization errors
- Try refresh: `POST /v1/models/refresh`
Missing Specific Model
Cause: Provider not authenticated or model not available
Solution:
- Verify provider access: Test CLI tool directly
- Check subscription: Ensure model is in your plan
- Refresh models: Force re-discovery
- Check spelling: Model IDs are case-sensitive
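Because model IDs are case-sensitive and easy to mistype, a client can suggest the closest discovered ID before failing. A sketch using Python's difflib (the model list is the sample from the response example above):

```python
import difflib

# Discovered model IDs, taken from the /v1/models response example.
AVAILABLE = ["gemini-2.5-pro", "geminicli:gemini-2.5-pro", "claude-sonnet-4", "ollama:llama3.2"]

def suggest_model(requested: str, available=AVAILABLE):
    """Return the closest known model ID for a possibly misspelled request."""
    matches = difflib.get_close_matches(requested, available, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(suggest_model("claude-sonet-4"))  # 'claude-sonnet-4'
```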
Stale Model List
Cause: Models cached from previous discovery
Solution:
```bash
# Force refresh all providers
curl -X POST http://localhost:18080/v1/models/refresh \
  -H "Authorization: Bearer sk-test-123"
```
Next Steps