Get switchAILocal up and running with your first AI request in minutes. This guide will help you install, configure, and test your unified AI gateway.

What You’ll Build

By the end of this guide, you’ll have:
  • switchAILocal running on http://localhost:18080
  • At least one AI provider configured (CLI, API, or local)
  • Your first successful API request
Clone the Repository
Get the latest version of switchAILocal:
git clone https://github.com/traylinx/switchAILocal.git
cd switchAILocal
Start the Server
Use the unified hub script to start switchAILocal. It automatically builds and runs the server:
Local (Recommended)
./ail.sh start
Docker
./ail.sh start --docker --build
Local with Logs
./ail.sh start -f
The server will start on http://localhost:18080 by default.
First time running? The hub script will automatically check for Go dependencies and build the binary for you.
Choose Your Provider
Select one of three authentication methods to connect AI providers:
CLI Wrappers (Zero Setup)
# If you have gemini, claude, or vibe CLI tools installed,
# switchAILocal uses them automatically - no configuration needed!

# Just use the CLI prefix in your requests:
curl http://localhost:18080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-test-123" \
  -d '{
    "model": "geminicli:gemini-2.5-pro",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
API Keys
# Add API keys to config.yaml
cp config.example.yaml config.yaml

# Edit config.yaml:
gemini-api-key:
  - api-key: "AIzaSy..."

claude-api-key:
  - api-key: "sk-ant-..."

codex-api-key:
  - api-key: "sk-..."
Local Models
# Enable Ollama or LM Studio in config.yaml
ollama:
  enabled: true
  base-url: "http://localhost:11434"
  auto-discover: true

lmstudio:
  enabled: true
  base-url: "http://localhost:1234/v1"
  auto-discover: true
The default API key sk-test-123 is for testing only. Update api-keys in config.yaml for production use.
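For production, replace the test key with something unguessable. A minimal sketch using Python's standard library (the sk- prefix here is just the convention this guide uses, not a requirement):

```python
import secrets

# Generate a random API key to list under api-keys in config.yaml
api_key = "sk-" + secrets.token_urlsafe(32)
print(api_key)
```

Put the printed value under api-keys in config.yaml, then restart the server with ./ail.sh restart so it takes effect.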
Make Your First Request
Test your setup with a simple API call:
cURL
curl http://localhost:18080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-test-123" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [
      {"role": "user", "content": "What is the meaning of life?"}
    ]
  }'
Python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:18080/v1",
    api_key="sk-test-123"
)

response = client.chat.completions.create(
    model="gemini-2.5-pro",
    messages=[
        {"role": "user", "content": "What is the meaning of life?"}
    ]
)

print(response.choices[0].message.content)
JavaScript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:18080/v1',
  apiKey: 'sk-test-123'
});

const response = await client.chat.completions.create({
  model: 'gemini-2.5-pro',
  messages: [
    { role: 'user', content: 'What is the meaning of life?' }
  ]
});

console.log(response.choices[0].message.content);
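Whichever client you use, the response body follows the OpenAI chat-completion schema, so the reply text always sits at the same path. A sketch over an illustrative payload (the JSON below is a stand-in for a real response from POST /v1/chat/completions):

```python
import json

# Illustrative response body in the OpenAI chat-completion shape
raw = """{
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "Hello!"}}
  ]
}"""

# The reply text lives at choices[0].message.content
reply = json.loads(raw)["choices"][0]["message"]["content"]
print(reply)  # Hello!
```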
Verify Server Status
Check that everything is running smoothly:
./ail.sh status
Expected output:
--- Local Status ---
[OK]   Running (PID 12345)

--- Docker Status ---
Not running locally.

Bridge: Not running.

Next Steps

Quick Tips

Use the provider:model format to route to a specific provider:
# Force Gemini CLI
model="geminicli:gemini-2.5-pro"

# Force Ollama
model="ollama:llama3.2"

# Force Claude API
model="claude:claude-sonnet-4"
Without a prefix, switchAILocal auto-routes to any available provider.
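A tiny helper (the function name is ours, not part of switchAILocal) makes the routing rule explicit: pass a provider to pin it, or leave it off to auto-route:

```python
def model_id(model, provider=None):
    """Return 'provider:model' to pin a provider, or the bare name to auto-route."""
    return f"{provider}:{model}" if provider else model

print(model_id("llama3.2", provider="ollama"))  # ollama:llama3.2
print(model_id("gemini-2.5-pro"))               # gemini-2.5-pro
```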
To stream responses token by token, add "stream": true to your request:
stream = client.chat.completions.create(
    model="gemini-2.5-pro",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
To see which models you can call, query the /v1/models endpoint:
curl http://localhost:18080/v1/models \
  -H "Authorization: Bearer sk-test-123"
This returns all models from all configured providers.
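Assuming the returned model IDs use the provider:model form shown above, you can group the list client-side. A sketch over illustrative IDs:

```python
# Illustrative model IDs in the provider:model form used above
model_ids = ["geminicli:gemini-2.5-pro", "ollama:llama3.2", "claude:claude-sonnet-4"]

# Split each ID on the first ':' and bucket the names by provider
by_provider = {}
for mid in model_ids:
    provider, _, name = mid.partition(":")
    by_provider.setdefault(provider, []).append(name)

print(by_provider)
```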
If something looks off, run the built-in diagnostics:
# Check dependencies
./ail.sh check

# View logs
./ail.sh logs -f

# Verify Go installation
go version  # Should be 1.24+

Troubleshooting

Connection Refused

Problem: curl: (7) Failed to connect
Solution:
  • Check if server is running: ./ail.sh status
  • Verify port 18080 is not in use: lsof -i :18080
  • Review logs: ./ail.sh logs

401 Unauthorized

Problem: API key rejected
Solution:
  • Ensure your API key matches one in config.yaml:
    api-keys:
      - "sk-test-123"
    
  • Restart server after config changes: ./ail.sh restart

No Models Available

Problem: Empty models list or “model not found”
Solution:
  • Verify provider is enabled in config.yaml
  • Check CLI tools are installed: which gemini claude
  • Enable auto-discovery for local providers:
    ollama:
      enabled: true
      auto-discover: true
    

Build Failed

Problem: Go build errors
Solution:
  • Make sure Go is 1.24 or newer: go version
  • Clean modules: go clean -modcache
  • Re-download dependencies: go mod download
Need more help? Check the Installation Guide for detailed setup instructions or visit our GitHub Issues.