
Quick Diagnostics

1. Check Server Status

curl http://localhost:18080/health
Expected response:
{"status": "ok"}
2. Verify Provider Health

curl http://localhost:18080/v0/management/heartbeat/status \
  -H "X-Management-Key: your-secret-key"
3. List Available Models

curl http://localhost:18080/v1/models \
  -H "Authorization: Bearer sk-test-123"
4. Check Logs

# Local deployment
tail -f logs/switchailocal.log

# Docker deployment
docker-compose logs -f switchailocal

Connection Issues

Server Won’t Start

Symptom: bind: address already in use or port 18080 conflict.
Cause: Another process is using port 18080.
Solution:
  1. Find the process using the port:
    # Linux/Mac
    lsof -i :18080
    
    # Windows
    netstat -ano | findstr :18080
    
  2. Kill the conflicting process or change switchAILocal’s port:
    config.yaml
    port: 18081  # Use a different port
    
  3. Restart switchAILocal:
    ./switchAILocal
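The port conflict above can also be detected from a script before the server starts. A minimal sketch using Python's standard socket module (the helper name is ours, not part of switchAILocal):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0
```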
    
Symptom: config.yaml not found or configuration errors.
Cause: Missing or incorrectly named configuration file.
Solution:
# Copy the example config
cp config.example.yaml config.yaml

# Edit with your settings
nano config.yaml
Symptom: failed to create auth directory or permission errors.
Cause: Insufficient permissions for ~/.switchailocal/.
Solution:
# Fix permissions
mkdir -p ~/.switchailocal
chmod 700 ~/.switchailocal

# Docker: Fix ownership
sudo chown -R 1000:1000 ~/.switchailocal

Connection Refused

Symptom: connection refused when making requests.
Checklist:
  • Is the server running? Check with ps aux | grep switchAILocal
  • Is it listening on the correct port? Verify with lsof -i :18080
  • Are you using the correct host? Try 127.0.0.1 instead of localhost
  • Check firewall rules: sudo ufw status
Solution:
# Verify server is running
./ail.sh status

# Restart if needed
./ail.sh restart
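After a restart, it can help to wait until the port actually accepts connections before sending requests. A small polling sketch (the helper name and defaults are ours):

```python
import socket
import time

def wait_for_server(host: str, port: int, retries: int = 5, delay: float = 1.0) -> bool:
    """Poll host:port until it accepts TCP connections or retries run out."""
    for _ in range(retries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:
                return True
        time.sleep(delay)
    return False
```

For example, `wait_for_server("127.0.0.1", 18080)` before the first health-check request avoids spurious connection-refused errors during startup.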
Symptom: failed to connect to Ollama or timeout errors.
Solution:
  1. Verify Ollama is running:
    curl http://localhost:11434/api/tags
    
  2. Check config.yaml:
    ollama:
      enabled: true
      base-url: "http://localhost:11434"
    
  3. For Docker deployments, use host gateway:
    ollama:
      base-url: "http://host.docker.internal:11434"
    

Authentication Errors

OAuth Failures

Symptom: timeout waiting for OAuth callback.
Cause: Browser didn’t complete the OAuth flow within the timeout period.
Solution:
  1. Ensure port 3000 is available for the callback server
  2. Try the login command again
  3. Complete the browser authorization quickly
  4. Check that the firewall isn’t blocking localhost:3000
# Verify port 3000 is free
lsof -i :3000

# Retry OAuth login
./switchAILocal --login
Symptom: OAuth state parameter is invalid.
Cause: CSRF token mismatch or expired session.
Solution:
  1. Clear OAuth state files:
    rm -f ~/.switchailocal/oauth_*
    
  2. Retry the login:
    ./switchAILocal --login
    
Symptom: GEMINI_CLIENT_ID not found or missing credentials.
Cause: OAuth environment variables not configured.
Solution:
Option 1: Use CLI wrappers instead (recommended):
# No OAuth setup needed
# Just use: geminicli:gemini-2.5-pro
Option 2: Set environment variables:
export GEMINI_CLIENT_ID="your-client-id"
export GEMINI_CLIENT_SECRET="your-client-secret"
./switchAILocal --login

API Key Errors

Symptom: 401 Unauthorized or invalid api key.
Cause: Incorrect or missing API key in requests.
Solution:
  1. Verify the API key matches config.yaml:
    api-keys:
      - "sk-test-123"
    
  2. Include the key in your request:
    curl http://localhost:18080/v1/chat/completions \
      -H "Authorization: Bearer sk-test-123" \
      ...
    
  3. For provider API keys, check they’re correctly formatted:
    gemini-api-key:
      - api-key: "AIzaSy..."  # Must start with AIzaSy
    
    claude-api-key:
      - api-key: "sk-ant-..."  # Must start with sk-ant-
    
Symptom: Requests fail with authentication errors.
Solution: Check which authentication method is configured:
# No config needed - uses CLI tool's auth
# Just use: geminicli:gemini-2.5-pro

Provider Errors

Model Not Found

Symptom: model not found or no matching provider.
Cause: Model not configured or provider prefix incorrect.
Solution:
  1. List available models:
    curl http://localhost:18080/v1/models \
      -H "Authorization: Bearer sk-test-123"
    
  2. Use correct provider prefix:
    # CLI providers
    geminicli:gemini-2.5-pro
    claudecli:claude-sonnet-4
    ollama:llama3.2
    
    # API providers
    gemini:gemini-2.5-pro
    claude:claude-3-5-sonnet-20241022
    
  3. Trigger model discovery:
    curl -X POST http://localhost:18080/v0/management/discover_models \
      -H "X-Management-Key: your-secret-key"
    
Symptom: gemini: command not found or CLI tool errors.
Cause: CLI tool not installed or not in PATH.
Solution:
  1. Install the CLI tool:
    # Google Gemini CLI
    npm install -g @google/gemini-cli
    
    # Anthropic Claude CLI
    npm install -g @anthropic-ai/claude-cli
    
    # OpenAI Codex CLI
    npm install -g @openai/codex-cli
    
  2. Verify installation:
    which gemini
    gemini --version
    
  3. Authenticate the CLI:
    gemini auth login
    

Rate Limiting

Symptom: 429 Too Many Requests or rate limit errors.
Cause: Exceeded provider API rate limits.
Solution:
  1. Configure multiple credentials for load balancing:
    config.yaml
    gemini-api-key:
      - api-key: "AIzaSy...account1"
      - api-key: "AIzaSy...account2"
      - api-key: "AIzaSy...account3"
    
    routing:
      strategy: "round-robin"
    
  2. Enable automatic quota rotation:
    config.yaml
    quota-exceeded:
      switch-project: true
      switch-preview-model: true
    
  3. Implement retry logic:
    config.yaml
    request-retry: 3
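The round-robin strategy in step 1 amounts to cycling through the configured keys. A minimal client-side sketch of the idea (not switchAILocal's internal implementation):

```python
from itertools import cycle

class RoundRobinKeys:
    """Hand out configured API keys in rotation, wrapping around at the end."""

    def __init__(self, keys):
        if not keys:
            raise ValueError("at least one key required")
        self._cycle = cycle(keys)

    def next_key(self) -> str:
        return next(self._cycle)
```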
    
Symptom: quota exceeded or billing errors.
Cause: API quota limits reached.
Solution:
  1. Check quota status:
    curl http://localhost:18080/v0/management/quota \
      -H "X-Management-Key: your-secret-key"
    
  2. Add fallback providers:
    config.yaml
    intelligence:
      enabled: true
      router-fallback: "ollama:llama3.2"  # Free local fallback
    
  3. Monitor usage:
    config.yaml
    usage-statistics-enabled: true
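The fallback behaviour in step 2 can also be approximated client-side by trying providers in order. A sketch with a caller-supplied call function (the names are ours; a real client would catch specific quota/rate-limit exceptions rather than a bare Exception):

```python
def complete_with_fallback(prompt, providers, call):
    """Try each provider in order; fall back when a call raises (e.g. on quota errors)."""
    last_err = None
    for model in providers:
        try:
            return call(model, prompt)
        except Exception as err:  # narrow this to the client's quota/429 errors in practice
            last_err = err
    raise last_err if last_err else ValueError("no providers configured")
```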
    

Request Failures

Timeout Errors

Symptom: Requests time out before completion.
Cause: Slow provider response or network issues.
Solution:
  1. Increase client timeout:
    from openai import OpenAI
    
    client = OpenAI(
        base_url="http://localhost:18080/v1",
        api_key="sk-test-123",
        timeout=120.0  # 2 minutes
    )
    
  2. Check provider health:
    curl http://localhost:18080/v0/management/heartbeat/status \
      -H "X-Management-Key: your-secret-key"
    
  3. Use faster models:
    config.yaml
    intelligence:
      matrix:
        fast: "gemini-2.0-flash"  # Faster model
    
Symptom: Streaming responses stop mid-completion.
Cause: SSE connection lost or provider timeout.
Solution:
  1. Enable keepalive:
    config.yaml
    streaming:
      keepalive-seconds: 15
      bootstrap-retries: 2
    
  2. Check network stability
  3. Reduce request complexity (shorter prompts, smaller context)
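The retry settings above can be mirrored client-side by re-opening a dropped stream. A simplified sketch: it restarts from the beginning on failure, so a real consumer would need offset tracking to avoid duplicate chunks (all names here are ours):

```python
import time

def stream_with_retry(open_stream, retries: int = 2, delay: float = 1.0):
    """Yield chunks from open_stream(), re-opening it after a dropped connection.

    Simplification: each retry restarts the stream from the beginning;
    real resumption would need to skip already-seen chunks.
    """
    attempt = 0
    while True:
        try:
            for chunk in open_stream():
                yield chunk
            return
        except ConnectionError:
            attempt += 1
            if attempt > retries:
                raise
            time.sleep(delay)
```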

Response Errors

Symptom: JSON parsing errors or incomplete responses.
Cause: Provider returned invalid JSON or protocol mismatch.
Solution:
  1. Enable debug logging:
    config.yaml
    debug: true
    logging-to-file: true
    
  2. Check logs for raw responses:
    tail -f logs/switchailocal.log | grep "response:"
    
  3. Verify model supports the requested format
Symptom: API returns empty content or null values.
Cause: Provider error, rate limit, or safety filter.
Solution:
  1. Check provider status:
    curl http://localhost:18080/v0/management/heartbeat/status \
      -H "X-Management-Key: your-secret-key"
    
  2. Review the request for safety issues (if using content filters)
  3. Try a different provider:
    # Original request
    {"model": "gemini:gemini-2.5-pro", ...}
    
    # Try alternative
    {"model": "claude:claude-3-5-sonnet-20241022", ...}
    

Cortex Router Issues

Intelligence System

Symptom: router model failed or classification errors.
Cause: Configured router model is not available.
Solution:
  1. Verify router model exists:
    curl http://localhost:18080/v1/models \
      -H "Authorization: Bearer sk-test-123" | grep "router-model"
    
  2. Update router model in config:
    config.yaml
    intelligence:
      router-model: "ollama:qwen:0.5b"
      router-fallback: "gemini:gemini-2.0-flash"
    
  3. Use a local model for faster classification:
    ollama pull qwen:0.5b
    
Symptom: Embedding errors or semantic matching failures.
Cause: Embedding model not loaded or ONNX runtime missing.
Solution:
  1. Download embedding model:
    ./scripts/download-embedding-model.sh
    
  2. Disable semantic tier if not needed:
    config.yaml
    intelligence:
      embedding:
        enabled: false
      semantic-tier:
        enabled: false
    
  3. Check ONNX runtime installation:
    # Linux
    sudo apt-get install libonnxruntime
    
    # macOS
    brew install onnxruntime
    
Symptom: Skills directory errors or skill matching failures.
Cause: Skills directory not found or misconfigured.
Solution:
  1. Verify skills directory exists:
    ls -la plugins/cortex-router/skills/
    
  2. Check config path:
    config.yaml
    intelligence:
      skills:
        enabled: true
        directory: "plugins/cortex-router/skills"
    
  3. Reload skills:
    curl -X POST http://localhost:18080/v0/management/intelligence/reload \
      -H "X-Management-Key: your-secret-key"
    

Docker-Specific Issues

Symptom: Docker container exits immediately.
Solution:
  1. Check logs:
    docker-compose logs switchailocal
    
  2. Verify config.yaml is mounted:
    docker-compose exec switchailocal ls -la /app/config.yaml
    
  3. Check volume permissions:
    sudo chown -R 1000:1000 ~/.switchailocal
    
Symptom: Container can’t connect to Ollama/LM Studio on host.
Solution: Use host.docker.internal instead of localhost:
config.yaml
ollama:
  enabled: true
  base-url: "http://host.docker.internal:11434"

lmstudio:
  enabled: true
  base-url: "http://host.docker.internal:1234/v1"
Symptom: Permission denied on mounted volumes.
Solution:
# Fix ownership (UID 1000 is the container user)
sudo chown -R 1000:1000 ~/.switchailocal
sudo chown -R 1000:1000 ./logs
sudo chown -R 1000:1000 ./plugins

# Restart container
docker-compose restart switchailocal

Memory System Issues

Symptom: Analytics show stale data or zero values.
Cause: Memory system disabled or directory permission problems.
Solution:
  1. Verify auth directory is writable:
    ls -la ~/.switchailocal/memory/
    
  2. Check memory files exist:
    ls -la ~/.switchailocal/memory/*.json
    
  3. Enable usage statistics:
    config.yaml
    usage-statistics-enabled: true
    

Performance Issues

Symptom: Requests take longer than expected.
Solution:
  1. Use local models for routing:
    config.yaml
    intelligence:
      router-model: "ollama:qwen:0.5b"  # Fast local classifier
    
  2. Enable semantic cache:
    config.yaml
    intelligence:
      semantic-cache:
        enabled: true
        max-size: 10000
    
  3. Reduce classification overhead:
    config.yaml
    intelligence:
      semantic-tier:
        enabled: true  # Bypass LLM classification for known patterns
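The max-size limit in step 2 implies bounded eviction. A minimal LRU sketch of how such a cache behaves (switchAILocal's actual eviction policy may differ):

```python
from collections import OrderedDict

class SemanticCache:
    """Bounded cache sketch: evicts the least-recently-used entry past max_size."""

    def __init__(self, max_size: int = 10000):
        self.max_size = max_size
        self._data = OrderedDict()

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return None

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_size:
            self._data.popitem(last=False)  # drop the oldest entry
```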
    
Symptom: Process consumes excessive RAM.
Solution:
  1. Limit cache sizes:
    config.yaml
    intelligence:
      semantic-cache:
        max-size: 1000  # Reduce from 10000
    
  2. Disable unused features:
    config.yaml
    intelligence:
      embedding:
        enabled: false
      semantic-tier:
        enabled: false
    
  3. Set Docker resource limits:
    docker-compose.yml
    deploy:
      resources:
        limits:
          memory: 2G
    

Getting Help

Debug Mode

Enable verbose logging:
config.yaml
debug: true
logging-to-file: true

Collect Diagnostic Information

# System info
uname -a

# Server version
./switchAILocal --version

# Recent logs
tail -100 logs/switchailocal.log

# Provider health
curl http://localhost:18080/v0/management/heartbeat/status \
  -H "X-Management-Key: your-secret-key"

# Available models
curl http://localhost:18080/v1/models \
  -H "Authorization: Bearer sk-test-123"

Report Issues

If you encounter a bug, please report it with:
  1. Description: What happened vs. what you expected
  2. Steps to Reproduce: Exact commands or API calls
  3. Environment: OS, Docker version, switchAILocal version
  4. Logs: Relevant error messages (with sensitive data removed)
  5. Configuration: Sanitized config.yaml snippet
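Before attaching logs or config snippets, key-shaped strings can be masked automatically. A sketch covering the key prefixes used in this guide (the patterns are ours and not exhaustive):

```python
import re

def redact(text: str) -> str:
    """Mask common key shapes (sk-ant-..., sk-..., AIzaSy...) before sharing logs."""
    patterns = [
        r"sk-ant-[A-Za-z0-9_-]+",  # Anthropic keys (checked before the generic sk- pattern)
        r"sk-[A-Za-z0-9_-]+",      # generic sk- keys, including local test keys
        r"AIzaSy[A-Za-z0-9_-]+",   # Google API keys
    ]
    for pattern in patterns:
        text = re.sub(pattern, "[REDACTED]", text)
    return text
```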
GitHub Issues: https://github.com/traylinx/switchAILocal/issues

Next Steps

Setup Providers

Configure your AI providers correctly

Management Dashboard

Monitor system health and performance