Quick Diagnostics
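A few quick checks to run before digging into the sections below. Ports and paths are assumptions based on the defaults used throughout this guide (proxy on 18080, Ollama on 11434, auth data in `~/.switchailocal`), and the `/v1/models` path assumes an OpenAI-compatible endpoint:

```shell
# Quick health checks; adjust ports/paths if your setup differs
PORT=18080
# Is the proxy answering? Any HTTP status means yes; "unreachable" means no
code=$(curl -s -o /dev/null -w "%{http_code}" "http://127.0.0.1:$PORT/v1/models" 2>/dev/null) || true
echo "proxy HTTP status: ${code:-unreachable}"
# Is Ollama answering?
curl -s http://localhost:11434/api/tags >/dev/null 2>&1 \
  && echo "ollama: reachable" || echo "ollama: not reachable"
# Does the auth directory exist?
ls -ld "$HOME/.switchailocal" 2>/dev/null || echo "auth dir missing: ~/.switchailocal"
```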
Connection Issues
Server Won’t Start
Error: Port Already in Use
**Symptom:** `bind: address already in use` or a port 18080 conflict.

**Cause:** Another process is using port 18080.

**Solution:**

1. Find the process using the port.
2. Kill the conflicting process, or change switchAILocal's port in `config.yaml`.
3. Restart switchAILocal.
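A sketch of those steps on a Unix-like host; the `server.port` key name is a guess at the config schema:

```shell
PORT=18080
# 1. Find the process holding the port (prints its PID, if any)
command -v lsof >/dev/null && lsof -ti ":$PORT" || true
# 2. Either kill that process:   kill <PID>
#    ...or pick another port in config.yaml, e.g.:
#      server:
#        port: 18081
# 3. Then restart switchAILocal.
```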
Error: Config File Not Found
**Symptom:** `config.yaml not found` or configuration errors.

**Cause:** Missing or incorrectly named configuration file.

**Solution:** Ensure a file named exactly `config.yaml` exists in the directory you start switchAILocal from.
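A minimal check, assuming switchAILocal reads `config.yaml` from its working directory:

```shell
CONFIG=config.yaml
# Confirm the file exists with exactly this name (not config.yml or Config.yaml)
if [ -f "$CONFIG" ]; then
  echo "$CONFIG found"
else
  echo "$CONFIG missing in $(pwd)"
fi
```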
Error: Permission Denied on Auth Directory
**Symptom:** `failed to create auth directory` or permission errors.

**Cause:** Insufficient permissions for `~/.switchailocal/`.

**Solution:** Fix ownership and permissions on `~/.switchailocal/`.
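This usually comes down to creating the directory and restricting it to your user; a sketch:

```shell
AUTH_DIR="$HOME/.switchailocal"
# Create the directory if it does not exist and make it private to your user
mkdir -p "$AUTH_DIR"
chmod 700 "$AUTH_DIR"
ls -ld "$AUTH_DIR"
```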
Connection Refused

Cannot Connect to Localhost
**Symptom:** `connection refused` when making requests.

**Checklist:**

- Is the server running? Check with `ps aux | grep switchAILocal`.
- Is it listening on the correct port? Verify with `lsof -i :18080`.
- Are you using the correct host? Try `127.0.0.1` instead of `localhost`.
- Check firewall rules: `sudo ufw status`.
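The checklist can be condensed into one probe; the `/v1/models` path assumes an OpenAI-compatible endpoint:

```shell
HOST=127.0.0.1
PORT=18080
# Any HTTP status (even 401/404) means the server is listening; "unreachable" means it is not
code=$(curl -s -o /dev/null -w "%{http_code}" "http://$HOST:$PORT/v1/models" 2>/dev/null) || true
echo "status: ${code:-unreachable}"
```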
Cannot Connect to Ollama
**Symptom:** `failed to connect to Ollama` or timeout errors.

**Solution:**

1. Verify Ollama is running.
2. Check the Ollama URL in `config.yaml`.
3. For Docker deployments, use the host gateway address instead of `localhost`.
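A sketch of those steps. `/api/tags` is Ollama's standard model-listing endpoint, while the `base_url` key name is an assumption about switchAILocal's schema:

```shell
OLLAMA_URL="http://localhost:11434"
# 1. Verify Ollama is answering
curl -s "$OLLAMA_URL/api/tags" >/dev/null 2>&1 \
  && echo "ollama reachable" || echo "ollama not reachable at $OLLAMA_URL"
# 2. Point config.yaml at the same URL, e.g.:
#      providers:
#        ollama:
#          base_url: http://localhost:11434
# 3. Inside Docker, "localhost" is the container itself; use the host gateway:
#      base_url: http://host.docker.internal:11434
```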
Authentication Errors
OAuth Failures
Error: OAuth Callback Timeout
Error: OAuth Callback Timeout
timeout waiting for OAuth callback.Cause: Browser didn’t complete the OAuth flow within the timeout period.Solution:- Ensure port 3000 is available for the callback server
- Try the login command again
- Complete the browser authorization quickly
- Check firewall isn’t blocking localhost:3000
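To confirm the callback port is free before retrying:

```shell
CALLBACK_PORT=3000
# Anything printed here is already bound to the port and will block the callback server
command -v lsof >/dev/null && lsof -i ":$CALLBACK_PORT" || true
```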
Error: Invalid State Parameter
**Symptom:** `OAuth state parameter is invalid`.

**Cause:** CSRF token mismatch or expired session.

**Solution:**

1. Clear the cached OAuth state files.
2. Retry the login.
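A sketch, assuming state is cached under `~/.switchailocal`; the exact file names and the login subcommand are assumptions:

```shell
AUTH_DIR="$HOME/.switchailocal"
# 1. Remove any cached OAuth state
rm -f "$AUTH_DIR"/oauth_state* 2>/dev/null || true
# 2. Retry the login (subcommand name is a guess):
#      switchAILocal login
```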
Error: Client ID/Secret Not Set
**Symptom:** `GEMINI_CLIENT_ID not found` or missing credentials.

**Cause:** OAuth environment variables not configured.

**Solution:**

**Option 1:** Use the CLI wrappers instead (recommended).
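If you do need direct OAuth instead of the CLI wrappers, the variables named in the error can be exported before starting the server (values are placeholders for your own OAuth client credentials):

```shell
# Placeholders; substitute the credentials from your OAuth client
export GEMINI_CLIENT_ID="your-client-id.apps.googleusercontent.com"
export GEMINI_CLIENT_SECRET="your-client-secret"
```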
API Key Errors

Error: Invalid API Key

**Symptom:** `401 Unauthorized` or `invalid api key`.

**Cause:** Incorrect or missing API key in requests.

**Solution:**

1. Verify the API key matches the one in `config.yaml`.
2. Include the key in your request.
3. For provider API keys, check that they are correctly formatted.
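A request sketch assuming an OpenAI-compatible endpoint with a `Bearer` header; the key, model name, and path are placeholders:

```shell
API_KEY="sk-local-placeholder"
curl -s "http://localhost:18080/v1/chat/completions" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "ollama/llama3", "messages": [{"role": "user", "content": "ping"}]}' \
  2>/dev/null || echo "request failed"
```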
Error: Authentication Required
Provider Errors
Model Not Found
Error: Model Not Available
**Symptom:** `model not found` or `no matching provider`.

**Cause:** Model not configured or provider prefix incorrect.

**Solution:**

1. List the available models.
2. Use the correct provider prefix.
3. Trigger model discovery.
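Sketches for the first two steps; the `/v1/models` path and the `provider/model` prefix format are assumptions:

```shell
BASE_URL="http://localhost:18080"
# 1. List the models the server currently knows about
curl -s "$BASE_URL/v1/models" 2>/dev/null || echo "could not reach $BASE_URL"
# 2. Request models with their provider prefix, e.g. "ollama/llama3", not bare "llama3"
```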
Error: CLI Command Not Found
**Symptom:** `gemini: command not found` or CLI tool errors.

**Cause:** CLI tool not installed or not in `PATH`.

**Solution:**

1. Install the CLI tool.
2. Verify the installation.
3. Authenticate the CLI.
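Using the Gemini CLI as an example (`@google/gemini-cli` is Google's published npm package; the authentication flow may differ in your version):

```shell
TOOL=gemini
# 1. Install the CLI (requires Node.js):  npm install -g @google/gemini-cli
# 2. Verify it is on PATH
command -v "$TOOL" >/dev/null && "$TOOL" --version || echo "$TOOL not on PATH"
# 3. Authenticate: running the tool interactively typically starts its login flow
```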
Rate Limiting
Error: Rate Limit Exceeded
**Symptom:** `429 Too Many Requests` or rate limit errors.

**Cause:** Exceeded provider API rate limits.

**Solution:**

1. Configure multiple credentials for load balancing in `config.yaml`.
2. Enable automatic quota rotation in `config.yaml`.
3. Add retry logic in `config.yaml`.
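An illustrative `config.yaml` fragment covering the three steps; the key names are guesses at the schema, not switchAILocal's documented options:

```yaml
providers:
  gemini:
    credentials:              # step 1: several keys to load-balance across
      - api_key: ${GEMINI_KEY_1}
      - api_key: ${GEMINI_KEY_2}
    rotate_on_quota: true     # step 2: switch credentials when one is throttled
    retry:                    # step 3: back off and retry on 429s
      max_attempts: 3
      backoff: exponential
```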
Error: Quota Exceeded
**Symptom:** `quota exceeded` or billing errors.

**Cause:** API quota limits reached.

**Solution:**

1. Check your quota status with the provider.
2. Add fallback providers in `config.yaml`.
3. Monitor usage via `config.yaml`.
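An illustrative fallback fragment for `config.yaml` (key names and model identifiers are assumptions):

```yaml
routing:
  fallback_chain:            # tried in order when the primary's quota is exhausted
    - gemini/gemini-1.5-pro
    - openai/gpt-4o-mini
    - ollama/llama3
```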
Request Failures
Timeout Errors
Error: Request Timeout
**Solution:**

1. Increase the client timeout.
2. Check provider health.
3. Switch to faster models in `config.yaml`.
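Step 1 from the client side, using curl's own timeout flags as an example:

```shell
BASE_URL="http://localhost:18080"
# --connect-timeout caps connection setup; --max-time caps the whole request
curl -s --connect-timeout 5 --max-time 120 "$BASE_URL/v1/models" 2>/dev/null \
  || echo "timed out or unreachable: $BASE_URL"
```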
Error: Streaming Connection Dropped
**Solution:**

1. Enable keepalive in `config.yaml`.
2. Check network stability.
3. Reduce request complexity (shorter prompts, smaller context).
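An illustrative keepalive fragment for `config.yaml` (key names are guesses):

```yaml
server:
  keepalive: true
  keepalive_interval_seconds: 15   # periodic pings keep idle streams from being dropped
```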
Response Errors
Error: Malformed JSON Response
**Solution:**

1. Enable debug logging in `config.yaml`.
2. Check the logs for the raw provider responses.
3. Verify the model supports the requested format.
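An illustrative logging fragment for `config.yaml` (key names are assumptions):

```yaml
logging:
  level: debug        # raw provider responses should then appear in the logs
```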
Error: Empty or Null Response
**Solution:**

1. Check the provider's status page.
2. Review the request for safety issues (if using content filters).
3. Try a different provider.
Cortex Router Issues
Intelligence System
Error: Router Model Not Available
**Symptom:** `router model failed` or classification errors.

**Cause:** The configured router model is not available.

**Solution:**

1. Verify the router model exists.
2. Update the router model in `config.yaml`.
3. Use a local model for faster classification.
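An illustrative fragment for `config.yaml` (key names are guesses at the Cortex schema):

```yaml
cortex:
  router_model: ollama/llama3   # a small local model keeps classification fast and always available
```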
Error: Semantic Tier Failed
**Solution:**

1. Download the embedding model.
2. Disable the semantic tier if it is not needed (`config.yaml`).
3. Check the ONNX runtime installation.
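If you don't need the semantic tier, disabling it is the quickest fix; an illustrative fragment (key names are assumptions):

```yaml
cortex:
  semantic:
    enabled: false   # skips the embedding model and the ONNX runtime entirely
```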
Skills Not Loading
**Solution:**

1. Verify the skills directory exists.
2. Check the configured path in `config.yaml`.
3. Reload the skills.
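A sketch for the first two steps; the default skills path and the config key are assumptions:

```shell
SKILLS_DIR="$HOME/.switchailocal/skills"
# 1. Verify the directory exists and is readable
ls -l "$SKILLS_DIR" 2>/dev/null || echo "skills directory missing: $SKILLS_DIR"
# 2. Make sure config.yaml points at the same path, e.g.:
#      skills:
#        path: ~/.switchailocal/skills
```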
Docker-Specific Issues
Container Won't Start
**Solution:**

1. Check the container logs.
2. Verify `config.yaml` is mounted.
3. Check the volume permissions.
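Sketches for the three checks; the container name `switchailocal` is a placeholder for whatever yours is called:

```shell
CONTAINER=switchailocal
# 1. Read the container logs
docker logs "$CONTAINER" 2>&1 | tail -n 50
# 2. Confirm config.yaml appears among the mounts
docker inspect "$CONTAINER" --format '{{ json .Mounts }}' 2>/dev/null || true
# 3. Check who owns the host-side volume directory:
#      ls -ld ./path/to/mounted/volume
```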
Cannot Access Host Services
Use `host.docker.internal` instead of `localhost` when a container needs to reach services running on the host.
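On Linux the gateway hostname has to be mapped explicitly; Docker Desktop (macOS/Windows) provides it automatically. This is standard Compose syntax, with the service name as a placeholder:

```yaml
services:
  switchailocal:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```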
Volume Permission Errors
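Typically fixed by matching the host directory's ownership to the UID the container runs as; a sketch (UID 1000 is a common default, and the path is a placeholder):

```shell
VOLUME_DIR="./switchailocal-data"   # placeholder for your mounted host path
# Find the UID the container runs as, then hand the directory to it:
#   docker exec <container> id -u
#   sudo chown -R 1000:1000 "$VOLUME_DIR"
```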
Memory System Issues
Memory Stats Not Updating
**Solution:**

1. Verify the auth directory is writable.
2. Check that the memory files exist.
3. Enable usage statistics in `config.yaml`.
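Sketches for the first two steps; the file layout under the auth directory is an assumption:

```shell
AUTH_DIR="$HOME/.switchailocal"
# 1. The directory must be writable for stats to persist
[ -w "$AUTH_DIR" ] && echo "writable: $AUTH_DIR" || echo "not writable (or missing): $AUTH_DIR"
# 2. Look for the memory files the server maintains
ls -l "$AUTH_DIR" 2>/dev/null || true
```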
Performance Issues
Slow Response Times
**Solution:**

1. Use local models for routing (`config.yaml`).
2. Enable the semantic cache (`config.yaml`).
3. Reduce classification overhead (`config.yaml`).
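An illustrative fragment combining the three suggestions (key names are guesses):

```yaml
cortex:
  router_model: ollama/llama3    # 1. local routing avoids a network hop per request
  semantic_cache:
    enabled: true                # 2. reuse answers for near-duplicate prompts
  classification:
    max_tiers: 1                 # 3. fewer tiers, less per-request overhead
```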
High Memory Usage
**Solution:**

1. Limit the cache sizes (`config.yaml`).
2. Disable unused features (`config.yaml`).
3. Set Docker resource limits (`docker-compose.yml`).
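Illustrative fragments for the three steps; the `config.yaml` keys are guesses, while the Compose `deploy.resources` block is standard syntax:

```yaml
# config.yaml (illustrative keys)
cache:
  max_entries: 1000       # 1. bound the cache
cortex:
  semantic:
    enabled: false        # 2. disable features you do not use

# docker-compose.yml (standard Compose syntax)
# services:
#   switchailocal:
#     deploy:
#       resources:
#         limits:
#           memory: 512M  # 3. hard cap on container memory
```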
Getting Help
Debug Mode
Enable verbose logging.

Collect Diagnostic Information
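A sketch of a diagnostic bundle to attach to a report; the log path and the `--version` flag are assumptions, and you should redact API keys before sharing:

```shell
OS_NAME=$(uname -s)
echo "OS: $OS_NAME ($(uname -m))"
switchAILocal --version 2>/dev/null || echo "switchAILocal not on PATH"
docker --version 2>/dev/null || true
tail -n 100 "$HOME"/.switchailocal/*.log 2>/dev/null || echo "no logs found"
```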
Report Issues
If you encounter a bug, please report it with:

- Description: What happened vs. what you expected
- Steps to Reproduce: Exact commands or API calls
- Environment: OS, Docker version, switchAILocal version
- Logs: Relevant error messages (with sensitive data removed)
- Configuration: Sanitized config.yaml snippet