Welcome to switchAILocal

switchAILocal is a unified API gateway that bridges multiple AI providers into a single OpenAI-compatible endpoint running locally on your machine. Connect Gemini, Claude, Ollama, and more through one consistent interface.

Key Features

Unified Endpoint

Single http://localhost:18080 endpoint for all providers

OpenAI Compatible

Works with any OpenAI SDK or tool out of the box

CLI Wrappers

Use your existing Gemini, Claude, and Codex CLI subscriptions

Local Models

Integrate Ollama and LM Studio for private inference

Intelligent Routing

Cortex Router automatically selects optimal models

Load Balancing

Round-robin across multiple accounts per provider

Auto Failover

Smart routing to alternatives on provider failures

Self-Healing

Superbrain monitors and auto-recovers from errors

Supported Providers

CLI Tools (Use Your Subscriptions)

  • gemini - Use your existing Google AI subscription; no API keys needed
  • claude - Use your Claude Code subscription; no API keys needed
  • codex - Use your OpenAI subscription
  • vibe - Use your Mistral subscription
  • opencode - Local coding assistance

Local Models

  • Ollama - Run local models for private inference
  • LM Studio - Serve local models for private inference

Cloud APIs

  • Google AI Studio - Gemini models via API key
  • Anthropic API - Claude models via API key
  • OpenAI API - GPT models via API key
  • switchAI Cloud - Unified access to 100+ models
  • OpenRouter - Route to multiple cloud providers
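
All of the providers above sit behind the same local endpoint. A quick way to see which models are currently available is the model listing; this standard-library sketch assumes switchAILocal exposes the usual OpenAI-style GET /v1/models route (an assumption to verify against your installation):

```python
# Sketch: query the gateway's model listing, assuming it implements the
# standard OpenAI GET /v1/models endpoint (check your gateway version).
import json
import urllib.request

BASE_URL = "http://localhost:18080/v1"

def build_models_request(base_url: str = BASE_URL) -> urllib.request.Request:
    """Build the GET request for the model listing endpoint."""
    return urllib.request.Request(f"{base_url}/models", method="GET")

req = build_models_request()
print(req.full_url)  # http://localhost:18080/v1/models

# With the gateway running, fetch and print the model ids:
# with urllib.request.urlopen(req) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```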

How It Works

  1. Point your app to http://localhost:18080/v1
  2. Use any model with optional provider:model prefix
  3. switchAILocal routes your request to the right provider
  4. Get responses in OpenAI-compatible format

All processing happens locally. Your data never leaves your machine unless you explicitly use cloud providers.
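
The steps above can be sketched with the standard library alone. The ollama:llama3 model name is an illustrative assumption, not a switchAILocal default; substitute any provider:model pair you have configured:

```python
# Sketch: build an OpenAI-style chat completion request against the local
# switchAILocal gateway. The model name "ollama:llama3" is illustrative.
import json
import urllib.request

BASE_URL = "http://localhost:18080/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Construct a POST /v1/chat/completions request in OpenAI format."""
    body = json.dumps({
        "model": model,  # optional provider:model prefix selects the provider
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("ollama:llama3", "Say hello")
print(req.full_url)  # http://localhost:18080/v1/chat/completions

# With the gateway running, send it and read the OpenAI-compatible reply:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because the response follows the OpenAI schema, the same parsing code works regardless of which provider handled the request.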

Use Cases

Multi-Provider Apps

Build apps that seamlessly switch between AI providers

Cost Optimization

Send high-volume queries to free local models and reserve paid cloud models for complex ones

Privacy-First AI

Keep sensitive data local with Ollama and LM Studio

Development Testing

Test against multiple models without changing code
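
For the testing case, request payloads differ only in the model string, so a test suite can parametrize over providers without touching application code. The three model names below are illustrative assumptions, not switchAILocal defaults:

```python
# Sketch: the chat payload is identical except for the model field, so
# multi-model testing is a loop over model names. Names are illustrative.
def chat_body(model: str, prompt: str) -> dict:
    """OpenAI-compatible chat payload; only the model field varies."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

MODELS = ["ollama:llama3", "gemini:gemini-flash", "claude:sonnet"]
payloads = [chat_body(m, "2 + 2 = ?") for m in MODELS]
print([p["model"] for p in payloads])
```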

Next Steps

  1. Install switchAILocal - Follow the installation guide to set up the gateway
  2. Run the quickstart - Make your first request through the gateway
  3. Connect providers - Set up your AI providers for full functionality
  4. Explore features - Dig into Cortex Router, load balancing, and Superbrain self-healing

Community & Support