LLM Providers

Configure cloud and local server providers to power your conversations.

Cloud Providers

OpenAI

Access GPT-4, GPT-3.5, and other OpenAI models.

Setup Steps:

  1. Get your API key from the OpenAI Platform
  2. Go to Settings → Manage LLM Providers → Add Provider
  3. Select "OpenAI" as the provider type
  4. Enter your API key and save
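Under the hood, the key you save in step 4 is sent as a bearer token with each chat request. A minimal sketch of the kind of request involved, assuming the standard OpenAI chat completions endpoint (the key and model name below are placeholders, and the request is only constructed, not sent):

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder; paste your key from the OpenAI Platform
BASE_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(model, messages, api_key=API_KEY):
    """Build (but do not send) an OpenAI chat completion request."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        BASE_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # the saved API key
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("gpt-4o", [{"role": "user", "content": "Hello"}])
```

If the key is invalid or missing, the API responds with HTTP 401, which is the usual first thing to check when a provider fails to connect.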

Anthropic

Access Claude models including Claude 3.5 Sonnet, Claude 3 Opus, and more.

Setup Steps:

  1. Get your API key from the Anthropic Console
  2. Go to Settings → Manage LLM Providers → Add Provider
  3. Select "Anthropic" as the provider type
  4. Enter your API key and save

OpenAI-Compatible APIs

Connect to any service that implements the OpenAI API format, such as LM Studio, Together AI, or Groq.

Setup Steps:

  1. Go to Settings → Manage LLM Providers → Add Provider
  2. Select "OpenAI Spec" as the provider type
  3. Enter the base URL of your API endpoint
  4. Add your API key if required
  5. Specify a default model name
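The base URL from step 3 is the only thing that really changes between these services; the request shape stays the same. A sketch, assuming some commonly documented base URLs (the URLs, ports, and model names below are examples to verify against each service's own docs, not an exhaustive or guaranteed list):

```python
import json
import urllib.request

# Example OpenAI-compatible base URLs; check each service's documentation,
# since ports and paths can differ per setup.
PROVIDERS = {
    "lmstudio": "http://localhost:1234/v1",   # LM Studio's default local server
    "groq": "https://api.groq.com/openai/v1",
    "together": "https://api.together.xyz/v1",
}

def build_request(provider, model, prompt, api_key=None):
    """Build (but do not send) a chat request against a compatible base URL."""
    payload = json.dumps(
        {"model": model, "messages": [{"role": "user", "content": prompt}]}
    ).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers such as LM Studio often need no key (step 4)
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{PROVIDERS[provider]}/chat/completions", data=payload, headers=headers
    )

req = build_request("groq", "llama-3.1-8b-instant", "Hello", api_key="gsk-...")
```

The default model from step 5 fills the model field whenever a conversation does not specify one.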

Local Server Providers

Ollama

Connect to a locally running Ollama server for privacy and offline use.

Prerequisites:

  1. Install Ollama from ollama.ai
  2. Pull a model: ollama pull llama3.2
  3. Ensure Ollama is running (default: http://localhost:11434)

Setup Steps:

  1. Go to Settings → Manage LLM Providers → Add Provider
  2. Select "Ollama" as the provider type
  3. Enter the server URL (default: http://localhost:11434)
  4. Test the connection to verify it works
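The connection test in step 4 amounts to asking the server for its installed models. A minimal sketch, assuming the default local endpoint and Ollama's /api/tags model-listing route:

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default server address

def list_models(base_url=OLLAMA_URL, timeout=3):
    """Return model names from a running Ollama server, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None  # server not running or URL wrong
```

With Ollama running and a model pulled, this returns names like "llama3.2:latest"; a None result means the server URL should be double-checked.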

Note: Connecting to Ollama is not supported in the web version because of browser CORS restrictions. Use the Android app for local Ollama support.

On-Device Providers

For completely local, on-device inference without any server, see the Local Models documentation.