This is a beta feature according to Algolia’s Terms of Service (“Beta Services”).
Agent Studio lets you use your preferred large language model (LLM) provider. This means you connect your own provider accounts to power your agents, giving you control over model selection, data governance, and costs.

Advantages of using your own LLM

You pay only for the tokens you consume, directly with your LLM provider. Algolia's pricing is transparent: there are no fees on top of provider costs.
Switch between providers or models without rebuilding your entire system:
  • Use smaller models for simple tasks and larger ones for complex scenarios.
  • Route workflows based on demand, cost, or performance across models and providers, and enable fallback strategies in case of a provider outage.
  • Reduce vendor lock-in and adapt to evolving model capabilities.
You maintain full ownership of your data and business logic. Algolia handles retrieval and orchestration, while you control which provider processes your data. This supports governance, compliance, and data sovereignty requirements. Benefit from observability, fine-tuning, and guardrails offered by your chosen provider.

Supported providers

Agent Studio supports multiple LLM providers. Choose based on your requirements for regional compliance, model availability, and cost optimization.

Provider overview

Provider           Key requirement                      Regional support
Anthropic          Anthropic API key                    Global
OpenAI             OpenAI API key                       US, Europe
Azure OpenAI       Azure endpoint and deployment name   Your Azure region
Google Gemini      Gemini API key                       Global
OpenAI-compatible  Provider API key and base URL        Varies by provider
Algolia provides free access to GPT-4.1 (from OpenAI) for creating and testing your first agents. While your own LLM provider is required only for production environments, using the same model during development is strongly recommended to prevent unexpected behavior at deployment.
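The table above maps each provider type to its required credentials. As a minimal illustration of that mapping, the sketch below checks a configuration for missing required fields; the provider type identifiers and field names here are assumptions drawn from the table, not Agent Studio's actual API.

```python
# Illustrative sketch only: provider identifiers and field names are
# assumptions based on the requirements table above, not Agent Studio's API.
REQUIRED_FIELDS = {
    "anthropic": ["api_key"],
    "openai": ["api_key"],
    "azure_openai": ["endpoint", "deployment_name"],
    "google_gemini": ["api_key"],
    "openai_compatible": ["api_key", "base_url"],
}

def missing_fields(provider_type: str, config: dict) -> list:
    """Return the required fields absent from a provider configuration."""
    required = REQUIRED_FIELDS.get(provider_type)
    if required is None:
        raise ValueError(f"Unknown provider type: {provider_type}")
    return [f for f in required if not config.get(f)]

print(missing_fields("azure_openai", {"endpoint": "https://example.openai.azure.com"}))
# → ['deployment_name']
```

A check like this catches, for example, an Azure OpenAI profile that has an endpoint but no deployment name before any agent tries to use it.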

Provider details

Agent Studio supports the latest Claude models from Anthropic.

Supported models

claude-opus-4-5, claude-haiku-4-5, claude-sonnet-4-5, claude-opus-4-1, claude-opus-4, claude-sonnet-4, claude-3-5-haiku, claude-3-opus, claude-3-haiku

Configuration requirements

  • Anthropic API key (required)
  • Custom endpoint URL (optional)

Where to get your API key

  1. Go to Claude Console and sign in with your Anthropic account.
  2. Click Create Key, then name the key.
  3. Copy the API key and store it securely.
You can create API keys for free, but you must add credits to your account before making API calls.
For details about how to authenticate, see Anthropic’s API documentation.
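"Store it securely" in step 3 usually means keeping the key out of source code. One common pattern is reading it from an environment variable; the sketch below assumes the conventional `ANTHROPIC_API_KEY` variable name used by Anthropic's SDKs.

```python
import os

def load_api_key(var: str = "ANTHROPIC_API_KEY") -> str:
    """Fetch the API key from the environment instead of hardcoding it.

    ANTHROPIC_API_KEY is the conventional variable name used by
    Anthropic's SDKs; adjust it to match your deployment setup.
    """
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} before configuring the provider.")
    return key
```

Failing fast with a clear error when the variable is unset is easier to debug than a provider call rejected downstream.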
To request a provider that isn’t listed here, contact the Algolia support team.

Add a provider

  1. Go to the Settings page in the Agent Studio dashboard.
  2. Click Create provider profile and select your provider type.
  3. Enter a name for this provider configuration (for example, “OpenAI-Production” or “Azure-EU-Prod”) and fill in the required configuration fields.
  4. Click Save to complete the setup.
Your provider is now available for use when configuring agents.

Use your provider

Go to the Agent Studio page in the dashboard and select your agent. In Provider and model, select your LLM provider and choose a model in the Change provider dialog. You can switch providers or models at any time; changes take effect immediately.

Manage providers

To update or delete a provider, go to Agent Studio’s Settings and open the provider’s action menu. Provider updates affect all agents that use the provider. If you delete a provider that’s in use, those agents stop working until you assign a different provider.

Advanced model capabilities

Different models support different configuration parameters. Agent Studio automatically detects and applies appropriate settings based on the model you select.
Most models support temperature configuration (0.0 to 2.0) to control randomness in responses. Use:
  • Lower values (0.0-0.5) for more deterministic, focused responses.
  • Higher values (1.0-2.0) for more creative and varied responses.
By default, Agent Studio doesn’t apply a temperature value. Models use their provider’s default (typically 1.0). You can set the temperature in your agent configuration:
{
  "config": {
    "temperature": 0.7
  }
}
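If you generate this configuration programmatically, it can help to validate the 0.0–2.0 range before sending it. This is an illustrative sketch, not part of Agent Studio; the function name is hypothetical.

```python
def validate_temperature(value: float) -> float:
    """Reject temperatures outside the 0.0-2.0 range most models accept."""
    if not 0.0 <= value <= 2.0:
        raise ValueError(f"temperature must be between 0.0 and 2.0, got {value}")
    return value

# Build the agent configuration shown above with a validated value.
config = {"config": {"temperature": validate_temperature(0.7)}}
```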
Models that support temperature:
  • All Anthropic Claude models
  • All Google Gemini models
  • Most OpenAI GPT models (except GPT-5 and o-series)
  • Most OpenAI-compatible models
Models that don’t support temperature:
  • GPT-5 series models
  • o-series models (o1, o3, o4)
When temperature isn’t supported, it’s automatically excluded from requests.
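The exclusion behavior described above can be pictured with a small sketch: include temperature in request parameters only when the target model supports it. The model-name prefixes below are assumptions based on the lists above, not Agent Studio's actual detection logic.

```python
# Assumed prefixes for models that reject the temperature parameter,
# per the lists above (GPT-5 series and o-series models).
UNSUPPORTED_PREFIXES = ("gpt-5", "o1", "o3", "o4")

def build_request_params(model: str, temperature=None) -> dict:
    """Include temperature only when the target model supports it."""
    params = {"model": model}
    if temperature is not None and not model.startswith(UNSUPPORTED_PREFIXES):
        params["temperature"] = temperature
    return params

print(build_request_params("o3-mini", 0.7))            # temperature dropped
print(build_request_params("claude-sonnet-4-5", 0.7))  # temperature kept
```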

Last modified on February 19, 2026