This is a beta feature according to Algolia’s Terms of Service (“Beta Services”).
Agent Studio lets you use your preferred large language model (LLM) provider. This means you connect your own provider accounts to power your agents, giving you control over model selection, data governance, and costs.

Advantages of using your own LLM

You pay only for the tokens you consume, directly with your LLM provider. Pricing stays transparent: Algolia adds no fees on top of provider costs.
Switch between providers or models without rebuilding your entire system. You can use smaller models for simple tasks and larger ones for complex scenarios.
  • Route workflows based on demand, cost, or performance across various models and providers, and enable fallback strategies if there’s a provider outage.
  • Reduce vendor lock-in and adapt to evolving model capabilities.
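The routing and fallback idea above can be sketched in a few lines. This is an illustrative pattern only, not an Agent Studio API: the `call_provider` callable and the provider names are hypothetical stand-ins for however you invoke each provider.

```python
# Sketch of a fallback strategy across LLM providers.
# call_provider is a hypothetical callable: (provider_name, prompt) -> response.

def call_with_fallback(providers, prompt, call_provider):
    """Try each provider in order; return the first successful response."""
    last_error = None
    for provider in providers:
        try:
            return call_provider(provider, prompt)
        except RuntimeError as err:  # e.g., provider outage or rate limit
            last_error = err
    raise RuntimeError(f"All providers failed; last error: {last_error}")
```

In practice you would order the list by cost or latency, and catch the specific exception types your provider SDKs raise rather than a generic `RuntimeError`.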
You maintain full ownership of your data and business logic. Algolia handles retrieval and orchestration, while you control which provider processes your data. This supports governance, compliance, and data sovereignty requirements. Benefit from observability, fine-tuning, and guardrails offered by your chosen provider.

Supported providers

Agent Studio supports multiple LLM providers. Choose based on your requirements for regional compliance, model availability, and cost optimization.

Provider overview

Provider | Key requirement | Regional support
OpenAI | OpenAI API key | US, Europe
Azure OpenAI | Azure endpoint and deployment name | Your Azure region
Google Gemini | Gemini API key | Global
OpenAI-compatible | Provider API key and base URL | Varies by provider
Algolia provides free access to GPT-4.1 (from OpenAI) for creating and testing your first agents. While your own LLM provider is required only for production environments, using the same model during development is strongly recommended to prevent unexpected behavior upon deployment.

Provider details

  • OpenAI
  • Azure OpenAI
  • Google Gemini
  • OpenAI-compatible
Agent Studio supports the latest OpenAI models.

Supported models

gpt-5, gpt-5-nano, gpt-5-mini, gpt-4, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini, o1-preview, o3, o3-mini, o4-mini

Configuration requirements

  • OpenAI API key (required)
  • Region (required)
Regional considerations

OpenAI supports the US and Europe data residency regions.
  • Ensure European data residency by directing requests to the https://eu.api.openai.com/v1 base URL. This URL is configured automatically when you add an OpenAI provider from the Algolia dashboard and select the Europe region.
  • If you operate in unsupported regions (those outside the US and Europe), consider creating separate agents for European and non-European customers, each configured with the appropriate regional endpoint.
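The region-to-endpoint choice above can be expressed as a small lookup. The EU base URL is the one documented on this page; the US entry is OpenAI's standard endpoint. The `base_url_for` helper itself is illustrative, not part of Agent Studio.

```python
# Map an OpenAI data-residency region to its base URL.
# The Algolia dashboard performs this selection automatically when you
# pick a region for an OpenAI provider profile.

OPENAI_BASE_URLS = {
    "us": "https://api.openai.com/v1",     # standard OpenAI endpoint
    "eu": "https://eu.api.openai.com/v1",  # European data residency
}

def base_url_for(region):
    """Return the OpenAI base URL for a supported region, or raise."""
    try:
        return OPENAI_BASE_URLS[region.lower()]
    except KeyError:
        raise ValueError(f"Unsupported OpenAI region: {region!r}")
```

Rejecting unknown regions early, rather than silently falling back to the US endpoint, avoids accidentally routing European customer data to the wrong region.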

Where to get your API key

  1. Go to the OpenAI Platform and sign in with your OpenAI account.
  2. Click Create new secret key and name it.
  3. Copy the API key and store it securely.
Although API key setup is free, OpenAI requires an active payment method before you can use the key.
For more information, see OpenAI’s API key documentation.
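"Store it securely" usually means keeping the key out of source code. A common pattern, sketched here with the conventional `OPENAI_API_KEY` variable name (the helper function is illustrative):

```python
import os

# Read the OpenAI API key from an environment variable instead of
# hardcoding it, so the key never lands in source control.

def load_api_key(var="OPENAI_API_KEY"):
    """Return the API key from the environment, failing fast if missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before starting the app")
    return key
```

Failing fast at startup is preferable to a confusing authentication error on the first LLM call.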
To request a provider that isn’t listed here, contact the Algolia support team.

Add a provider

  1. Go to the Settings page in the Agent Studio dashboard.
  2. Click Create provider profile and select your provider type.
  3. Enter a name for this provider configuration (for example, “OpenAI-Production” or “Azure-EU-Prod”) and fill in the required configuration fields.
  4. Click Save to complete the setup.
Your provider is now available for use when configuring agents.

Use your provider

Go to the Agent Studio page in the dashboard and select your agent. In Provider and model, select your LLM provider and choose a model. You can switch providers or models at any time. Changes take effect immediately.

Manage providers

To update or delete providers, go to Agent Studio’s Settings and click the provider’s action menu. Provider updates affect all agents using that provider. If you delete a provider that’s in use, those agents stop working until you assign a different provider.

Advanced model capabilities

Different models support different configuration parameters. Agent Studio automatically detects and applies appropriate settings based on the model being used.
  • Temperature support
  • Reasoning models
Most models support temperature configuration (0.0 to 2.0) to control randomness in responses. Use:
  • Lower values (0.0-0.5) for more deterministic, focused responses.
  • Higher values (1.0-2.0) for more creative and varied responses.
By default, Agent Studio doesn’t apply a temperature value. Models use their provider’s default (typically 1.0). You can set the temperature in your agent configuration:
{
  "config": {
    "temperature": 0.7
  }
}
Models that support temperature:
  • All OpenAI GPT models (except reasoning models: o1, o1-mini, o1-preview, o3, o3-mini, o4-mini)
  • All Google Gemini models
  • Most OpenAI-compatible models
When temperature isn’t supported, it’s automatically excluded from requests.
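The exclusion behavior described above can be sketched as a small request builder. The reasoning-model names come from this page; the `build_request` helper and its parameter shape are illustrative, not an Agent Studio or OpenAI API.

```python
# Sketch: include temperature in a request only when the model supports it,
# mirroring how Agent Studio excludes it for reasoning models.

REASONING_MODELS = {"o1", "o1-mini", "o1-preview", "o3", "o3-mini", "o4-mini"}

def build_request(model, prompt, temperature=None):
    """Build chat-style request parameters, validating and gating temperature."""
    params = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if temperature is not None and model not in REASONING_MODELS:
        if not 0.0 <= temperature <= 2.0:
            raise ValueError("temperature must be between 0.0 and 2.0")
        params["temperature"] = temperature
    return params
```

With this gating, the same agent configuration can carry a temperature setting and still work unchanged when you switch to a reasoning model.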

See also