Agent Studio lets you use your preferred large language model (LLM) provider.
This means you connect your own provider accounts to power your agents, giving you control over model selection, data governance, and costs.
You pay only for the tokens you consume directly with your LLM provider.
Optimize spending with Algolia’s transparent pricing:
there are no fees on top of provider costs.
Vendor flexibility
Switch between providers or models without rebuilding your entire system.
You can use smaller models for simple tasks and larger ones for complex scenarios.
Route workflows based on demand, cost, or performance across various models and providers, and enable fallback strategies if there’s a provider outage.
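The routing-and-fallback pattern above can be sketched in a few lines. This is an illustrative example, not Agent Studio code: the provider names and model IDs are placeholder assumptions.

```python
# Hypothetical priority list: a cheaper model for simple tasks, a larger
# model for complex ones, and a backup provider for outages.
PROVIDERS = [
    {"name": "primary", "simple_model": "gpt-4.1-mini", "complex_model": "gpt-4.1"},
    {"name": "backup", "simple_model": "mistral-small-latest", "complex_model": "mistral-large-latest"},
]

def pick_model(task_complexity, failed_providers=frozenset()):
    """Return (provider, model), skipping providers marked as unavailable."""
    for provider in PROVIDERS:
        if provider["name"] in failed_providers:
            continue  # fallback: try the next provider in priority order
        key = "complex_model" if task_complexity == "complex" else "simple_model"
        return provider["name"], provider[key]
    raise RuntimeError("No available provider")
```

During an outage, callers pass the failed provider's name so traffic shifts to the next entry in the list.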
Reduce vendor lock-in and adapt to evolving model capabilities.
Data and context control
You maintain full ownership of your data and business logic.
Algolia handles retrieval and orchestration,
while you control which provider processes your data.
This supports governance, compliance, and data sovereignty requirements.
Benefit from observability, fine-tuning, and guardrails offered by your chosen provider.
Algolia provides free access to GPT-4.1 (from OpenAI) for creating and testing your first agents. While your own LLM provider is required for production environments,
using the same model during development is strongly recommended to prevent unexpected behavior upon deployment.
Regional considerations

OpenAI supports the US and Europe data residency regions.
Ensure European data residency by directing requests to the https://eu.api.openai.com/v1 base URL. This URL is configured automatically when you add an OpenAI provider from the Algolia dashboard and select the Europe region.
If you operate in unsupported regions (those outside the US and Europe), consider creating separate agents for European and non-European customers, each configured with the appropriate regional endpoint.
Agent Studio supports the Azure OpenAI service,
which lets you deploy OpenAI models in your Azure environment with custom compliance, security, and access controls.
Any model deployed in your Azure OpenAI resource can be used.
Common examples are GPT-5, GPT-4.1, and GPT-4o. The model selection isn’t limited to a fixed list.
You specify your Azure deployment name,
which can include custom configurations like rate limits and content filters.
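Because Azure routes requests by deployment name rather than by model ID, the request URL embeds your chosen name. A minimal sketch, where the resource name, deployment name, and `api-version` value are placeholder assumptions:

```python
# Azure OpenAI addresses a deployment (your custom name), not a model ID.
def azure_chat_url(resource, deployment, api_version="2024-02-01"):
    """Build the chat-completions URL for an Azure OpenAI deployment."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )
```

Rate limits and content filters attach to the deployment itself, so two deployments of the same model can behave differently.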
Agent Studio supports any provider that implements the OpenAI API specification. This includes, but is not limited to: OpenRouter, LiteLLM, Groq, Mistral AI, Together AI, DeepSeek, Hugging Face Inference API, and custom or self-hosted LLM deployments.
| Provider | Description | Base URL |
| --- | --- | --- |
| LiteLLM | Unified interface for multiple providers with standardized integration, routing, logging, and governance. For more information about OpenAI-compatible endpoints, see LiteLLM’s documentation. | Depends on deployment |
| Mistral AI | Instruction-tuned models focused on accuracy and performance. For more information, see Mistral’s API documentation. | https://api.mistral.ai/v1 |
| OpenRouter | Access to numerous models (Grok, DeepSeek, and more) with routing and automatic fallback for consistent uptime. For more information about how to use the OpenAI SDK, see OpenRouter’s documentation. | https://openrouter.ai/api/v1 |
| Together AI | Model-hosting ecosystem for open source models with flexible deployment options. For more information about OpenAI compatibility, see Together AI’s documentation. | https://api.together.xyz/v1 |
| DeepSeek | Cost-effective reasoning models with OpenAI-compatible API. For more information, see DeepSeek’s API documentation. | |
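What makes these providers interchangeable is the shared request shape: only the base URL and API key change. A minimal sketch using only the standard library (the endpoint path follows the OpenAI chat-completions spec; the key and prompt are placeholders):

```python
import json
import urllib.request

def chat_request(base_url, api_key, model, prompt):
    """Build a chat-completions request for any OpenAI-compatible endpoint."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )
```

Swapping providers means changing the `base_url` argument, nothing else in the request changes.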
Go to the Agent Studio page in the dashboard and select your agent.
In Provider and model, select your LLM provider and choose a model. You can switch providers or models at any time.
Changes take effect immediately.
To update or delete providers,
go to Agent Studio’s Settings and click the provider’s action menu. Provider updates affect all agents using that provider.
If you delete a provider that’s in use, those agents will stop working until you assign a different provider.
Different models support different configuration parameters.
Agent Studio automatically detects and applies appropriate settings based on the model you select.
Temperature support
Most models support temperature configuration (0.0 to 2.0) to control randomness in responses.
Use:
Lower values (0.0-0.5) for more deterministic, focused responses.
Higher values (1.0-2.0) for more creative and varied responses.
By default, Agent Studio doesn’t apply a temperature value. Models use their provider’s default (typically 1.0). You can set the temperature in your agent configuration:
```json
{
  "config": {
    "temperature": 0.7
  }
}
```
Models that support temperature:
All Anthropic Claude models
All Google Gemini models
Most OpenAI GPT models (except GPT-5 and o-series)
Most OpenAI-compatible models
Models that don’t support temperature:
GPT-5 series models
o-series models (o1, o3, o4)
When temperature isn’t supported, it’s automatically excluded from requests.
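The exclusion behavior described above can be sketched as a parameter filter. The model lists mirror this documentation; the function itself is illustrative, not Agent Studio's implementation.

```python
# Model families documented as rejecting the temperature parameter.
NO_TEMPERATURE_PREFIXES = ("gpt-5", "o1", "o3", "o4")

def request_params(model, temperature=None):
    """Return request parameters, omitting temperature when unsupported."""
    params = {"model": model}
    if temperature is not None and not model.startswith(NO_TEMPERATURE_PREFIXES):
        params["temperature"] = temperature
    return params
```

A GPT-4.1 request keeps its configured temperature, while the same configuration sent to an o-series model drops it silently.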
Reasoning models

Some models support advanced reasoning capabilities with different configuration options depending on the provider and model series.
OpenAI o-series models (o3, o3-mini, o4-mini) use extended reasoning to solve complex problems.
These models don’t support temperature or custom reasoning parameters.
They analyze problems step-by-step before generating responses,
which can improve accuracy for complex queries but increases latency and token usage.