You can configure Agent Studio at two levels: app-wide and per agent.
Both are updated through the API, and changes take effect immediately.
App settings
Configure app-wide behavior using the /configuration endpoint.
Data retention
Control how long Agent Studio retains your data:
```sh
curl -X PATCH 'https://{{APPLICATION_ID}}.algolia.net/agent-studio/1/configuration' \
  -H 'Content-Type: application/json' \
  -H 'x-algolia-application-id: {{APPLICATION_ID}}' \
  -H 'x-algolia-api-key: {{API_KEY}}' \
  -d '{ "maxRetentionDays": 30 }'
```
This operation requires an API key with the logs ACL.
| Value | Effect |
|---|---|
| 90 (default) | Data retained for 90 days |
| 60 | Data retained for 60 days |
| 30 | Data retained for 30 days |
| 0 | Privacy mode (see below) |
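If you want to reject unsupported values before calling the endpoint, a minimal client-side sketch (assuming the endpoint accepts only the four documented values; `isValidRetention` is a hypothetical helper, not part of any Algolia SDK):

```typescript
// Hypothetical guard: assumes /configuration accepts only the
// retention values documented in the table above.
const ALLOWED_RETENTION_DAYS: readonly number[] = [0, 30, 60, 90];

function isValidRetention(days: number): boolean {
  return ALLOWED_RETENTION_DAYS.includes(days);
}
```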
Data affected by retention settings
| Data | Behavior |
|---|---|
| Completion cache | Cached responses expire after the retention period |
| Conversations | Conversation history deleted after the retention period |
| Messages | Message content deleted after the retention period |
Privacy mode (maxRetentionDays: 0)
When set to 0, Agent Studio operates in privacy mode:
- Completion caching is turned off (every request calls the LLM)
- Conversation metadata is saved, but message content isn't stored
- Ideal for strict data privacy requirements
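Privacy mode is set through the same `/configuration` endpoint as the other retention values; the request body is simply:

```json
{ "maxRetentionDays": 0 }
```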
Conversation history
Conversations are automatically stored per retention settings. Each conversation gets an auto-generated title based on content.
What’s stored:
- Conversation metadata (ID, timestamps, user token)
- Message content (user queries, assistant responses, tool calls)
- Auto-generated titles for browsing
For GDPR compliance, users can export or delete their data with the
GET /user-data/{userToken} and DELETE /user-data/{userToken} endpoints.
For more information, see the API reference.
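As a sketch, a client could call those endpoints like this (`userDataUrl` and `deleteUserData` are hypothetical helpers, not part of an official SDK; the host and header pattern mirror the curl examples on this page):

```typescript
// Hypothetical helper: builds the URL for the
// GET /user-data/{userToken} and DELETE /user-data/{userToken} endpoints.
function userDataUrl(applicationId: string, userToken: string): string {
  return `https://${applicationId}.algolia.net/agent-studio/1/user-data/${encodeURIComponent(userToken)}`;
}

// Example: delete a user's data (assumes an API key with the required ACL).
async function deleteUserData(applicationId: string, apiKey: string, userToken: string): Promise<void> {
  const res = await fetch(userDataUrl(applicationId, userToken), {
    method: 'DELETE',
    headers: {
      'x-algolia-application-id': applicationId,
      'x-algolia-api-key': apiKey,
    },
  });
  if (!res.ok) throw new Error(`Delete failed: ${res.status}`);
}
```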
Agent settings
Configure individual agents using the /agents/{agentId} endpoint.
Agent properties
| Property | Type | Description |
|---|---|---|
| name | string | Display name (1-128 characters) |
| description | string | Optional description |
| providerId | UUID | LLM provider credentials |
| model | string | Model identifier, for example gpt-5 or gemini-2.5-pro |
| instructions | string | System prompt |
| config | object | Feature flags and settings |
| tools | array | Algolia search and custom tools |
Update agent settings
Update any property without affecting others:
```sh
curl -X PATCH 'https://{{APPLICATION_ID}}.algolia.net/agent-studio/1/agents/{{agentId}}' \
  -H 'Content-Type: application/json' \
  -H 'x-algolia-application-id: {{APPLICATION_ID}}' \
  -H 'x-algolia-api-key: {{API_KEY}}' \
  -d '{ "instructions": "You are a helpful shopping assistant." }'
```
This operation requires an API key with the editSettings ACL.
Configuration options
The config object controls agent behavior:
| Option | Type | Default | Description |
|---|---|---|---|
| sendUsage | boolean | false | Include token usage in the response |
| sendReasoning | boolean | false | Include model reasoning (if supported) |
| useCache | boolean | true | Enable response caching |
| features | array | [] | Experimental features |
| suggestions | object | null | Prompt suggestions (see below) |
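For example, a config object that enables usage reporting and disables caching (all other options keep their defaults) might look like:

```json
{
  "config": {
    "sendUsage": true,
    "useCache": false
  }
}
```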
Prompt suggestions
Generate contextual follow-up questions after each agent response. Suggestions help users discover capabilities and continue conversations naturally.
```json
{
  "config": {
    "suggestions": {
      "enabled": true,
      "model": "gpt-5-mini"
    }
  }
}
```
When enabled, the agent streams a suggestions-chunk after the main response:
```json
{
  "type": "suggestions-chunk",
  "suggestions": ["How do I filter by price?", "Show me trending products", "What categories are available?"]
}
```
Suggestion options
| Option | Type | Default | Description |
|---|---|---|---|
| enabled | boolean | false | Enable prompt suggestions |
| model | string | Agent's model | Model for generating suggestions |
| system_prompt | string | Built-in | Custom prompt for suggestion generation |
Generation settings (suggestions.generation):
| Option | Range | Default | Description |
|---|
max_count | 1-5 | 3 | Number of suggestions |
max_words | 5-15 | 8 | Max words per suggestion |
timeout_seconds | 1-30 | 10 | Timeout for generation |
Context settings (suggestions.context):
| Option | Range | Default | Description |
|---|
max_messages | 1-50 | 10 | Conversation history to include |
include_tool_outputs | - | false | Include tool results in context |
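Putting the options together, a fully specified suggestions config could look like this (values shown are the documented defaults, except enabled and model; the nesting follows the suggestions.generation and suggestions.context paths above):

```json
{
  "config": {
    "suggestions": {
      "enabled": true,
      "model": "gpt-5-mini",
      "generation": {
        "max_count": 3,
        "max_words": 8,
        "timeout_seconds": 10
      },
      "context": {
        "max_messages": 10,
        "include_tool_outputs": false
      }
    }
  }
}
```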
Client-side handling
With AI SDK:
```tsx
import { useChat } from '@ai-sdk/react';

function Chat() {
  const { messages, data } = useChat({ /* ... */ });

  // Suggestions arrive in the data stream
  const suggestions = data?.find(d => d.type === 'suggestions-chunk')?.suggestions;

  return (
    <>
      {/* Chat messages */}
      {suggestions && (
        <div className="suggestions">
          {suggestions.map(s => <button key={s}>{s}</button>)}
        </div>
      )}
    </>
  );
}
```
Use a faster, cheaper model (such as gpt-5-mini) for suggestions: they don't need the same reasoning depth as the main response.
Publish workflow
Agents have two states:
- Draft: test changes in preview.
- Published: live for API consumers.
```sh
curl -X POST 'https://{{APPLICATION_ID}}.algolia.net/agent-studio/1/agents/{{agentId}}/publish' \
  -H 'x-algolia-application-id: {{APPLICATION_ID}}' \
  -H 'x-algolia-api-key: {{API_KEY}}'
```
When you make changes to an agent using the PATCH /agents/{agentId} endpoint,
you’re modifying the draft version of the agent.
These changes aren’t visible to API consumers until you publish the agent using the POST /agents/{agentId}/publish endpoint.
See also