- Real-time streaming responses for better user experience
- Advanced facet filtering to control AI context
- Hash-based Message Authentication Code (HMAC) token authentication for secure API access
- Full compatibility with popular frameworks like Next.js and Vercel AI SDK
- AI SDK version support for different message formats and features
This page is for developers who want to build custom Ask AI integrations. If
you’re looking for a no-code way to add Ask AI to your site, read the Ask AI
overview.
Overview
The Algolia Ask AI API provides endpoints for integrating with an Algolia Ask AI assistant. You can use this API to build custom chat interfaces and integrate Algolia with your LLM.
Base URL: https://askai.algolia.com
All endpoints allow cross-origin requests (CORS) from browser-based apps.
Vercel AI SDK version support
The Ask AI API supports different Vercel AI SDK versions to ensure compatibility with different message formats and frameworks. Specify either `v4` or `v5` in the `X-AI-SDK-Version` header:
- `v4` (default). Uses a message format with `role`, `content`, `id`, and optional `parts`.
- `v5`. Uses a message format with a structured, mandatory `parts` array.
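For illustration, here's a sketch of the same user message in each format (IDs and text are placeholders; the exact shape is defined by the Vercel AI SDK message types):

A `v4`-style message:

```json
{ "id": "msg_1", "role": "user", "content": "How do I configure facets?" }
```

A `v5`-style message:

```json
{
  "id": "msg_1",
  "role": "user",
  "parts": [{ "type": "text", "text": "How do I configure facets?" }]
}
```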
Authentication
Ask AI uses HMAC tokens for authentication. Tokens expire after 5 minutes, so you'll need to request a new one before each chat request.
Get an HMAC token
POST /chat/token
Headers
- Your Ask AI assistant configuration ID.
- Request origin for CORS validation.
- Full URL of the requesting page.
Response
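The exact response schema isn't reproduced here; as a hypothetical illustration, a successful response might look something like this (the field name is an assumption):

```json
{ "token": "<hmac-token>" }
```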
Endpoints
Chat with Ask AI
POST /chat
Start or continue a chat with the AI assistant.
The response is streamed in real-time using server-sent events,
letting you display the AI’s response as it’s being generated.
Headers
- Your Algolia application ID.
- Your Algolia API key.
- Name of the Algolia index to use.
- Ask AI assistant configuration ID.
- HMAC token (retrieved from /chat/token).
- Vercel AI SDK version to use for the request. Defaults to `v4` if not specified.
Request body
- Unique conversation identifier.
- Conversation messages.
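A minimal sketch of a v4-style request body. The field names `id` and `messages` follow the Vercel AI SDK conventions described above; the values are placeholders:

```json
{
  "id": "conversation-123",
  "messages": [
    { "id": "msg_1", "role": "user", "content": "How do I create an index?" }
  ]
}
```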
Using search parameters
Search parameters let you control how Ask AI searches your index.
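For example, a minimal sketch that narrows the assistant's context to English content (the `searchParameters` wrapper and the facet name are assumptions based on the examples below):

```json
{
  "searchParameters": {
    "facetFilters": ["language:en"]
  }
}
```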
Advanced facet filtering with OR logic
You can use nested arrays for OR logic within facet filters: values inside a nested array are combined with OR, while top-level entries are combined with AND. For example, the following filter translates to `language:en AND (docusaurus_tag:default OR docusaurus_tag:docs-default-current)`.
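A sketch of that filter (the wrapping object is illustrative; pass `facetFilters` wherever your request accepts search parameters):

```json
{
  "facetFilters": [
    "language:en",
    ["docusaurus_tag:default", "docusaurus_tag:docs-default-current"]
  ]
}
```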
Common use cases
- Multi-language sites: `["language:en"]`
- Versioned documentation: `["version:latest"]` or `["version:v2.0"]`
- Content types: `["type:content"]` to exclude navigation/metadata
- Multiple tags: `[["tag:api", "tag:tutorial"]]` for OR logic
- Categories with fallbacks: `[["category:advanced", "category:intermediate"]]`
Response
- Content-Type: `text/event-stream`
- Format: Server-sent events with incremental AI response chunks
- Benefits: Real-time response display, better user experience, lower perceived latency
Streaming responses
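A minimal sketch of reading the streamed body in the browser. It assumes `response` is the `fetch` Response from a POST to /chat (sent with the headers and request body described above):

```ts
// Reads the server-sent event stream from a /chat response and
// passes each decoded chunk to the caller as it arrives.
async function readChatStream(
  response: Response,
  onChunk: (text: string) => void
): Promise<void> {
  if (!response.ok || !response.body) {
    throw new Error(`Chat request failed: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```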
Submit feedback
POST /chat/feedback
Submit thumbs up/down feedback for a chat message.
Headers
- Ask AI assistant configuration ID.
- HMAC token (retrieved from /chat/token).
Request body
- Your Algolia application ID.
- ID of the message for which to vote.
- `1` for positive feedback, `0` for negative feedback.
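A hypothetical request body; field names such as `appId`, `messageId`, and `vote` are assumptions for illustration:

```json
{
  "appId": "YourApplicationID",
  "messageId": "msg_123",
  "vote": 1
}
```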
Response
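The exact response schema isn't reproduced here; a hypothetical successful response might be:

```json
{ "success": true }
```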
Health check
GET /chat/health
Check the operational status of the Ask AI service.
Response: `OK` (text/plain)
Search parameter examples
Combine multiple search parameters to improve the accuracy, performance, and relevance of Ask AI results. The following example demonstrates how these parameters work together.
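A sketch of such a combined configuration. The `searchParameters` wrapper, the facet names (`language`, `status`, `type`), and the attribute names (`title`, `content`, `tags`, `url`) are assumptions; substitute the facets and attributes that exist in your index:

```json
{
  "searchParameters": {
    "facetFilters": ["language:en"],
    "filters": "status:published AND type:guide",
    "restrictSearchableAttributes": ["title", "content", "tags", "url"],
    "attributesToRetrieve": ["title", "content", "tags", "url"],
    "distinct": true
  }
}
```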
The example narrows results with facet filters (`facetFilters`) and includes only published guides (`filters`), ensuring relevance and response quality.
It limits both the searchable (`restrictSearchableAttributes`) and returned (`attributesToRetrieve`) fields to titles, content, tags, and URLs. This reduces noise, improves focus, enhances the overall user experience, and also eliminates duplicate results (`distinct`).
You can also use parameters individually to handle specific use cases.
The following examples illustrate how to configure commonly-used parameters for more targeted search behavior.
Complex filtering with the filters parameter
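A sketch, assuming facets named `status`, `version`, and `type` exist in your index:

```json
{
  "filters": "status:published AND (version:latest OR version:v2.0) AND NOT type:archived"
}
```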
facetFilters formats
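For illustration, here are the `facetFilters` shapes used in the examples on this page (facet names are placeholders):

A flat array combines filters with AND:

```json
{ "facetFilters": ["language:en", "type:content"] }
```

A nested array combines its values with OR:

```json
{ "facetFilters": [["version:latest", "version:v2.0"]] }
```

Mixing both gives AND across top-level entries and OR within a group:

```json
{ "facetFilters": ["language:en", ["tag:api", "tag:tutorial"]] }
```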
Control content visibility with attributesToRetrieve
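A sketch, assuming attributes named `title`, `content`, and `url` in your index:

```json
{ "attributesToRetrieve": ["title", "content", "url"] }
```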
Focus search scope with restrictSearchableAttributes
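A sketch, again assuming `title` and `content` attributes:

```json
{ "restrictSearchableAttributes": ["title", "content"] }
```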
Remove duplicates with distinct
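A sketch; deduplication assumes `attributeForDistinct` is configured in your index settings:

```json
{ "distinct": true }
```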
Remove group duplicates
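A sketch, with the same assumption that `attributeForDistinct` defines the grouping attribute:

```json
{ "distinct": 2 }
```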
Setting `distinct` to 2 means that the top two results in the defined group are returned.
Custom integration examples
Basic chat implementation
With Vercel AI SDK
The Vercel AI SDK provides automatic handling of the request format and streaming, with support for the new search parameters and AI SDK version features.
Using a Next.js API proxy (recommended)
Integrating the chat with a Next.js proxy has these benefits:
- Security: API keys stay on the server
- Token management: Automatic token refresh
- Error handling: Centralized error management
- CORS: No cross-origin issues
- Caching: Can add caching logic if needed
- Pages router: `pages/api/chat.ts`
- App router: `app/api/chat/route.ts`
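A minimal App Router sketch for `app/api/chat/route.ts`. The header names, environment variable names, and token response field used here are assumptions for illustration; adapt them to the /chat and /chat/token headers documented above:

```ts
// app/api/chat/route.ts
// Hypothetical header and environment variable names; adjust to match
// the headers documented above and your own configuration.
const ASKAI_BASE_URL = 'https://askai.algolia.com';

export async function POST(request: Request) {
  const body = await request.json();

  // 1. Request a short-lived HMAC token (expires after 5 minutes).
  const tokenResponse = await fetch(`${ASKAI_BASE_URL}/chat/token`, {
    method: 'POST',
    headers: {
      'X-Algolia-Assistant-Id': process.env.ALGOLIA_ASSISTANT_ID!, // assumed header name
      Origin: process.env.SITE_ORIGIN!,
      Referer: process.env.SITE_ORIGIN!,
    },
  });
  const { token } = await tokenResponse.json(); // assumed response field

  // 2. Forward the chat request and stream the response back to the client.
  const chatResponse = await fetch(`${ASKAI_BASE_URL}/chat`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Assumed header names -- see the /chat headers above.
      'X-Algolia-Application-Id': process.env.ALGOLIA_APP_ID!,
      'X-Algolia-API-Key': process.env.ALGOLIA_API_KEY!,
      'X-Algolia-Index-Name': process.env.ALGOLIA_INDEX_NAME!,
      'X-Algolia-Assistant-Id': process.env.ALGOLIA_ASSISTANT_ID!,
      Authorization: `Bearer ${token}`, // assumed token header
      'X-AI-SDK-Version': 'v4',
    },
    body: JSON.stringify(body),
  });

  return new Response(chatResponse.body, {
    status: chatResponse.status,
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```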
Environment variables
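Assuming the placeholder names used in the proxy sketch above, your `.env.local` might contain (values are illustrative):

```bash
ALGOLIA_APP_ID=YourApplicationID
ALGOLIA_API_KEY=YourSearchOnlyAPIKey
ALGOLIA_INDEX_NAME=your_index
ALGOLIA_ASSISTANT_ID=your_assistant_id
SITE_ORIGIN=https://www.example.com
```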
Frontend with useChat
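A minimal sketch using the AI SDK v4 `useChat` hook pointed at the proxy route above (component name and route path are illustrative):

```tsx
'use client';

import { useChat } from 'ai/react'; // AI SDK v4

export function AskAI() {
  // Points at the Next.js proxy route from the previous example.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat',
  });

  return (
    <div>
      {messages.map((message) => (
        <p key={message.id}>
          <strong>{message.role}:</strong> {message.content}
        </p>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Ask a question…" />
      </form>
    </div>
  );
}
```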
Vercel AI SDK v5 integration
Here's a simple example using Vercel AI SDK v5 with the new search parameters.
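A minimal sketch with AI SDK v5, where messages carry a `parts` array. It assumes the `/api/chat` proxy route forwards the request body (including any `searchParameters`) and sends `X-AI-SDK-Version: v5`:

```tsx
'use client';

import { useChat } from '@ai-sdk/react'; // AI SDK v5
import { DefaultChatTransport } from 'ai';

export function AskAIv5() {
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/chat',
      // Hypothetical: forward search parameters with every request,
      // assuming your proxy passes them through to Ask AI.
      body: { searchParameters: { facetFilters: ['language:en'] } },
    }),
  });

  return (
    <div>
      {messages.map((message) => (
        <p key={message.id}>
          <strong>{message.role}:</strong>{' '}
          {message.parts.map((part, i) =>
            part.type === 'text' ? <span key={i}>{part.text}</span> : null
          )}
        </p>
      ))}
      <button onClick={() => sendMessage({ text: 'How do I configure facets?' })}>
        Ask
      </button>
    </div>
  );
}
```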
Direct integration
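A sketch of calling the API directly from the browser: request a fresh token, then start a chat and read the stream. Header and field names are assumptions mirroring the proxy example above:

```ts
const ASKAI_BASE_URL = 'https://askai.algolia.com';

// Hypothetical configuration values.
const config = {
  appId: 'YourApplicationID',
  apiKey: 'YourSearchOnlyAPIKey',
  indexName: 'your_index',
  assistantId: 'your_assistant_id',
};

async function askAI(question: string, onChunk: (text: string) => void) {
  // 1. Get a fresh HMAC token (tokens expire after 5 minutes).
  const tokenResponse = await fetch(`${ASKAI_BASE_URL}/chat/token`, {
    method: 'POST',
    headers: { 'X-Algolia-Assistant-Id': config.assistantId }, // assumed header name
  });
  const { token } = await tokenResponse.json(); // assumed response field

  // 2. Send the chat request and stream the reply.
  const response = await fetch(`${ASKAI_BASE_URL}/chat`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Assumed header names -- see the /chat headers above.
      'X-Algolia-Application-Id': config.appId,
      'X-Algolia-API-Key': config.apiKey,
      'X-Algolia-Index-Name': config.indexName,
      'X-Algolia-Assistant-Id': config.assistantId,
      Authorization: `Bearer ${token}`, // assumed token header
    },
    body: JSON.stringify({
      id: crypto.randomUUID(),
      messages: [{ id: crypto.randomUUID(), role: 'user', content: question }],
    }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```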
Error handling
All error responses follow a common JSON error format.
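The exact schema isn't reproduced here; as a hypothetical illustration only (field names and text are assumptions):

```json
{ "error": "Expired token", "message": "Request a new HMAC token from /chat/token." }
```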
- Invalid assistant ID: Configuration doesn’t exist
- Expired token: Request a new HMAC token
- Rate limiting: Too many requests
- Invalid index: Index name doesn’t exist or isn’t accessible
Best practices
- Token management. Always request a fresh HMAC token before chat requests.
- Error handling. Implement retry logic for network failures.
- Streaming. Handle server-sent events properly for real-time responses.
- Feedback. Implement thumbs up/down for continuous improvement.
- CORS. Ensure your domain is allowed in your Ask AI configuration.