The Ask AI API enables developers to build custom chat interfaces powered by Algolia’s AI assistant. Use these endpoints to create tailored conversational experiences that search your Algolia index and generate contextual responses using your own LLM provider. Key features:
  • Real-time streaming responses for better user experience
  • Advanced facet filtering to control AI context
  • Hash-based Message Authentication Code (HMAC) token authentication for secure API access
  • Full compatibility with popular frameworks like Next.js and Vercel AI SDK
  • AI SDK version support for different message formats and features
This page is for developers who want to build custom Ask AI integrations. If you’re looking for a no-code way to add Ask AI to your site, read the Ask AI overview.

Overview

The Algolia Ask AI API provides endpoints for integrating with an Algolia Ask AI assistant. You can use this API to build custom chat interfaces and integrate Algolia with your LLM.

Base URL: https://askai.algolia.com

All endpoints allow cross-origin requests (CORS) from browser-based apps.

Vercel AI SDK version support

The Ask AI API supports different Vercel AI SDK versions to ensure compatibility with different message formats and frameworks. Specify either v4 or v5 in the X-AI-SDK-Version header:
  • v4 (default). Uses a message format with role, content, id, and optional parts.
  • v5. Uses a message format with a structured, mandatory parts array.

Authentication

Ask AI uses HMAC tokens for authentication. Tokens expire after 5 minutes, so you’ll need to request a new one before each chat request.
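Because tokens are short-lived, a common client-side pattern is to record when a token was issued and refresh it before the five-minute window closes. A minimal sketch (the 5-minute TTL comes from this page; the 30-second safety buffer is an arbitrary choice, not part of the API):

```javascript
// Decide whether a token issued at `issuedAtMs` should be refreshed.
// TTL is documented as 5 minutes; the refresh buffer is an assumption.
const TOKEN_TTL_MS = 5 * 60 * 1000;
const REFRESH_BUFFER_MS = 30 * 1000;

function tokenNeedsRefresh(issuedAtMs, nowMs = Date.now()) {
  return nowMs - issuedAtMs >= TOKEN_TTL_MS - REFRESH_BUFFER_MS;
}
```

A client can call this before each chat request and only hit /chat/token when it returns true, rather than fetching a token unconditionally.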

Get an HMAC token

POST /chat/token

Headers

X-Algolia-Assistant-Id
string
required
Your Ask AI assistant configuration ID.
origin
string
Request origin for CORS validation.
referer
string
Full URL of the requesting page.

Response

JSON
{
  "success": true,
  "token": "HMAC_TOKEN"
}

Endpoints

Chat with Ask AI

POST /chat

Start or continue a chat with the AI assistant. The response is streamed in real time using server-sent events, letting you display the AI's response as it's being generated.

Headers

X-Algolia-Application-Id
string
required
Your Algolia application ID.
X-Algolia-API-Key
string
required
Your Algolia API key.
X-Algolia-Index-Name
string
required
Name of the Algolia index to use.
X-Algolia-Assistant-Id
string
required
Ask AI assistant configuration ID.
authorization
string
required
HMAC token (retrieved from /chat/token).
X-AI-SDK-Version
string
Vercel AI SDK version to use for the request. Defaults to v4 if not specified.

Request body

JSON
{
  "id": "your-conversation-id",
  "messages": [
    {
      "role": "user",
      "content": "What is Algolia?",
      "id": "msg-123",
      "createdAt": "2025-01-01T12:00:00.000Z",
      "parts": [
        {
          "type": "text",
          "text": "What is Algolia?"
        }
      ]
    }
  ],
  "searchParameters": {
    "facetFilters": ["language:en", "version:latest"]
  }
}
id
string
required
Unique conversation identifier.
messages
object[]
required
Conversation messages.
searchParameters
object
Search API parameters.
Using search parameters
Search parameters let you control how Ask AI searches your index:
JSON
{
  "id": "conversation-1",
  "messages": [
    {
      "role": "user",
      "content": "How do I configure the API?",
      "id": "msg-1"
    }
  ],
  "searchParameters": {
    "facetFilters": ["language:en", "version:latest", "type:content"],
    "filters": "category:api AND status:published",
    "attributesToRetrieve": ["title", "content", "url"],
    "restrictSearchableAttributes": ["title", "content"],
    "distinct": true
  }
}
Advanced facet filtering with OR logic
You can use nested arrays for OR logic within facet filters:
JSON
{
  "searchParameters": {
    "facetFilters": [
      "language:en",
      ["docusaurus_tag:default", "docusaurus_tag:docs-default-current"]
    ]
  }
}
This example filters to: language:en AND (docusaurus_tag:default OR docusaurus_tag:docs-default-current)
Common use cases
  • Multi-language sites: ["language:en"]
  • Versioned documentation: ["version:latest"] or ["version:v2.0"]
  • Content types: ["type:content"] to exclude navigation/metadata
  • Multiple tags: [["tag:api", "tag:tutorial"]] for OR logic
  • Categories with fallbacks: [["category:advanced", "category:intermediate"]]
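These patterns can also be built programmatically. The hypothetical helper below (buildFacetFilters is not part of the Ask AI API) turns a plain object into a facetFilters array: a string value becomes an AND condition, and an array of values becomes an OR group for that facet:

```javascript
// Hypothetical helper: build a facetFilters array from a spec object.
// String values are ANDed; array values become an OR group on that facet.
// Note: this sketch only supports OR within a single facet.
function buildFacetFilters(spec) {
  return Object.entries(spec).map(([facet, value]) =>
    Array.isArray(value)
      ? value.map((v) => `${facet}:${v}`) // OR group
      : `${facet}:${value}` // AND condition
  );
}

const filters = buildFacetFilters({
  language: "en",
  docusaurus_tag: ["default", "docs-default-current"],
});
// → ["language:en", ["docusaurus_tag:default", "docusaurus_tag:docs-default-current"]]
```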

Response

  • Content-Type: text/event-stream
  • Format: Server-sent events with incremental AI response chunks
  • Benefits: Real-time response display, better user experience, lower perceived latency
Streaming responses
JavaScript
const response = await fetch("/chat", {
  /* ... */
});
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const chunk = decoder.decode(value);
  // Display chunk immediately in your UI
  console.log("Received chunk:", chunk);
}

Submit feedback

POST /chat/feedback

Submit thumbs up/down feedback for a chat message.

Headers

X-Algolia-Assistant-Id
string
required
Ask AI assistant configuration ID.
authorization
string
required
HMAC token (retrieved from /chat/token).

Request body

JSON
{
  "appId": "ALGOLIA_APPLICATION_ID",
  "messageId": "msg-123",
  "thumbs": 1
}
appId
string
required
Your Algolia application ID.
messageId
string
required
ID of the message being rated.
thumbs
number
required
  • 1 for positive feedback
  • 0 for negative feedback

Response

JSON
{
  "success": true,
  "message": "Feedback was successfully submitted."
}

Health check

GET /chat/health

Check the operational status of the Ask AI service.

Response: OK (text/plain)

Search parameter examples

Combine multiple search parameters to improve the accuracy, performance, and relevance of Ask AI results. The following example demonstrates how these parameters work together.
JSON
{
  "searchParameters": {
    "facetFilters": ["language:en", "version:latest"],
    "filters": "category:guide AND status:published",
    "attributesToRetrieve": ["title", "content", "url"],
    "restrictSearchableAttributes": ["title", "content", "tags"],
    "distinct": true
  }
}
This configuration narrows results to English content (facetFilters) and includes only published guides (filters), ensuring relevance and response quality. It limits both the searchable (restrictSearchableAttributes) and returned (attributesToRetrieve) fields to titles, content, tags, and URLs, which reduces noise and improves focus. It also eliminates duplicate results (distinct).
You can also use parameters individually to handle specific use cases. The following examples illustrate how to configure commonly used parameters for more targeted search behavior.

Complex filtering with the filters parameter

JSON
{
  "searchParameters": {
    "filters": "category:api AND (status:published OR status:beta) AND NOT deprecated:true"
  }
}

facetFilters formats

JSON
{
  "searchParameters": {
    "facetFilters": "language:en"
  }
}
JSON
{
  "searchParameters": {
    "facetFilters": ["language:en", "version:latest"]
  }
}
JSON
{
  "searchParameters": {
    "facetFilters": ["language:en", ["tag:api", "tag:tutorial"]]
  }
}

Control content visibility with attributesToRetrieve

JSON
{
  "searchParameters": {
    "attributesToRetrieve": ["title", "content", "excerpt", "url"]
  }
}

Focus search scope with restrictSearchableAttributes

JSON
{
  "searchParameters": {
    "restrictSearchableAttributes": ["title", "content", "tags"]
  }
}

Remove duplicates with distinct

JSON
{
  "searchParameters": {
    "distinct": true
  }
}

Remove group duplicates

JSON
{
  "searchParameters": {
    "distinct": 2,
    "facetFilters": "language:en"
  }
}
Setting distinct to 2 returns the top two results from each group defined by the distinct attribute.

Custom integration examples

Basic chat implementation

JavaScript
class AskAIChat {
  constructor({ appId, apiKey, indexName, assistantId }) {
    this.appId = appId;
    this.apiKey = apiKey;
    this.indexName = indexName;
    this.assistantId = assistantId;
    this.baseUrl = "https://askai.algolia.com";
  }

  async getToken() {
    const response = await fetch(`${this.baseUrl}/chat/token`, {
      method: "POST",
      headers: {
        "X-Algolia-Assistant-Id": this.assistantId,
      },
    });
    const data = await response.json();
    return data.token;
  }

  async sendMessage(conversationId, messages, searchParameters = {}) {
    const token = await this.getToken();

    const response = await fetch(`${this.baseUrl}/chat`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Algolia-Application-Id": this.appId,
        "X-Algolia-API-Key": this.apiKey,
        "X-Algolia-Index-Name": this.indexName,
        "X-Algolia-Assistant-Id": this.assistantId,
        "X-AI-SDK-Version": "v4",
        Authorization: token,
      },
      body: JSON.stringify({
        id: conversationId,
        messages,
        ...(Object.keys(searchParameters).length > 0 && { searchParameters }),
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }

    // Return a streaming iterator for real-time response handling
    return {
      async *[Symbol.asyncIterator]() {
        const reader = response.body.getReader();
        const decoder = new TextDecoder();

        try {
          while (true) {
            const { done, value } = await reader.read();
            if (done) break;

            // Decode and yield each chunk as it arrives
            const chunk = decoder.decode(value, { stream: true });
            if (chunk.trim()) {
              yield chunk;
            }
          }
        } finally {
          reader.releaseLock();
        }
      },
    };
  }

  async submitFeedback(messageId, thumbs) {
    const token = await this.getToken();

    const response = await fetch(`${this.baseUrl}/chat/feedback`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Algolia-Assistant-Id": this.assistantId,
        Authorization: token,
      },
      body: JSON.stringify({
        appId: this.appId,
        messageId,
        thumbs,
      }),
    });

    return response.json();
  }
}

// Usage with streaming
const chat = new AskAIChat({
  appId: "ALGOLIA_APPLICATION_ID",
  apiKey: "ALGOLIA_API_KEY",
  indexName: "ALGOLIA_INDEX_NAME",
  assistantId: "ALGOLIA_ASSISTANT_ID",
});

// Send message and handle streaming response
const stream = await chat.sendMessage(
  "conversation-1",
  [
    {
      role: "user",
      content: "What is Algolia?",
      id: "msg-1",
    },
  ],
  {
    facetFilters: ["language:en", "type:content"],
    filters: "status:published",
    attributesToRetrieve: ["title", "content", "url"],
    restrictSearchableAttributes: ["title", "content"],
    distinct: true,
  }
); // Add search parameters

// Display response as it streams in real-time
let fullResponse = "";
for await (const chunk of stream) {
  fullResponse += chunk;
  console.log("Received chunk:", chunk);
  // Update your UI immediately with each chunk
  // e.g., appendToMessageUI(chunk);
}
console.log("Complete response:", fullResponse);

With Vercel AI SDK

The Vercel AI SDK handles the request format and streaming automatically, and supports the search parameters and AI SDK version features described above. Integrating the chat through a Next.js proxy has these benefits:
  • Security: API keys stay on the server
  • Token management: Automatic token refresh
  • Error handling: Centralized error management
  • CORS: No cross-origin issues
  • Caching: Can add caching logic if needed
Create a Next.js API route as a proxy:
  • Pages router: pages/api/chat.ts
  • App router: app/api/chat/route.ts
TypeScript
import { StreamingTextResponse } from "ai";

export const runtime = "edge";

async function getToken(assistantId: string, origin: string) {
  const tokenRes = await fetch("https://askai.algolia.com/chat/token", {
    method: "POST",
    headers: {
      "X-Algolia-Assistant-Id": assistantId,
      Origin: origin,
    },
  });

  const tokenData = await tokenRes.json();
  if (!tokenData.success) {
    throw new Error(tokenData.message || "Failed to get token");
  }
  return tokenData.token;
}

export default async function handler(req: Request) {
  try {
    const body = await req.json();
    const assistantId = process.env.ALGOLIA_ASSISTANT_ID!;
    const origin = req.headers.get("origin") || "";

    // Fetch a new token before each chat call
    const token = await getToken(assistantId, origin);

    // Prepare headers for Algolia Ask AI
    const headers = {
      "X-Algolia-Application-Id": process.env.ALGOLIA_APPLICATION_ID!,
      "X-Algolia-API-Key": process.env.ALGOLIA_API_KEY!,
      "X-Algolia-Index-Name": process.env.ALGOLIA_INDEX_NAME!,
      "X-Algolia-Assistant-Id": assistantId,
      "X-AI-SDK-Version": "v4",
      Authorization: token,
      "Content-Type": "application/json",
    };

    // Forward the request to Algolia Ask AI
    const response = await fetch("https://askai.algolia.com/chat", {
      method: "POST",
      headers,
      body: JSON.stringify(body),
    });

    if (!response.ok) {
      throw new Error(`Ask AI API error: ${response.status}`);
    }

    // Stream the response back to the client
    return new StreamingTextResponse(response.body);
  } catch (error) {
    console.error("Chat API error:", error);
    return new Response(JSON.stringify({ error: "Internal server error" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
Environment variables
# .env.local
ALGOLIA_APPLICATION_ID=your_app_id
ALGOLIA_API_KEY=your_api_key
ALGOLIA_INDEX_NAME=your_index_name
ALGOLIA_ASSISTANT_ID=your_assistant_id
Frontend with useChat
React
import { useChat } from "ai/react";

function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({
      api: "/api/chat", // Use your Next.js API route
      body: {
        searchParameters: {
          facetFilters: ["language:en", "type:content"],
          filters: "status:published",
          attributesToRetrieve: ["title", "content", "url"],
          restrictSearchableAttributes: ["title", "content"],
          distinct: true,
        },
      },
    });

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map((m) => (
          <div key={m.id} className={`message ${m.role}`}>
            <strong>{m.role === "user" ? "You" : "AI"}:</strong>
            <div>{m.content}</div>
          </div>
        ))}
        {isLoading && <div className="loading">AI is thinking...</div>}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Ask a question..."
          onChange={handleInputChange}
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          {isLoading ? "Sending..." : "Send"}
        </button>
      </form>
    </div>
  );
}
Vercel AI SDK v5 integration
Simple example using Vercel AI SDK v5 with the new search parameters:
React
import { useChat } from "ai/react";

function AskAIChatV5() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat({
      api: "/api/chat",
      headers: {
        "X-AI-SDK-Version": "v5",
        "X-Algolia-Application-Id": "YOUR_ALGOLIA_APP_ID",
        "X-Algolia-API-Key": "YOUR_ALGOLIA_API_KEY",
        "X-Algolia-Index-Name": "YOUR_INDEX_NAME",
        "X-Algolia-Assistant-Id": "YOUR_ASSISTANT_ID",
      },
      body: {
        searchParameters: {
          facetFilters: ["language:en", "type:content"],
          filters: "status:published",
          attributesToRetrieve: ["title", "content", "url"],
          restrictSearchableAttributes: ["title", "content"],
          distinct: true,
        },
        trigger: "user", // Required for v5 format
      },
    });

  return (
    <div>
      <div className="messages">
        {messages.map((m) => (
          <div key={m.id} className={`message ${m.role}`}>
            <strong>{m.role === "user" ? "You" : "AI"}:</strong>
            <div>
              {/* Handle both v4 and v5 message formats */}
              {m.content ||
                (m.parts && m.parts.map((part: any) => part.text).join(""))}
            </div>
          </div>
        ))}
        {isLoading && <div>AI is thinking...</div>}
      </div>

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Ask a question..."
          onChange={handleInputChange}
          disabled={isLoading}
        />
        <button type="submit" disabled={isLoading}>
          {isLoading ? "Sending..." : "Send"}
        </button>
      </form>
    </div>
  );
}

export default AskAIChatV5;

Direct integration

JavaScript
import { useChat } from "ai/react";

function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "https://askai.algolia.com/chat",
    headers: {
      "X-Algolia-Application-Id": "YOUR_APP_ID",
      "X-Algolia-API-Key": "YOUR_API_KEY",
      "X-Algolia-Index-Name": "YOUR_INDEX_NAME",
      "X-Algolia-Assistant-Id": "YOUR_ASSISTANT_ID",
      "X-AI-SDK-Version": "v4",
    },
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role === "user" ? "User: " : "AI: "}
          {m.content}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}

Error handling

All error responses follow this format:
JSON
{
  "success": false,
  "message": "Error description"
}
Common error scenarios:
  • Invalid assistant ID: Configuration doesn’t exist
  • Expired token: Request a new HMAC token
  • Rate limiting: Too many requests
  • Invalid index: Index name doesn’t exist or isn’t accessible
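Because all error bodies share the { success, message } shape, a client can branch on the message to decide what to do next, for example retrying with a fresh token when the HMAC token has expired. The substring matching below is an assumption for illustration; check it against the actual error messages your integration receives:

```javascript
// Sketch: classify an Ask AI error body ({ success: false, message: "…" }).
// The matched substrings are assumptions, not documented message values.
function classifyAskAIError(body) {
  if (body.success) return "ok";
  const msg = (body.message || "").toLowerCase();
  if (msg.includes("token")) return "expired_token"; // request a new HMAC token, retry once
  if (msg.includes("rate")) return "rate_limited"; // back off before retrying
  if (msg.includes("assistant")) return "invalid_assistant";
  if (msg.includes("index")) return "invalid_index";
  return "unknown";
}
```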

Best practices

  • Token management. Always request a fresh HMAC token before chat requests.
  • Error handling. Implement retry logic for network failures.
  • Streaming. Handle server-sent events properly for real-time responses.
  • Feedback. Implement thumbs up/down for continuous improvement.
  • CORS. Ensure your domain is allowed in your Ask AI configuration.