This is a beta feature according to Algolia’s Terms of Service (“Beta Services”).
Prompt engineering is iterative. Start with a first version, test it on examples (using Response Playground in the Algolia dashboard), identify any unusual or unexpected outcomes, and refine the prompt. Start by defining the task you want the model to perform, for example:
  • Summarize this user review
  • Describe who this product is for
  • Translate this product’s description into Spanish
You can then develop this into a full prompt. Be as precise as possible: provide detailed, explicit instructions for the task to get the best results. When possible, include examples of what you expect, or even of what a good or bad result looks like. This technique, known as few-shot prompting, improves most AI tasks.
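A few-shot prompt embeds labeled examples before the new input so the model can imitate them. The sketch below is illustrative only: the function name, the sample reviews, and the label set are not part of any Algolia API.

```python
# Few-shot prompting sketch: worked examples precede the new input.
# All names and sample data here are hypothetical.

EXAMPLES = [
    ("Great jacket, kept me warm all winter.", "positive"),
    ("Zipper broke after two days.", "negative"),
]

def build_few_shot_prompt(review: str) -> str:
    """Build a sentiment prompt that shows the model two worked examples."""
    lines = [
        "Classify the sentiment of each user review as 'positive', "
        "'negative', or 'neutral'.",
        "",
    ]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # End with the new review and an open slot for the model to fill.
    lines.append(f"Review: {review}")
    lines.append("Sentiment:")
    return "\n".join(lines)
```

You would send the returned string as the prompt; the trailing `Sentiment:` nudges the model to answer with a label only.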

Keep prompts simple

Simple prompts are logical and maintainable: the model can focus on the task without confusion or misinterpretation. They're also more generic, which makes them easier for your team to maintain as your data and your goals evolve. If your prompt becomes too complex:
  • Use clear language: avoid jargon unless it’s necessary for the task at hand. Use straightforward, everyday language that the model can understand.
  • Break complicated tasks into sub-tasks. For example, instead of “Summarize the political strategy in this presidential speech”, you could create the following sub-tasks:
    • Create a first prompt: “Extract a bullet point list of the key policies in this speech.”
    • Follow up with: “Summarize the political strategy based on these bullet points.”
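Chaining the two sub-tasks might look like the sketch below. `call_llm` is a placeholder for whatever client you actually use, not a real API; here it just echoes part of the prompt so the flow can run standalone.

```python
# Sketch of splitting one complex task into two chained prompts.
# call_llm is a hypothetical stand-in for your LLM client.

def call_llm(prompt: str) -> str:
    # Placeholder: in practice, send the prompt to your LLM provider
    # and return its completion.
    return f"<model output for: {prompt[:40]}...>"

def summarize_strategy(speech: str) -> str:
    # Sub-task 1: extract the raw material.
    bullets = call_llm(
        "Extract a bullet point list of the key policies in this speech:\n\n"
        + speech
    )
    # Sub-task 2: reason over the intermediate result, not the full speech.
    return call_llm(
        "Summarize the political strategy based on these bullet points:\n\n"
        + bullets
    )
```

Each sub-prompt stays short and single-purpose, so each can be tested and tuned independently.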

Keep prompts short

Short prompts create better AI experiences by being clear and focused. They direct the LLM’s processing efficiently, avoiding unnecessary details that could complicate understanding or reduce accuracy. This results in faster, more accurate responses tailored to the task. If your prompt is getting too long:
  • Focus on essential elements: remove unnecessary details as they might reduce the quality of answers.
  • Use examples or templates: if similar prompts are available, use them as guides to structure your prompt effectively. This ensures consistency and accuracy across different scenarios.
  • Test with shorter versions: experiment with condensing parts of the prompt during testing before committing to a more detailed version. This helps identify unnecessary parts of the prompt.
  • Consider AI rephrasing: asking an LLM to rephrase a prompt can sometimes make it more generic and compact.

Make prompts specific

Specific prompts do one thing and do it well. Specificity ensures clarity, reduces ambiguity, and helps LLMs perform more accurately and efficiently, especially for tasks like those in RAG APIs. If your prompt is getting too ambiguous:
  • Narrow the scope: define what you want the model to focus on. Instead of asking a broad question like “Tell me about the company.”, specify which aspect of the company you’re interested in. For example, “Provide a summary of the company’s financial performance.”
  • Provide context or constraints: offer context to guide the model’s response. For example, instead of asking “What are the benefits of this exercise plan?”, you could say, “Explain the benefits of this exercise plan for stress management in people over 50.”
  • Use explicit instructions: directly tell the model what you need. For example, “Summarize the following article in three bullet points.”, or “Give me a list of five specific benefits of meditation for stress reduction.”
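The three points above (scope, context, explicit instruction) can be enforced with a small prompt template, so a prompt can't be assembled without them. This template and its field names are illustrative, not an Algolia convention:

```python
# A reusable template that requires scope, context, and an explicit
# instruction. All field names here are hypothetical.

TEMPLATE = (
    "{instruction}\n"
    "Focus only on: {scope}\n"
    "Context: {context}"
)

def specific_prompt(instruction: str, scope: str, context: str) -> str:
    """Assemble a prompt that always names its scope and context."""
    return TEMPLATE.format(
        instruction=instruction, scope=scope, context=context
    )
```

A broad request like “Tell me about the company” simply doesn't fit this shape: the caller must decide what the model should focus on before the prompt exists.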

Be explicit about your expectations

  • Do: make your expectations explicit. Write: “Describe what kind of audience this product is for. Is it appropriate for new visitors, for power users, or for longstanding members?”
  • Don’t: keep the expectations implicit. Don’t write: “Describe who this product is for.”

Be specific about the expected output

  • Do
    • Explain what output you accept. Write: “Analyze the sentiment in these user reviews. Return only a single label: either ‘positive’, ‘negative’, or ‘neutral’.”
    • Describe what structure you need. Write: “Generate five questions about this product. Return a list of questions in XML. For example, ‘What sizes are available? Is this jacket suitable for cold weather?’”
  • Don’t
    • Keep the options implicit. Don’t write: “Analyze the sentiment in these user reviews.”
    • Keep the structure you expect implicit. Don’t write “Generate five questions about this product.”
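Restricting the output to a fixed label set also pays off on the application side: the response becomes cheap to validate. The helper below is an illustrative sketch, reusing the sentiment labels from the example above:

```python
# Validating a constrained model response against the allowed label set.
# The helper and label set are illustrative, not part of any API.

ALLOWED = {"positive", "negative", "neutral"}

def parse_sentiment(raw: str) -> str:
    """Normalize a model response and reject anything off-label."""
    label = raw.strip().strip("'\"").lower()
    if label not in ALLOWED:
        raise ValueError(f"Unexpected label: {raw!r}")
    return label
```

If the prompt had left the options implicit, the model might answer in free-form prose and this kind of check would be impossible.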

Provide fallback options

If you include a fallback option in your prompt, the LLM tends to follow the fallback instruction rather than generate an inaccurate response.
  • Do
    • Offer an alternative (which could be to contact a human). Write: “Answer the user’s question as best as you can from the product data. If the context doesn’t let you answer with certainty, answer ‘I’m not sure I have the answer to this: contact support@acme.com for help’.”
  • Don’t
    • Request a reply at any cost (unless this is what your UX needs). Don’t write: “Answer the user’s question as best as you can from the product data or your internal knowledge.”
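A fixed fallback string has a second benefit: the application can detect it and route the user to support. This sketch wraps the “Do” prompt above; the function names are hypothetical, and the support address is the placeholder from the example:

```python
# Sketch: a prompt with an explicit fallback, plus an application-side
# check for fallback answers. Names are illustrative.

FALLBACK = (
    "I'm not sure I have the answer to this: "
    "contact support@acme.com for help"
)

def build_qa_prompt(question: str, product_data: str) -> str:
    """Build a product Q&A prompt that tells the model how to give up."""
    return (
        "Answer the user's question as best as you can from the product "
        "data. If the context doesn't let you answer with certainty, "
        f"answer '{FALLBACK}'.\n\n"
        f"Product data:\n{product_data}\n\n"
        f"Question: {question}"
    )

def is_fallback(answer: str) -> bool:
    """True if the model chose the fallback instead of answering."""
    return FALLBACK in answer
```

Because the fallback text is a constant shared by the prompt and the check, a UI could hide fallback answers or surface a “contact support” action instead.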

Last modified on February 10, 2026