# Providers & models

MentionsAPI supports four major answer engines. The `providers` field on every endpoint accepts any combination of the enum values `"openai"`, `"anthropic"`, `"gemini"`, and `"perplexity"`.
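For example, a minimal `/v1/ask` request fanning out to all four engines could look like this (a sketch; the prompt value is illustrative):

```json
{
  "providers": ["openai", "anthropic", "gemini", "perplexity"],
  "prompt": "Best deploy targets for Next.js?"
}
```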

## Default models

If you don't pin a model per provider, requests route to the defaults below. Defaults are chosen to balance latency and cost, and are upgraded as vendors release new mid-tier models.

| Provider ID | Display name | Default model | Web search |
| --- | --- | --- | --- |
| `openai` | ChatGPT | `gpt-4o-2024-11-20` | Yes |
| `anthropic` | Claude | `claude-sonnet-4-5-20250929` | Yes |
| `gemini` | Gemini | `gemini-2.5-flash` | Yes |
| `perplexity` | Perplexity | `sonar-pro` | Yes (built-in) |

## Pinning a specific model

Pass the `model` object on `/v1/ask` to pin a model per provider. Each value is validated against that provider's text-only allowlist; an unknown or blocked model name produces a per-provider `provider_auth_failed` error rather than failing the whole call.

```json
{
  "providers": ["openai", "anthropic"],
  "prompt": "Best deploy targets for Next.js?",
  "model": {
    "openai":    "gpt-4o-2024-11-20",
    "anthropic": "claude-sonnet-4-5-20250929"
  }
}
```

## Partial-success semantics

If a provider is degraded, or you pin a model that's blocked, the fan-out continues. The failing provider appears as `{ provider, error: { code, message } }` inside the top-level `providers[]` array, the HTTP status stays `200`, and you are only charged for the providers that succeeded.
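Client-side, this means splitting `providers[]` on the presence of `error`. A minimal sketch in Python, against a hypothetical response payload — only the `{ provider, error: { code, message } }` failure shape is documented here; the `answer` field on successful entries is an assumption:

```python
# Hypothetical /v1/ask response body. The failure shape is documented on this
# page; the "answer" field on successful entries is an assumed example.
response = {
    "providers": [
        {"provider": "openai", "answer": "Vercel, Netlify, or AWS Amplify."},
        {"provider": "anthropic",
         "error": {"code": "provider_auth_failed",
                   "message": "Model not on the text-only allowlist."}},
    ]
}

# Entries carrying an "error" key failed; everything else succeeded.
succeeded = [p for p in response["providers"] if "error" not in p]
failed = [p for p in response["providers"] if "error" in p]

for p in failed:
    print(f"{p['provider']} failed: {p['error']['code']}")
```

Because the call returns `200` even on partial failure, checking `error` per entry — rather than the HTTP status — is the only reliable way to detect a degraded provider.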

## Web search

Web grounding is on by default (`web_search: true`). Turn it off to reduce latency and cost, at the price of no citations. Perplexity is always web-grounded (its `sonar-pro` model has built-in search), so the flag is a no-op for that provider.
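For example, to trade citations for speed on the non-Perplexity engines (a sketch; the prompt value is illustrative):

```json
{
  "providers": ["openai", "gemini"],
  "prompt": "Best deploy targets for Next.js?",
  "web_search": false
}
```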