POST /v1/check · mode reference

mode:all_live

The bundle. One call, six live AI search surfaces fanned out in parallel. Returns per-surface results in a single envelope so you can compare brand presence across Perplexity, ChatGPT, Gemini, Google AI Overview, Google AI Mode, and Bing Copilot in one shot.

Why use it

  • Discount. Component price would be 25 + 10 + 10 + 5 + 10 + 5 = 65¢; the bundle is 50¢ — 23% cheaper than calling each separately.
  • No latency stacking. The six calls run in parallel via Promise.allSettled, so p95 latency tracks max(per-mode p95) ≈ 30s rather than the sum (~80s).
  • Single billing event. One charge, one wallet deduction, one usage_event row. Easier to reason about than 6 separate calls.
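
The fan-out above can be sketched in a few lines. Everything here is illustrative — the surface names come from this page, but partition and fanOut are stand-ins for internal plumbing, not part of the public API:

```typescript
// Hypothetical sketch of the server-side fan-out; not the actual implementation.
type SurfaceResult = { mentioned: boolean; rank: number | null };

const SURFACES = [
  "perplexity", "chatgpt", "gemini", "ai_overview", "ai_mode", "bing_copilot",
] as const;

// Split the settled array into the providers / errors shape of the envelope.
function partition(
  surfaces: readonly string[],
  settled: PromiseSettledResult<SurfaceResult>[],
): { providers: Record<string, SurfaceResult>; errors: Record<string, { error: string }> } {
  const providers: Record<string, SurfaceResult> = {};
  const errors: Record<string, { error: string }> = {};
  settled.forEach((r, i) => {
    if (r.status === "fulfilled") providers[surfaces[i]] = r.value;
    else errors[surfaces[i]] = { error: String(r.reason) };
  });
  return { providers, errors };
}

// All six calls start at once, so wall-clock time tracks the slowest surface,
// not the sum — and allSettled never short-circuits on a rejection.
async function fanOut(call: (s: string) => Promise<SurfaceResult>) {
  const settled = await Promise.allSettled(SURFACES.map(call));
  return partition(SURFACES, settled);
}
```

Because Promise.allSettled never rejects, one slow or failing surface can't sink the envelope; partition is what produces the providers/errors split shown in the response examples.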

Pricing & refund

$0.50 per call. Partial-failure refund applies — if k of 6 surfaces succeed, you pay (k / 6) × 50¢. Examples:

  • 6 of 6 succeed → 50¢
  • 4 of 6 succeed → (4/6) × 50¢ = 33¢
  • 2 of 6 succeed → (2/6) × 50¢ = 16¢ (fractional cents are floored)
  • 0 of 6 succeed → 0¢ (full refund)
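
The numbers in the examples above are consistent with simple integer flooring. A hypothetical helper (not part of any official SDK) that reproduces them:

```typescript
// Prorated bundle cost in cents, assuming fractional cents are floored
// (this is an inference from 2/6 → 16¢, not a documented rounding rule).
function proratedCostCents(succeeded: number, total = 6, bundleCents = 50): number {
  return Math.floor((succeeded * bundleCents) / total);
}
```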

SLA

Target: 95% of calls return ≥5 of 6 surfaces — the bundle is meant to give you broad coverage, not 100% per-surface reliability. Any surface that fails appears under errors with a typed code; the bundle still returns 200.
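
If you want to track that target on your side, counting keys in providers is enough. The envelope shape is taken from the examples on this page; the helper names are illustrative:

```typescript
// Did this bundle call meet the ≥5-of-6 coverage target? (Illustrative helper;
// `providers` holds only the surfaces that returned — failures live in `errors`.)
function withinCoverageTarget(res: { providers: Record<string, unknown> }): boolean {
  return Object.keys(res.providers).length >= 5;
}
```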

Request

bash
curl https://api.mentionsapi.com/v1/check \
  -H "Authorization: Bearer lvk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "all_live",
    "query": "best CRM for small business",
    "brand": "HubSpot"
  }'

Response — all 6 succeeded

json
{
  "id": "chk_a8d2c...",
  "mode": "all_live",
  "providers": {
    "perplexity":   { "mentioned": true, "rank": 6, "context": "...", "citations": [...10 items], "fan_out": [...5 items] },
    "chatgpt":      { "mentioned": true, "rank": 4, "context": "...", "citations": [...4 items], "fan_out": [...3 items], "brand_entities": [...] },
    "gemini":       { "mentioned": true, "rank": 5, "context": "...", "citations": [...6 items], "fan_out": [...4 items], "brand_entities": [...] },
    "ai_overview":  { "mentioned": true, "rank": 3, "context": "...", "citations": [...8 items], "fan_out": [], "brand_entities": [] },
    "ai_mode":      { "mentioned": true, "rank": 2, "context": "...", "citations": [...5 items], "fan_out": [], "brand_entities": [] },
    "bing_copilot": { "mentioned": true, "rank": 4, "context": "...", "citations": [...7 items], "fan_out": [], "brand_entities": [] }
  },
  "duration_ms": 28041,
  "cost_cents": 50,
  "balance_after_cents": 9948,
  "cache_hit": false
}
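
Once the envelope is in hand, comparing presence across surfaces is a few lines. A sketch — the ProviderResult shape mirrors the response above, with fields other than mentioned and rank omitted:

```typescript
// Sort surfaces by rank (lower = more prominent), skipping surfaces where the
// brand wasn't mentioned. Purely illustrative post-processing.
type ProviderResult = { mentioned: boolean; rank: number | null };

function rankBoard(providers: Record<string, ProviderResult>): [string, number][] {
  return Object.entries(providers)
    .filter(([, r]) => r.mentioned && r.rank !== null)
    .map(([name, r]) => [name, r.rank as number] as [string, number])
    .sort((a, b) => a[1] - b[1]);
}
```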

Response — partial success

Two upstream surfaces failed. The bundle still returns 200; failures appear under errors and the price is prorated to (4 / 6) × 50¢ = 33¢.

json
{
  "mode": "all_live",
  "providers": {
    "perplexity":  { "mentioned": true, "rank": 6, ... },
    "chatgpt":     { "mentioned": true, "rank": 4, ... },
    "ai_overview": { "mentioned": false, "rank": null, ... },
    "bing_copilot":{ "mentioned": true, "rank": 4, ... }
  },
  "errors": {
    "gemini":  { "error": "upstream_rate_limited" },
    "ai_mode": { "error": "upstream_timeout" }
  },
  "cost_cents": 33,
  "balance_after_cents": 9967
}
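
A hedged sketch of client-side partial-failure handling. The retryable set below is inferred from the two codes shown in this example — it is not a documented list of error codes:

```typescript
// Which failed surfaces look worth retrying (e.g. via their dedicated
// per-surface modes)? Assumption: rate-limit and timeout codes are transient.
const RETRYABLE = new Set(["upstream_rate_limited", "upstream_timeout"]);

function surfacesToRetry(errors: Record<string, { error: string }>): string[] {
  return Object.entries(errors)
    .filter(([, e]) => RETRYABLE.has(e.error))
    .map(([name]) => name);
}
```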

Notes

  • Idempotency: retry the bundle with the same Idempotency-Key within 24h to replay the original response without re-charging.
  • Watches: use mode:all_live with /v1/watch to monitor brand presence across the full AI search universe on a schedule.
  • Per-surface dive: if you only need one surface, call its dedicated mode (cheaper). The bundle is for when you want the full picture in one call.
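
Putting the idempotency note into practice, a minimal header builder for a safe retry. The Idempotency-Key header name comes from the note above; the function itself is illustrative:

```typescript
// Reuse the same Idempotency-Key within 24h so a retried bundle replays the
// original response instead of re-charging the wallet. Illustrative only.
function checkHeaders(apiKey: string, idempotencyKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
    "Idempotency-Key": idempotencyKey,
  };
}
```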