POST /v1/check · mode reference
mode:quick
Queries the official APIs of OpenAI (ChatGPT), Anthropic (Claude), Google (Gemini), and Perplexity (Sonar) in parallel. Returns, per provider, whether the brand was mentioned, its rank, and any citations the API surfaces. The headline mode.
When to use it
- You want the cheapest, fastest LLM-coverage check across 4 platforms in one call.
- You don't need fan-out queries or visual UI artifacts — the API answer is enough.
- You're checking on demand rather than polling on a schedule (for scheduled polling, use /v1/watch instead).
Pricing
$0.02 per call, regardless of how many of the 4 providers you fan out to. A partial-failure refund applies: if only 3 of 4 providers succeed, you pay (3/4) × 2¢ = 1.5¢, rounded down to 1¢.
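The refund arithmetic can be sketched as a small helper. `quick_mode_cost_cents` is a hypothetical name for illustration, not part of any official SDK; it assumes the billing rule stated above (pro-rate the flat 2¢ price by the fraction of providers that succeeded, then round down to whole cents).

```python
from math import floor

BASE_COST_CENTS = 2  # flat price per mode:quick call


def quick_mode_cost_cents(succeeded: int, attempted: int) -> int:
    """Pro-rated cost when some providers fail, rounded down to whole cents."""
    if attempted == 0:
        return 0
    return floor(BASE_COST_CENTS * succeeded / attempted)


# 3 of 4 providers succeed: (3/4) * 2¢ = 1.5¢, billed as 1¢
print(quick_mode_cost_cents(3, 4))  # → 1
```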
SLA
Target 99% success rate, <5s p95 latency over the trailing 7 days. Live numbers at /v1/health.
Request
```bash
curl https://api.mentionsapi.com/v1/check \
  -H "Authorization: Bearer lvk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "mode": "quick",
    "query": "best CRM for small business",
    "brand": "HubSpot",
    "providers": ["chatgpt", "claude", "gemini", "perplexity"]
  }'
```
Response
```json
{
  "id": "chk_a8d2c...",
  "mode": "quick",
  "query": "best CRM for small business",
  "brand": "HubSpot",
  "providers": {
    "chatgpt": { "mentioned": true, "rank": 6, "context": "...", "citations": [], "fan_out": [] },
    "claude": { "mentioned": true, "rank": 4, "context": "...", "citations": [...], "fan_out": [] },
    "gemini": { "mentioned": true, "rank": 7, "context": "...", "citations": [], "fan_out": [] },
    "perplexity": { "mentioned": true, "rank": 6, "context": "...", "citations": [...], "fan_out": [] }
  },
  "duration_ms": 4218,
  "cost_cents": 2,
  "balance_after_cents": 9998,
  "cache_hit": false
}
```
Notes
- Citations: populated only when the LLM API returns them. Claude and Perplexity Sonar return citations more often than ChatGPT/Gemini.
- Fan-out: always empty in mode:quick — the official APIs don't expose the platform's fan-out queries. Use mode:chatgpt_live or mode:gemini_live for those.
- Web search: we enable web search on every provider that supports it (OpenAI's tool-call, Perplexity Sonar, etc.) so the answers reflect current data.
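Given the response shape above, a client will often want a quick per-provider summary. Below is a minimal sketch; `summarize_check` is a hypothetical convenience function (not part of any official client) that assumes the `providers`, `mentioned`, and `rank` fields shown in the example response.

```python
def summarize_check(resp: dict) -> dict:
    """Collapse a quick-mode /v1/check response into {provider: rank or None}.

    A rank of None means the brand was not mentioned by that provider.
    """
    return {
        name: (p.get("rank") if p.get("mentioned") else None)
        for name, p in resp.get("providers", {}).items()
    }


resp = {
    "providers": {
        "chatgpt": {"mentioned": True, "rank": 6, "citations": []},
        "claude": {"mentioned": False, "rank": None, "citations": []},
    }
}
print(summarize_check(resp))  # → {'chatgpt': 6, 'claude': None}
```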