Documentation
Everything you need to ship.
From your first curl request to monitoring brands across 8 AI search surfaces at scale. Every endpoint, every payload, every edge case.
Start here
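A minimal quick-start sketch for POST /v1/check. The endpoint path and the quick mode come from the endpoint list below; the base URL, auth header, and payload field names (brand, query, mode) are assumptions for illustration, so check the /v1/check reference for the real request shape. The helper builds the request without sending it:

```python
import json
import urllib.request

API_BASE = "https://api.example.com"  # hypothetical base URL -- replace with the real one
API_KEY = "sk_live_..."               # your API key

def build_check_request(brand: str, query: str, mode: str = "quick") -> urllib.request.Request:
    """Build (but do not send) a POST /v1/check request.

    Field names "brand", "query", and "mode" are assumed for illustration.
    """
    body = json.dumps({"brand": brand, "query": query, "mode": mode}).encode()
    return urllib.request.Request(
        url=f"{API_BASE}/v1/check",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_check_request("Acme", "best crm for startups")
print(req.get_full_url())  # https://api.example.com/v1/check
print(req.get_method())    # POST
```

Pass the built request to `urllib.request.urlopen(req)` (or swap in your HTTP client of choice) to actually fire the call.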
Endpoints
POST /v1/check — Brand mention check across 8 modes — quick (4 LLM APIs, $0.02), perplexity_live / chatgpt_live / gemini_live ($0.10–$0.25), ai_overview / ai_mode / bing_copilot ($0.05–$0.10), or all_live (composed bundle, $0.50)
POST /v1/discover — Generate 10–50 query candidates for any brand ($0.50)
POST /v1/compare — Head-to-head delta between two brands or queries ($1.50)
POST /v1/watch — Schedule a brand-rank watch with webhook delivery (free; runs billed at mode rate)
GET /v1/health — Public reliability snapshot — 7-day rolling success_rate + p95 (free, no auth)
GET /v1/usage — Aggregate your usage events for any window (free)
POST /v1/ask — Legacy multi-LLM fan-out, kept for back-compat; new integrations should use /v1/check
POST /v1/extract_brands — Run brand extraction over text you already have

Concepts
- Providers & models — Which LLMs, default models, how to pin a specific version.
- Brand extraction — Rank, sentiment, context, and matching rules.
- Citation canonicalization — How we resolve redirects and dedupe URLs across providers.
- Caching — Three tiers, shared vs private scope, how cache_tier is reported.
- Rate limits — Per-plan RPS + burst, 429 handling, retry patterns.
- Billing — Prepaid cents, per-call pricing matrix, failed-call semantics.
- Webhooks — ask.completed payload shape, HMAC signing, retries.
- Security & data handling — Where prompts live, retention, compliance posture.
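The Webhooks concept mentions HMAC signing of ask.completed deliveries. A sketch of receiver-side verification, assuming the delivery carries an HMAC-SHA256 hex digest of the raw body in a header — the header name, algorithm, and secret format here are assumptions, so confirm the exact scheme on the Webhooks concept page before relying on this:

```python
import hashlib
import hmac

def verify_signature(secret: str, raw_body: bytes, received_sig: str) -> bool:
    """Return True if received_sig matches the HMAC-SHA256 hex digest of raw_body.

    Assumes a hex-encoded SHA-256 HMAC over the raw (undecoded) request body.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, received_sig)

secret = "whsec_example"  # hypothetical webhook secret
body = b'{"event":"ask.completed","id":"evt_123"}'
good_sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

print(verify_signature(secret, body, good_sig))   # True
print(verify_signature(secret, body, "0" * 64))   # False
```

Always verify against the raw request bytes, not a re-serialized JSON object — re-serialization can reorder keys or change whitespace and break the digest.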
Need a hand?
Email us at [email protected]. We typically reply within a few hours during the work week.