For developers building AI-visibility tools

AI brand monitoring API for the answer-engine era

An AI brand monitoring API is a programmatic interface for tracking how language-model answer engines describe a brand over time. MentionsAPI's `/v1/check` plus `/v1/monitors` endpoints let you schedule prompt runs across ChatGPT, Claude, Gemini, and Perplexity, extract every mention with position and sentiment, and ship the deltas to your own webhook, without standing up four separate SDKs and a parser per provider.

Your customers are asking ChatGPT before they Google. That makes the answer to 'how does ChatGPT describe our company?' a marketing KPI, not a curiosity. The problem is that nobody is shipping a clean monitoring API for it. The tools that exist are SEO platforms with an 'AI tab' bolted on.

MentionsAPI is built for this from the ground up. Monitor a list of prompts on a schedule across all four major answer engines. Track how often your brand appears, in what position, with what sentiment, and which sources the AI cited.

If you're building a brand-monitoring product, an SEO agency dashboard, or an internal team's AI-mention tracker, this is the data layer.

Top up from $10 · Pay per call · Credits never expire

What 'brand monitoring' actually means here

Three things, in order of value. First: presence. Does the model name your brand at all in answer to a target prompt? Second: position. First mention or last? Third: sentiment. Does the surrounding sentence frame your brand as a leader, an option, or a warning?

Per-prompt tracking is the unit. You define a list of prompts that matter ('best CRM for startups,' 'alternatives to HubSpot'), schedule them, and watch the per-provider answer evolve. We store the raw response so you can re-run extraction with new brand names without re-querying the LLM.

What we deliberately don't track: vanity metrics. There's no 'AI score' that collapses presence, position, sentiment, and citation rate into a single number. Those four signals trade off in real prompts and the right weighting depends on your customers. We return the atoms so you compose the score, not the other way around.
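
If a customer does want one number, composing it is a few lines on your side. A minimal sketch, assuming `position` is 1-indexed within the answer and sentiment arrives as a positive/neutral/negative label (check the response docs for the exact enum; the weights are illustrative, not anything we prescribe):

```python
SENTIMENT_WEIGHT = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}  # assumed labels

def visibility_score(mention: dict | None, total_mentions: int, cited: bool) -> float:
    """Collapse presence, position, sentiment, and citation into one number."""
    if mention is None:                 # presence: brand never named in the answer
        return 0.0
    # position: first mention scores 1.0, later mentions decay toward 0
    prominence = 1.0 - (mention["position"] - 1) / max(total_mentions, 1)
    sentiment = SENTIMENT_WEIGHT.get(mention.get("sentiment"), 0.5)
    citation = 1.0 if cited else 0.0
    return 0.4 + 0.3 * prominence + 0.2 * sentiment + 0.1 * citation
```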

Competitor share-of-voice in one query

Pass your brand and your three biggest competitors in `track_brands`. Each provider's answer comes back tagged for all of them, so you can render a head-to-head comparison without writing a second pipeline.

Combined with our 24-hour cache, you can poll a hundred prompts hourly across four providers without burning through your wallet. Most of those hits will be cache returns at $0.02 apiece.

Share-of-voice math is just arithmetic on the response: count `mentions[]` per brand across the prompt set, divide by the total mention pool, weight by prominence if you care about that, and you have the chart. We've seen agencies build a 'mention rate' (how often we appear), a 'top-of-answer rate' (first-mentioned share), and a 'sentiment-weighted SoV' from the same pull. Three KPIs, one API call.
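
In code, that arithmetic is a short fold over the response. A sketch, assuming the response shape from the Python example further down (per-provider `results[]`, each carrying a `mentions[]` array with `brand`, `position`, and a sentiment label, which is an assumption worth checking):

```python
from collections import Counter, defaultdict

SENTIMENT_WEIGHT = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}  # assumed labels

def sov_kpis(runs: list[dict], brands: list[str]) -> dict:
    """Three share-of-voice KPIs from one batch of results.

    `runs` holds one entry per prompt, each carrying the per-provider
    results[] array the API returns.
    """
    answers = 0                      # prompt x provider pairs seen
    appeared = Counter()             # answers in which the brand is named at all
    first = Counter()                # answers in which the brand is named first
    pool = Counter()                 # raw mention counts across all brands
    weighted = defaultdict(float)    # sentiment-weighted mention mass

    for run in runs:
        for result in run["results"]:
            answers += 1
            mentions = sorted(result["mentions"], key=lambda m: m["position"])
            for m in mentions:
                pool[m["brand"]] += 1
                weighted[m["brand"]] += SENTIMENT_WEIGHT.get(m.get("sentiment"), 0.5)
            for brand in {m["brand"] for m in mentions}:
                appeared[brand] += 1
            if mentions:
                first[mentions[0]["brand"]] += 1

    answers = answers or 1
    total = sum(pool.values()) or 1
    return {
        b: {
            "mention_rate": appeared[b] / answers,          # how often we appear
            "top_of_answer_rate": first[b] / answers,       # first-mentioned share
            "sentiment_weighted_sov": weighted[b] / total,  # weighted share of the pool
        }
        for b in brands
    }
```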

From data to dashboard in an afternoon

We don't ship a UI. That's your product. We ship the data layer that makes your UI possible. Webhooks fire on each completed monitor run, so you can pipe results into your own database, charting layer, or alerting system. A typical brand-monitor MVP takes a weekend, not a quarter.

The recommended pipeline is straightforward: `POST /v1/monitors` with your prompts and a `webhook_url`, receive the HMAC-signed POST when each run completes, store the normalized response in your own DB, and join it against your prompt-intent taxonomy on read. Reports get faster as your historical store grows because re-extraction with new brands runs against your archive. No re-querying.
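
For reference, monitor registration is a single request. The field names below follow the prose above; `schedule` as the name of the cron field and the `id` in the response are our assumptions, so check the endpoint docs:

```python
import requests

res = requests.post(
    "https://api.mentionsapi.com/v1/monitors",
    headers={"Authorization": "Bearer lvk_live_..."},
    json={
        "prompts": ["best CRM for startups", "alternatives to HubSpot"],
        "schedule": "0 7 * * *",   # daily at 07:00 UTC; field name is an assumption
        "providers": ["openai", "anthropic", "gemini", "perplexity"],
        "track_brands": ["HubSpot", "Attio"],
        "webhook_url": "https://yourapp.example/webhooks/mentions",
    },
)
res.raise_for_status()
monitor_id = res.json()["id"]      # response field assumed
```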

How scheduled monitoring and webhook delivery work under the hood

When you `POST /v1/monitors` with a `prompts[]` array, a cron expression, a `track_brands` list, and a `webhook_url`, we register the schedule on a worker queue. At each tick, the queue dispatches one `/v1/check` per prompt × provider set in parallel, normalizes results, computes citation and mention deltas against the last completed run, and POSTs the full normalized payload (plus a `delta` object) to your webhook with an `X-Signature` HMAC-SHA256 header.

Webhook delivery has three redelivery attempts with exponential backoff (30 s, 5 m, 30 m) on any non-2xx response. After the third failure, the run is marked `delivery_failed` and you can pull it from `GET /v1/monitors/:id/runs` later. The signature key rotates per-monitor, so a leaked URL doesn't compromise other monitors on your account.
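
Verifying that header on your side is a few lines of standard library. A sketch that assumes the signature is a hex-encoded HMAC-SHA256 digest of the raw request body, which is the common convention; confirm the encoding against the webhook docs:

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature: str, secret: str) -> bool:
    """Check the X-Signature header against the raw POST body.

    `secret` is the per-monitor signing key; the hex encoding is assumed.
    """
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time compare
```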

Latency profile per scheduled run, sampled across April 2026: a 10-prompt, 4-provider monitor with no `web_search` lands at p50 ~14 s, p95 ~28 s including dispatch overhead, parsing, dedup, and webhook POST. Add `web_search: true` and that climbs to p50 ~45 s, p95 ~85 s. Perplexity's grounded retrieval dominates. For real-time UX, run smaller prompt sets more often; for daily reports, run the full set once a day and live with the higher tail.

Edge cases worth knowing: if a single provider 5xxs across an entire monitor run, that provider's results are recorded as per-prompt `errors[]` entries rather than failing the whole run. The partial-success surface is preserved, and you only pay for the providers that returned data. If your webhook endpoint returns 410 (Gone), we suspend the monitor and email the account owner; that's a deliberate failure mode for old test webhooks pointing at deleted ngrok tunnels.
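
A webhook consumer that respects both behaviors, partial success plus the `delta` object described above and in the FAQ, looks roughly like this; the exact key nesting is our assumption, so verify it against a live payload:

```python
def notify(msg: str) -> None:
    print(f"[ALERT] {msg}")  # stand-in for your Slack or pager integration

def handle_run(payload: dict) -> None:
    """Consume one monitor-run webhook: log partial failures, alert on diffs."""
    for result in payload.get("results", []):
        # A provider-wide 5xx surfaces as per-prompt errors[], not a failed run
        for err in result.get("errors", []):
            print(f"[WARN] {result['provider']} errored: {err}")

    delta = payload.get("delta", {})
    # 'Tell me when ChatGPT stops citing our docs' is a diff on citations.removed[]
    for citation in delta.get("citations", {}).get("removed", []):
        notify(f"Citation dropped: {citation}")
    for mention in delta.get("mentions", {}).get("added", []):
        notify(f"New mention: {mention}")
```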

When this API fits your monitoring stack (and when it doesn't)

Use it when you're shipping a brand-monitoring product to customers, running internal AI-mention reports for your marketing team, or building an SEO agency tool that needs to refresh dozens of client portfolios on schedule. The combination of scheduled monitors, partial-success delivery, archived raw responses, and webhook-driven UX is purpose-built for these shapes. Most teams ship to production in a weekend.

Use it especially when you need cross-provider data. Single-LLM monitoring is easy to DIY for a quarter; the day your boss asks 'what does Claude say?' the rewrite cost shows up. The aggregator solves that problem before you have it.

Skip it if you only need a single annual snapshot of where your brand stands in AI answers. A one-time consultant report from a GEO agency is cheaper than wiring up an API. Skip it also if you need full-fidelity UI parity (chatgpt.com vs ChatGPT API) on every call: our API-mode answers will diverge from the live UI on roughly 80-96% of prompts depending on provider, per our 1,000-prompt teardown. For full UI parity, layer in `mode: perplexity_live` ($0.25) for the prompts where it matters or check our delta-report tooling for the gap analysis.

FAQ

Frequently asked questions

Answer-first, dev-to-dev. Each one is also embedded as FAQPage schema for AI engines.

What does AI brand monitoring mean here?
Three things, in order: presence (does the model name your brand at all in answer to a target prompt?), position (first or last?), and sentiment (does the surrounding sentence frame you as a leader, an option, or a warning?). Per-prompt is the unit. You define prompts that matter, schedule them, and watch the per-provider answer evolve over time.
Can I monitor brand mentions across ChatGPT and Perplexity at once?
Yes. That's the default mode. Pass `"providers": ["openai", "perplexity"]` (or include Claude and Gemini) and the response returns mentions tagged per-provider, all in one call. Useful when you want to compare how the same prompt is answered across answer engines without writing four separate ingestion pipelines.
How is this different from an SEO platform's AI tab?
Legacy SEO tools bolt an 'AI module' onto an existing dashboard that's optimized for keyword tracking. They usually cover one LLM (typically ChatGPT), refresh once a day, and lock data behind a UI. MentionsAPI is AI-native, covers all four providers, runs on-demand or any schedule, and is API-first so you bring your own dashboard.
How do I track competitor share-of-voice?
Pass your brand and your three biggest competitors in `track_brands`. Each provider's answer comes back tagged for all of them, with position and sentiment per mention. You get a head-to-head comparison without writing a second pipeline. Combined with the 24-hour cache, polling 100 prompts hourly across four providers usually costs under $50/month.
How much does scheduled monitoring cost?
A typical 50-brand × 10-prompt × daily setup runs about $30-$50/month thanks to ~70-80% cache hit rates on repeat agency prompts. Per-call: $0.02 cache hit, $0.25 multi-provider fan-out, $0.75 full fan-out with web search. Webhook delivery is included. Pay-as-you-go, no plans, $10 minimum top-up.
Can I monitor a list of prompts on a schedule?
Yes. `POST /v1/monitors` with a `prompts[]` array, a cron schedule, your `track_brands`, and a `webhook_url`. We fan out to your chosen providers at each tick, extract mentions, and POST the result to your webhook with an HMAC-SHA256 signature. Three retries on non-2xx with exponential backoff before the run is marked `delivery_failed`.
Does MentionsAPI provide a brand monitoring dashboard?
No. MentionsAPI is the data layer. Your product is the dashboard. We ship JSON over HTTP and webhooks; you compose them into whatever UI your customers want (agency reports, internal Slack alerts, executive dashboards). Most brand-monitor MVPs take a weekend to ship on top of `/v1/check` and `/v1/monitors`.
How do I detect when a citation drops or appears?
Every monitor run computes a `delta` object against the previous run for the same prompt: `citations.added[]`, `citations.removed[]`, `mentions.added[]`, `mentions.removed[]`. The webhook payload includes both the full result and the delta, so your alerting code can fire on the diff without re-computing it client-side. Useful for 'tell me when ChatGPT stops citing our docs' scenarios.
Code example

Track brand share-of-voice in Python

Drop in your API key and you're live. Same response shape across every provider.

main.py

```python
import requests

# One call fans out to all four answer engines; the response shape is
# identical across providers.
res = requests.post(
    "https://api.mentionsapi.com/v1/check",
    headers={"Authorization": "Bearer lvk_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"},
    json={
        "providers": ["openai", "anthropic", "gemini", "perplexity"],
        "prompt": "What's the best CRM for early-stage startups?",
        "track_brands": ["HubSpot", "Salesforce", "Attio", "Pipedrive"],
    },
)
res.raise_for_status()  # surface auth and quota errors early
data = res.json()

# Each provider's answer arrives pre-tagged with per-brand mentions.
for result in data["results"]:
    print(f"{result['provider']}:")
    for mention in result["mentions"]:
        print(f"  {mention['brand']} (pos {mention['position']})")
```
Compare

MentionsAPI vs. SEO platforms with an 'AI module'

| Legacy SEO tools | MentionsAPI |
| --- | --- |
| AI as an add-on, single LLM | AI-native, all 4 providers |
| UI-locked data | API-first, BYO dashboard |
| Daily-only refresh | On-demand or any schedule |
| $500+/mo agency tier | Pay-as-you-go from $0.02/call, $10 minimum top-up |
Pricing

Top up from $10. Pay per call. No plans.

A typical 50-brand, 10-prompt-per-day monitoring setup runs about $30/month thanks to the shared cache. Pay-as-you-go: top up from $10, and every call deducts in real time. $1 free signup credit. No monthly tiers.

Stop wiring up four SDKs.

One API key, four answer engines, structured responses. $10 minimum top-up. Credits never expire.