For developers building AI-visibility tools

An AI visibility API for the new search results page

An AI visibility API is a programmatic measurement layer for how often, how prominently, and with what framing a brand appears in language-model answers. MentionsAPI returns the four atomic signals: presence, prominence, share-of-voice, and citation rate, per provider per prompt, so GEO and AEO tools can compose whatever weighted score their customers care about without running their own multi-LLM pipeline.

The new search results page is a paragraph of AI-generated prose with three citations. If your brand is not in that paragraph, you don't exist in the buyer's first impression. AI visibility is the new SEO ranking, and like rankings, you can't improve what you can't measure.

MentionsAPI gives you a programmatic measure of AI visibility: presence (do you appear?), prominence (how early in the answer?), share-of-voice (you vs. competitors), and citation rate (does the model link to your site?).

It's the data layer for the next generation of GEO and AEO tools. Built so you don't have to wire up four LLMs to ship one report.

Top up from $10 · Pay per call · Credits never expire


Visibility metrics that actually work

Ranking by 'mention count' is naive. A paragraph that says your name three times in passing is worse than one that names you once as the recommendation. We compute prominence based on first-mention position normalized to answer length, plus a sentiment-weighted score for the surrounding sentence.

Pull these as raw numbers per provider per query. Roll them up however you like. By day, by competitor cohort, by intent category. The API gives you the atoms; you compose the metrics that matter to your customers.

We also return a per-provider `visibility_score` (0-100) as a starting point: a weighted blend of presence, prominence, and sentiment computed with documented weights. Most tool builders override the weights to match their customers' priorities; the raw inputs are still in the response, so swapping in your own scoring is a single function on your side, not a re-query.

Track visibility over time without re-querying

We store every raw answer for 90 days by default. That means you can add a new competitor to your `track_brands` list and re-run extraction over historical data without spending a single LLM call. Backfill a competitor's share-of-voice for the last quarter in a few seconds.

Need indefinite retention for trend reports, longitudinal analyses, and the occasional 'when did Claude start citing us?' debugging session? Email [email protected]. We'll set you up with an extended-retention arrangement.

The retention store is the longitudinal asset. Tools that started measuring AI visibility in early 2026 will have year-over-year baselines by the time most agencies are still figuring out what to track. The API exposes `GET /v1/ask/:id` for replay and `GET /v1/monitors/:id/runs` for time-series pulls. Both return the original normalized response, so the chart you draw today will draw the same way next year.
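
A backfill sketch, assuming payload shapes: the two endpoints are the documented ones above, but the field names used here (`runs[]`, `created_at`, `results[].mentions[]`, and a per-mention `brand` property) are illustrative guesses to check against your actual responses.

 backfill.mjs
// Pull a monitor's stored runs and roll up a newly added competitor's
// daily share-of-voice -- no new LLM calls, just the retention store.
// NOTE: field names below (runs, created_at, results, mentions, brand)
// are assumptions for illustration, not confirmed response schema.
const MONITOR_ID = "mon_xxxxxxxx"; // hypothetical monitor id

const { runs } = await fetch(
  `https://api.mentionsapi.com/v1/monitors/${MONITOR_ID}/runs`,
  { headers: { Authorization: `Bearer ${process.env.MENTIONSAPI_KEY}` } },
).then(r => r.json());

const byDay = {};
for (const run of runs) {
  const day = run.created_at.slice(0, 10); // assumes an ISO timestamp
  byDay[day] ??= { all: 0, penpot: 0 };
  for (const result of run.results) {
    byDay[day].all += result.mentions.length;
    byDay[day].penpot += result.mentions.filter(m => m.brand === "Penpot").length;
  }
}

for (const [day, { all, penpot }] of Object.entries(byDay)) {
  console.log(day, `Penpot share-of-voice: ${all ? ((penpot / all) * 100).toFixed(1) : "0.0"}%`);
}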

Citation rate is its own KPI

Even when an AI doesn't name you, it might cite a page on your site as a source. That's a different funnel than mentions: clicks come from citations, brand awareness comes from mentions. We split them out so you can optimize for both.

Practically: a B2B brand might rank well on mention rate (the model recommends them) but poorly on citation rate (the model never links to their docs). The remediation is different: better naming moves mention rate, better technical content moves citation rate. Splitting the metrics lets your customers see which lever to pull.
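
As a sketch of tracking the two KPIs independently: `mentions[]` and `citations[]` are the documented response fields, while the per-entry properties used here (`m.brand`, `c.url`) are assumptions to adjust against the real shape.

 rates.mjs
// Split the two KPIs across a batch of stored responses.
// mentions[] and citations[] are separate fields in each provider result;
// the entry properties (brand, url) are assumed for illustration.
function rates(responses, brand = "Figma", domain = "figma.com") {
  let answers = 0, mentioned = 0, cited = 0;
  for (const res of responses) {
    for (const r of res.results) {
      answers += 1;
      if (r.mentions.some(m => m.brand === brand)) mentioned += 1;
      if (r.citations.some(c => c.url.includes(domain))) cited += 1;
    }
  }
  return { mention_rate: mentioned / answers, citation_rate: cited / answers };
}

// e.g. { mention_rate: 0.42, citation_rate: 0.11 } -> two different levers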

How visibility scoring works under the hood

For each `mentions[]` entry, we compute three numbers: `position_norm` (the character offset of the first mention divided by the total answer length, expressed as 0-1 where 0 is start and 1 is end), `sentence_index` (which sentence contains the mention, 1-indexed), and `sentiment` (categorical: `positive`, `neutral`, `negative`). The sentiment classifier runs on the sentence containing the mention, not the full answer, so a brand that's praised in the intro but mocked in the conclusion gets per-mention sentiment instead of an average.

The default `visibility_score` formula is `0.4 × presence + 0.4 × (1 - position_norm) + 0.2 × sentiment_weight`, where `sentiment_weight` is +1 / 0 / -1 mapped onto a 0-1 range. That's a deliberate starting point. Most GEO tools override to weight prominence higher (because first-mention rate is what their customers actually care about) or weight sentiment higher (because B2B prompts often mention competitors neutrally and what matters is positive framing). Both flavors are a one-line change since we return the raw inputs.
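
Reconstructed as a sketch so the override path is concrete. Two details are assumptions rather than confirmed internals: `presence` is read as 1 whenever `mentions[]` is non-empty, and the 0-1 blend is scaled to the published 0-100 score.

 score.mjs
// Default visibility_score per the formula above. SENTIMENT_WEIGHT maps
// the categorical sentiment (+1 / 0 / -1) onto a 0-1 range.
const SENTIMENT_WEIGHT = { positive: 1, neutral: 0.5, negative: 0 };

function visibilityScore(result, w = { presence: 0.4, prominence: 0.4, sentiment: 0.2 }) {
  if (result.mentions.length === 0) return 0; // absent brand scores 0 (assumption)
  const first = result.mentions[0];
  return 100 * (
    w.presence * 1 +
    w.prominence * (1 - first.position_norm) +
    w.sentiment * SENTIMENT_WEIGHT[first.sentiment]
  );
}

// The "one-line change": a prominence-heavy variant for GEO dashboards.
const geoScore = r => visibilityScore(r, { presence: 0.2, prominence: 0.6, sentiment: 0.2 });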

For confidence intervals on aggregate metrics ('is our mention rate 22% across this prompt set, or 28%?'), the methodology page documents how we compute Wilson 95% confidence intervals over the rolling sample. If you're shipping numbers to a customer's executive deck, the CI bounds are the difference between 'data' and 'a number to argue about'.
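
The Wilson interval is a standard formula, so it's easy to recompute on your side if you want to sanity-check numbers against the methodology page. A minimal version, with z = 1.96 for 95%:

 wilson.mjs
// Wilson 95% confidence interval for a proportion: k successes out of n trials.
function wilson(k, n, z = 1.96) {
  const p = k / n;
  const z2 = z * z;
  const center = (p + z2 / (2 * n)) / (1 + z2 / n);
  const half = (z / (1 + z2 / n)) * Math.sqrt((p * (1 - p)) / n + z2 / (4 * n * n));
  return [center - half, center + half];
}

// 22 mentions across 100 prompts -> roughly [0.150, 0.311]
console.log(wilson(22, 100).map(x => x.toFixed(3)));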

Latency profile for visibility calls: cached `mode: quick` returns p50 ~140 ms / p99 ~600 ms; uncached `mode: quick` p50 ~3.8 s / p99 ~7.4 s. The classifier inference step adds ~30-60 ms per provider, mostly amortized in the cache. For dashboard refreshes that need to land in under a second, design for cache hits. Repeat queries against your prompt list will mostly cache after the first run.

When a visibility API fits (and when it doesn't)

Use it when you're shipping a tool that compares brand presence over time across providers. GEO dashboards, AEO reports, agency client portals, internal CMO dashboards. The combination of normalized atoms (presence, prominence, sentiment, citation rate), 90-day raw-answer retention, and re-extraction without re-querying is the pattern that makes longitudinal reporting viable without exploding your LLM bill.

Use it specifically when your customers will ask 'what was our mention rate last month?' and 'how did that change after our content launch?' Both questions need a longitudinal store; both are expensive to back-build if you didn't capture data at the time. Starting on the API now compounds.

Skip it for one-shot reports where you only need a snapshot. A consultant running a single 'how does AI describe our category?' analysis can pay a third party for a one-time report cheaper than wiring up a monitoring loop. Skip it also if your reporting needs to match exactly what users see in chatgpt.com. Our API returns API-mode data; in our 1,000-prompt measurement it showed at least one meaningful drift from the live UI on 96% of prompts. Pair with `mode: perplexity_live` ($0.25) on the prompts where parity actually matters, or use the delta-report tool to characterize the gap.

FAQ

Frequently asked questions

Answer-first, dev-to-dev. Each one is also embedded as FAQPage schema for AI engines.

What is an AI visibility API?
An AI visibility API measures how often and how prominently your brand appears in AI-generated answers. MentionsAPI returns presence (does the model name you?), prominence (first-mention position normalized to answer length), share-of-voice (you vs. competitors per prompt), and citation rate (does the model link to your site?), all as raw fields per provider, ready to compose into your own scoring.
How do I measure my brand's visibility in ChatGPT?
POST a prompt and your brand list to `/v1/check` with `"providers": ["openai"]`. The response includes a `mentions[]` array with each occurrence's position, surrounding context, and sentiment, plus a `visibility_score` (0-100) per provider. Add Claude, Gemini, and Perplexity to the providers array to compare across all four answer engines in one call.
Can I track visibility over time?
Yes. Every raw answer is stored for 90 days by default, so you can re-run extraction with new brand names without re-querying the LLM. Backfilling a competitor's share-of-voice for the last quarter takes seconds, not days. For longer retention (longitudinal trend reports), email [email protected] for an extended-retention arrangement.
How is this different from a closed AI-visibility dashboard?
Closed dashboards (Profound, BrandRadar) are UI-only with no raw data export, usually cover one LLM, charge $300+/month per seat, and require re-querying the LLM to add a new tracked brand. MentionsAPI is API-first, covers all four providers, costs $0.02-$0.75 per call PAYG, and lets you re-extract on stored answers without burning new credits.
What's a visibility_score and how is it computed?
A 0-100 number per provider per prompt. Default formula: 0.4 × presence + 0.4 × (1 - normalized first-mention position) + 0.2 × sentiment weight. The raw inputs are also returned, so you can recompute it with whatever weights your customers care about. Most GEO tools override the weights to match their own scoring model.
How much does scheduled visibility tracking cost?
Polling 100 prompts hourly across four providers usually costs under $50/month thanks to ~70-80% cache hit rates on repeat agency prompts. Per-call: $0.02 cache hit, $0.25 multi-provider fan-out, $0.75 full fan-out with web search. PAYG, $10 minimum top-up, credits never expire.
Does the API match what real users see in ChatGPT?
Not exactly. We measured the gap on 1,000 prompts: ChatGPT API answers had at least one meaningful drift versus chatgpt.com on 96% of prompts (citations, brand-set order, or ranking). For prompts where parity matters, layer in `mode: perplexity_live` ($0.25) on those specific calls. For most monitoring workloads, the API-mode signal is directionally correct and an order of magnitude cheaper.
How do I track citations vs. mentions?
Mentions are when the model names your brand in the answer text. Citations are when the model links to a URL on your domain. They're different funnels. Citations drive clicks, mentions drive brand awareness, so the API splits them into separate fields (`mentions[]` and `citations[]`) and you track them independently.
Code example

Measure visibility for a single prompt

Drop in your API key and you're live. Same response shape across every provider.

 ask.mjs
const visibility = await fetch("https://api.mentionsapi.com/v1/ask", {
  method: "POST",
  headers: { Authorization: "Bearer lvk_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "Content-Type": "application/json" },
  body: JSON.stringify({
    providers: ["openai", "anthropic", "gemini", "perplexity"],
    prompt: "Top design tools for product teams in 2026?",
    track_brands: ["Figma", "Framer", "Sketch", "Penpot"],
  }),
}).then(r => r.json());

// Each result carries a "visibility_score" (0-100) plus the raw mentions[]
visibility.results.forEach(r => {
  console.log(r.provider, r.visibility_score, r.mentions.length ? r.mentions[0].position_norm : "absent");
});
Compare

MentionsAPI vs. AI-visibility SaaS dashboards

| The other way (closed dashboards) | MentionsAPI |
| --- | --- |
| UI-only, no export | API-first, raw data access |
| Single LLM (usually GPT) | All 4 major providers |
| Re-query for new brands | Re-extract on stored answers |
| $300+/mo for one seat | Pay-as-you-go from $0.02/call, $10 minimum top-up |
Pricing

Top up from $10. Pay per call. No plans.

Build your AI visibility tracker pay-as-you-go. /v1/check?mode=quick is $0.02 (4 LLMs in parallel), /v1/check?mode=perplexity_live is $0.25 (UI scrape with full citations + fan_out). $1 free signup credit, $10 minimum top-up, no monthly tiers.

Stop wiring up four SDKs.

One API key, four answer engines, structured responses. $10 minimum top-up. Credits never expire.