For developers building AI-visibility tools

The Generative Engine Optimization API for builders

MentionsAPI is the data layer that GEO tools query to fetch normalized, deduplicated brand-mention data across ChatGPT, Claude, Gemini, and Perplexity in a single request. Everyone is launching a GEO dashboard. We built the API they should be calling. Picks-and-shovels for the GEO movement, so you don't have to wire up four LLM SDKs to ship one report.

Generative Engine Optimization is the new SEO. The mechanics are different. You're not optimizing for keyword density, you're optimizing for how language models summarize your category. The tooling, however, is exactly the same shape: you need a programmatic feed of how AI describes your brand vs. competitors, refreshed regularly, with structured outputs your dashboard can consume.

MentionsAPI is the data layer underneath every serious GEO tool you'll see launch in 2026. We give you the fan-out across four major LLMs, the brand and citation extraction, the historical store. You build the GEO product on top.

If you've been pitching investors on a GEO tool, this is the part of the stack you don't have to write.

Top up from $10 · Pay per call · Credits never expire


The atomic units of GEO

GEO scoring boils down to: prompt-level visibility (does the AI mention you?), prompt-level sentiment (does it frame you well?), citation-level traffic (does it link to you?), and competitor delta (you vs. them per prompt). Each of these maps directly to a field in the MentionsAPI response.

You compose them into whatever score your customers care about. We've seen tools build a 'GEO Score' as a weighted average of mention rate, prominence, sentiment, and citation rate. Easy to compute from our raw output.

We deliberately don't ship a single 'GEO score' from our side. The scoring weights are a product decision your customers care about, and the same atoms (mention rate, prominence, sentiment, citation rate) can be combined into a B2B-friendly metric (heavy weight on first-mention rate) or a consumer-friendly metric (heavy weight on positive sentiment). Bundling our weights into your product would force every tool on top of us to look the same. That's the wrong shape for an infrastructure API.
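A minimal sketch of that composition in plain JavaScript. The atom names and both weight presets here are illustrative, not part of the API response:

```javascript
// Two hypothetical weight presets built from the same four atoms.
// A B2B tool cares about being mentioned first; a consumer tool
// cares about positive framing. Weights sum to 1 in each preset.
const B2B_WEIGHTS = { mentionRate: 0.25, firstMentionRate: 0.45, sentiment: 0.10, citationRate: 0.20 };
const CONSUMER_WEIGHTS = { mentionRate: 0.25, firstMentionRate: 0.10, sentiment: 0.45, citationRate: 0.20 };

// Each atom is normalized to 0..1 before weighting.
function geoScore(atoms, weights) {
  return Object.entries(weights).reduce(
    (score, [atom, w]) => score + w * (atoms[atom] ?? 0),
    0
  );
}

const atoms = { mentionRate: 0.6, firstMentionRate: 0.3, sentiment: 0.8, citationRate: 0.2 };
geoScore(atoms, B2B_WEIGHTS);      // emphasizes first-mention rate
geoScore(atoms, CONSUMER_WEIGHTS); // emphasizes positive sentiment
```

Same atoms, two different products. That's the point of shipping raw output instead of a baked-in score.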

Built for the agency model

Most GEO tools are sold to SEO agencies who manage 20–200 client brands. The API supports multi-brand workspaces (V2) and bulk prompt runs. A single call can target up to 50 brands per provider. Caching means the second client running the same prompt list is nearly free.

Webhook delivery means agency dashboards can be near-real-time without polling. Set up a daily prompt run for a client and the dashboard updates itself when results land.

The agency cost story works because of cross-customer caching. When two of your clients track the same prompt ('best CRM for startups'), the second hit is a $0.02 cache return regardless of which agency is asking. Neither client subsidizes the other and neither sees a cost-of-goods spike on launch day. We've measured 70-80% cache hit rates on repeat agency prompts in production; that's the gap between 'GEO tool with healthy gross margins' and 'GEO tool that's bleeding on every customer'.

Why this is the moment for GEO

Search budgets are starting to rebalance toward AI presence. The agencies and tools that ship a GEO product first will own the relationship. The bottleneck isn't UI design. It's data infrastructure. We're that piece.

The contrarian read: every week another GEO dashboard launches on Product Hunt with a chart of your 'AI visibility score' over time, computed from a single ChatGPT API call. We measured the gap between API and UI on 1,000 prompts: ChatGPT's API output drifted from what real chatgpt.com users see on 96% of them. The dashboards are real products; the underlying numbers, for most of those tools, are wrong by default. That's the technical debt that makes 'just call MentionsAPI' a feature, not a marketing line.

How the GEO data layer works under the hood

Every `/v1/check` request goes through the same pipeline that powers brand monitoring and visibility tracking, exposed at the GEO-specific atom level: presence (boolean per provider per brand), prominence (`position_norm` 0-1, smaller = earlier), sentiment (categorical per mention), and citation (canonical URL plus `providers_cited[]`). The atoms compose into whatever scoring function your dashboard wants.
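To make that concrete, here's a sketch that pulls the four atoms for one brand out of a response. The shape below is a simplified assumption built from the fields named above, not the full documented schema:

```javascript
// Simplified, assumed response shape: one mention row per
// (brand, provider) pair, plus a citations array.
const response = {
  mentions: [
    { brand: "Linear", provider: "openai", present: true, position_norm: 0.12, sentiment: "positive" },
    { brand: "Linear", provider: "gemini", present: false },
  ],
  citations: [
    { url: "https://linear.app/docs", providers_cited: ["openai"] },
  ],
};

function atomsFor(brand, domain, res) {
  const rows = res.mentions.filter(m => m.brand === brand);
  const present = rows.filter(m => m.present);
  return {
    presence: rows.length ? present.length / rows.length : 0,   // share of providers mentioning the brand
    prominence: present.length                                   // mean position_norm (smaller = earlier)
      ? present.reduce((s, m) => s + m.position_norm, 0) / present.length
      : null,
    positives: present.filter(m => m.sentiment === "positive").length,
    cited: res.citations.some(c => new URL(c.url).hostname.endsWith(domain)),
  };
}

atomsFor("Linear", "linear.app", response);
// → { presence: 0.5, prominence: 0.12, positives: 1, cited: true }
```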

For agency-scale workloads, two infrastructure choices matter: shared caching keyed on the canonicalized request body (prompt + provider set + tracked brands), and per-key rate limiting tuned for bulk runs. Default rate limits are 60 requests/minute on free tier and 600 requests/minute on funded accounts. Enough headroom for a 200-brand agency tool running daily refreshes without backpressure. If you're polling 1,000+ prompts/hour, email [email protected] and we'll lift the cap.
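For intuition on what 'keyed on the canonicalized request body' buys you, here's the general technique sketched in Node. This is illustrative, not the exact canonicalization the service runs: order-insensitive fields get sorted so logically identical requests hash to the same key.

```javascript
import { createHash } from "node:crypto";

// Canonicalize, then hash: two requests that differ only in field
// order or incidental whitespace produce the same cache key.
function cacheKey(body) {
  const canonical = {
    prompt: body.prompt.trim(),
    providers: [...body.providers].sort(),
    track_brands: [...body.track_brands].sort(),
  };
  return createHash("sha256").update(JSON.stringify(canonical)).digest("hex");
}

// Two agencies tracking the same prompt land on the same key:
const a = cacheKey({ prompt: "best CRM for startups", providers: ["openai", "gemini"], track_brands: ["HubSpot", "Salesforce"] });
const b = cacheKey({ prompt: "best CRM for startups ", providers: ["gemini", "openai"], track_brands: ["Salesforce", "HubSpot"] });
// a === b → true
```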

Webhook delivery (HMAC-SHA256-signed POSTs from `/v1/monitors` runs) is the primitive most agency dashboards build on. Signature key rotates per-monitor, three retries on non-2xx with exponential backoff, run history queryable via `GET /v1/monitors/:id/runs`. That combination is what makes 'live GEO dashboard for 200 client brands' actually feasible without writing your own job runner and HMAC validator.
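Verifying the signature takes a few lines in Node. A sketch: the header name and hex encoding here are assumptions, so check the webhook docs for the exact values your monitor uses.

```javascript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify an HMAC-SHA256 webhook signature against the RAW request
// body bytes — never against re-serialized JSON, which can reorder
// keys and break the signature.
function verifyWebhook(rawBody, signatureHex, secret) {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  return (
    received.length === expected.length &&
    timingSafeEqual(received, expected) // constant-time compare
  );
}
```

Reject anything that fails verification before touching the payload; the per-monitor key rotation means the secret you compare against is scoped to one monitor, not your whole account.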

Latency profile for the GEO use case: a 50-prompt × 4-provider monitor run completes in p50 ~2 minutes, p95 ~5 minutes including dispatch, parsing, dedup, and webhook POST. Fast enough to refresh client dashboards inside the agency's morning standup window. Cache hit rates on second-and-later runs typically push that to under a minute.

When to use this API (and when to build it yourself)

Use it when you're building a GEO product to sell. Agency dashboards, in-house brand-visibility tools, content-attribution platforms, or AI-search competitive intelligence. The combination of cross-provider fan-out, brand and citation extraction, scheduled monitors, webhook delivery, and 90-day raw retention is the entire data layer most GEO tools need. Time-to-MVP is hours-to-days, not engineer-months.

Use it specifically when your moat isn't the data layer. If your differentiation is the UI, the prompt taxonomy, the agency workflow integrations, or the analyst-friendly reporting, building the data infra yourself is months of work that won't show up in your sales motion. Calling MentionsAPI lets you spend that engineering budget on the differentiator.

Build your own if you have a hard requirement that contradicts our shape: full UI parity for every prompt (we're API-mode by default, with `mode: perplexity_live` available for parity-critical calls), in-VPC deployment with BYOK (currently Enterprise-only), or sub-100 ms p99 on uncached calls (we're at ~7.4 s p99 uncached because LLMs are slow; that's an upstream model-latency ceiling, not a software issue). Honest competitors here: LiteLLM if you want self-hosted aggregation, OpenRouter if you only need unified model routing without the extraction layer, and direct provider SDKs if you only ever want to call one model.

FAQ

Frequently asked questions

Answer-first, dev-to-dev. Each one is also embedded as FAQPage schema for AI engines.

What is Generative Engine Optimization (GEO)?
GEO is the practice of optimizing how language models summarize your category and brand in their answers. It replaces keyword-density SEO with prompt-level visibility tracking: 'does the AI mention us when asked about our category, in what position, with what framing, and which sources does it cite?'
What does a GEO API actually return?
The atomic units of GEO scoring: prompt-level visibility (does the AI mention you?), prompt-level sentiment (does it frame you well?), citation-level traffic (does it link to your domain?), and competitor delta (you vs. them per prompt). Each maps to a field in the `/v1/check` response. You compose them into whatever weighted GEO score your customers care about.
Can I use this to build a GEO tool for SEO agencies?
Yes. That's the main use case. The API supports bulk prompt runs (50 brands per provider in a single call), webhook delivery for near-real-time agency dashboards, and shared caching so the second client running the same prompt list is nearly free. White-label and multi-tenant arrangements are available. Email [email protected].
How is MentionsAPI different from building a GEO data layer in-house?
DIY means months of infra: four SDKs, four auth flows, citation parsers per provider, redirect resolution, retry logic, partial-failure handling, monitoring. MentionsAPI ships all of that as a single HTTP call. Most teams that try DIY first end up rebuilding what `/v1/check` already gives them. We just shipped it first.
How much does running a GEO product cost?
A typical agency tool tracking 50 brands across 10 prompts daily runs ~$25-$40/day thanks to shared cache. The math: 500 prompt-runs/day × $0.25 multi-provider fan-out would be $125/day uncached; cache hit rates of 70-80% on repeat agency prompts bring it down to the $25-$40 range. PAYG, $10 minimum top-up.
Does the API support webhooks?
Yes. `/v1/monitors` accepts a `webhook_url` and fires an HMAC-SHA256-signed POST when each scheduled run completes. The payload includes the full normalized response plus a `delta` object with citation and mention diffs against the previous run. Three retries on non-2xx, signature key rotates per-monitor, run history queryable via `GET /v1/monitors/:id/runs`.
How do I detect when an LLM starts citing my domain?
Schedule the relevant prompt with `/v1/monitors`, filter the `citations[]` array on your domain client-side, and read the webhook payload's `citations_diff.added[]` for new citations or `citations_diff.removed[]` for dropped ones. The diff is computed server-side, so your alerting code is a one-line filter. No replay or client-side diffing needed.
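A sketch of that one-line filter — the payload shape follows the `citations_diff` fields named above, and the handler wiring around it is up to you:

```javascript
// Return new citations for your domain from a webhook payload.
// The diff is already computed server-side, so this is the whole
// client-side alerting logic.
function newCitationsFor(domain, payload) {
  return (payload.citations_diff?.added ?? []).filter(
    c => new URL(c.url).hostname.endsWith(domain)
  );
}

newCitationsFor("acme.com", {
  citations_diff: { added: [{ url: "https://www.acme.com/pricing" }], removed: [] },
});
// → one hit: the new acme.com citation
```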
Does the API match what real users see in chatgpt.com?
Not by default. We measured the gap on 1,000 prompts: ChatGPT API answers had at least one meaningful drift versus the live UI on 96% of prompts. For prompts where parity matters, layer in `mode: perplexity_live` ($0.25) on those specific calls. For most GEO tools, the API-mode signal is directionally correct and an order of magnitude cheaper to run at scale.
Code example

Daily GEO snapshot for a client portfolio

Drop in your API key and you're live. Same response shape across every provider.

 ask.mjs
const PROMPTS = [
  "Best project management software for remote teams",
  "Top alternatives to Asana",
  "Async-first PM tools",
];

const TRACK = ["Linear", "Asana", "Notion", "Jira"];

const results = await Promise.all(
  PROMPTS.map(prompt =>
    fetch("https://api.mentionsapi.com/v1/ask", {
      method: "POST",
      headers: {
        Authorization: "Bearer lvk_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        providers: ["openai", "anthropic", "gemini", "perplexity"],
        prompt,
        track_brands: TRACK,
      }),
    }).then(r => r.json())
  )
);
// Push to your dashboard / DB

Compare

MentionsAPI vs. building a GEO data layer from scratch

| The other way (DIY pipeline) | MentionsAPI |
| --- | --- |
| Months of infra work | Production-ready in an afternoon |
| Per-LLM cost optimization | Bundled, cached pricing |
| You handle outages | Partial-success fallbacks built in |
| Custom monitoring | Webhooks + audit logs included |
Pricing

Top up from $10. Pay per call. No plans.

GEO tools serving agencies pay $0.02 per /v1/check?mode=quick call (4 LLMs in parallel) and $0.25 for perplexity_live UI scrapes. Cache hits drop most workloads to pennies. Pay-as-you-go. Top up $10 to start, $200 for a busy month. $1 free signup credit. No monthly tiers. White-label and multi-tenant arrangements: email [email protected].

Stop wiring up four SDKs.

One API key, four answer engines, structured responses. $10 minimum top-up. Credits never expire.