The atomic units of GEO
GEO scoring boils down to: prompt-level visibility (does the AI mention you?), prompt-level sentiment (does it frame you well?), citation-level traffic (does it link to you?), and competitor delta (you vs. them per prompt). Each of these maps directly to a field in the MentionsAPI response.
You compose them into whatever score your customers care about. We've seen tools build a 'GEO Score' as a weighted average of mention rate, prominence, sentiment, and citation rate. Easy to compute from our raw output.
We deliberately don't ship a single 'GEO score' from our side. The weights are a product decision, and the same atoms (mention rate, prominence, sentiment, citation rate) can be combined into a B2B-friendly metric (heavy weight on first-mention rate) or a consumer-friendly metric (heavy weight on positive sentiment). Shipping our weights would force every tool built on top of us to look the same. That's the wrong shape for an infrastructure API.
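Concretely, a scoring function over the four atoms is a few lines. A sketch in TypeScript; the weights and atom field names here are illustrative, not part of our response schema:

```ts
// Sketch: composing MentionsAPI atoms into a product-specific GEO score.
// Atom rates are assumed pre-aggregated per brand across a prompt set;
// the field names below are illustrative, not our response schema.
interface GeoAtoms {
  mentionRate: number;           // share of prompts where the brand appears, 0-1
  avgProminence: number;         // mean (1 - position_norm), so higher = earlier
  positiveSentimentRate: number; // share of mentions classed positive, 0-1
  citationRate: number;          // share of prompts citing the brand's domain, 0-1
}

type Weights = Record<keyof GeoAtoms, number>;

// Two example weightings matching the B2B vs. consumer split above.
const b2bWeights: Weights = {
  mentionRate: 0.4, avgProminence: 0.35, positiveSentimentRate: 0.1, citationRate: 0.15,
};
const consumerWeights: Weights = {
  mentionRate: 0.25, avgProminence: 0.15, positiveSentimentRate: 0.45, citationRate: 0.15,
};

function geoScore(atoms: GeoAtoms, w: Weights): number {
  const raw =
    w.mentionRate * atoms.mentionRate +
    w.avgProminence * atoms.avgProminence +
    w.positiveSentimentRate * atoms.positiveSentimentRate +
    w.citationRate * atoms.citationRate;
  return Math.round(raw * 100); // present as a 0-100 score
}
```

Same atoms, two different products; that's the point of shipping atoms instead of a score.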
Built for the agency model
Most GEO tools are sold to SEO agencies who manage 20–200 client brands. The API supports multi-brand workspaces (V2) and bulk prompt runs. A single call can target up to 50 brands per provider. Caching means the second client running the same prompt list is nearly free.
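A bulk run is one POST. The `/v1/check` endpoint and the 50-brand cap are real; the base URL and body field names below are illustrative placeholders, not the exact request shape:

```ts
// Hypothetical bulk check: several prompts, four providers, many brands.
// Base URL, env var name, and body field names are placeholders.
const clientBrands = ["HubSpot", "Salesforce", "Pipedrive" /* ...up to 50 */];

const res = await fetch("https://api.example.com/v1/check", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.MENTIONS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    prompts: ["best CRM for startups", "top CRM tools"],
    providers: ["openai", "anthropic", "google", "perplexity"],
    brands: clientBrands, // capped at 50 per provider per call
  }),
});
const results = await res.json();
```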
Webhook delivery means agency dashboards can be near-real-time without polling. Set up a daily prompt run for a client and the dashboard updates itself when results land.
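Wiring that up is one monitor per client. A sketch; `POST /v1/monitors` is the real endpoint, while the body fields shown (`schedule`, `prompts`, `webhook_url`) are assumptions about its shape:

```ts
// Create a daily monitor that POSTs signed results to your dashboard.
// Endpoint path is real; the body shape here is assumed for illustration.
await fetch("https://api.example.com/v1/monitors", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.MENTIONS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    name: "acme-crm-daily",
    schedule: "daily",
    prompts: ["best CRM for startups", "top CRM tools"],
    brands: ["Acme CRM"],
    webhook_url: "https://dashboard.example.com/hooks/mentions",
  }),
});
```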
The agency cost story works because of cross-customer caching. When two of your clients track the same prompt ('best CRM for startups'), the second hit is a $0.02 cache return regardless of which agency is asking. Neither client subsidizes the other, and neither sees a cost-of-goods spike on launch day. We've measured 70–80% cache hit rates on repeat agency prompts in production; that's the gap between 'GEO tool with healthy gross margins' and 'GEO tool that's bleeding on every customer'.
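The blended unit cost is simple arithmetic. The $0.02 cache price and 70–80% hit rate are our production numbers; the uncached price below is a hypothetical stand-in, since it depends on your provider mix:

```ts
const CACHE_HIT_PRICE = 0.02; // measured cache-return price per prompt

// Blended per-prompt cost at a given cache hit rate. uncachedPrice is a
// parameter because it varies by provider mix; the $0.20 below is made up.
function blendedCost(hitRate: number, uncachedPrice: number): number {
  return hitRate * CACHE_HIT_PRICE + (1 - hitRate) * uncachedPrice;
}

blendedCost(0.75, 0.2); // 0.75*0.02 + 0.25*0.20 = $0.065/prompt, ~3x cheaper
```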
Why this is the moment for GEO
Search budgets are starting to rebalance toward AI presence. The agencies and tools that ship a GEO product first will own the relationship. The bottleneck isn't UI design. It's data infrastructure. We're that piece.
The contrarian read: every week another GEO dashboard launches on Product Hunt with a chart of your 'AI visibility score' over time, computed from a single ChatGPT API call. We measured the API-versus-UI gap across 1,000 prompts: ChatGPT's API misses 96% of what real chatgpt.com users see. The dashboards are real products; the underlying numbers, for most of those tools, are wrong by default. That's the technical debt that makes 'just call MentionsAPI' a feature, not a marketing line.
How the GEO data layer works under the hood
Every `/v1/check` request goes through the same pipeline that powers brand monitoring and visibility tracking, exposed at the GEO-specific atom level: presence (boolean per provider per brand), prominence (`position_norm` 0-1, smaller = earlier), sentiment (categorical per mention), and citation (canonical URL plus `providers_cited[]`). The atoms compose into whatever scoring function your dashboard wants.
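In TypeScript terms, the atoms look roughly like this. `position_norm` and `providers_cited` are real field names; the surrounding nesting and the sentiment categories are sketched, not a schema reference:

```ts
// Sketch of the atom-level shape described above; not the exact schema.
interface BrandMention {
  brand: string;
  present: boolean;             // presence: per provider, per brand
  position_norm: number | null; // prominence: 0-1, smaller = earlier in the answer
  sentiment: "positive" | "neutral" | "negative" | null; // categories assumed
}

interface Citation {
  url: string;                  // canonical URL
  providers_cited: string[];    // which providers linked this URL
}

interface ProviderResult {
  provider: string;             // e.g. "openai"
  mentions: BrandMention[];
  citations: Citation[];
}
```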
For agency-scale workloads, two infrastructure choices matter: shared caching keyed on the canonicalized request body (prompt + provider set + tracked brands), and per-key rate limiting tuned for bulk runs. Default rate limits are 60 requests/minute on the free tier and 600 requests/minute on funded accounts. That's enough headroom for a 200-brand agency tool running daily refreshes without backpressure. If you're polling 1,000+ prompts/hour, email [email protected] and we'll lift the cap.
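The cache key is conceptually just a hash of the canonicalized body. An illustrative version; our internal canonicalization differs in its details:

```ts
import { createHash } from "node:crypto";

// Illustrative cache key: normalize and sort the three inputs that define a
// run, then hash the result. The exact normalization rules here are assumed.
function cacheKey(prompts: string[], providers: string[], brands: string[]): string {
  const canonical = JSON.stringify({
    prompts: prompts.map((p) => p.trim().toLowerCase()).sort(),
    providers: [...providers].sort(),
    brands: [...brands].sort(),
  });
  return createHash("sha256").update(canonical).digest("hex");
}
```

That's why the second agency asking the same question gets the $0.02 cache return: same canonical body, same key.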
Webhook delivery (HMAC-SHA256-signed POSTs from `/v1/monitors` runs) is the primitive most agency dashboards build on. The signature key rotates per monitor, deliveries retry three times on non-2xx with exponential backoff, and run history is queryable via `GET /v1/monitors/:id/runs`. That combination is what makes 'live GEO dashboard for 200 client brands' actually feasible without writing your own job runner and HMAC validator.
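Verifying a delivery is a few lines with `node:crypto`. A sketch that assumes the hex-encoded HMAC-SHA256 covers the raw request body and arrives in a signature header; the header name and encoding are assumptions, the HMAC-SHA256 scheme is not:

```ts
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a monitor webhook delivery. monitorSecret is the per-monitor
// signing key; the header carrying signatureHex is an assumption.
function verifyWebhook(rawBody: Buffer, signatureHex: string, monitorSecret: string): boolean {
  const expected = createHmac("sha256", monitorSecret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```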
Latency profile for the GEO use case: a 50-prompt × 4-provider monitor run completes in p50 ~2 minutes, p95 ~5 minutes including dispatch, parsing, dedup, and webhook POST. Fast enough to refresh client dashboards inside the agency's morning standup window. Cache hits on second-and-later runs typically push that to under a minute.
When to use this API (and when to build it yourself)
Use it when you're building a GEO product to sell. Agency dashboards, in-house brand-visibility tools, content-attribution platforms, or AI-search competitive intelligence. The combination of cross-provider fan-out, brand and citation extraction, scheduled monitors, webhook delivery, and 90-day raw retention is the entire data layer most GEO tools need. Time-to-MVP is hours-to-days, not engineer-months.
Use it specifically when your moat isn't the data layer. If your differentiation is the UI, the prompt taxonomy, the agency workflow integrations, or the analyst-friendly reporting, building the data infra yourself is months of work that won't show up in your sales motion. Calling MentionsAPI lets you spend that engineering budget on the differentiator.
Build your own if you have a hard requirement that contradicts our shape: full UI parity for every prompt (we're API-mode by default, with `mode: perplexity_live` available for parity-critical calls), in-VPC deployment with BYOK (currently Enterprise-only), or sub-100 ms p99 on uncached calls (we're at ~7.4 s p99 uncached because LLMs are slow; that's a round-trip ceiling set by the upstream models, not a software issue). Honest competitors here: LiteLLM if you want self-hosted aggregation, OpenRouter if you want hosted model routing, and direct provider SDKs if you only ever want to call one model.