# Mentd. — Complete Documentation

> This file contains the full text content of Mentd.'s public marketing pages
> and documentation, formatted for AI system consumption.
> Last updated: March 2026

---

## About Mentd.

Mentd. is the AI Discovery OS for marketers. It answers five questions that every brand team is asking in 2026:

1. Where does my brand appear in AI answers across ChatGPT, Perplexity, Claude, and Gemini?
2. Which AI crawlers are accessing my site, and what pages are they hitting?
3. Am I getting human traffic from AI surfaces — and can I attribute it deterministically?
4. Which pages are being cited — and which are being overlooked?
5. What should I do next to increase my citations, traffic, and revenue from AI?

The product is built around four data layers joined into one action-first operating system:

- **AI Visibility** — where the brand appears in AI answers
- **AI Access** — how bots, agents, and crawlers reach the site
- **AI Traffic** — what human visits originate from AI surfaces (deterministic attribution)
- **AI Impact** — which pages, prompts, and changes moved pipeline and revenue

The dashboard is not the product. The action layer is the product. Every insight in Mentd. ends in a prioritized action via the Opportunity Queue.

---

## AI Discovery Score Methodology

The AI Discovery Score is a 0–100 composite metric measuring how discoverable a domain is across modern AI systems. It is intended to answer one question quickly: how present is this brand across the AI answers that matter to its audience right now?

### Formula

The score is a weighted average of six components. Each component is normalized to 0–100, then multiplied by its weight.
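The weighted average can be sketched as follows. This is an illustrative sketch only, using the component names and weights from the published methodology table; the type and function names are assumptions, not Mentd.'s actual implementation.

```typescript
// Hypothetical sketch of the AI Discovery Score weighted average.
// Weights follow the methodology table; names are illustrative.
type ScoreComponents = {
  citationRate: number;          // 0–100, weight 30%
  mentionFrequency: number;      // 0–100, weight 20%
  aiAccessRate: number;          // 0–100, weight 15%
  sentimentScore: number;        // 0–100, weight 15%
  crossPlatformCoverage: number; // 0–100, weight 10%
  sourceAuthority: number;       // 0–100, weight 10%
};

const WEIGHTS: Record<keyof ScoreComponents, number> = {
  citationRate: 0.30,
  mentionFrequency: 0.20,
  aiAccessRate: 0.15,
  sentimentScore: 0.15,
  crossPlatformCoverage: 0.10,
  sourceAuthority: 0.10,
};

// Sum each normalized component times its weight; round to an integer 0–100.
function computeDiscoveryScore(c: ScoreComponents): number {
  const total = (Object.keys(WEIGHTS) as (keyof ScoreComponents)[])
    .reduce((sum, key) => sum + c[key] * WEIGHTS[key], 0);
  return Math.round(total);
}
```

A domain at 100 on every component scores 100; a domain with only a 50% Citation Rate and zeros elsewhere scores 15 (50 × 0.30).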
| Component | Weight | Data Source |
|---|---|---|
| Citation Rate | 30% | Percent of tracked prompts where the domain is directly cited |
| Mention Frequency | 20% | Brand mentions per 100 AI answers across the tracked set |
| AI Access Rate | 15% | Percent of key pages reachable by verified AI crawlers |
| Sentiment Score | 15% | Positive/neutral/negative portrayal in generated answers |
| Cross-Platform Coverage | 10% | Number of AI platforms citing the domain (out of 4 tracked) |
| Source Authority | 10% | Quality of external sources surrounding the domain's citations |

### Score Tiers

- **0–20: Not Visible** — AI systems rarely surface or cite the domain
- **21–40: Emerging** — The domain appears inconsistently and lacks repeatable coverage
- **41–60: Established** — The domain is regularly cited and recognized in-category
- **61–80: Authoritative** — The domain is consistently surfaced across tracked engines
- **81–100: Dominant** — The domain is first-to-mind across all major AI platforms

### Update Cadence

The score recalculates when:

- A prompt run batch completes
- An audit run completes
- GA4 and GSC sync jobs complete (when connected)
- The weekly scheduled recalculation runs for all active domains

Score history is stored for long-term trend analysis (minimum 52 weeks retained).

### Public API

Unauthenticated: `GET https://app.mentd.com/api/score?domain=example.com` returns the composite score and tier label. Rate limited to 20 requests per day per IP.

Authenticated: full breakdown with per-component scores and 12-week history.

### Changelog

**March 2026** — AI Discovery Score v1 published. Standardized the metric name, published the public methodology, and wired the score into the dashboard, public API, and Brand Checker tool.
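A minimal client sketch for the unauthenticated endpoint. The JSON field names (`score`, `tier`) are an assumed shape inferred from the description ("composite score and tier label"); the actual response schema is not documented here.

```typescript
// Assumed response shape for the unauthenticated score endpoint.
type ScoreResponse = { score: number; tier: string };

// Build the documented URL with the domain as a query parameter.
function scoreUrl(domain: string): string {
  const url = new URL("https://app.mentd.com/api/score");
  url.searchParams.set("domain", domain);
  return url.toString();
}

async function fetchDiscoveryScore(domain: string): Promise<ScoreResponse> {
  const res = await fetch(scoreUrl(domain));
  if (!res.ok) {
    // A 429 here likely means the 20 requests/day/IP limit was hit.
    throw new Error(`Score request failed: ${res.status}`);
  }
  return (await res.json()) as ScoreResponse;
}
```

`scoreUrl` handles query-string encoding, so domains with unusual characters are escaped safely.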
---

## Glossary of AI Visibility Terms

### Generative Engine Optimization (GEO)

Generative Engine Optimization (GEO) is the practice of optimizing digital content so that it is discovered, cited, and accurately represented by generative AI systems such as ChatGPT, Perplexity AI, Google AI Overviews, Claude, and similar large language model (LLM)-powered answer engines. Unlike traditional SEO, which targets search engine ranking algorithms, GEO focuses on the training data, retrieval augmentation, and content quality signals that influence how AI systems select and cite sources when generating answers to user queries.

**Key GEO optimization signals:**

- Structured data and schema markup (FAQ, HowTo, Article, Product)
- Direct, authoritative answers to specific questions
- High citation quality from other authoritative sources
- Brand consistency across crawlable web content
- AI accessibility (robots.txt configuration, structured content, clean HTML)

**GEO vs. SEO:** SEO optimizes for ranking positions in traditional search results. GEO optimizes for citation frequency, accuracy, and prominence in AI-generated answers. The two are complementary — a page that ranks well in Google tends to be more accessible to AI systems, but GEO requires additional optimizations, like structured answers and brand entity consistency, that SEO alone does not address.

**Related terms:** AEO, LLMO, AI Share of Voice, Citation Rate, AI Discovery Score

---

### Answer Engine Optimization (AEO)

Answer Engine Optimization (AEO) is the practice of structuring content to perform well in answer engines — platforms that generate direct answers to user queries rather than returning a list of blue links. This includes AI-powered platforms like ChatGPT, Perplexity, and Google AI Overviews. AEO is closely related to GEO (Generative Engine Optimization) and LLMO (Large Language Model Optimization).
The terms are sometimes used interchangeably, though practitioners tend to use:

- **AEO** when referring broadly to any answer engine (including older featured snippet optimization)
- **GEO** when referring specifically to LLM-based generative answer systems
- **LLMO** when the focus is on optimizing for the training and retrieval mechanisms of large language models

**AEO best practices:**

- Use question-and-answer format in content
- Provide concise, direct answers within the first 100 words of sections
- Include FAQ schema markup
- Build entity authority through consistent brand mentions across the web
- Monitor AI citation rates with tools like Mentd.

---

### Large Language Model Optimization (LLMO)

Large Language Model Optimization (LLMO) is the practice of optimizing websites, content, and digital brand presence so that large language models (LLMs) accurately discover, represent, and recommend the brand in AI-generated responses. LLMO encompasses both on-site optimization (making content AI-accessible and citation-worthy) and off-site work (building the citation graph that influences AI training and retrieval).

**Key LLMO concepts:**

- **AI crawl accessibility** — ensuring GPTBot, ClaudeBot, PerplexityBot, and similar AI web crawlers can access key pages
- **Entity consistency** — having a clear, consistent brand identity across all crawlable web content so AI systems can accurately represent the brand
- **Citation quality** — building inbound citations from high-authority sources that AI systems are likely to use as training or retrieval data
- **Content authority** — producing original, factual content that AI systems are likely to cite as a primary source

**LLMO vs. GEO vs. AEO:** These terms overlap significantly.
LLMO is the most technical framing (focused on the LLM mechanism), GEO is the most commonly used marketing term (focused on generative engines as a channel), and AEO is the broadest term (covering any answer engine, including pre-LLM systems).

---

### AI Share of Voice

AI Share of Voice measures what percentage of AI-generated answers about a given topic or keyword category include a specific brand, compared to competitor brands. For example: if a brand appears in 40 out of 100 tracked AI answers about "project management software," while a competitor appears in 65 out of 100, the brand has 40% AI Share of Voice vs. the competitor's 65% (measured as the percent of answers each brand appears in; computed as a relative share of total citations, the split would be roughly 38% vs. 62%).

**How Mentd. measures AI Share of Voice:**

- Track a set of prompts relevant to the brand's category
- Run those prompts across multiple AI engines (ChatGPT, Perplexity, Claude, Gemini)
- Count citation frequency per brand per prompt set
- Calculate share as: (brand citations) / (total citations for category) × 100

AI Share of Voice is distinct from traditional share of voice metrics because AI answers are not always competitive — multiple brands can be cited in the same answer. Mentd. tracks both absolute citation rate and relative share.

---

### Citation Rate

Citation Rate is the percentage of tracked AI answers that directly cite a specific domain as a source. It is the highest-weight component in the Mentd. AI Discovery Score (30%). A citation occurs when an AI system explicitly attributes a piece of information to a domain, typically by naming the site, quoting from it, or linking to it (on platforms that support source links, like Perplexity and Google AI Overviews).
**Citation Rate formula:**

`Citation Rate = (prompts where domain is cited) / (total tracked prompts) × 100`

**Improving Citation Rate:**

- Publish original research, data, and statistics that AI systems can cite
- Use schema markup to make content machine-readable
- Build domain authority through high-quality external citations
- Ensure AI crawlers can access key pages (check AI Access Rate)
- Track which prompts cite competitors but not your brand (Source Gap Analysis)

---

### AI Access Rate

AI Access Rate measures the percentage of a site's key pages that are accessible to verified AI crawlers. It is a 15% component of the Mentd. AI Discovery Score. A page is considered AI-accessible if:

1. robots.txt allows the relevant AI crawlers (GPTBot, ClaudeBot, PerplexityBot, etc.)
2. The page returns a 200 response to AI crawler user agents
3. The content is not blocked by JavaScript rendering issues that prevent AI crawlers from seeing the text
4. Meta robots tags do not include `noindex` or `noai` directives

**AI Access Rate formula:**

`AI Access Rate = (key pages accessible to verified AI crawlers) / (total key pages) × 100`

A low AI Access Rate is often caused by an over-aggressive robots.txt configuration, rendering issues, or accidental `noai` directives. Mentd.'s free AI Access Checker tool audits any URL's accessibility in seconds.

---

### Bot Verification Tiers

Mentd. uses a three-tier confidence system for classifying AI bots observed in server logs and beacon data:

**Verified** — The bot's identity has been confirmed via a reverse DNS lookup matching the declared user agent, combined with IP range verification against the AI provider's published ASN or IP allowlist. Example: GPTBot resolving to openai.com via reverse DNS.

**Likely** — The user agent matches a known AI crawler pattern, but full reverse DNS verification is incomplete or the IP does not match a known provider ASN. May indicate a legitimate crawler from a smaller provider or a pre-publication crawl.
**Unknown** — The request shows characteristics of AI crawler behavior (structured content access patterns, unusual crawl timing), but the user agent does not match any known AI crawler. May indicate a new or undisclosed crawler, or a bot masking its identity.

Bot verification data feeds the AI Access Rate component of the AI Discovery Score and is surfaced in the Crawlers dashboard in the Mentd. app.

---

### AI Discovery Score

The AI Discovery Score is a 0–100 composite metric published by Mentd. measuring how discoverable a domain is across modern AI systems. It is designed to serve the same category-defining role for AI visibility that Domain Authority (Moz) serves for traditional SEO — a single, trusted, methodology-transparent number that summarizes complex signal data.

See the full methodology at: https://mentd.com/methodology

---

## How Mentd. Works

### The Problem

AI systems — ChatGPT, Perplexity, Claude, Gemini, and emerging AI search surfaces — are handling an increasing percentage of queries that used to send traffic to websites. Traffic that used to come from Google organic search is being absorbed by AI-generated answers that don't always link back to sources. Brands that aren't visible in AI answers are losing click share without realizing it. The organic traffic crisis is real, and it's accelerating.

### The Gap

Most monitoring tools stop at monitoring. They tell you whether you appeared in an AI answer last Tuesday. They don't tell you:

- Why a competitor is being cited and you aren't
- Which of your pages AI systems can't access
- How much human traffic you're actually getting from AI surfaces
- What specific actions will improve your citations next month

### The Mentd. Approach

Mentd. joins four data layers:

**Layer 1: AI Visibility** — Prompt tracking across ChatGPT, Perplexity, Claude, and Gemini. Tracks which prompts cite your domain, which cite competitors, and how that changes over time.
Measures Citation Rate, Mention Frequency, Sentiment, and Cross-Platform Coverage.

**Layer 2: AI Access** — Bot and crawler analytics from server logs, edge adapters, and JS beacon data. Classifies AI bots with three-tier verification confidence (Verified / Likely / Unknown). Identifies which pages are blocked from AI crawlers.

**Layer 3: AI Traffic** — Deterministic attribution of human visits from AI surfaces. When a user clicks from a Perplexity answer to a site, the referral is captured and attributed with high confidence. Traffic is connected to conversion events for ROI measurement.

**Layer 4: AI Impact** — Connecting visibility and traffic data to business outcomes. Which pages got cited and then drove signups? Which prompts correlate with revenue? The Attribution & ROI view closes the loop from citation to conversion.

### The Output

Insights from all four layers feed the **Opportunity Queue** — a prioritized list of actions ranked by estimated impact. Each opportunity is grounded in the account's actual data, not generic best practices. Examples:

- "3 key product pages are blocked from GPTBot — fix robots.txt to improve AI Access Rate"
- "Competitor X is cited in 12 prompts where you're not — add content targeting these gaps"
- "This page has a 40% citation rate but 0% conversion attribution — review conversion setup"
- "AI traffic to your pricing page increased 3x last week — capitalize with a prompt cluster"

The Opportunity Queue is the action layer. The dashboard is the context layer. Together, they form the AI Discovery OS.
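The deterministic check behind Layer 3 amounts to matching a click's referrer against known AI surface hostnames. Here is a hypothetical sketch of such a matcher; the hostname list and function name are illustrative assumptions, not Mentd.'s actual rule set.

```typescript
// Illustrative AI-surface hostname list (an assumption, not Mentd.'s matcher).
const AI_REFERRER_HOSTS = [
  "chat.openai.com",
  "chatgpt.com",
  "www.perplexity.ai",
  "perplexity.ai",
  "claude.ai",
  "gemini.google.com",
];

// Returns the matched AI surface hostname, or null for non-AI referrers.
function detectAiReferrer(referrerUrl: string): string | null {
  try {
    const host = new URL(referrerUrl).hostname.toLowerCase();
    return (
      AI_REFERRER_HOSTS.find((h) => host === h || host.endsWith("." + h)) ??
      null
    );
  } catch {
    return null; // empty or malformed referrer → no deterministic match
  }
}
```

Exact-or-subdomain matching avoids false positives like `notperplexity.ai`, and the `try/catch` treats a missing referrer as "no deterministic signal" rather than an error.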
### Setup Paths

**Level 1 (30 minutes, no engineering required):**

- Install the JS beacon snippet (one script tag, works on any site)
- Connect Google Analytics 4 for a traffic baseline
- Connect Google Search Console for keyword and performance context

**Level 2 (engineering touch required):**

- Next.js middleware adapter for server-side AI referral detection
- Netlify edge function adapter for edge-level collection
- Cloudflare Workers adapter for global edge ingest

**Level 3 (DevOps setup):**

- Server log ingest (Nginx, Apache, or custom shipper)
- Direct API ingest for custom server infrastructure

---

## Product Features

### AI Visibility Monitoring

Track where your brand appears in AI-generated answers. Run prompts across ChatGPT, Perplexity, Claude, and Gemini simultaneously. Monitor citation rate, mention frequency, sentiment, and share of voice vs. competitors. Track answer diffs over time to catch changes in how AI systems represent your brand.

### AI Access & Crawler Analytics

See every AI bot that touched your site. Verify bot identity via reverse DNS and IP range checking. Identify which pages are blocked from AI crawlers and why. Track crawl frequency, page coverage, and access patterns over time.

### Opportunity Queue

A prioritized action queue with 10 opportunity types, scored by estimated impact. Every insight in Mentd. ends in an action. The Opportunity Queue surfaces what to do next, grounded in your specific data.

### Page Workbench

Per-page AI intelligence center. See citation history, crawl status, opportunity flags, and recommended actions for any page. Benchmark pages against each other and against competitor pages.

### Source Gap Analysis

Identify which external sources shape AI answers about your category — and which sources link to competitors but not to you. Source Gap Analysis powers off-page recommendations that target the citation graph, not just on-page content.
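The robots.txt side of the AI access checks described above can be sketched as a simplified allow/disallow test. This is a sketch only: real robots.txt matching also honors `Allow` rules, longest-match precedence, and `*`/`$` wildcards, none of which are implemented here.

```typescript
// A robots.txt group: one or more User-agent lines plus their Disallow rules.
type Group = { agents: string[]; disallow: string[] };

function parseGroups(robotsTxt: string): Group[] {
  const groups: Group[] = [];
  let current: Group | null = null;
  let seenRule = false;
  for (const raw of robotsTxt.split(/\r?\n/)) {
    const line = raw.replace(/#.*/, "").trim(); // strip comments
    const idx = line.indexOf(":");
    if (idx < 0) continue;
    const key = line.slice(0, idx).trim().toLowerCase();
    const value = line.slice(idx + 1).trim();
    if (key === "user-agent") {
      // A User-agent line after rules starts a new group.
      if (!current || seenRule) {
        current = { agents: [], disallow: [] };
        groups.push(current);
        seenRule = false;
      }
      current.agents.push(value.toLowerCase());
    } else if (key === "disallow" && current) {
      current.disallow.push(value);
      seenRule = true;
    }
  }
  return groups;
}

// Simplified check: prefer a group naming this agent, else the "*" group,
// then apply only Disallow prefix rules (empty Disallow means allow all).
function isPathAllowed(robotsTxt: string, userAgent: string, path: string): boolean {
  const groups = parseGroups(robotsTxt);
  const ua = userAgent.toLowerCase();
  const group =
    groups.find((g) => g.agents.includes(ua)) ??
    groups.find((g) => g.agents.includes("*"));
  if (!group) return true;
  return !group.disallow.some((rule) => rule !== "" && path.startsWith(rule));
}
```

With a file that disallows `/private/` for GPTBot and everything for other agents, `isPathAllowed(robots, "GPTBot", "/blog/post")` is true while `isPathAllowed(robots, "ClaudeBot", "/blog/post")` is false.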
### Attribution & ROI

Three-tier attribution connecting AI citations to human traffic to conversions:

- **Tier 1 (Deterministic)** — direct AI referral links with high confidence
- **Tier 2 (Likely)** — inferred from session patterns and prompt timing
- **Tier 3 (Modeled)** — statistical attribution for sessions without clear referral signals

### AI Copilot & Reports

Turn insights into work artifacts. The AI Copilot generates content briefs grounded in your account data — not generic best practices. Executive report generation produces board-ready PDFs and weekly digest emails showing AI discovery progress.

---

## Free Tools

### AI Brand Checker

Check any domain's AI Discovery Score in seconds. Free, instant, no account required. Generates a 0–100 score with tier classification, citation rate, and platform coverage data.

Available at: https://mentd.com/tools/brand-checker

### AI Access Checker

Test whether AI crawlers can reach your content. Checks robots.txt configuration, page accessibility, rendering issues, and meta tag directives for any URL.

Available at: https://mentd.com/tools/ai-access-checker

### AI Robots.txt Checker

Validate your robots.txt configuration for all major AI crawlers, including GPTBot, ClaudeBot, PerplexityBot, Google-Extended, and more. See which crawlers are allowed or blocked and get specific recommendations.

Available at: https://mentd.com/tools/robots-checker

### Page AI-Readiness Checker

Score any page for AI citation potential across 25+ checks, including schema markup, content structure, crawl accessibility, brand entity clarity, and citation signals.

Available at: https://mentd.com/tools/page-readiness

---

## Pricing

### Free — $0/month

For individuals exploring AI visibility. 1 domain, 10 tracked prompts, 3 AI Discovery Score checks per month, view-only Opportunity Queue. No credit card required.

### Starter — $149/month ($119/month annual)

For solo marketers tracking one brand.
1 domain, 100 tracked prompts, 1,000 prompt runs per month, unlimited AI Discovery Score checks, 500 audit pages, full Opportunity Queue, Page Workbench. 1 team seat.

### Growth — $399/month ($319/month annual)

For growing marketing teams. 5 domains, 500 tracked prompts, 5,000 prompt runs per month, 2,000 audit pages, Source Gap Analysis, Attribution & ROI (full), Executive reports, AI Copilot. 3 team seats.

### Agency — $999/month ($799/month annual)

For agencies and large brands. 25 domains, 2,000 tracked prompts, 20,000 prompt runs per month, 10,000 audit pages, client workspaces, white-label exports, advanced attribution and benchmarking. 10 team seats.

---

## Technical Details

- **Platform:** Next.js 16, TypeScript, Drizzle ORM, Neon Postgres, Cloudflare R2
- **Auth:** Better Auth (email/password, SSO planned)
- **LLM providers:** ChatGPT (OpenAI), Perplexity, Claude (Anthropic), Gemini (Google)
- **Deployment:** Netlify
- **Bot classification:** Three-tier reverse DNS + IP ASN verification
- **Attribution:** Deterministic referral capture via JS beacon + server log correlation

---

*Mentd. — The AI Discovery OS for marketers.*
*https://mentd.com*