What is GEO? Generative Engine Optimisation for 2026 AI Search

What Makes AI Search Different From Traditional Search in 2026?
You've probably noticed something strange in your analytics lately. Traffic from "chatgpt.com" or "perplexity.ai" showing up alongside the usual Google referrals. Maybe you've searched for something and received an AI-generated answer with a handful of citations instead of ten blue links. This shift represents a fundamental change in how people find information online.
Traditional search engines return lists of websites and let you choose which to visit. AI search engines synthesise information from multiple sources, generate a complete answer, and cite only 2-7 domains per response according to recent analysis. This compression creates both opportunity and challenge. Getting cited means your content appears directly in the answer. Not getting cited means you're invisible, regardless of how well you rank on Google.
I've been experimenting with optimising older content for AI search engines throughout late 2025 and early 2026. This article shares my experimental framework, the seven metrics I'm testing, and the early results from applying these optimisation techniques. The hardest part? I won't know if this actually works for months, but I'm documenting the process transparently.
What is GEO?
GEO stands for Generative Engine Optimisation—the practice of adapting digital content to improve visibility in results produced by generative AI platforms like ChatGPT, Perplexity, Claude, and Google AI Overviews. A six-person research team led by Princeton University academics introduced GEO in an academic paper first published in November 2023 and later presented at KDD 2024 (the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining).
The term describes strategies intended to influence how large language models retrieve, summarise, and present information when responding to user queries. Unlike traditional SEO that optimises for click-through rates and search rankings, GEO optimises for citation rates and answer inclusion. According to the Princeton research paper, the top optimisation methods can improve AI visibility by 30-40% compared to unoptimised content.
You can dominate Google's first page whilst remaining completely absent when someone asks ChatGPT for recommendations. This represents the fundamental disconnect between traditional SEO success and AI search visibility.
Only 10% of what ChatGPT cites for a given query appears in Google's top 10 organic results. That means 90% of AI citations come from sources outside Google's top rankings.
Why Does GEO Matter in 2026?
The numbers tell a compelling story about changing user behaviour. McKinsey reported in October 2025 that 50% of consumers were already using AI-powered search intentionally as their primary way to find information and make buying decisions. Gartner predicts traditional search engine volume will drop 25% by 2026 as AI chatbots and virtual agents become substitute answer engines.
The adoption rate accelerated dramatically throughout 2025. ChatGPT reached 800 million weekly active users by late 2025 according to the ChatGPT Users Statistics (January 2026) – Growth & Usage Data, whilst the platform processes over 2 billion daily queries. Perplexity AI grew to 22 million monthly active users and handled 780 million queries in May 2025, representing 239% growth in query volume from August 2024.
ChatGPT holds 81% AI chatbot market share as of January 2026 according to AI search statistics, whilst Search Engine Land reports that ChatGPT processes 72 billion messages monthly. The platform's average response contains 1,686 characters compared to 997 for Google according to ChatGPT usage analysis.
For B2B contexts specifically, industry research shows that 47% of B2B buyers now use AI for vendor research, with AI-referred visitors converting at rates 23 times higher than traditional organic search. If you're creating content in competitive spaces, you need to understand how AI engines select sources for citations.
How I'm Approaching GEO Experimentation
I've been tinkering with content optimisation since late 2025 based on research about how AI engines extract and cite content. My approach analyses existing blog posts against seven factors I theorised might influence AI visibility, then suggests specific transformations to improve performance. Throughout December 2025 and January 2026, I've been applying these optimisations to older content and tracking metrics before and after changes.
This is all experimental. I don't know yet if higher entity density actually improves AI ranking. I don't know if citation blocks lead to more ChatGPT mentions. The research suggests these factors matter, so I'm testing those assumptions systematically. This is early-stage experimentation—I'm sharing the process, not claiming I've solved GEO.
My approach to this centres on making each paragraph stand alone without context, increasing the density of named entities and verifiable facts, and providing complete answers in the first 40-60 words of each section. I'm also converting comparison prose to tables, adding FAQ sections with verified user questions, and improving source attribution throughout content.
The Seven Metrics I'm Testing
Based on academic research on GEO and analysis of how AI engines structure responses, I identified seven factors that appeared to correlate with citation likelihood. I'm measuring these before and after optimisation to build data on what actually moves the needle.
Semantic Completeness
Can each paragraph stand alone without context? Does it avoid references like "as mentioned above" or "see previous section"? Would an extracted chunk make sense in isolation? I'm targeting 85%+ on this metric, measured by analysing how many paragraphs require external context to be understood.
AI engines extract snippets from content to build responses. If a paragraph references previous sections, it becomes unusable as a standalone citation. I've been rewriting paragraphs to include all necessary context internally, which should make them more citation-worthy.
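To make this measurable, here's a minimal sketch of the kind of check I have in mind: it flags paragraphs containing context-dependent phrases and reports the share that stand alone. The phrase list and the markdown-splitting assumptions are illustrative, not a standard.
```ts
// Sketch: estimate semantic completeness as the share of paragraphs
// free of context-dependent references. The phrase list is an
// illustrative assumption, not an exhaustive standard.
const CONTEXT_REFS = [
  'as mentioned above',
  'see previous section',
  'as discussed earlier',
  'this approach',
  'these methods',
];

function semanticCompleteness(markdown: string): number {
  const paragraphs = markdown
    .split(/\n\s*\n/)
    .map((p) => p.trim())
    .filter((p) => p.length > 0 && !p.startsWith('#'));

  const standalone = paragraphs.filter(
    (p) => !CONTEXT_REFS.some((ref) => p.toLowerCase().includes(ref))
  );

  return paragraphs.length === 0
    ? 0
    : Math.round((standalone.length / paragraphs.length) * 100);
}
```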
Entity Recognition and Knowledge Graph Density
Are named entities—brands, products, people, organisations, concepts—clearly identified? Are entities connected and relevant to the topic? Are entities verifiable in knowledge graphs? I'm targeting 15-20 entities per 1,000 words, based on patterns observed in highly-cited content.
This metric proved harder to improve than expected. I added specific product names with version numbers, referenced actual companies and research organisations, and included proper nouns wherever relevant. Entity density increased, but I'm still validating whether this correlates with better AI visibility.
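Measuring it is also non-trivial. Proper entity extraction needs an NER model, but a crude heuristic gives a first approximation. A sketch, with the caveat that counting capitalised mid-sentence words only loosely tracks real entities:
```ts
// Crude heuristic: count capitalised words that are not sentence-initial
// as candidate named entities, normalised per 1,000 words. A real
// pipeline would use an NER model; this is only a first approximation.
function entityDensity(text: string): number {
  const words = text.split(/\s+/).filter(Boolean);
  let candidates = 0;

  for (let i = 1; i < words.length; i++) {
    const sentenceStart = /[.!?]$/.test(words[i - 1]);
    if (!sentenceStart && /^[A-Z][a-zA-Z]+/.test(words[i])) {
      candidates++;
    }
  }

  return words.length === 0 ? 0 : (candidates / words.length) * 1000;
}
```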
Factual Density with Attribution
Are statistics, percentages, or numerical data present? Are sources explicitly named? Are claims verifiable and timestamped? I'm targeting one data point per 150-200 words with explicit source attribution like "According to Gartner's 2025 report..." rather than vague references.
Princeton research indicates that adding statistics significantly boosts AI citations. I replaced adjectives with measurements, added percentages where previously using terms like "significantly" or "many", and included specific dates instead of "recently".
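Checking the one-data-point-per-150-200-words target is easier to automate. A minimal sketch that counts percentages, years, and other numerals, then reports the average number of words between data points (the patterns are illustrative assumptions):
```ts
// Sketch: count numeric data points (percentages, years, plain numbers)
// and report how many words separate them on average.
// Target: one data point per 150-200 words.
function wordsPerDataPoint(text: string): number {
  const words = text.split(/\s+/).filter(Boolean).length;
  const dataPoints =
    (text.match(/\d+(\.\d+)?%|\b(19|20)\d{2}\b|\b\d[\d,]*\b/g) ?? []).length;
  return dataPoints === 0 ? Infinity : Math.round(words / dataPoints);
}
```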
Answer Completeness
Does each section answer an implied question immediately? Is the answer buried in paragraph three, or does it lead? Are there FAQ opportunities from real user questions? I'm targeting direct answers in the first 40-60 words of each section—what I call "citation blocks".
Before optimisation, most sections built up to answers gradually. After optimisation, each section starts with a complete, standalone answer that AI engines can extract and cite directly. The remaining content provides context and supporting details.
Authority Signals
Are there visible author credentials? Are expert quotes included with full credentials? Are third-party sources cited? Is there Knowledge Graph presence? I'm targeting 90%+ on this metric by adding proper attribution and linking to authoritative sources.
For my blog specifically, this means referencing my Full Stack Software Development Diploma when discussing development topics, linking to actual projects I've built like Gamers Hub or the Python Avengers Data Dashboard, and citing external research with publication dates and URLs.
Content Structure for AI Extraction
Single H1, sequential H2/H3 hierarchy? Sections between 75 and 300 words each? Tables used for comparisons? Lists for processes? Headers that mirror search queries? I'm targeting 100% compliance with structural best practices that make content machine-readable.
I rewrote headers to answer specific questions ("What is X?" instead of "Introduction to X"), converted comparison paragraphs to tables, and broke longer sections into digestible chunks. Each structural change aims to make content easier for AI engines to parse and extract.
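To keep these structural checks repeatable across articles, a small linter helps. A sketch against the targets above (exactly one H1, sections of 75-300 words):
```ts
// Sketch: lint a markdown document for GEO structure targets --
// exactly one H1, and every section between 75 and 300 words.
function lintStructure(markdown: string): string[] {
  const issues: string[] = [];
  const lines = markdown.split('\n');

  const h1Count = lines.filter((l) => /^# [^#]/.test(l)).length;
  if (h1Count !== 1) issues.push(`expected 1 H1, found ${h1Count}`);

  let heading = '(intro)';
  let wordCount = 0;
  const flush = () => {
    if (wordCount > 0 && (wordCount < 75 || wordCount > 300)) {
      issues.push(`"${heading}" is ${wordCount} words (target 75-300)`);
    }
  };

  for (const line of lines) {
    if (/^#{1,6} /.test(line)) {
      flush();
      heading = line.replace(/^#+\s*/, '');
      wordCount = 0;
    } else {
      wordCount += line.split(/\s+/).filter(Boolean).length;
    }
  }
  flush();
  return issues;
}
```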
Freshness and Recency Signals
Is there a visible "last updated" date? Are statistics and examples current within 12 months? Are timestamps like "as of January 2026" present throughout content? I'm targeting updates within 30 days and current examples throughout.
This metric requires ongoing maintenance. I added "last updated" dates to all content, replaced outdated statistics with current 2025-2026 data, and included specific timestamps like "as of January 2026" whenever making claims about current state.
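To surface stale claims during reviews, a quick scan for dated references helps. A minimal sketch that flags "as of Month Year" phrases older than 12 months (the pattern and the window are my own assumptions):
```ts
// Sketch: flag "as of <Month> <Year>" style timestamps older than
// 12 months, so stale claims surface during content reviews.
const MONTHS = ['January', 'February', 'March', 'April', 'May', 'June',
  'July', 'August', 'September', 'October', 'November', 'December'];

function findStaleTimestamps(text: string, now = new Date()): string[] {
  const pattern = new RegExp(`as of (${MONTHS.join('|')}) (\\d{4})`, 'gi');
  const stale: string[] = [];
  for (const match of text.matchAll(pattern)) {
    const monthIndex = MONTHS.findIndex(
      (m) => m.toLowerCase() === match[1].toLowerCase()
    );
    const stamp = new Date(Number(match[2]), monthIndex, 1);
    const ageMonths =
      (now.getFullYear() - stamp.getFullYear()) * 12 +
      (now.getMonth() - stamp.getMonth());
    if (ageMonths > 12) stale.push(match[0]);
  }
  return stale;
}
```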
Early Results From GEO Experiments
I've applied this optimisation approach to several older blog posts throughout December 2025 and January 2026. The metric improvements are measurable. Whether these improvements actually correlate with better AI citation rates remains to be seen—I'll need months of data collection to validate that hypothesis.
These metrics measure content structure and AI-readiness, not actual citation rates. Higher scores indicate content better structured for AI extraction, but don't guarantee improved visibility in ChatGPT or Perplexity responses. Real validation requires 3-6 months of tracking.
Here's what the numbers show for content that went through the optimisation process:
| Metric | Before Optimisation | After Optimisation | Improvement |
|---|---|---|---|
| Semantic Completeness | 60% | 88% | +28 percentage points |
| Entity Density | 50% | 82% | +32 percentage points |
| Factual Density | 42% | 88% | +46 percentage points |
| Answer Completeness | 65% | 90% | +25 percentage points |
| Authority Signals | 70% | 92% | +22 percentage points |
| Structure Quality | 75% | 100% | +25 percentage points |
| Freshness Signals | 45% | 95% | +50 percentage points |
| Overall AI-Readiness | 54% | 84% | +30 percentage points |
The transformations I applied included:
- Added citation blocks to 4-6 major sections per article
- Converted 3-5 comparison paragraphs to markdown tables
- Replaced 15-20 vague adjectives with specific data points
- Restructured 5-7 sections for answer-first format
- Added FAQ sections with 4-5 verified questions from actual user searches
- Improved 20-30 entity references with attribution
- Added timestamps to 10-15 statistical claims
What I cannot tell you yet: whether these improvements lead to more ChatGPT mentions, higher Perplexity citation rates, or increased visibility in Google AI Overviews. That validation requires time.
You cannot instantly test if GEO optimisations work. Unlike A/B testing ads with 48-hour feedback loops, GEO requires 3-6 months of data collection. Search engines need time to crawl and index changes. AI platforms don't publish indexing schedules. This is planting seeds and waiting for harvest, not rapid iteration.
The Validation Timeline Challenge
Here's the part that makes GEO experimentation difficult: you cannot instantly test if optimisations work. Traditional SEO lets you track rankings within weeks. GEO operates on different timescales with unpublished indexing schedules.
Search engines need time to crawl, index, and evaluate content changes. Google typically takes 4-12 weeks to settle rankings after major content updates. AI search engines like ChatGPT and Perplexity don't publish their indexing schedules—I have no idea how long before optimised content appears in their responses.
I'm tracking several analytics streams to measure impact:
- Traditional Google rankings for target keywords (measurable within 6-8 weeks)
- Visibility in ChatGPT responses through manual testing (ongoing random sampling)
- Appearance in Google AI Overviews (tracking weekly for changes)
- Perplexity citation rates (monitoring query responses monthly)
- Organic traffic changes (requiring 3-6 months to identify trends)
- Referral traffic from AI platforms (tracking chatgpt.com and perplexity.ai in analytics; see the classification sketch below)
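To keep that referral stream consistent, I bucket referrer hostnames by platform before counting. A minimal sketch, assuming the hostname list below, which will need extending as platforms add domains:
```ts
// Sketch: classify a referrer hostname into an AI-platform bucket.
// The hostname list is an assumption and will need extending as
// platforms add domains.
const AI_REFERRERS: Record<string, string> = {
  'chatgpt.com': 'ChatGPT',
  'chat.openai.com': 'ChatGPT',
  'perplexity.ai': 'Perplexity',
  'gemini.google.com': 'Gemini',
  'copilot.microsoft.com': 'Copilot',
};

function classifyReferrer(referrerUrl: string): string {
  try {
    const host = new URL(referrerUrl).hostname.replace(/^www\./, '');
    return AI_REFERRERS[host] ?? 'Other';
  } catch {
    return 'Unknown'; // empty or malformed referrer string
  }
}
```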
The honest assessment? I made these optimisations in December 2025 and January 2026. I'll need to revisit this analysis in June or July 2026 to see actual impact. For now, I can only report metric improvements and hypothesis formation, not proven results.
This isn't like A/B testing advertisements where you get results in 48 hours. It's more like planting seeds and documenting soil quality and placement whilst waiting months to measure the harvest. The metric scores suggest improved AI-readiness, but citation rates remain theoretical until enough time passes for validation.
How GEO Relates to Traditional SEO
GEO and SEO complement rather than replace each other. Many of the same principles apply—quality content, clear structure, authoritative sources—but the optimisation targets differ. SEO optimises for click-through rates and rankings. GEO optimises for extraction and citation.
Research indicates that generative engines often pull from high-ranking content, making traditional SEO a foundation for GEO success. If your content doesn't rank well on Google, it's less likely to appear in AI training data or retrieval systems. Strong SEO provides the baseline visibility that GEO builds upon.
The structural requirements overlap significantly. Both benefit from clear hierarchy, semantic HTML, proper headings, and logical organisation. Both value authoritative sources and proper attribution. Both require regular updates to maintain freshness signals. Where they diverge is in content density and answer format.
SEO traditionally optimised for readability and engagement metrics—time on page, scroll depth, bounce rate. GEO optimises for extractability and self-contained information chunks. An SEO-optimised article might spread an answer across three paragraphs to increase time on page. A GEO-optimised article provides the complete answer in the first 40-60 words, then offers supporting context.
I'm treating GEO as an enhancement layer on top of solid SEO foundations. The content I'm optimising already ranked reasonably well for target keywords. The GEO framework adds citation blocks, increases entity density, and restructures for AI extraction whilst maintaining the readability and engagement that traditional SEO values.
What This Blog's Optimisation Reveals
I applied the GEO optimisation framework to this very article you're reading. That seemed appropriate—if I'm writing about content optimisation for AI search, the content itself should demonstrate those principles.
The process revealed interesting challenges in balancing AI-readiness with natural writing voice. Citation blocks work well for educational content but can feel mechanical if overused. Entity density requirements push towards more specific examples, which strengthens content but requires careful research to avoid fabrication. Factual density with attribution means finding verifiable statistics for claims rather than relying on general knowledge.
This article went through the seven-metric analysis. The results:
| Metric | Initial Draft | After Optimisation | Improvement |
|---|---|---|---|
| Semantic Completeness | 65% | 87% | +22 percentage points |
| Entity Density | 55% | 80% | +25 percentage points |
| Factual Density | 48% | 86% | +38 percentage points |
| Answer Completeness | 60% | 88% | +28 percentage points |
| Authority Signals | 75% | 90% | +15 percentage points |
| Structure Quality | 80% | 98% | +18 percentage points |
| Freshness Signals | 85% | 100% | +15 percentage points |
| Overall AI-Readiness | 62% | 86% | +24 percentage points |
The transformations applied to this article include:
- Citation blocks at the start of each major section
- Comparison prose converted to the metrics and results tables
- Specific statistics with source attribution and dates
- Internal links to my actual projects and portfolio work
- External links to authoritative research and industry analysis
- An FAQ section addressing common GEO questions
- Timestamps throughout referencing current data
Will this article get cited more frequently in AI responses than my previous work? Ask me in six months. For now, I can confirm that following the framework produces measurably more AI-ready content according to the metrics I'm tracking.
Common Challenges in GEO Implementation
Working through GEO optimisation on multiple articles revealed several recurring challenges worth documenting. These aren't theoretical problems; they're practical issues I encountered whilst applying the framework.
- Finding Current Statistics: Many topics lack recent, verifiable data. I spent hours searching for 2025-2026 statistics on AI search adoption because older 2023-2024 numbers felt outdated. Sometimes the data simply doesn't exist yet, forcing a choice between using slightly outdated statistics with clear dates or avoiding quantification entirely.
- Balancing Entity Density with Readability: Adding 15-20 entities per 1,000 words improves metrics but can make prose feel cluttered with proper nouns. I found myself rewriting sentences multiple times to incorporate specific product names, version numbers, and company names without sounding like a technical specification document.
- Creating Self-Contained Paragraphs: This might be the hardest structural change. Natural writing builds narratives across paragraphs. GEO-optimised writing requires each paragraph to stand alone, which fights against storytelling instincts. I caught myself using "this approach" or "these methods" dozens of times before realising each reference needed explicit context.
- Verifying All External Links: The framework requires that all external links work properly with HTTP 200 status codes. I discovered several broken links to research papers and industry reports that had changed URLs or been moved behind paywalls. Each required finding alternative sources or updated URLs.
- Table Conversion Decisions: Not every comparison benefits from table format. Converting prose to tables works brilliantly for product specifications or metric comparisons. It works poorly for nuanced discussions where context matters more than direct comparison. Learning when to use tables versus keeping prose required trial and error.
Those of you with a keen eye will still notice I use "this approach" or similar in places; sometimes it simply can't be avoided in certain contexts.
External (and internal) links also require constant checking: links break, get removed, or change URL structure over time, so it's important to verify them regularly.
What I'm Watching For in 2026
The GEO field remains new enough that best practices are still emerging through experimentation. I'm tracking several trends and questions throughout 2026 to refine my optimisation framework.
- AI Citation Patterns: Which content types get cited most frequently? Do how-to guides outperform theoretical discussions? Do data-heavy articles beat narrative storytelling? I'm analysing citation patterns across different content formats to identify what AI engines prefer.
- Platform-Specific Differences: ChatGPT, Perplexity, and Google AI Overviews may prioritise different signals. ChatGPT appears to favour conversational explanations. Perplexity emphasises research citations. Google AI Overviews seem to prefer structured data. I'm testing whether platform-specific optimisation outperforms general GEO best practices.
- Update Frequency Impact: Does updating content monthly improve citation rates compared to quarterly updates? The freshness signals metric suggests recent updates matter, but I'm validating whether frequent small updates outperform less frequent larger revisions.
- Internal Linking Structure: Do AI engines follow internal links when building context? I've added more internal links to my portfolio projects and game development projects, and I'm testing whether interconnected content clusters improve citation likelihood compared to standalone articles.
- Long-Form Versus Focused Content: The research suggests AI engines extract from both long-form guides and shorter, more focused articles. I'm comparing citation rates between 3,000-word deep dives and 800-word focused explainers targeting specific questions.
Bonus Section: What Role Does Schema Markup Play in GEO?
Schema markup provides structured data that helps both traditional search engines and AI platforms understand your content's context and meaning. This JSON-LD code sits in your page's `<head>` section, describing entities, relationships, and content types in machine-readable format.
For platforms like WordPress with Yoast or RankMath plugins, schema generation happens automatically. For custom-built sites using Next.js, React, or other frameworks, you'll need to implement it manually.
The intersection between schema and GEO centres on machine readability. Google's Search Generative Experience (SGE) prioritises context-rich, schema-tagged content according to Google Search Central. When you mark up products, organisations, people, or concepts using schema.org vocabulary, you're explicitly telling AI systems what entities exist, what relationships connect them, and what properties define them.
Research on AI parsability shows that generative engines like ChatGPT and Google's Gemini don't just crawl content—they parse structured data first. Schema markup acts as metadata fuel for vector databases, embeddings, and retrieval-augmented generation systems that power AI search.
Relixir's 2025 analysis of 50 domains found that pages with properly implemented FAQ, HowTo, and Product schema markup achieved a median 22% increase in AI citations compared to pages without structured data. This citation lift was consistent across both B2B and e-commerce verticals.
Which Schema Types Deliver Measurable GEO Impact?
Based on the Relixir 2025 analysis of 50 domains, five schema types demonstrated measurable impact on AI citation rates:
- FAQPage Schema marks up question-and-answer pairs, making them easy for AI engines to extract as citation blocks. Search Engine Land identifies FAQ schema as one of the highest-impact changes for AI visibility. When someone asks ChatGPT or Perplexity a question that matches your FAQ schema, the platform can pull the structured answer directly rather than parsing paragraphs.
- Article Schema identifies authorship, publication dates, and content categorisation. AI engines use this to assess authority signals and freshness. The Relixir study found that Article schema with properly structured author credentials and publication timestamps improved citation likelihood, particularly for educational and research content.
- HowTo Schema structures step-by-step instructions with explicit ordering, time requirements, and materials needed. This maps directly to how AI engines present procedural information—they often generate numbered lists based on the sequential steps you've defined in schema markup. Analysis shows HowTo content with proper schema appears more frequently in zero-click searches where AI provides complete answers without requiring users to visit websites.
- Product Schema (for e-commerce or product-focused content) includes pricing, availability, ratings, and specifications. Whilst AI engines cannot process real-time transactions as of January 2026, they frequently cite product information when users research purchases or compare options. GEO case studies show that Shopify uses Product schema across thousands of listings, enabling richer AI-generated product comparisons.
- Organisation Schema establishes your business entity in knowledge graphs. This includes logo, contact information, social profiles, author profiles, and relationships to content you publish. Research on higher education GEO found that Organisation schema establishes institutional credibility, with FAQ schema demonstrating the highest citation probability amongst tested schema types.
How Do You Implement Schema Markup on Next.js Sites?
For Next.js sites specifically (which is what my blog runs on), schema implementation happens through JSON-LD using the Script component. The official Next.js documentation covers this in its guides section. You're already declaring standard metadata like title, description, and Open Graph tags. Schema markup adds another layer using JSON-LD format.
Here's the standard implementation pattern for Next.js:
```jsx
import Script from 'next/script'

export default function Page() {
  const jsonLd = {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline: 'Your Article Title',
    description: 'Your article description',
    author: {
      '@type': 'Person',
      name: 'Author Name',
      url: 'https://example.com/about'
    },
    datePublished: '2026-01-19T09:00:00+00:00',
    dateModified: '2026-01-19T09:00:00+00:00',
    publisher: {
      '@type': 'Organization',
      name: 'Your Site Name',
      logo: {
        '@type': 'ImageObject',
        url: 'https://example.com/logo.png'
      }
    }
  }

  return (
    <>
      <Script
        id="article-schema"
        type="application/ld+json"
        dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}
      />
      {/* Your page content */}
    </>
  )
}
```
For TypeScript projects, install the schema-dts package to get proper type definitions:
```bash
npm install schema-dts --save-dev
```
Then import types for schema validation:
```ts
import { WithContext, Article } from 'schema-dts'

const jsonLd: WithContext<Article> = {
  '@context': 'https://schema.org',
  '@type': 'Article',
  headline: 'Your Article Title',
  // TypeScript will now validate all properties
}
```
For FAQ schema marking up multiple questions:
```ts
const faqSchema = {
  '@context': 'https://schema.org',
  '@type': 'FAQPage',
  mainEntity: [
    {
      '@type': 'Question',
      name: 'What is Generative Engine Optimisation?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'GEO is the practice of optimising content for citation by AI search engines like ChatGPT, Perplexity, and Google AI Overviews, introduced by Princeton University researchers in November 2023.'
      }
    },
    {
      '@type': 'Question',
      name: 'How does GEO differ from traditional SEO?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'Traditional SEO targets ranking in search results lists, whilst GEO focuses on getting cited within AI-generated answers that synthesise information from multiple sources.'
      }
    }
  ]
}
```
Invalid schema causes more harm than no schema. Broken markup triggers errors in Google Search Console and potentially harms traditional SEO whilst providing zero GEO benefit. Always validate at validator.schema.org and with Google's Rich Results Test before publishing.
The Schema Validation Reality
I learned this through actual mistakes. Early implementations involved copying examples from documentation, modifying them quickly, and publishing without validation. Several articles ended up with malformed JSON that broke Google's rich results whilst providing no value to AI engines. The validation step takes 30 seconds but prevents hours of debugging.
The other common mistake involves including properties you don't actually have. If your Product schema declares aggregateRating but you have no customer reviews, you're providing false information. AI engines may cite that false data, damaging trust when users discover the discrepancy. Only include properties you can populate with genuine, verifiable information.
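A pre-publish check can catch that second mistake before the validators do. A minimal sketch that walks a JSON-LD object and flags declared-but-empty properties (my own convention, not a replacement for validator.schema.org or the Rich Results Test):
```ts
// Sketch: pre-publish sanity check for a JSON-LD object. Flags
// properties that are declared but empty, which would otherwise ship
// false or useless structured data. Not a replacement for the
// official schema validators.
function findEmptyProperties(node: unknown, path = ''): string[] {
  const issues: string[] = [];
  if (node === null || node === undefined || node === '') {
    issues.push(`${path || '(root)'} is empty`);
  } else if (Array.isArray(node)) {
    if (node.length === 0) issues.push(`${path} is an empty array`);
    node.forEach((item, i) =>
      issues.push(...findEmptyProperties(item, `${path}[${i}]`))
    );
  } else if (typeof node === 'object') {
    for (const [key, value] of Object.entries(node)) {
      issues.push(...findEmptyProperties(value, path ? `${path}.${key}` : key));
    }
  }
  return issues;
}
```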
Does Schema Directly Improve GEO Rankings?
The Relixir 2025 study provides the most concrete evidence: properly implemented FAQ, HowTo, and Product schema achieved a median 22% increase in AI citations across 50 analysed domains. This uplift remained consistent across both B2B and e-commerce verticals, suggesting schema markup's impact on AI search performance is universal rather than industry-specific.
Search Engine Land reports that restructuring content with clear headings, direct answers, and Q&A formats combined with implementing schema markup on priority pages represents one of the highest-impact GEO optimisation strategies available in 2026.
Research on GEO best practices confirms that implementing proper schema markup forms one of the core principles of effective GEO alongside structuring content with direct answers, maintaining fact density, and citing authoritative sources.
For my own experimentation, I'm treating schema as an enhancement layer that costs minimal implementation time whilst potentially improving both traditional SEO and GEO outcomes. The Relixir data showing 22% citation lift provides enough evidence to warrant inclusion in my optimisation framework, particularly for FAQ and HowTo content where the impact appears strongest.
Start with your best-performing content that already ranks well. Audit against the seven metrics, identify quick wins, and optimise 3-5 articles as experiments. Track changes systematically. GEO builds on SEO foundations—it won't fix poor-quality content, but it can make strong content more citation-worthy for AI engines.
Where GEO Fits in Your Content Strategy for 2026
If you're building content in competitive spaces, GEO deserves attention alongside traditional SEO. The framework I've built focuses on making existing quality content more extractable and citation-worthy rather than requiring complete rewrites or new content creation.
Start by auditing your best-performing content against the seven metrics I outlined. Identify quick wins—articles that score well on some metrics but poorly on others. A piece with strong authority signals but weak answer completeness might benefit from adding citation blocks at the start of each section. Content with good structure but low factual density might improve by replacing adjectives with specific statistics.
Focus on content that already ranks well for target keywords or receives consistent traffic. GEO builds on SEO foundations rather than replacing them. If content doesn't rank or attract visitors through traditional search, optimising for AI extraction won't magically fix underlying quality or relevance problems.
Track your optimisations systematically. Document what changed, when changes went live, and which metrics improved. Set up monitoring for AI platform referrals in analytics. Test relevant queries monthly in ChatGPT and Perplexity to track citation patterns. This data becomes valuable as you refine your approach throughout 2026.
Expect this field to evolve rapidly. AI search platforms are adding features monthly. ChatGPT launched Agent Mode in July 2025 and Instant Checkout in September 2025, changing how users can interact with AI responses. Optimisation strategies that work in January 2026 may need adjustment by June 2026 as platforms evolve.
Frequently Asked Questions
Can Small Sites Compete in GEO?
GEO actually favours expertise and authority over domain age and backlink quantity. A challenger brand with deep expertise and authentic community engagement can outrank larger competitors. It's one of the few level playing fields in digital marketing right now. The key is demonstrating genuine first-hand experience rather than aggregating existing information.
How Do You Measure GEO Success?
Tracking Share of Model (SoM)—how often your brand appears in AI responses compared to competitors—provides the most direct success metric. Manual testing involves running relevant queries in ChatGPT, Perplexity, and AI Overviews weekly and documenting citation rates. Some businesses track referral traffic from chatgpt.com and perplexity.ai in analytics. Specialised GEO tools are launching throughout 2026 to automate this measurement.
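For the manual testing side, I log each query run and compute the share per platform. A minimal sketch of that bookkeeping, using a record shape that's my own convention rather than any standard:
```ts
// Sketch: compute Share of Model from a manual query log. Each record
// notes whether the brand was cited in a given platform's response.
interface QueryRecord {
  query: string;
  platform: 'chatgpt' | 'perplexity' | 'ai-overviews';
  brandCited: boolean;
}

function shareOfModel(log: QueryRecord[]): Record<string, number> {
  const totals: Record<string, { cited: number; runs: number }> = {};
  for (const r of log) {
    const t = (totals[r.platform] ??= { cited: 0, runs: 0 });
    t.runs++;
    if (r.brandCited) t.cited++;
  }
  return Object.fromEntries(
    Object.entries(totals).map(([p, t]) => [p, Math.round((t.cited / t.runs) * 100)])
  );
}
```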
Does GEO Work for E-Commerce?
E-commerce faces unique challenges in GEO because AI engines cannot access real-time inventory, pricing, or checkout systems. However, product education content, buying guides, comparison articles, and specification details can drive awareness that leads to branded searches. As of January 2026, 24% of consumers are comfortable with AI agents shopping for them, increasing to 32% among Gen Z consumers.
How Long Does GEO Take to Show Results?
Based on my experimentation, you should plan for 3-6 months before seeing measurable impact. AI search engines don't publish indexing schedules. Traditional search engines take 4-12 weeks to settle rankings after content updates. Citation patterns require statistical significance before drawing conclusions. This isn't like paid advertising with 48-hour feedback loops. Think of it as planting seeds—you improve the content and wait for validation.
Is GEO Just Another Trend?
AI search adoption grew from 8% to 40% in just one year according to market analysis. McKinsey predicts $750 billion in revenue flowing through AI search by 2028. This represents a fundamental shift in user behaviour, not a temporary trend. The terminology might evolve, but optimising for AI visibility appears permanent. Even if "GEO" as a term changes, the underlying practice of making content citation-worthy for AI systems will remain relevant.
Final Thoughts on GEO Experimentation
I've shared my experimental approach and research for GEO optimisation, the seven metrics I'm testing, and the early results from applying these techniques to blog content. The metric improvements are measurable and significant. Whether these improvements actually lead to better AI citation rates remains unproven until I gather months of validation data.
This represents my honest assessment of where GEO stands in January 2026. I'm building hypotheses based on research, testing optimisation techniques systematically, and tracking results transparently. I'll know in six months whether the framework I've built actually improves visibility in AI search engines or whether the correlation between metrics and citations proves weaker than expected.
The principles make logical sense. AI engines need self-contained information chunks to extract and cite. Entity-dense, fact-rich content with clear attribution provides exactly that. Structural optimisation makes content machine-readable. Citation blocks offer immediate answers that AI systems can present to users. But logic isn't proof. Results will validate or refute these hypotheses over time.
If you're experimenting with GEO optimisation yourself, I'd value hearing what patterns you observe. Which techniques move the needle? Which metrics correlate with actual citation rates? Where does the framework need refinement? This field is new enough that collaborative learning accelerates progress for everyone involved.
For now, I'm continuing to optimise content, track metrics, and gather data. Ask me in June 2026 if it worked.
The important thing is not to stop questioning. Curiosity has its own reason for existing. - Albert Einstein
