
What We’ve Learned About AI Visibility for SMBs

Noah Moscovici

Over the last eight months, we’ve been running a large set of experiments focused on one core question:

How do small and mid-sized businesses actually show up inside AI answers, and what reliably improves that visibility over time?

This work wasn’t done in isolation. We ran these experiments alongside a diverse group of early partners across the SMB space, spanning different industries, business models, and growth stages. Some were bootstrapped, some were venture-backed, and many were operating in highly competitive categories.

However, they all shared a common challenge: as buyers increasingly rely on AI assistants for research and recommendations, visibility inside AI conversations is becoming essential, yet how to earn it remains unclear. Much of the traditional SEO playbook only partially applies, and in some cases, not at all.

Over these eight months, we analyzed 17,000+ AI conversations and 100,000+ response evaluations. All queries came directly from our SMB partners and reflected the actual questions their buyers ask during research and purchasing. Production data like this, focused specifically on small and mid-sized businesses, is rarely part of the conversation when people talk about AI visibility.

Many assumptions didn't hold up. But a few key patterns appeared consistently enough to guide how we think about AI visibility moving forward.

How we ran these experiments

Together with our SMB partners, we:

  • Published 500+ articles across different cadences, formats, and topical depths
  • Tracked 9,000+ brand mentions and 13,000+ citations inside AI responses
  • Analyzed 180,000+ source references to understand which content gets surfaced
  • Tested a wide range of prompt types, from broad category questions to highly specific, problem-driven queries
  • Compared behavior across AI systems like ChatGPT and Google's AI Overview
  • Analyzed source distributions to understand when assistants rely on dominant sources (those that rank on page 1 of Google) versus niche content (which may not rank until page 20+ on Google, if at all)
  • Observed how answers change as prompts include more context, constraints, or user-specific details
  • Measured the lag between when a source was published by our system and when it first appeared in AI results

The goal was to identify meaningful, cross-industry patterns in AI visibility.
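To make the mention and citation tracking above concrete, here is a minimal sketch of the kind of counting involved. The brand names, domains, and response data are illustrative placeholders, not our actual partners or production pipeline.

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Illustrative brand and domain lists -- placeholders, not real partner data.
BRANDS = ["Acme CRM", "Northwind Books"]
OWNED_DOMAINS = {"acmecrm.com", "northwindbooks.com"}

def count_mentions(response_text: str) -> Counter:
    """Count case-insensitive brand mentions in a single AI response."""
    mentions = Counter()
    for brand in BRANDS:
        pattern = re.compile(re.escape(brand), re.IGNORECASE)
        mentions[brand] += len(pattern.findall(response_text))
    return mentions

def count_citations(cited_urls: list[str]) -> Counter:
    """Count cited sources that resolve to a partner-owned domain."""
    citations = Counter()
    for url in cited_urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        if domain in OWNED_DOMAINS:
            citations[domain] += 1
    return citations

# Example: one evaluated response and the sources it cited.
text = "For a two-person shop, Acme CRM is usually the simplest fit."
urls = ["https://www.acmecrm.com/pricing", "https://example.org/review"]
print(count_mentions(text))   # Counter({'Acme CRM': 1, 'Northwind Books': 0})
print(count_citations(urls))  # Counter({'acmecrm.com': 1})
```

Run across thousands of evaluated responses, aggregates like these are what the mention and citation figures above refer to.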

Finding #1: Consistent content works for the massive long tail

One of the strongest parallels to traditional search is that query volume in AI conversations is unevenly distributed.

There are highly competitive prompts that almost every company wants to appear in:

  • “best tools for X”
  • “top software for Y”
  • “best platform for Z”

These prompts are attractive because they’re broad and high-intent. They are also crowded. Across our experiments, consistent content creation did not reliably improve visibility for SMBs in these high-competition areas.

We did occasionally observe movement. In some cases, smaller brands broke through. But as a repeatable strategy, publishing more content alone was not enough to consistently change outcomes for these prompts. In other words, ranking for these queries was very much pay-to-play (either through paid PR or paid mentions), which is far from ideal for SMBs on a tighter budget.

However, once prompts became more specific, the behavior shifted dramatically.

For long-tail questions, we saw consistent and measurable gains from steady, targeted content creation:

  • Prompts tied to a specific role or industry
  • Questions describing concrete problems or workflows
  • Queries that include constraints such as budget, team size, or technical capability
  • Prompts that naturally reward clear differentiation or unique selling points
  • “How do I…” or “What should I do if…” style questions

In these cases, brands showed up more frequently, were positioned more accurately, and were more likely to be recommended as part of a solution.
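As a rough illustration of how we distinguished these two prompt classes, here is a small heuristic sketch. The keyword lists are assumptions chosen for the example, not the actual classifier we used.

```python
# Placeholder signal lists -- illustrative only.
BROAD_PATTERNS = ["best tools for", "top software for", "best platform for"]
SPECIFICITY_SIGNALS = [
    "budget", "team of", "how do i", "what should i do if",
    "under $", "without a developer",
]

def classify_prompt(prompt: str) -> str:
    """Rough heuristic: broad category prompts vs. long-tail, constrained prompts."""
    p = prompt.lower()
    if any(pat in p for pat in BROAD_PATTERNS) and not any(sig in p for sig in SPECIFICITY_SIGNALS):
        return "broad / high-competition"
    if any(sig in p for sig in SPECIFICITY_SIGNALS):
        return "long-tail / winnable"
    return "unclassified"

print(classify_prompt("best tools for email marketing"))
# -> broad / high-competition
print(classify_prompt("How do I automate invoicing for a team of 3 under $50/month?"))
# -> long-tail / winnable
```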

Graph showing that broad, high-frequency questions are highly competitive, while more specific questions form a long tail of lower-competition opportunities where tailored content wins.

The scale of this opportunity matters. This long tail is effectively endless. Every SMB niche contains thousands of variations in how buyers describe problems, evaluate options, and ask for guidance.

For SMBs, this long tail is where AI visibility is both achievable (we know how to repeatably move the needle) and sustainable ($0 extra spend!).

Finding #2: AI assistants are trending toward long-tail recommendations

The second major insight is tied to how AI systems themselves are evolving.

In traditional Google search, personalization is limited. Regardless of who searches for “what is the best product for X?”, the result is the same set of ten blue links, ordered by a generalized notion of relevance and authority. The search engine responds to the query, not the person. This is not the case for AI visibility.

Across our testing, we found that many AI answers are no longer shaped primarily by what is written in a single prompt. Instead, they are increasingly influenced by system-level context that exists outside the immediate question.

Side-by-side comparison showing traditional search results with the same generic links for everyone versus an AI assistant using user context and follow-up searches to provide personalized laptop recommendations.

This includes persistent or inferred information such as:

  • Your role or profession
  • The type and size of business you operate
  • Budget expectations and price sensitivity
  • Preferences for simplicity versus configurability
  • Tools, workflows, or approaches you’ve referenced before
  • Your tolerance for complexity, risk, or change

Features like long-term memory, conversation history, account-level personalization, and behavioral inference mean that assistants are building an internal model of the user over time. This internal model directly influences the response you get and the sources used.
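As a simplified mental model (not how any specific assistant is implemented), the sketch below shows how the same prompt can yield different recommendations once an inferred user profile is applied as a filter. All product names and profile fields are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Persistent context an assistant may infer over time (illustrative fields)."""
    role: str
    company_size: int
    budget_per_month: float
    prefers_simplicity: bool

# Hypothetical candidate products with the attributes an assistant might weigh.
CANDIDATES = [
    {"name": "BigSuite",   "min_team": 50, "price": 499, "simple": False},
    {"name": "LeanTool",   "min_team": 1,  "price": 29,  "simple": True},
    {"name": "MidMarket+", "min_team": 10, "price": 149, "simple": False},
]

def recommend(prompt: str, profile: UserProfile) -> list[str]:
    """Same prompt, different answers: the profile filters and reorders candidates."""
    fits = [
        c for c in CANDIDATES
        if c["price"] <= profile.budget_per_month
        and c["min_team"] <= profile.company_size
        and (c["simple"] or not profile.prefers_simplicity)
    ]
    return [c["name"] for c in sorted(fits, key=lambda c: c["price"])]

solo = UserProfile("founder", 2, 50, True)
ops  = UserProfile("ops lead", 120, 600, False)
print(recommend("what is the best product for X?", solo))  # ['LeanTool']
print(recommend("what is the best product for X?", ops))   # ['LeanTool', 'MidMarket+', 'BigSuite']
```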

This is not an isolated behavior in one product. It reflects a broader industry trend. AI assistants are moving away from one-size-fits-all answers and toward experiences that are personalized, customized, and specific to the individual using them. That personalization is the core benefit of AI as an interface.

As a result, assistants are less likely to default to the same generic, widely known sources. Instead, they increasingly surface content and recommendations that align with the user’s profile, constraints, and likely needs.

For businesses, this changes how visibility works. Being “the best overall” matters less than being the best fit for a specific type of user in a specific situation.

This shift means visibility is less about winning a single category-level prompt and more about being present across many specific recommendation moments where fit and relevance matter.

Brands that consistently appeared in these situations shared common traits:

  • Clear positioning around who they are for
  • Specific articulation of the problems they solve
  • Content grounded in real-world use cases
  • Differentiation that is easy for AI systems to understand and reuse

What this means for SMBs

Taken together, these findings point to a clear conclusion:

For SMBs, long-tail AI visibility is the core strategy.

High-competition prompts will continue to exist, but will remain difficult to influence consistently without significant brand gravity. We recommend SMBs leave those prompts to the companies with millions in marketing spend.

Instead, the more durable opportunity lies in building visibility across the almost endless long tail:

  • ICP-specific questions
  • Problem-driven prompts
  • Niche workflows
  • Constrained decision-making scenarios
  • Above all: the specific niche you serve better than anyone else!
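To get a feel for how quickly this space expands, here is a small sketch that enumerates long-tail prompt variations from a handful of ICP attributes. The roles, problems, and constraints are placeholders; substitute your own.

```python
from itertools import product

# Placeholder ICP attributes -- swap in your own roles, problems, and constraints.
ROLES = ["dental office manager", "boutique gym owner", "freelance bookkeeper"]
PROBLEMS = ["reduce no-shows", "automate invoicing", "answer after-hours questions"]
CONSTRAINTS = ["on a tight budget", "without hiring a developer", "with a team of two"]

TEMPLATE = "How can a {role} {problem} {constraint}?"

prompts = [
    TEMPLATE.format(role=r, problem=p, constraint=c)
    for r, p, c in product(ROLES, PROBLEMS, CONSTRAINTS)
]

print(len(prompts))  # 27 variations from just 3 x 3 x 3 attributes
print(prompts[0])    # How can a dental office manager reduce no-shows on a tight budget?
```

Even three attributes per dimension produce 27 distinct prompts; real niches have far more dimensions and far more values per dimension.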

What makes this especially important is where AI systems are heading.

As assistants become more personalized and more context-aware at a system level, reliance on long-tail content will steadily increase, while the old world of high-competition, broad content (like "top ten X" lists) will slowly decline.

Start the work now and carve out your brand's territory in the long-tail question space. Align your content strategy with how AI systems work before that behavior becomes obvious to everyone else.

If you're an existing customer, don't worry - you're in good hands.

If you're an SMB unsure of how to AI-proof your marketing, let's talk. Worst case you get 30 minutes of free consulting with an expert and personalized insights :)
