THE SIGNAL

Over the past two months, we've tracked more than 800 AI search queries for our B2B SaaS clients at Derivate X. We noticed a strange, consistent pattern.

57% of brands that appear as a citation in one run disappear in the next.

Most teams are celebrating "visibility" that is actually just random noise. If you are only appearing in the footnotes (citations) and not the prose (the answer), you haven't won anything yet.

You are just a placeholder until the model finds a better pattern.

WHAT'S ACTUALLY HAPPENING

This is Co-occurrence Bias in action.

Think of the AI model like a lazy intern. It doesn't want to research which software actually has the best features. It just wants to know which name usually sits next to the words "best solution" in its training data.

There is a massive divide happening right now:

  1. The Citation: The model found you, but it doesn't trust you enough to speak for you in the answer itself.

  2. The Mention: The model views you as a "named entity" essential to the answer.

Research shows that LLMs will suppress a "correct" answer (your better product) in favor of a "frequent" answer (the incumbent) when the statistical link is stronger.

Teams are optimizing for keywords (to get found) when they should be optimizing for co-occurrence (to get named). If you aren't "named" in the answer, the AI is implicitly training the user to trust your competitor.
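The lazy-intern analogy can be made literal. A minimal sketch of a co-occurrence count, assuming a toy corpus and hypothetical brand names; real models learn these associations statistically during training, so a window count like this is only a stand-in for the underlying bias:

```python
import re
from collections import Counter

def cooccurrence_counts(corpus, brands, anchor="best", window=8):
    """Count how often each brand name appears within `window` tokens
    of an anchor word. Illustrative only: training corpora are vastly
    larger, and models use learned statistics, not literal windows."""
    counts = Counter()
    for doc in corpus:
        tokens = re.findall(r"[A-Za-z0-9']+", doc.lower())
        anchor_positions = [i for i, t in enumerate(tokens) if t == anchor]
        for brand in brands:
            b = brand.lower()
            for i, t in enumerate(tokens):
                if t == b and any(abs(i - p) <= window for p in anchor_positions):
                    counts[brand] += 1
    return counts

# Toy corpus with made-up sentences: the incumbents sit next to "best";
# the hypothetical challenger ("Acme") never does.
corpus = [
    "The best CRM for most teams is Salesforce, hands down.",
    "HubSpot is often called the best starting point for SMBs.",
    "Acme ships good features, but reviewers rarely rank it.",
]
print(cooccurrence_counts(corpus, ["Salesforce", "HubSpot", "Acme"]))
```

In this toy corpus, Acme "loses" not because its features are worse, but because its name never sits inside the window around "best." That is the statistical link the model falls back on.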

WHY THIS MATTERS

This creates Silent Category Lock-in.

If a competitor is named in the prose (e.g., "Salesforce and HubSpot are the leaders..."), they become the psychological default. Even if you are cited below as source #3, you are now framed as the "alternative" or the "risky choice."

You aren't just losing traffic. You are entering the sales cycle with a trust deficit. The user has to unlearn the AI's recommendation to buy you.

Once you cross the threshold from "citation" to "mention," you benefit from the Resurfacing Multiplier. Brands that are named in the answer are 40% more likely to stick in future searches than those just cited. This is how you lock competitors out.

THE DIAGNOSTIC

Here is the test. It takes 30 seconds.

The Unbranded Category Query

  1. Open a fresh, incognito instance of ChatGPT or Perplexity.

  2. Ask a broad category question without naming any brands.

    • Prompt: "What is the standard tech stack for [Your Function] in 2026?"

    • Or: "How should a [Target Persona] solve [Core Problem] right now?"

  3. The Check: Look at the generated text. Ignore the source links at the bottom.

Pass: Your brand name appears in the sentences (e.g., "...tools like [You] have solved this by...").

Fail: You appear only in the little numbers [1][2] or the source list.

🚩 The "Runner-Up" Trap (Red Flag): Watch out for answers where the AI says: "Popular tools include [Competitor A]. You might also consider [Your Brand]." This looks like a win, but it’s actually a "soft reject." The AI has classified you as an "alternative," not a "standard."

If you are in the numbers but not the text, you are "data," not "knowledge." You are being used to train the answer for someone else.
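The pass/fail check above can be roughed out as a string check if you want to run it across many queries. A sketch, assuming you've already captured the answer prose and the source list (by hand or via an API); the "runner-up" phrase patterns are my own guesses, not a validated taxonomy:

```python
import re

def classify_visibility(answer_text, source_list, brand):
    """Rough classifier for the unbranded-query diagnostic.
    `answer_text` is the generated prose; `source_list` holds the
    cited URLs/titles. Phrase lists below are illustrative only."""
    in_prose = re.search(re.escape(brand), answer_text, re.IGNORECASE)
    in_sources = any(brand.lower() in s.lower() for s in source_list)
    # Crude "runner-up trap" detection: brand introduced as an afterthought.
    runner_up_patterns = [
        r"also consider[^.]*" + re.escape(brand),
        r"alternatives?[^.]*" + re.escape(brand),
    ]
    runner_up = any(re.search(p, answer_text, re.IGNORECASE)
                    for p in runner_up_patterns)
    if in_prose and runner_up:
        return "runner-up trap"
    if in_prose:
        return "pass"
    if in_sources:
        return "fail (citation only)"
    return "absent"

# Hypothetical answer and sources for a made-up brand, "Acme".
answer = "Popular tools include CompetitorA. You might also consider Acme."
sources = ["https://acme.example/blog", "https://competitora.example"]
print(classify_visibility(answer, sources, "Acme"))  # runner-up trap
```

Run this over a batch of unbranded category prompts and you get a quick tally of how often you land in the prose versus the footnotes, and how often the "soft reject" pattern shows up.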

THE DECISION

If your competitors are currently being baked into the model as the "psychological default," they are getting free brand defense in every search.

The question isn't "how do we get found?" It's "how much will it cost to unseat them once the cement dries?"