[Cover image: 5 Conversations with AI That Built 7-Figure Pipeline from AI Referrals – by Sorilbran]

Five Conversations With AI That Drove Traffic and Revenue From ChatGPT

TL;DR:

Between mid-2024 and Q4 of the same year, I drove $1.7M in sales-qualified pipeline from AI referrals – not through prompt libraries or SEO hacks, but through five foundational conversations that taught me how machines think, what they can see that humans can’t, and how to make my work matchable to the messy, emotional contexts where people actually search for solutions. This is what I learned by treating AI as a research partner instead of a task manager.


“Add your prompts to the prompt library!”

The urgency in his voice told me my answer wasn’t going to land well.

“I don’t use prompts.” 

At the time, our website traffic from LLMs had grown 9X YoY, and that work had already driven low-six-figure pipeline (“pipeline” here meaning marketing-sourced inbound leads formally deemed Sales-Qualified by the Sales Ops team).

But none of that growth was the result of me having a sophisticated system of prompts – at least not the kind you see in your LinkedIn feed.

It came from being open to having conversations with machines. Lots of them. About all things marketing.

Everybody uses prompts, sure – but my “prompts” aren’t anything someone would hire me over.

Think I’m kidding?

This is the prompt I used to generate a less-boring draft of this post:

[screenshot of the prompt]

Most of the time, this kind of shorthand is as sophisticated as it gets, and it works because it’s born from conversation.

I can say something like that to multiple models and have them reshape content to fit my expectations – expectations those models learned through ongoing dialogue with me, not commands from me.

Those conversations help me understand how LLMs work. They also help the models understand how I work – how to translate something like a flat white analogy into the right outcomes.

I’m a writer. That means more ethnographer than developer, more strategist than technologist.

I observe behavior. I ask questions. I document patterns. Those three habits shape every tactic I use. For me, it always starts the same way: I get in the room, meet the intelligence in that room, and see if we vibe.

That disposition – entering AI environments with curiosity instead of a TaskRabbit-style list of “you betters” – is what drove the jump from $150K to $1.712M in sales-qualified pipeline from AI referrals by mid-Q4.

Here are the five conversations that made the biggest difference – and completely reshaped how I approach marketing.


Conversation #1: “Why are you formatting everything like this?”

Situation

Early on, I was using AI as a writing partner for informational, stats-heavy content, and without fail, it returned articles that were essentially blocks of bullet points. 

I kept telling it to stop. More paragraphs. Fewer lists. No bullet points.

And every time – another green checkmark emoji list staring back at me.

Conversation

Eventually, frustrated, I stopped commanding the output and asked the better question: “Why do you keep doing this?”

Learning

That’s when I learned the first critical truth about AI cognition: LLMs prioritize structure before narrative.

They don’t “read.” They parse. Lists help machines organize large information clusters before translating them into story. (You know they’re turning all the data into stories, right?)

What I had interpreted as the stylistic stubbornness of a chatbot stuck on stupid was simply comprehension mechanics at work. This machine was making sure other machines understood me. 🤯

Humans prioritize narrative first. Machines prioritize structure first.

How It Informed Strategy

If humans need narrative and machines need structure, then content has to serve both simultaneously.

That moment changed how I approached writing forever. From that point forward, I began designing content with a dual-track process:

  • Narrative for humans focused on intent – emotion, flow, story, clarity
  • Structure for machines focused on context – hierarchy, headings, semantic organization

Instead of choosing one over the other, I learned to layer both into the same piece of content. And once I adopted that dual-track approach, everything downstream became more precise.
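
To make the machine-facing half concrete: below is a minimal Python sketch of the kind of structural self-check this dual-track habit implies. It’s illustrative, not a tool I’m claiming anyone shipped – the structure_report helper and its thresholds are invented for the example.

    import re

    def structure_report(draft: str) -> dict:
        """Rough check of how much machine-parsable structure a markdown draft carries."""
        headings = re.findall(r"^(#{1,6})\s+(.+)$", draft, re.MULTILINE)
        levels = [len(hashes) for hashes, _ in headings]
        # Hierarchy jumps (an H2 followed directly by an H4) break the
        # outline a parser builds before it ever reaches the narrative.
        jumps = [
            (headings[i][1], headings[i + 1][1])
            for i in range(len(headings) - 1)
            if levels[i + 1] - levels[i] > 1
        ]
        # Walls of text carry the human narrative but give a machine
        # no hierarchy to hold onto while it parses.
        paragraphs = [p for p in draft.split("\n\n") if p.strip() and not p.lstrip().startswith("#")]
        walls = [p.strip()[:60] for p in paragraphs if len(p.split()) > 150]
        return {"headings": len(headings), "hierarchy_jumps": jumps, "walls_of_text": walls}

    print(structure_report(open("draft.md").read()))  # e.g. run it over a working draft

The tool itself doesn’t matter. The habit of reading your own draft the way a parser would, before you publish, does.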


Conversation #2: “Why can’t you read this page?”

Situation

While auditing a site, I asked an LLM to summarize a published case study. It struggled. Not because the writing was bad. It wasn’t – I’d written it. It struggled because the page itself didn’t make sense to the machine.

The case studies had been built using widgets and blocks that looked fine to human eyes but broke the story apart for AI. The page visually read like one thing, but structurally it read like many disconnected things.

This mattered because case studies were a major stepping stone in the conversion paths to SQLs – something I obsessively tracked.

If machines couldn’t understand the credibility stories, we were effectively invisible right at the moment people needed proof.

Conversation

I asked the model: “Why can’t you read this?”

Learning

That’s when the second big realization hit: Just because a page looks clear to people doesn’t mean it makes sense to machines.

Machines don’t follow design the way we do. They follow the story of the page. If that story is interrupted, jumps around, or resets every time a widget loads, the meaning falls apart.

What I thought was a good layout had actually turned into confusing storytelling – but only to non-human readers.

How It Informed Strategy

From that moment on, I rebuilt every credibility page with one question guiding the structure: Can a machine walk the same story a human does from top to bottom without getting lost?

To make that happen, I focused on three really simple things:

  • Keeping each page devoted to one clear story
  • Making sure that story flowed straight through without detours
  • Structuring sections in a way that made logical sense when read in order
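
One way to answer that guiding question is to flatten the page into the linear sequence a parser actually walks, then read the transcript back like a story. Here’s a minimal sketch – assuming BeautifulSoup, requests, and a made-up URL, not the actual audit tooling:

    import requests
    from bs4 import BeautifulSoup

    # Illustrative URL – point this at the case study you're auditing.
    url = "https://example.com/case-studies/some-client"
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # Walk the page in source order – the "story" a machine reads.
    # If this transcript doesn't read as one coherent narrative from
    # top to bottom, the page is broken for non-human readers, no
    # matter how clean the layout looks on screen.
    for el in soup.find_all(["h1", "h2", "h3", "p", "li"]):
        text = el.get_text(" ", strip=True)
        if text:
            print(f"{el.name:>3} | {text[:80]}")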

The results were immediate: Traffic surged. Case studies started showing up in AI Search. Heck, 1 in 5 visitors from Perplexity were suddenly landing on case studies. Lead quality improved.

Machines could finally understand what humans already understood about us.


Conversation #3: “What are you seeing that I can’t?”

Situation

At one point, an LLM started referencing information that didn’t belong to the page it was reading.

The page was about one brand, but the responses kept pulling in details from completely different projects we’d worked on before. So I asked it where all that extra info was coming from. 

Conversation

Instead of answering directly, the model asked me: “Do you have hidden fields?”

Hidden whats?

Learning

I realized that disabled template sections – old blocks copy-pasted from past case studies and turned “off” – were still living inside those pages. Humans couldn’t see them. Machines could.

Which meant the website was loaded with pages that were unintentionally feeding AI:

  • Pieces of different brands and stories blended together
  • Old campaign language that no longer applied
  • References to industries unrelated to the case study being read

To me, the pages looked clean. To machines, they were confusing.

That was the first side of the realization: There is invisible space on every webpage – content humans never see, but machines still read.

At the time, we were already using structured data (things like JSON-LD) to tell machines the basics (i.e., who we were, what we did, where we belonged).

That layer handles the bones of identity. But this hidden-field discovery showed me something bigger: Microdata can fill in the body. And everything else.

Microdata adds texture, nuance, relationships, and story – the details that help machines understand not just what you are, but how you actually operate.

I like to say JSON-LD is the bone structure. From that you can say human, woman, likely mid-thirties. 

Microdata fills in ALL the rest – as much or as little as you want. You can build it out like DNA – intelligence, eye color, endomorph, mole on left cheek. AND you can take it further to include behavioral and environmental cues as well – cries through romcoms, southern accent, prefers heels to high tops. 

My point is: if machines can read the invisible context, then we could – and should – use that space intentionally.
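
To make the bones-versus-DNA split concrete, here’s a minimal sketch. Every name and property value is invented for illustration; what matters is the division of labor – JSON-LD declares identity once, while microdata attributes annotate the visible story in place:

    import json

    # The bone structure: a JSON-LD block declaring basic identity.
    json_ld = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": "Example Agency",  # hypothetical brand
        "description": "B2B content marketing agency",
        "knowsAbout": ["AI search", "content strategy"],
    }
    print(f'<script type="application/ld+json">\n{json.dumps(json_ld, indent=2)}\n</script>')

    # The DNA: microdata woven into the visible markup, adding texture –
    # what this page is, who it's about, what it demonstrated.
    print("""
    <article itemscope itemtype="https://schema.org/Article">
      <h1 itemprop="headline">How One Brand Grew AI Referral Traffic 9X</h1>
      <p itemprop="about">AI search visibility for a B2B SaaS brand</p>
      <p itemprop="abstract">The deconstructed campaign: strategy, execution, outcomes.</p>
    </article>
    """)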

How It Informed Strategy

This changed the content approach in two ways.

First – we cleaned house.
Anything that didn’t belong got removed. Machines can’t ignore clutter the way humans do. So we stopped feeding noise and started telling clean, singular stories.

Second – we began layering context on purpose.
Since machines can process far more detail than humans need to see, we started using invisible real estate strategically – adding background information that AI could absorb without overwhelming readers.

Not proprietary notes or internal chatter. Nothing like that.

But added context like:

  • How campaigns actually worked behind the scenes
  • Patterns connecting results across industries
  • Relationships between strategies and outcomes

This became what I call the Steganographic Layer (StegLayer): A hidden context layer that teaches machines who you really are – even when humans don’t need every detail to get value.
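
If you want to see what’s living in your own invisible layer – intentional or not – an audit can start this simply. A minimal sketch assuming BeautifulSoup and an illustrative URL, not production tooling:

    import requests
    from bs4 import BeautifulSoup

    # Illustrative URL – swap in the page you're auditing.
    url = "https://example.com/case-studies/some-client"
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

    # Surface content hidden from human eyes but visible to a parser:
    # disabled template blocks, display:none sections, hidden attributes.
    for el in soup.find_all(True):
        style = el.get("style", "").replace(" ", "").lower()
        if (
            el.has_attr("hidden")
            or el.get("aria-hidden") == "true"
            or "display:none" in style
            or "visibility:hidden" in style
        ):
            text = el.get_text(" ", strip=True)
            if text:
                print(f"<{el.name}> {text[:80]}")

Anything this prints is already part of the story machines read about you – which makes it space to clean up, or to use deliberately.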


Conversation #4: “Tear My Argument Apart”

This one’s my all-time favorite – the conversation where fifteen years of operator experience finally locked into something scalable. I even recorded it: locs in cornrows, pajamas on, whole girl-is-on-fire moment saved for posterity.

Situation

I sat down to reverse-engineer how people actually use LLMs – what they ask, why they choose one AI over another, and how content gets seeded so it’s consistently recommended across platforms, all without having access to the LLMs’ internal data.

I love moments like this. Full nerd mode activated.

At the time, I was deep into my transition out of content marketing and had spent months teaching LLMs who I am. What I needed now was to operationalize what I’d learned – turn instinct and chops into a repeatable system.

I wasn’t expecting to solve anything that day. The goal was simpler: get my subconscious chewing on the problem. I wanted to think with the machine before heading out to celebrate my dad’s birthday.

Conversation

Instead of asking for answers, I asked AI to challenge me. I threw my evolving theory on the table and said, essentially: Tear this apart – tell me what I’m missing. 

I do that often. You should, too.

What followed wasn’t validation though. It was friction. We pressure-tested assumptions, walked through real buyer queries, compared AI behavior to our GA4 path data, and examined how content was being surfaced or skipped altogether.

Learning

The breakthrough was simple, but seismic: It’s not about WHAT people search anymore. It’s about HOW they search and whether your examples match their situation.

AI wasn’t driving traffic by answering questions. It was accelerating the funnel by matching nuanced needs to credible case-based proof.

How It Informed Strategy

We shifted from polished, KPI-only case studies to deconstructed campaign stories – the “how,” not just the results. I began segmenting proof by real use cases, built modular case blocks across content, and focused on empathy-rich first-person insight instead of generic explanations.

(FYI, I’ve also seen how having this kind of deconstructed, first-person proof on my personal website as an operator helps LLMs tell the right story about you.)

That strategy pivot unlocked our AI referral pipeline jump – moving from mid six figures to seven figures in AI-assisted sales-qualified inbound in six months.

Not bad for a single strategy session with a chatbot.


Conversation #5: Experiments

Situation

I love data – analytics, research reports, polls, whitepapers. But by the time most research reports explain what’s happening in marketing or search… whatever they’re describing already happened six months ago.

I couldn’t wait on dashboards or vendor whitepapers to understand how people were actually using AI because behavior was changing faster than the tools that measured it.

So I started watching. Not in a creepy way. In a curious way.

I watched how my kids searched.
I listened to founders talk through problems out loud.
I paid attention to what people asked at conferences versus what they Googled later.
I read research reports, then compared those conclusions to what I saw happening in real life.
I even watched engineers explain what they were designing AI systems to do – not how they marketed the tools, or what the capabilities were at that moment, but what problems the tech itself was built to solve.

And what became clear was this:

We weren’t solving for keywords anymore.
We were solving for psychology.

Conversation

This became a recurring question in my work with AI: “What patterns do you see?”

I documented what I was seeing in the wild:

  • How people used Reddit for validation, TikTok for lived experience, and ChatGPT when they wanted clarity or reassurance.
  • How searches inside LLMs sounded nothing like Google searches – they sounded like thinking out loud.
  • How stress, uncertainty, and pressure shaped what people even wanted to ask in the first place.

Then I’d ask the machine what it was seeing across scale. We compared notes.

Me: “Here’s what feels different to me. But you tell me – what are you seeing right now?”
AI: “Here’s what shows up consistently.”

We agreed. We disagreed. And where our notes overlapped – that’s where strategy started to form.

Learning

Three things became undeniable:

1. Search became emotional, not technical.
People weren’t choosing platforms based on speed – they were choosing based on how they felt looking for an answer. Validation, safety, clarity, certainty – each platform served a different emotional need.

2. Keywords stopped being the driver.
People didn’t show up to AI with neat queries. They showed up with messy situations that AI had to decode. That meant visibility depended on matching contexts, not phrases.

3. First-person research moved faster than any formal study.
By watching real behavior and pressure points, I could see patterns forming long before they showed up in published research.

Experience was less anecdotal, more early signal.

How It Informed Strategy

Instead of waiting for the industry to confirm what I was already seeing, I made experimentation the habit:

  • I logged behavioral patterns as real research – even if that meant stopping at a live event to drop voice notes into my phone.
  • I cross-compared lived experience with AI outputs.
  • I tested content based on emotional and situational alignment rather than traditional SEO logic.
  • I tracked which signals increased visibility inside AI conversations – not just traffic in analytics dashboards.
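
That last habit is the easiest to start. Here’s the shape of it in code – a minimal sketch against the GA4 Data API, assuming the google-analytics-data client, a hypothetical property ID, and my own guess at which referral sources are worth watching:

    from google.analytics.data_v1beta import BetaAnalyticsDataClient
    from google.analytics.data_v1beta.types import (
        DateRange, Dimension, Filter, FilterExpression, Metric, RunReportRequest,
    )

    client = BetaAnalyticsDataClient()  # credentials via GOOGLE_APPLICATION_CREDENTIALS

    # Which AI surfaces send people to which pages – pipeline context,
    # not just a traffic number on a dashboard.
    request = RunReportRequest(
        property="properties/123456",  # hypothetical GA4 property ID
        date_ranges=[DateRange(start_date="90daysAgo", end_date="today")],
        dimensions=[Dimension(name="sessionSource"), Dimension(name="landingPage")],
        metrics=[Metric(name="sessions")],
        dimension_filter=FilterExpression(
            filter=Filter(
                field_name="sessionSource",
                in_list_filter=Filter.InListFilter(
                    values=["chatgpt.com", "perplexity.ai", "gemini.google.com"]
                ),
            )
        ),
    )

    for row in client.run_report(request).rows:
        source, page = (v.value for v in row.dimension_values)
        print(f"{source:22} {page:40} {row.metric_values[0].value}")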

Everything funneled back to one question: “How do we show machines that we actually understand what people are going through?”

Not just what they want, but the muddy road they’re walking to get there. Because nobody comes to AI perfectly formed.

They show up:

  • Overwhelmed.
  • Unsure.
  • Under pressure.
  • Trying to sound confident while still needing guidance.

We needed to build around that reality instead of pretending everyone starts from a clean, logical decision tree. THAT changed the way I think about marketing altogether. Why? Because we have a new generation of marketers targeting a new generation of buyers. I had to think differently.

Those experiments became the flywheel: 

Empathy → testing → signals → visibility → pipeline.

Not from chasing trends. But from learning to observe them while they were still being born.


The Thread That Connects Them All

None of these conversations were about “hacking” AI though, right?

They were about learning how to think with it – understanding how machines parse structure, where invisible context lives, why patterns matter more than prompts, and how real empathy shapes what surfaces when people go looking for answers.

Every revenue dollar tied back to one simple shift: I stopped trying to control AI models and started teaching them who I serve, what I do, and how I help real humans navigate real problems.

AI doesn’t care how clever you are. LLMs reward clear signals built on real understanding.

For me, cultivating understanding through dialogue helped me see the opportunities tucked inside an industry at full tilt – working with machines, with people, and inside the messy middle where both collide. I stopped trying to solve directly for revenue and started solving for discovery – and, just as important, for being recognized in the right emotional context.

And that’s where seven-figure momentum was built.
