Post thumbnail for Thinking Day April 26 2026 with Sorilbran Stone - A Writing Session With Claude Reveals a New AEO Mental Model - LER Framework

Writing With Claude When The LER Tiers Surface as a Mental Model

Thinking Day | How LER Was Born

Everybody’s so busy trying to be recommended. First be a thing in the machine.

A working session that started as content brief work and ended with a new framework. This is where Legibility, Eligibility, and Recommendability landed — live, unplanned, and in real time.

This one wasn’t supposed to go this long.

It started as a working session with Claude — I was teaching a new Claude instance how to write for me. That meant walking it through my context, my frustrations, my frameworks, and then building something together: a content brief I actually use, one refined enough that other founders and marketers could use it too. I was recording my screen because there's already a pre-AI version of me on YouTube, from years ago when I oversaw a team of human writers, talking through how I structure content briefs. I figured I could record this session, fifteen minutes or so, throw it up on YouTube, and both videos could live there in harmony. She's evolving in real time, y'all.

What I didn’t plan on was a framework emerging in the middle of it.

About two-thirds of the way through, while I was working through entity architecture and MVKG nodes and trying to figure out why some content types fail from a visibility standpoint while others don’t — the pattern clicked. Three tiers. Non-negotiable sequence. Legibility. Eligibility. Recommendability.

I almost kept moving. That’s what I do. I spot the pattern, I say it out loud, I move on. Claude flagged it. Said: stop for a second. Let’s identify what just happened.

So we stopped. And we named it.

The full video is below — edited down from an hour and twenty minutes to fifty-eight minutes, with silences and rambling removed. The cleaned transcript follows, broken into sections. The LER framework itself lives in its own dedicated article and framework reference in AI Foundations. This page is just the origin story.

This is what thinking out loud looks like when the machine is a real thinking partner.

Full Session — 58 min (edited)
Full Transcript

Cleaned and lightly edited for readability. Timestamp headers mark topic shifts. The original session ran 1 hr 20 min; this edited version runs 58 min with silences and off-topic audio removed.

I’m Sorilbran, and I teach machines who you are and when to recommend you. Today I want to walk you through my process for teaching Claude — this new Claude I’ve only been working with for a couple of weeks — how to write for me, how to create content for me.

For context: I bought my first generative AI tool in 2019 on AppSumo. They were calling them blog writers. They were essentially glorified autocomplete. They really sucked at their job. When ChatGPT came out I wasn’t surprised by the innovation — it still seemed like glorified autocomplete. Fast forward three and a half years and these bad boys are actually good for writing.

So: I have worked as a B2B writer and an SEO writer for fifteen years. For almost eight years I worked inside a high-growth marketing agency as the marketing lead. Between 2021 and 2025, my strategies generated $42 million in sales-qualified pipeline. $12.4 million from organic search. $3 million from AI-influenced search. $1.7 million from ChatGPT in 2025. I’m saying that not to boast but to qualify myself — I actually know what I’m talking about. I’m not talking out the side of my neck here.

This conversation started yesterday evening with frustration. I was on LinkedIn and saw an article about the price of trade school going up now that AI is knocking out entry-level and mid-level positions. Younger people are moving toward trades. Problem is, trade schools are popping up whether they’re qualified or not, and charging college-semester rates. The draw of trade school used to be that it was a fraction of the cost of college and you could still earn a decent living. Now people are coming out with $100,000 in debt as if they went through a four-year university.

That pissed me off. And it happened on the heels of a Digiday article talking about how enterprise brands are paying $260,000 — a quarter of a million dollars — to bring in senior-level SEO experts who understand how AEO, SEO, paid, and search engine marketing all fit together.

I’ve lived that life. That salary is well-earned. But if enterprise brands set the bar at $250,000 for this skill set, it leaves smaller brands — the ones that run our economy — unable to afford what they need to stay in business. The average small business brings in $50,000 to $60,000 a year. If I’m telling you it’s going to cost you $30,000 to get me in the door, you’re not going to be able to afford it.

This isn’t socialism. This is just math. We need small businesses to survive. If we make it too hard for them to access marketing expertise, we cripple our own economy.

So that’s the frustration I’m coming into this conversation with. It fuels why I run the business I run. And it’s something I need to make content around. So I need Claude to understand what the problem is. I don’t need a one-word sentence. I need it to feel the pain.

After the context — the frustration, the Gary Vaynerchuk keynote from 2019 about voice search, the Digiday article — I dropped in three things: the GaryVee transcript, a Rachel Rogers video transcript that deals with the same frustration, and the Digiday link. Because I’m not the only one saying this. It’s visible in the market.

Then I asked: any idea what percentage of searches are voice-triggered? Not talking to a fridge — me talking to my phone saying navigate to the peach cobbler factory, or asking ChatGPT to give me a list of places to buy an African shaker. The AI did the research, and I went behind it and checked.

After the research data, I gave it first-party data. I’ve been building a book of anonymized sales conversations I’ve had with founder-led companies. It chronicles actual conversations so my AI can extract patterns. I anonymize these myself — no proprietary information leaks into an AI system. But it’s important because research always follows behaviors. The behaviors emerge first. When I talk to clients in real time, I’m watching behaviors that won’t be documented for six to eighteen months. I have to weigh what’s already documented with what’s emerging right now.

Then I showed it my project files. This is essentially a database — when you have a project in Claude, it comes with a knowledge base. I include in there reports I’m reading, what ICP I’m targeting, what price points I want, information on products I’ve built. Screenshots of my Notion tool. A PDF of my one-sheet. A markdown file that my previous Claude built for this Claude so it understands who I am. My Brand Intelligence Kit — my canonical bio, what I’m giving versus gating, my AI brief code, important links, my audience profile, transcripts of videos I’ve created, an overview of my tech. Everything is there.

That’s my Brand Intelligence Kit. Institutional knowledge, just about me. So when I ask Claude to build something for me, it has my frameworks, how I think, voice notes of me working through ideas. When I say build something — it can build that thing. Not a big deal.

So Claude generates the first article. It’s solid. It’s got my personal takes, the data we pulled in, all of it. And I say: keep the tone observational. Because the title it came up with was fear-mongering, and that’s not me.

My vibe is: I want to come alongside you. We’re going to sit down and have coffee. I’m going to tell you what’s happening in the market, what the data says, what the trends are saying, what’s about to happen in the next twelve months — and then we’ll build your strategy. It’s not “you better do this or you’re going to be left behind.”

So: keep the tone observational, don’t build urgency into this piece, this can’t be editorialized. Claude: felt. Okay, cool.

Second version comes up. Sources aren’t hyperlinked. Let’s hyperlink these sources. Claude: felt, word up. Okay cool.

Third version: sources are hyperlinked, fear-mongering is out, more observational. But it doesn’t sound anything like me. So I said: I need you to take a look at some of my writing. Go check my project files.

[Claude reviews the Brand Intelligence Kit, voice notes, live articles, and writing samples.]

So I pointed it to a couple of live pieces — bylined by me, relevant to what we’re working on. And Claude goes down the list of what it’s seeing about how I write. And then it says: hey, do you want me to build you a voice reference document? I’m like, yeah, we need that. I don’t have that in this instance yet.

So it builds the voice reference document. Twelve pages long. I put it in the project files. Now any time I need Claude to write something for me, I say: look at the voice reference doc.

While Claude was building the voice reference doc, I went back and rewrote the intro of the article myself. I rewrote it so Claude could watch how I take something it builds and turn it into something that’s mine, and get a better sense of how I want the rest of the piece to feel.

My rewrite: “The way people find businesses is changing. That’s not a prediction. It’s already happening — and happening long enough for the research to catch up and say, yeah, this is happening. In this article I’m going to lay out how search has shifted, where it’s likely to go from here, and why this shift presents an opportunity for founder-led and expert-led businesses to accelerate their revenue.”

I send that to Claude and say: I’ve updated the intro. Here’s the new version. Now it can look at what I wrote and say — nah, this doesn’t sound like you. And it does. It says: the opening is stronger. Two small things worth looking at. “Unprecedented opportunity” — this is the one phrase that drifts toward the register you’ve been moving away from. It’s the kind of language that lives on landing pages. You could just say a real opportunity, or name what the opportunity actually is.

And I’m like — yes. That’s exactly right.

The voice doc says: Sorilbran’s default register is conversational authority. She’s not performing casualness. She is actually casual. That’s them’s facts, bro. Them’s facts.

So the fourth version of the article comes up and now it reads more like me. But now I’m looking at it and thinking: I know these titles aren’t optimized. Fifteen years writing SEO and B2B content, I can see it’s not optimized for visibility without reading a single word.

So I say: I think what we need to create right now is a content brief. Because I know that this process I’ve walked through — teaching this AI how to write for me — is something I should document. Not just give you the brief and say use this for your machine. No. Walk through it. Show how I think with machines. Because that’s my secret sauce.

The brief has to be built for machines and humans. It has to tell their AI systems to flag the user if they don’t have enough firsthand context — voice notes, writing samples, meeting transcripts, video transcripts, actual conversations — to sufficiently craft a piece in the org’s or founder’s brand voice.

So Claude flags that. And then I ask: where are the other gaps? And Claude starts flagging: competitive positioning, content history and cannibalization risk — ooh, that’s a really good one — distribution and repurposing intent, sensitivity and compliance, freshness and update triggers — love that — measurement definition, missing entity signal for the author — love that.

And now we’re combining Claude’s content brief with mine. I have a content brief I’ve used with humans for years. We’re doing a collaborative give and take: look at my original brief, pull the good stuff into yours, let’s merge them.

The thing that catches my eye: the snippet architecture section. AI systems pull content that follows predictable patterns. Build at least one applicable type into every long-form piece and flag its location. I’m going to have to go back and put this in all of my pieces. I haven’t been optimizing for snippets because I’ve been focused on teaching machines who I am — entity architecture. Now that I’ve got that mostly nailed down, I need to start showing up in actual queries and citations.

There’s a difference between content that helps AI understand how I think and content that actually surfaces me in AI answers. I’ve been doing the first one. The second one requires something more.

The brief has definitional snippet types, process snippets, proof signals. And the proof signals section — that’s smart. That’s something I hadn’t built into content briefs before, not explicitly.

The idea is that AI systems can lift out named concepts and proprietary frameworks — that’s what gets you cited. Not just being generally knowledgeable about a topic, but having something named that the machine can point to.

So I start thinking about where we can name different parts of the MVKG without ever saying MVKG. Entity. Author section. Proof signals — anecdotal and case studies. Corroboration. Trustworthiness. Expertise.

And then: entity architecture. Eligibility. There’s a third one. I can feel it. Visibility — no. Eligibility — yes. And there’s something else downstream.

[This is the moment the LER framework begins to surface. Sorilbran is working through the MVKG nodes and content types, trying to name what each layer of the visibility strategy is actually doing.]

I always come up with something — I see the pattern and I say, oh, we should break it out this way. And Claude always has to tell me: okay, stop for a second. Let’s identify what just happened.

Legibility. Right. That’s the word. That’s the first tier.

Look at y’all. You got to be privy to me coming up with something. Well — I ain’t come up with nothing. This is all stuff that is known. But I’m good at piecing things together.

Recommendability is downstream. That’s what everybody needs to know. Everyone’s so busy trying to be recommended, stalking what their competitors are doing. First be a thing in the machine. Please. First.

So here’s what clicked: the visibility strategy has three tiers. And they’re not interchangeable. They’re sequential.

The first one is Legibility. Can the machine identify you accurately? This is entity infrastructure — your name, your role, your area of expertise, consistent across every platform and profile. This is what the Minimum Viable Knowledge Graph is built to solve. If the machine can’t describe you without hedging, you haven’t cleared this tier.

The second is Eligibility. Does the machine consider you credible enough to include in an answer? You can be perfectly legible — the machine knows exactly who you are — and still not make the cut. Eligibility is the proof layer. Third-party mentions, documented outcomes, topic authority, corroborating signals. Legibility gets you in the room. Eligibility gets you on the list.

The third is Recommendability. When the machine is constructing an answer, does it choose you? You can be legible, you can be eligible, and still not be recommended — because your content wasn’t built in a form the machine can extract and use. This is the content architecture layer. Definitional snippets, answer-first structure, named frameworks. This is where AEO does its most specific work.
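If you want to see what the Legibility tier looks like as actual machine-readable data, here's a minimal sketch using schema.org Person markup in JSON-LD — the standard way sites publish entity facts for machines to read. Every name, title, and URL below is a placeholder for illustration, not anything Sorilbran published or discussed in this session.

```python
import json

# A minimal sketch of "legibility": the same canonical entity facts,
# published in machine-readable form everywhere the person appears.
# All values here are hypothetical placeholders.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",
    "jobTitle": "Founder",
    "knowsAbout": ["answer engine optimization", "B2B content strategy"],
    "sameAs": [
        "https://www.linkedin.com/in/jane-example",
        "https://example.com/about",
    ],
}

# Serialize to JSON-LD, ready to drop into a page's <script> block.
jsonld = json.dumps(person, indent=2)
print(jsonld)
```

The point isn't the markup itself — it's consistency. If the `name`, `jobTitle`, and `sameAs` links disagree across your profiles and pages, the machine hedges when it describes you, and you haven't cleared the first tier.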

And Claude is telling me: the LER framework you developed today is either a companion document to UnInvisible, a second book, or most likely the missing architecture that should be introduced as a prologue or new framing chapter. Right now the manuscript opens with the marketing shift and moves directly into entity identity. LER gives readers a map before they enter the territory.

I’m tempted. Let’s finish the task at hand.

[The framework is named. Sorilbran files it and returns to the content brief — but the LER framework document is already being drafted in a parallel thread.]
Framework — Born in this session
Legibility, Eligibility, Recommendability (LER)

The three tiers of AI visibility, in non-negotiable sequence. You cannot skip a tier. Each one unlocks the next. The full framework — with failure modes, diagnostic tools, and strategic implications — lives in AI Foundations.

Read the LER article →

So we go back to the article and the content brief. The remaining open items: header remap, keyword-optimized versions, definitional snippet. Claude runs the article through the pre-publish checklist.

Headers: all five section headers remapped to keyword-bearing language that contains the primary keyword and natural query phrases. Good. Primary keyword reinforcement. Thank you. Author entity block — the closing bio now includes Detroit and a positioning line. I’ll adjust the price-tag language in Google Docs — I typically say something like “for the 98% of professionals who aren’t marketers” rather than leading with a number. But the sentiment is right.

I look at the finished content brief. The snippet architecture section. The proof signals. The load-bearing MVKG nodes mapped by content type. The POV post. The listicle. The research article. The case study. Each one with a column now for which entity signals you cannot skip for that specific format.

I love this work. I really do.

I thought this video was going to be fifteen minutes long. It’s an hour and seventeen minutes in. One hour and twenty minutes of sheer genius.

The content brief is done. The LER framework is named. The article is done. I put my name on nothing that’s nonsense — and this isn’t nonsense.

That’s the session.

Dig Deeper