Minimum Viable Knowledge Graph



Six interconnected nodes that give machines enough clarity to recommend with confidence.

The Minimum Viable Knowledge Graph (MVKG) is the smallest set of interconnected nodes that gives machines enough structured clarity to recommend a business with confidence. It is not a complete picture; it is the foundation that prevents LLMs from hedging, hallucinating, or defaulting to competitors.

Framework ID
F-003 · MVKG
Layer
Layer 01: Structural
Status
Published · 2026
Author
Sorilbran Stone · Five-Talent Strategy House
Track
Structural · Core IP
Use When
Building entity architecture from scratch or repairing an incomplete knowledge graph. Run after the Entity Archaeology diagnostic when digital history exists.

What Is It

The Minimum Viable Knowledge Graph (MVKG) is the smallest set of interconnected nodes that gives machines enough structured clarity to recommend a business with confidence.

It is not a complete picture of what a business does. It is the foundation: the six load-bearing nodes that prevent LLMs from hedging, hallucinating, or defaulting to competitors when asked who to recommend for a specific need.

An MVKG is distinct from a full knowledge graph. A full graph includes every signal: every mention, every partnership, every piece of content. The MVKG is the irreducible minimum: the nodes that must be present, corroborated, and interconnected before a machine can answer “who should I recommend for X?” without uncertainty.

The goal is not comprehensiveness. The goal is confidence. When these six nodes are present and aligned, the machine stops hedging and starts citing.

The Core Insight

Machine confidence isn’t built on volume of content. It’s built on structural clarity across interconnected nodes.

A single strong node means nothing if it stands alone. The machine needs to see relationships between nodes: how your identity connects to your specialization, how your specialization connects to the audience you serve, how your proof validates your expertise, how your connections corroborate your claims.

When nodes are present but disconnected, the machine encounters them as isolated facts. It cannot synthesize them into a coherent recommendation. The MVKG solves this by defining which nodes must exist and how they must connect to create the minimum viable structure for machine confidence.

This is why a brand can have thousands of mentions and still not show up in AI recommendations. Volume without structure creates noise, not authority.

The Six Nodes

Every MVKG requires six interconnected nodes. Each node answers a specific question the machine needs resolved before it can recommend with confidence.

Node 1: Entity Identity

What the machine needs to know: Who is this, and is this name stable across sources?

Entity Identity is the most basic node, but also the most fragile. It answers: what is this thing called, and is that name consistent enough for the machine to track it as a single entity?

For people, this includes legal name, professional name, and any name variations that appear in public sources. For businesses, this includes business name, DBA, parent company relationships, and founder associations.

Disambiguation signals matter here. If your name is shared with someone else, the machine needs additional context to separate you from them. Location, industry, or a unique title can serve as the disambiguation layer.

Common failure mode: The entity exists under multiple names with no clear canonical version. The machine treats them as separate entities or hedges because it cannot confirm they refer to the same thing.
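Concretely, the canonical version can be declared in schema.org JSON-LD so machines have one authoritative record to reconcile name variants against. A minimal sketch; every name, title, and URL below is an illustrative placeholder, not part of the framework:

```python
import json

# Minimal schema.org JSON-LD record for a person entity.
# Every name, title, and URL here is an illustrative placeholder.
entity = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",                # the single canonical name
    "alternateName": ["J. Example"],       # known variants, declared explicitly
    "jobTitle": "AI Visibility Engineer",  # unique title as a disambiguation signal
    "address": {"@type": "PostalAddress", "addressLocality": "Detroit"},
    "sameAs": [                            # profiles that refer to this same entity
        "https://www.linkedin.com/in/jane-example",
        "https://example.com/about",
    ],
}

print(json.dumps(entity, indent=2))
```

The sameAs links are the disambiguation layer in markup form: they tell the machine which external profiles refer to this one entity, so mentions under variant names resolve back to the canonical record.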

Node 2: Specialization

What the machine needs to know: What does this entity do, specifically?

Specialization is not a tagline. It is the structured answer to “what capability does this entity have that makes it worth recommending?”

This node must be specific enough to differentiate but broad enough to map to multiple buyer pain points. “Marketing consultant” is too broad. “B2B SaaS demand gen for Series A startups” is specific. “Fractional CMO for AI infrastructure companies transitioning from founder-led sales to scalable pipeline” is hyper-specific, and only works if the market is large enough to generate search volume.

The machine looks for corroboration. If you say you specialize in X, but your proof, connections, and audience context all point to Y, the machine encounters conflicting signals and defaults to hedging.

Common failure mode: The specialization is too vague, too broad, or contradicted by other nodes. The machine cannot determine what to recommend this entity for.

Node 3: Audience & Context

What the machine needs to know: Who is this for, and in what situation?

Audience & Context is the targeting layer. It tells the machine not just what you do, but who you do it for and when they need it.

This is where psychographic and situational context live. A fractional CMO for early-stage startups is different from a fractional CMO for PE-backed scale-ups. A brand strategist for founders pivoting out of services into products is different from a brand strategist for established DTC brands.

The machine uses this node to match intent. When someone asks “who can help me with X in Y situation,” the machine cross-references your Audience & Context node against the query. If the match is strong, you surface. If it’s weak or missing, you don’t.

Common failure mode: The audience is defined in demographic terms (company size, revenue, industry) but not situational terms (stage, challenge, urgency). The machine can match the demographics but cannot assess fit for the actual problem.

Node 4: Proof

What the machine needs to know: Is there evidence this entity has done what they claim?

Proof is the validation layer. It answers: has this entity actually delivered results in their stated specialization for their stated audience?

This includes case studies, testimonials, revenue numbers, client logos, published work, speaking engagements, awards, and third-party citations. The machine does not weigh all proof equally. First-party claims carry less weight than third-party corroboration. Specific outcomes carry more weight than vague endorsements.

Quantified proof strengthens this node significantly. “$12M in pipeline from organic search” is stronger than “drove significant organic growth.” The machine can cite the specific claim with confidence.

Common failure mode: Proof exists but is not structured for machine readability. It lives in PDFs, gated content, or unstructured testimonials that the machine cannot extract and cite.
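One way to make proof machine-extractable is to store each claim as structured data with its source and corroboration status. A sketch with an illustrative weighting; the scoring is an assumption for demonstration, not a published model:

```python
from dataclasses import dataclass

@dataclass
class ProofClaim:
    statement: str     # the specific, citable outcome
    source_url: str    # where a machine can retrieve and verify it
    third_party: bool  # confirmed by someone other than the entity itself
    quantified: bool   # contains a concrete number or named outcome

def proof_weight(claim: ProofClaim) -> int:
    """Illustrative scoring: third-party corroboration and quantified
    outcomes rank higher, mirroring the weighting described above."""
    return (2 if claim.third_party else 1) + (2 if claim.quantified else 0)

# Placeholder claims; the URLs are not real sources.
claims = [
    ProofClaim("Drove significant organic growth", "https://example.com/about", False, False),
    ProofClaim("$12M in pipeline from organic search", "https://example.com/case-study", True, True),
]
claims.sort(key=proof_weight, reverse=True)
print(claims[0].statement)  # the quantified, third-party claim ranks first
```

The design point is that each claim carries its own retrieval path: a claim without a public source_url is exactly the PDF-and-gated-content failure mode described above.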

Node 5: Expertise

What the machine needs to know: What does this entity know that qualifies them to do this work?

Expertise is distinct from Proof. Proof shows you did the thing. Expertise shows you understand the thing deeply enough to teach it, explain it, or innovate on it.

This includes frameworks, methodologies, published content, speaking, teaching, and intellectual property. If you have named frameworks (like MVKG, Entity Archaeology, Blue Puddles), those become citeable expertise nodes.

The machine treats original IP as a strong expertise signal. If you created a framework and others cite it, the machine understands you as the authority on that concept.

Common failure mode: The entity has expertise but has not documented it in a way machines can find. The knowledge exists in the founder’s head, in private client work, or in formats the retrieval layer cannot access.

Node 6: Connections

What the machine needs to know: Who else validates this entity, and what does that association signal?

Connections is the social proof and ecosystem layer. It includes partnerships, collaborations, media mentions, client relationships, professional associations, and network affiliations.

This node serves two functions: corroboration (other credible entities confirm this entity’s claims) and context transfer (if you work with X, the machine infers you operate at X’s level).

Geographic and ecosystem connections matter here. If you are part of a specific business ecosystem (Detroit tech scene, Y Combinator network, a specific accelerator cohort), those associations become part of your entity identity.

Common failure mode: Connections exist but are not publicly documented or machine-readable. The machine cannot find the partnership announcement, the collaboration, or the association.
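Associations only corroborate if a machine can retrieve them. A hedged sketch of how an ecosystem affiliation and a partnership announcement might be declared in schema.org JSON-LD; all names and URLs are placeholders:

```python
import json

# JSON-LD fragment documenting ecosystem membership and a partnership
# announcement so machines can find them. All names and URLs are placeholders.
connections = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Strategy House",
    "memberOf": {"@type": "Organization", "name": "Detroit Tech Ecosystem"},
    "subjectOf": {  # a public, retrievable partnership announcement
        "@type": "NewsArticle",
        "headline": "Example Strategy House partners with Example Accelerator",
        "url": "https://partner.example.com/announcement",
    },
}
print(json.dumps(connections, indent=2))
```

The memberOf property carries the ecosystem signal; subjectOf points the machine at the third-party page that documents the relationship, which is what makes the connection citable rather than claimed.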

How the Nodes Interconnect

The MVKG is not six isolated nodes. It is six nodes in relationship.

Entity Identity connects to Specialization (this entity does this specific thing).

Specialization connects to Audience & Context (this thing is for these people in these situations).

Proof validates Specialization (evidence this entity has delivered on the claim).

Expertise supports Specialization (this entity understands the domain deeply).

Connections corroborate Entity Identity and Specialization (other credible sources confirm this entity’s identity and capability).

When these relationships are clear, the machine can synthesize a coherent recommendation. When any relationship is missing or weak, the machine hedges or defaults to a competitor with a stronger graph.
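The relationships above can be sketched as a small directed graph with a check that flags any missing node or link. The node and edge names mirror this section; the check itself is illustrative, not part of the published framework:

```python
# Model the MVKG as a directed graph and flag gaps in the required structure.
REQUIRED_NODES = {
    "Entity Identity", "Specialization", "Audience & Context",
    "Proof", "Expertise", "Connections",
}
REQUIRED_EDGES = {
    ("Entity Identity", "Specialization"),    # this entity does this thing
    ("Specialization", "Audience & Context"), # for these people, in these situations
    ("Proof", "Specialization"),              # evidence validates the claim
    ("Expertise", "Specialization"),          # depth supports the claim
    ("Connections", "Entity Identity"),       # others corroborate identity...
    ("Connections", "Specialization"),        # ...and capability
}

def graph_gaps(nodes: set, edges: set) -> list:
    """Return human-readable gaps: missing nodes first, then missing links."""
    gaps = [f"missing node: {n}" for n in sorted(REQUIRED_NODES - nodes)]
    gaps += [f"missing link: {a} -> {b}" for a, b in sorted(REQUIRED_EDGES - edges)]
    return gaps

# Every node present, but Proof disconnected: the graph still fails the check.
nodes = set(REQUIRED_NODES)
edges = REQUIRED_EDGES - {("Proof", "Specialization")}
print(graph_gaps(nodes, edges))  # prints ['missing link: Proof -> Specialization']
```

This captures the core insight in code: the check passes only when both the nodes and the relationships between them are present.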

When to Run It

Situation → Recommendation

New brand or founder → Build the MVKG from scratch. This is the foundation. Skip Entity Archaeology.
2-5 years in business → Run Entity Archaeology first to identify what already exists. Use MVKG to fill gaps.
Repositioning or rebrand → Archaeology first, then MVKG. The old graph may contradict the new positioning.
Poor AI visibility despite strong business → Diagnostic first (Entity Archaeology + Hedge Signal Diagnostic), then MVKG repair.
Preparing for fundraise, PR push, or scale → Build or strengthen the MVKG 6-12 months before launch. The machine needs time to index.

The Process: Building Your MVKG

Step 1: Audit What Exists
If you have digital history, run Entity Archaeology first. Map what the machine already knows about you. Classify nodes as Anchored, Ghost, or Architectural Gap. If you’re starting from scratch, skip to Step 2.
Step 2: Define Each Node
Work through all six nodes systematically. For each node, answer: What does the machine need to know? What evidence exists to support this? Where is that evidence currently documented? Is it machine-readable?
Step 3: Identify Node Gaps
Which nodes are missing or weak? Prioritize based on what breaks the recommendation chain most severely. Node 1 (Entity Identity) gaps are highest priority. Without a stable identity, no other nodes matter. Node 2 (Specialization) gaps are second priority.
Step 4: Build the Missing Nodes
For each gap, create the structured content the machine needs: canonical bio (Entity Identity), specialization statement with corroboration (Specialization), audience definition with situational context (Audience & Context), case studies with quantified outcomes (Proof), frameworks and published content (Expertise), partnership pages and ecosystem affiliations (Connections).
Step 5: Connect the Nodes
Ensure each node references the others. Your case study should mention your specialization. Your bio should reference your frameworks. Your framework pages should link to proof of application. The machine follows these references to build the knowledge graph.
Step 6: Validate Machine Confidence
Once nodes are built, run a confidence check. Ask multiple LLMs (Claude, ChatGPT, Perplexity) what they know about your entity, without search. Then allow search and compare. Note where the machine hedges, hallucinates, or defaults to generic descriptions. Hedge language tells you which nodes are still weak.
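The hedge-language check in Step 6 can be crudely automated by scanning model answers for hedge phrases. A minimal sketch; the phrase list is an assumption for illustration, not an official diagnostic:

```python
# Scan an LLM's answer for hedge language. The phrase list below is
# illustrative; extend it with the hedges you actually observe.
HEDGE_PHRASES = [
    "may be", "might be", "appears to", "seems to", "i'm not sure",
    "i don't have enough information", "could not find", "it is unclear",
]

def hedge_signals(answer: str) -> list:
    """Return the hedge phrases found in a model's answer (case-insensitive)."""
    text = answer.lower()
    return [p for p in HEDGE_PHRASES if p in text]

# Placeholder answer text, not real model output.
answer = "Jane Example appears to focus on marketing, but it is unclear who she serves."
print(hedge_signals(answer))  # prints ['appears to', 'it is unclear']
```

Running the same check before and after building nodes gives a rough, repeatable signal of whether the machine has stopped hedging about a given node.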

Working Answers to Common Questions

Q: How long does it take for the machine to recognize a newly built MVKG?
A: Immediate recognition in the retrieval layer (search-enabled queries); training-data recognition takes months. The retrieval layer is where most AI recommendations happen now, so you’ll see impact within weeks if nodes are built correctly. Use structured data markup and schema to accelerate retrieval-layer indexing.

Q: Can a strong Node 4 (Proof) compensate for a weak Node 2 (Specialization)?
A: No. Proof without clear specialization creates pattern recognition problems. The machine sees evidence you did something but cannot determine what to recommend you for. Fix specialization first; then proof strengthens it.

Q: What’s the minimum corroboration threshold for a node to be considered strong vs. weak?
A: Two independent Tier 1 or Tier 2 sources (see Entity Archaeology source tiers) confirming the same fact about the entity. One source is a signal; two corroborating sources are an anchor. Below that, treat the node as weak and reinforce it before building architecture on top of it.

Q: How does MVKG interact with Blue Puddles positioning?
A: Blue Puddles is market positioning: identifying emerging micro-markets to claim. MVKG is structural foundation. Blue Puddles tells you what specialization to claim; MVKG tells you how to structure that claim so machines can understand and cite it. Build your MVKG first, then use Blue Puddles to expand your specialization node strategically into unclaimed territory.

Q: Should nodes be built sequentially or in parallel?
A: Sequential for Node 1 and Node 2: you cannot build other nodes until Entity Identity and Specialization are clear. Parallel for Nodes 3-6 once the foundation is stable. Proof, Expertise, and Connections can be built simultaneously as long as they all reference the same Specialization.

Q: What happens if nodes contradict each other?
A: The machine encounters conflicting signals and hedges. Example: your bio says “AI visibility engineer” but your case studies are all about paid ads. The machine cannot reconcile the contradiction and defaults to generic descriptions, or does not recommend at all. Audit for contradictions before publishing, and align all nodes to the same core specialization.