The AEO Playbook for Chiropractic: How to Be the Practice AI Actually Recommends
98 queries. 3 AI models. 25 practices. 2,293 citations. Here’s what predicts whether a chiropractor shows up in AI answers — and a step-by-step system for making it happen.
Josh Grant’s Definitive 2026 Guide to AEO is the best operating manual for AI search visibility that exists. His frameworks are built from running AEO at Webflow, where 1.2% CMS market share turned into ~60% of AI-generated answers in their category. The playbook works.
But it was built for SaaS.
I’ve spent the last two months running the first AEO study in a healthcare practitioner vertical — chiropractic care, specifically. 98 queries across ChatGPT, Gemini, and Perplexity. 25 practices audited. Structural fingerprinting across 35 attributes. Off-site ecosystem mapping across 30+ platform categories. Full correlation matrices between signal variables and AI citation frequency.
The core AEO principle — structure your content so models can actually use it — validated completely. The specific implementation looks nothing like SaaS.
This is the chiropractic-adapted playbook. Every recommendation has data behind it. Every “do this” has a “here’s why” with a correlation coefficient or a competitive landscape finding attached. If you run a chiropractic practice and want AI models to recommend you instead of the 1,041 other practice sites they currently treat as interchangeable, this is the operating system.
The Competitive Landscape (Why This Works Right Now)
Before the playbook: the opportunity.
I audited 31 chiropractic practice sites in the niche study and 25 in the local study. Here’s what the competitive landscape looks like:
Zero practices have FAQ Schema markup. Zero.
Zero practices have LocalBusiness or MedicalBusiness Schema.
Zero practices have MedicalCondition Schema.
Zero practices cite specific research studies by name on their condition pages.
Zero practices have clinical decision frameworks (“if X, then Y”) in their content.
Zero practices have comparison tables (chiropractic vs. PT vs. injection vs. surgery).
Zero practices address Fear/Safety questions with dedicated content.
The average structural fingerprint score across the top 6 local practices was 16.7 out of 35. The range: 10 to 22. Nobody terrible. Nobody good. Everyone equally mediocre.
In Grant’s framework, this is a market with no interpretability advantage to overcome. The first practice to implement structured, high-information-gain content has literally zero competition for those signals.
Grant’s Webflow had to outstructure competitors who were already structured. You don’t. You’re walking into an empty field.
Step 1: Fix Your Entity Clarity (Before You Touch Content)
Grant’s playbook starts with content structure. For chiropractic, you start earlier — because AI models need to know you exist in a specific location before they can recommend you for anything.
The local study produced the strongest correlations I’ve measured in either phase of research:
Owner response rate on Google reviews: r = +0.826 (p = 0.043)
Location-name match (practice name contains the city): r = +0.846 (p = 0.034)
Review condition density (specific conditions mentioned per 1,000 words of review text): r = +0.631
These are the signals that predict whether AI models recommend your practice. Not review count (r = −0.453, negative). Not domain authority (r = −0.106, essentially zero). Not how many platforms you’re on (r = −0.533, negative).
Every volume metric runs negative. Every clarity metric runs positive.
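If you want to run this kind of check against your own market's data, here is a minimal sketch of how one of these signal-vs-citation correlations can be computed. The practice numbers in it are illustrative placeholders, not the study data.

```python
# Minimal sketch: correlate one clarity signal (owner response rate) with
# AI citation counts. The rows below are illustrative, not the study data.
from scipy.stats import pearsonr

# One row per audited practice: (owner response rate, AI citations across 3 models)
practices = [
    (1.00, 77),
    (0.85, 68),
    (0.60, 41),
    (0.40, 25),
    (0.20, 12),
]

response_rate = [p[0] for p in practices]
citations = [p[1] for p in practices]

r, p_value = pearsonr(response_rate, citations)
print(f"r = {r:+.3f}, p = {p_value:.3f}")  # positive r: more responses, more citations
```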
The proof: Bee Cave Chiropractic has 18 Google reviews and 77 AI citations across three models. Elite Wellness has 377 Google reviews and 62 citations. That’s 4.28 citations per review versus 0.16 — a 26.6× difference in citation efficiency.

Bee Cave Chiropractic wins because its name IS the search query (“best chiropractor in Bee Cave”), its doctor is named consistently in every review, it responds to 100% of reviews, and its reviews mention specific conditions at the highest rate in the market.
What to do this week:
Respond to every Google review. Every single one. Within 48 hours. And every response should name the condition treated (“We’re glad your shoulder pain has improved”) and reference the treating doctor by name. This is the single highest-correlation signal in the study. If you’re at 70% response rate and your competitor is at 100%, you’re losing a race you didn’t know existed.
Audit your practice name against your geography. If your practice name doesn’t contain your city or neighborhood, you need to inject that geographic signal everywhere else — page titles, H1 headings, meta descriptions, footer, schema markup, image alt text. “Back Pain Treatment” becomes “Back Pain Treatment in [Your City], TX — Dr. [Name], DC.” On every page.
Consolidate your practitioner naming. If external directories still list a previous doctor, former partner, or alternate spelling, fix it. Models can’t recommend a practice when they can’t resolve who actually works there. Audit Yellow Pages, Yelp, Healthgrades, your Chamber of Commerce listing, Nextdoor — anywhere your practice appears.
Claim missing health directories. In our study, Healthgrades received 43 AI citations across three models. Zocdoc received 35. Gemini and Perplexity actively pull from these platforms as citation sources. If you’re not on Healthgrades, WebMD, and Vitals with complete profiles — conditions treated, credentials, location, photos — you’re missing citation surfaces that models are already using.
Step 2: Deploy Schema (The Schema Desert)
Grant calls schema “the way you teach models what your content is, how concepts connect, what belongs together.” He’s right. And in his world, competitors already have it.
In chiropractic, nobody has it.
I call this the Schema Desert — zero of 31 sites in the niche study and zero of 25 sites in the local study implement any structured data markup. Not FAQ Schema, not LocalBusiness Schema, not MedicalCondition Schema. Nothing.
Grant’s Webflow needed schema to compete with structured competitors. You need schema to compete with nobody. The first practice in your market to deploy structured data has a structural advantage that will take competitors months or years to notice and replicate.
Why this matters for AI specifically: Schema is how you give models a machine-readable map of your content. Without it, models have to infer what your page is about from unstructured text. With it, you’re explicitly telling the model: this page is about sciatica, this is the FAQ section, this is our business address, this is the treating practitioner, these are the conditions we treat. Models parse schema faster, trust it more, and reuse it more consistently.
What to deploy:
LocalBusiness + MedicalBusiness Schema on your homepage and any area/location pages. Include practice name, address, phone, hours, geo coordinates, service area, medical specialties, and the treating doctor linked as an employee with credentials.
FAQPage Schema on every condition page. 5-7 question/answer pairs per page, marked up as structured data. The questions should come from what patients actually ask (more on this in Step 4). Each Q&A pair becomes an atomic unit that models can extract and reuse across dozens of queries.
MedicalCondition Schema on every condition page. Structured data identifying the condition name, relevant anatomy, possible treatments, risk factors. This gives models a knowledge graph node for every condition you treat.
Organization Schema connecting your practice entity → location → practitioner → services → conditions treated. Include sameAs links to your Google Business Profile, Yelp, Instagram, YouTube, Healthgrades, and every other platform where your practice appears. This is how you tell models that all of these profiles are the same entity.
If you’re on Webflow, WordPress, or any modern CMS, this is custom code injection in page settings. Not a redesign. Not a new website. JSON-LD blocks added to existing pages. A developer can do this in a day or two. The impact is permanent.
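As a concrete illustration, here is a minimal sketch of the kind of JSON-LD blocks this step describes, covering the LocalBusiness/MedicalBusiness and FAQPage types above. Every practice detail in it is a placeholder, and property choices should be verified against schema.org before deploying.

```python
# Minimal sketch of the JSON-LD injection described above. All practice details
# are placeholders; verify property names against schema.org before deploying.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": ["LocalBusiness", "MedicalBusiness"],
    "name": "Example Chiropractic of Yourtown",          # placeholder
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Yourtown",
        "addressRegion": "TX",
        "postalCode": "00000",
    },
    "telephone": "+1-555-555-5555",
    "geo": {"@type": "GeoCoordinates", "latitude": 30.0, "longitude": -97.0},
    "openingHours": "Mo-Fr 08:00-18:00",
    "medicalSpecialty": "Chiropractic",                   # check schema.org's MedicalSpecialty values
    "employee": {
        "@type": "Person",
        "name": "Dr. Jane Example, DC",                    # placeholder
        "jobTitle": "Chiropractor",
    },
    "sameAs": [
        "https://www.google.com/maps/place/example",       # Google Business Profile (placeholder URL)
        "https://www.healthgrades.com/example",            # placeholder URL
    ],
}

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is chiropractic care safe for sciatica?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Example answer summarizing the research and your clinical reasoning.",
            },
        },
        # ...repeat for each of the 5-7 Q&A pairs on the page
    ],
}

# Each block gets pasted into the page inside <script type="application/ld+json">...</script>
for block in (local_business, faq_page):
    print(json.dumps(block, indent=2))
```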
Step 3: Rewrite Your Condition Pages (The Publishability Hypothesis)
This is where the chiropractic playbook diverges most sharply from the general AEO playbook.
Grant says: build “structured reasoning units — atomic, machine-ingestible building blocks that can be pulled into hundreds of adjacent queries.” He says each unit needs a high-signal summary, a context layer, and examples with decision logic.
That’s right. But in healthcare, the specific form that takes is different — because of three constraints that don’t exist in SaaS: regulatory risk (state boards and FTC police outcome claims), competitive risk (generic protocols can be copied), and liability risk (specific timelines create patient expectations).
The study produced a framework I call the Publishability Hypothesis: the content most likely to escape citation atomization and accumulate concentrated AI authority is content that publishes clinical reasoning — how you think through a case — rather than clinical claims — what outcomes to expect.
Here’s the difference:
CLAIM (what 99% of chiro sites publish): “Chiropractic care can help with sciatica. Call us today for a consultation!”
REASONING (what gets cited): “When evaluating a patient with lumbar disc herniation and radiculopathy, we consider several factors. A 2010 JMPT systematic review showed moderate evidence for short-term improvement with spinal manipulation. The ACP 2017 guideline recommends manipulation as first-line for acute low back pain. In our practice, we reassess functional progress at 2-week intervals using the Oswestry Disability Index, and if measurable improvement hasn’t occurred by 4-6 weeks, we discuss referral options including PT co-management or orthopedic consultation.”
The second version cites specific research. It shows clinical judgment. It contains decision logic (“if X, then Y”). It names a specific outcome measure. It includes referral criteria. And it creates zero liability — it describes a process, not a promise.
The data behind this: In Phase 1, the structural fingerprint analysis showed that information gain is the single strongest predictor of AI citation, with a 2.45-point gap on a 5-point scale between cited practice sites (average 3.75/5) and non-cited practice sites (average 1.3/5). Information gain measures whether a page provides information an AI model couldn’t generate from its training data alone. “Chiropractic can help sciatica” is something any model already knows — zero information gain. A named study, a specific reassessment timeline, a referral threshold — that’s new information the model can’t fabricate.
The template for every condition page:
H1: [Condition] Treatment in [Your City] — Dr. [Name], DC
What Is [Condition]? Patient-readable explanation. Not “acute lumbar radiculopathy.” “That shooting pain that runs from your lower back down your leg.”
What the Research Says. Named studies. Not “research shows.” Cite the author, journal, year, and key finding. Link to PubMed when possible. This is where you connect to the medical authority layer that AI models already trust. You’re not replacing Mayo Clinic — you’re bridging from their evidence to your clinical reasoning.
How We Approach [Condition]. Clinical reasoning. Decision frameworks. “We assess X. If we find Y, we recommend Z. If not, we consider A.” Include your specific treatment modalities and how your approach (corrective care, relief care, whatever your philosophy) applies to this condition.
Typical Timeline and Reassessment. Ranges, not promises. “Most patients see initial improvement within 2-4 weeks.” Include reassessment criteria: “If functional improvement hasn’t occurred by week 4, we [specific next step].” Named outcome measures where applicable (pain scale, range of motion, ODI).
When to See Someone Else. This is the section that separates you from every other practice site on the internet. Referral pathways. “If your symptoms include progressive neurological deficit, we refer for advanced imaging.” “If you’ve had symptoms for more than 12 weeks without improvement, a pain management or orthopedic consultation may be appropriate.” This is the trust differentiator that no competitor will build — because telling patients when NOT to see you feels counterintuitive. Which is exactly why it’s a moat.
Questions to Ask Any Chiropractor About [Condition]. Zero liability. Teaches patients how to evaluate care quality. “What outcome measures do you use?” “What’s your reassessment timeline if I’m not improving?” This content seeds evaluation language into patient vocabulary — and those patients carry that language into their Google reviews and AI conversations.
Frequently Asked Questions. 5-7 Q&A pairs with FAQPage Schema. Source the questions from what patients actually ask (see Step 4). Each answer should be a standalone reasoning unit — clear enough that a model could extract it and use it to answer a query without any other context.
Comparison Table. Chiropractic vs. PT vs. injection vs. surgery for this condition. Columns: Approach, Best For, Timeline, Evidence Level, When to Consider. Honest. Balanced. No competitor has these.
Apply this template to every condition you treat. Use the same structure on every page. Models learn patterns — when your sciatica page has the same heading structure as your neck pain page, which has the same structure as your headache page, models learn to trust and reuse your pattern across the entire site. Template consistency scored as a strong predictor in the structural fingerprint analysis (1.40-point gap between cited and non-cited sites).
Step 4: Mine the Questions Patients Are Actually Asking
Grant’s question mining framework identifies six question sources: Reddit, sales calls, support tickets, social media, reviews, and internal search. For chiropractic, three of those don’t exist in the same form — no Gong-recorded sales calls, no enterprise support tickets or chat tools, no B2B review platforms like G2.
But the sources that do exist are richer.
In the study, I built a 178-question patient corpus from Reddit threads, Google’s People Also Ask boxes, Perplexity suggested questions, and chiropractic podcast Q&As. When I clustered those questions by intent, they fell into four buckets — and the distribution was not what I expected.
Evaluation (23%): “Best chiropractor near me for sciatica.” “How to choose a chiropractor.” “Chiropractor vs. physical therapist.”
Outcome (22%): “Will chiropractic help my herniated disc?” “How long does it take to see results?” “Does chiropractic actually work?”
Process (22%): “How many visits for back pain?” “What happens at a first visit?” “How much does a chiropractor cost?”
Fear/Safety (33%): “Is chiropractic safe?” “Chiropractic stroke risk.” “Is my chiropractor scamming me?” “Is it normal to feel worse after an adjustment?” “Is chiropractic safe during pregnancy?”
Fear is the largest single intent category. One-third of all patient questions. And zero practices in the study have dedicated content addressing them.
This is the gap Grant describes when he writes about “the questions that matter most rarely look impressive in a dashboard.” Fear questions don’t have high search volume individually. They’re phrased inconsistently. They’re emotional. They’re the questions patients are embarrassed to type into Google but will absolutely type into ChatGPT because it feels more private.
And they’re the questions where AI models are least confident in their answers — which means they’re most hungry for a clear, authoritative, evidence-based source to cite.
Where to find your questions:
Patient conversations. If you use a wearable recording device like the Plaud Note, you capture 15-30 patient conversations per day in raw, unfiltered language. After 30 days, you have 400-600 transcribed interactions. Every question asked. Every concern voiced. Every term used. The synthetic persona piece covers this in detail.
Front desk phone calls. Patients call when they’ve already asked ChatGPT and still have questions. The gap between what AI told them and what they’re calling to confirm IS the content opportunity.
Google reviews. Grant says “reviews are just questions asked too late.” In the chiro study, review condition density (r = +0.631) is the strongest positive review signal. Reviews that mention specific conditions, treatments, and outcomes contain the exact language patients used when deciding. Mine them for vocabulary, not just sentiment.
Google’s People Also Ask. Type your top 10 conditions into Google and expand every PAA box. You’ll find 50-100 questions in an hour.
Perplexity suggestions. Ask Perplexity a patient question and look at the suggested follow-ups. These are the adjacent questions the model expects to be asked next.
Reddit (for language, not engagement). The r/chiropractic, r/backpain, and r/sciatica subreddits contain raw patient language. But a warning from the data: Reddit is actively hostile to chiropractic as a category. Zero mentions of any Bee Cave practice across 9,323 Reddit rows. Gemini — which ingests Reddit heavily — routes patients away from chiropractors on open-ended health queries. Mine Reddit for vocabulary and question patterns. Do not invest in Reddit as a channel.
What to do with the questions:
Cluster them by intent bucket. Map them to your condition pages. The FAQ sections on each condition page should directly answer the specific questions patients ask about that condition. The Fear/Safety questions deserve dedicated standalone content (Step 5). Every question that doesn’t map to an existing page reveals a content gap.
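If you want to rough out that clustering before doing it by hand, here is a minimal keyword-based sketch of the intent bucketing. The keyword lists are illustrative, and a real pass over your question corpus will still need human review.

```python
# Minimal sketch: bucket patient questions into the four intent clusters from
# the study. Keyword lists are illustrative; Fear/Safety is checked first on purpose.
INTENT_KEYWORDS = {
    "Fear/Safety": ["safe", "risk", "stroke", "scam", "worse", "dangerous", "pregnan"],
    "Evaluation":  ["best", "choose", "near me", "versus", " vs "],
    "Outcome":     ["help", "work", "results", "fix", "cure"],
    "Process":     ["how many visits", "cost", "first visit", "what happens", "how long"],
}

def classify(question: str) -> str:
    q = f" {question.lower()} "
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "Unmapped"  # unmapped questions reveal content gaps

questions = [
    "Is it normal to feel worse after an adjustment?",
    "How many visits for back pain?",
    "Best chiropractor near me for sciatica",
    "Will chiropractic help my herniated disc?",
]

for q in questions:
    print(f"{classify(q):<12} {q}")
```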
Step 5: Build the Fear Content Nobody Else Will
This is the highest-leverage content opportunity in the entire study.
33% of patient questions. Zero competitive supply. The largest intent cluster in the corpus with nobody addressing it.
And here’s what makes it even more compelling: in the cross-model audit, one practice accidentally over-indexes on Fear queries — getting mentioned in 33% (GPT) and 42% (Perplexity) of Fear-related responses without having any Fear content at all. The signal is coming from review language and trust cues. Imagine what happens when you deliberately build evidence-based content for the queries patients are most anxious about.
The Fear content series (in priority order):
“Is Chiropractic Safe? What the Research Actually Shows.” Addresses stroke risk, nerve damage, general safety anxiety. This is the largest single Fear cluster — 18+ questions in the corpus. Name the studies. Cite the actual risk statistics. Be honest about what the evidence supports and what it doesn’t. AI models hedge heavily on these queries right now because they don’t have a balanced, practitioner-credentialed source to cite. Be that source.
“Is It Normal to Feel Worse After a Chiropractic Adjustment?” Post-adjustment soreness vs. red flags. A decision tree: “if you experience X, that’s normal soreness that typically resolves in 24-48 hours. If you experience Y, contact us. If you experience Z, go to the ER.” Clinical reasoning applied to a patient’s in-the-moment anxiety.
“How to Tell If Your Chiropractor Is Evidence-Based.” Addresses pseudoscience skepticism. A “questions to ask” format that positions your practice as the transparent, trustworthy option. Include what evidence-based practitioners do differently and how patients can evaluate any chiropractor’s approach.
“Is My Treatment Plan Normal? Red Flags vs. Green Flags.” Directly addresses the “is my chiropractor scamming me” cluster. Decision framework format: here’s what a reasonable treatment plan looks like for common conditions, here’s when to ask questions, here’s when to get a second opinion. This is the trust content that no competitor will build because it feels like it could cost them patients. It won’t. It builds trust that compounds.
“Is Chiropractic Safe During Pregnancy?” Population-specific Fear. Named studies available. Clear scope boundaries. Referral criteria. This content serves a distinct patient segment with specific anxiety.
“What to Expect at Your First Chiropractic Visit.” Bridges Fear and Process. Reduces the anxiety that stops first-time patients from booking. Walk through the actual visit step by step, using the vocabulary patients use (not clinical terminology).
Each Fear page follows the same Publishability Hypothesis template from Step 3. Named research. Honest scope boundaries. The “when to see someone else” section. FAQ Schema. The professional courage required to publish this content IS the competitive moat. Schema can be copied. Heading structure can be copied. Willingness to say “the historical foundation of our profession has real problems, but here’s what the evidence actually supports” cannot.
Step 6: Launch Video (The Surface Nobody Has)
The study found that TexStar Chiropractic is the #2 most-cited practice in Bee Cave (68 citations) — achieved without the location-name match advantage that gives the #1 practice its edge. TexStar is the only top-tier practice with a YouTube program.
YouTube matters for AI because Google’s Gemini integrates YouTube content into AI overviews, Perplexity indexes video transcripts, and video creates entity signals — doctor name + practice name + location + condition — in a format that 5 of 6 top competitors in our study completely lack.
The video content flywheel I documented in an earlier piece turns one filming session into six structured assets: YouTube video → blog post → FAQ schema → condition page embed → email capture → cross-links. The production framework exists.
What to film first (mapped to study data):
Film 5 condition-specific videos covering the conditions most mentioned in your Google reviews. Title format: “[Condition] Treatment | Dr. [Name] | [Practice Name] [City] [State].” Feature yourself on camera — Fear content works especially well on video because patients can assess the practitioner’s trustworthiness visually.
Include the same FAQ questions from your condition pages in the YouTube description. Add chapter markers. Link to the corresponding condition page on your site. Every video is a new entity signal node that connects your practice name, your doctor name, your location, and a specific condition in a format models can parse.
You don’t need production quality. You need consistency and specificity. One video per week for 5 weeks creates a surface area that most competitors don’t have at all.
Step 7: Coach Review Quality (Not Quantity)
This is the single most counterintuitive finding in the study, and it directly contradicts what every reputation management company is telling you.
Review count correlates negatively with AI citations. r = −0.453. The practice with the most reviews (377) has the worst citation efficiency (0.16 citations per review). The practice with the fewest reviews (18) has the best (4.28 citations per review).
The difference: review condition density. The winning practice has 14.7 conditions mentioned per 1,000 words of review text — specific conditions like migraines, TMJ, lupus, car accident injuries. The losing practice has 8.7 conditions per 1,000 words. More words, proportionally fewer condition-specific insights.
A review that says “Great experience! Highly recommend! 5 stars!” gives an AI model nothing to work with. A review that says “Dr. Swanson helped my chronic migraines — after three visits the frequency dropped from weekly to monthly” gives the model a practitioner, a condition, a treatment duration, and an outcome.
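If you want to score your own review profile, here is a minimal sketch of the condition density metric. The condition list and sample reviews are illustrative only.

```python
# Minimal sketch of review condition density: condition mentions per 1,000 words
# of review text. The condition vocabulary and sample reviews are illustrative.
import re

CONDITION_TERMS = [
    "migraine", "sciatica", "tmj", "herniated disc", "neck pain",
    "back pain", "shoulder pain", "car accident", "headache",
]

def condition_density(reviews):
    text = " ".join(reviews).lower()
    words = len(re.findall(r"\w+", text))
    mentions = sum(text.count(term) for term in CONDITION_TERMS)
    return mentions / words * 1000 if words else 0.0

sparse = ["Great experience! Highly recommend! 5 stars!"]
dense = ["Dr. Swanson helped my chronic migraines; after three visits the "
         "frequency dropped from weekly to monthly."]

print(condition_density(sparse))  # 0.0: nothing for a model to extract
print(condition_density(dense))   # > 0: condition, outcome, and practitioner named
```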
How to engineer this without being manipulative:
Change your review request prompt. Instead of “How was your experience?” ask two questions: “What brought you in?” and “What changed?”
The first prompt naturally generates condition-specific language. The second naturally generates outcome language. You’re not coaching patients on what to write — you’re asking questions that produce informative answers instead of empty ones.
Also: respond to every review (100% response rate, r = +0.826), and in your responses, reference the condition by name and reference the treating doctor by name. This doubles the amount of condition-specific, entity-specific text available to models on your review profile.
Stop chasing review count. Start engineering review information density. If you already have 100+ reviews, you have enough volume. What you need is reviews where patients name what they came in for, who treated them, and what happened.
Step 8: Know What NOT to Do
The study identified several common marketing activities that show zero or negative correlation with AI citations. Avoiding these is as important as executing the playbook.
Don’t chase more Google reviews if you already have a solid base. Review count correlates negatively. More volume dilutes condition density. Focus quality.
Don’t invest in Reddit. Zero mentions of any practice in our study market across 9,323 Reddit rows. The Reddit environment is actively hostile to chiropractic. Gemini routes patients away from chiropractors on open-ended health queries that draw from Reddit training data. Dead channel.
Don’t build backlinks for domain authority. DR correlates at r = −0.106 with AI citations. One practice in the study has $462K in estimated organic traffic value and only 32 AI citations. SEO authority doesn’t translate to AI visibility.
Don’t spread across more social platforms. Platform count correlates negatively (r = −0.533). Depth on the right platforms beats breadth across many.
Don’t publish generic blog content. “5 Benefits of Chiropractic” is exactly what an AI model can generate from its training data. There is zero reason for a model to cite you for information it already has. Zero information gain = zero citation value.
The Model-Specific Reality (Why Cross-Model Optimization Matters)
One finding the general AEO playbook doesn’t address: different AI models weight different signals.
GPT-4o prioritizes entity-geography alignment. The correlation between location-name match and GPT citations: r = 0.913. GPT cares most about identity resolution — can it cleanly determine that your practice is in the location the patient is asking about?
Perplexity prioritizes domain authority. DR vs. Perplexity citations: r = 0.721. This is the one model where traditional SEO metrics have measurable predictive value. Perplexity also leans heaviest on health directory platforms (Healthgrades: 26 citations, Zocdoc: 23).
Gemini inversely weights both and applies the heaviest YMYL filter — 24.3% of its citations go to medical authority sites (43% more than the other models). Gemini is the skeptic. It gave the highest number of zero-citation responses and is most likely to route patients away from chiropractors entirely on open-ended health queries.
The strategic implication: Optimizing for one model can hurt performance on another. A practice that invests heavily in domain authority to win on Perplexity may see no lift on Gemini. The only signals that run positive across all three models are the clarity metrics: review quality, naming consistency, response rate, information gain, clinical reasoning.
Cross-model optimization requires signal clarity. There is no shortcut that works everywhere.
The Complete System (What This Looks Like When It’s Running)
Grant describes the AEO flywheel as: Discover demand → Build structured answers → Add schema → Optimize for model consumption → Measure → Feed insights back.
Here’s what that flywheel looks like adapted for a chiropractic practice:
Discover: Mine patient conversations (Plaud or call recordings), Google reviews, PAA boxes, and Perplexity suggestions for questions. Cluster by intent: Evaluation, Fear/Safety, Outcome, Process. Identify which conditions and which intent buckets have no content.
Build: Write condition pages using the Publishability Hypothesis template. Build Fear content nobody else will touch. Create comparison tables. Film condition-specific videos. Every piece follows the same structural template — models learn patterns and reward consistency.
Structure: Deploy FAQ Schema, LocalBusiness Schema, MedicalCondition Schema, Organization Schema. Connect entities: practice → location → practitioner → conditions → services. This is the structural layer that zero competitors have.
Optimize: Inject geographic entity strings everywhere. Consolidate practitioner naming. Claim health directories with complete profiles. Achieve 100% review response rate with condition-specific language.
Measure: Run the same patient queries through ChatGPT, Gemini, and Perplexity monthly. Track: Are you being mentioned? For which conditions? By which models? Are your pages being cited as sources? Is your entity being resolved correctly? Which Fear queries are you winning? This is your visibility dashboard (a minimal scripted sketch of this check appears after this list).
Refine: New patient questions surface new content gaps. Review language evolves. New research gets published. Update condition pages with fresh citations. Film new videos addressing emerging questions. The system stays alive because patient conversations never stop generating signal.
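For the Measure step, here is a minimal sketch of that monthly check, assuming an OpenAI API key and the official openai Python client. Gemini and Perplexity would need equivalent calls through their own APIs, and the queries and practice names below are placeholders.

```python
# Minimal sketch of the monthly visibility check: run the same patient queries
# and log whether the practice is mentioned. Assumes OPENAI_API_KEY is set;
# Gemini and Perplexity need their own clients. Queries and names are placeholders.
import csv
from datetime import date
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRACTICE_NAMES = ["Example Chiropractic", "Dr. Jane Example"]   # placeholders
QUERIES = [
    "Who is the best chiropractor in Yourtown, TX for sciatica?",
    "Is chiropractic safe for a herniated disc?",
    "How do I choose a chiropractor in Yourtown?",
]

rows = []
for query in QUERIES:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )
    answer = response.choices[0].message.content or ""
    mentioned = any(name.lower() in answer.lower() for name in PRACTICE_NAMES)
    rows.append({"date": date.today().isoformat(), "query": query, "mentioned": mentioned})

# Append this month's results to a running log you can trend over time.
with open("visibility_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "query", "mentioned"])
    if f.tell() == 0:
        writer.writeheader()
    writer.writerows(rows)
```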
Timing: Why First Movers Have Disproportionate Advantage
Right now — early 2026 — structure is the moat. Nobody in chiropractic has it. The first practice to implement this playbook in any local market has zero competition for structured data signals, zero competition for Fear content, zero competition for comparison tables, zero competition for clinical reasoning published in extractable format.
That window won’t stay open. Once the first movers prove the model works, fast followers will copy the structural patterns. And eventually — maybe 2028, maybe later — AI models will get better at detecting what’s synthesized from public sources versus what contains genuinely original clinical data.
When that happens, the moat shifts from structure to information provenance — proprietary outcome data embedded in structured content. “Across 127 patients we’ve treated for sciatica, average time to 50% improvement is 3.2 weeks” is a data point no content agency can fabricate and no competitor can reverse-engineer. That’s the Phase 3 moat.
But you can’t build Phase 3 without Phase 1. And Phase 1 is free real estate right now.
There’s also a secondary compounding effect: practices that are consistently cited across models for 2-3 years build entity trust that accumulates. Being cited today makes you more likely to be cited tomorrow. It’s the Matthew Effect applied to AI recommendations — the rich get richer, but only if the foundation is built early.
Every month you run this system, the advantage grows. Every month you don’t, conversations disappear into the air instead of becoming content assets, and a competitor who started earlier gets harder to catch.
The playbook is here. The competition isn’t. Move.
This playbook is built on the chiropractic AEO study (98 queries × 3 models × 25 practices), the video content flywheel analysis, the synthetic persona framework, and Josh Grant’s Definitive 2026 Guide to AEO and Question Mining Guide. Full study data including correlation matrices, structural fingerprint scorecards, and archetype classifications available here.
Phase 2 — testing whether this playbook actually produces measurable citation lift on a real practice — is underway. Subscribe to follow the experiment.
If you run a practice and want to understand where your signals stand, I’m reachable here.