
Part 1: The tax you are already paying

Chapter 4 · 20 min read

The moment AI made the tax visible


On a Wednesday in late September, Maggie asked an AI model to brief her on her own work.

She was on a plane, preparing for a keynote the next day to a room of three hundred practitioners. She had done what keynotes usually do to her, which is cause her to second-guess the framework at the last possible moment. She wanted a fresh read. So at thirty-four thousand feet, she opened a laptop, opened the model, and typed: Summarize Maggie [Last Name]'s framework for [the thing she is known for]. Cite sources.

The model produced seven hundred fluent words.

The summary was beautiful. It was also wrong.

Not wildly wrong. Subtly wrong. The model had conflated two different frameworks Maggie had written about twelve years apart — one of them since retired, one of them currently central to her work. It had presented the retired framework as her current position. It had cited, as hers, a source she had not written, by someone whose work intersects with hers but draws a contrary conclusion. It had used terminology Maggie abandoned in 2017, and had replaced her current terminology with phrases that did not appear in her own work at all.

Reading it, Maggie realized three things in sequence.

The first was that the summary was better-written than anything she had produced about her own framework in the last two years, because her own articulations were scattered across six books written over three decades, and nobody — including her — had ever gathered a canonical version.

The second was that this summary — or something like it — was the answer a lot of people were now getting when they asked the model about her. Prospective speakers at her conference. Seminary students writing papers. Journalists preparing interviews. Peer thinkers fact-checking their own citations. Donors doing due diligence on the fellowship. Producers screening her for podcasts. Every one of those queries was, right now, being answered by a model doing what this one had just done: fluent approximation, pulled from whatever was most structured and findable in the corpus, assembled with confidence, and — at the places where her actual framework was most distinctive — consistently wrong.

The third thing she realized, which she did not fully accept until the plane landed, was that the model was not the problem.


This chapter is for Maggie on the plane.

It is also for every leader who has had a version of this experience in the last three years — the experience of asking an AI model about their own work, or their own organization, or their own field, and discovering that the model is now the ambient answer-giver for the world, and that the answer it is giving for them is subtly and consistently wrong.

I want to say up front something the AI book I wrote alongside this one argues in its second chapter, and which I will not re-argue here: AI is not what you think it is. And this chapter is not the AI chapter of this book. This book is not about AI. It is about fragmentation. AI enters the story only because AI is the moment fragmentation stopped being tolerable.

The claim I am going to defend in this chapter is simple.

AI did not create fragmentation. AI exposed fragmentation.

The fragmentation was always there. It was always expensive. It was, however, tolerable, because it was mostly passive — it quietly cost you things, and the cost was real but bounded. AI made the fragmentation active. The same scattered intelligence that used to quietly underperform is now being read, paraphrased, and publicly broadcast by a tool the entire world uses as its default answer layer. Fragmented intelligence used to be a drag on the organization. Fragmented intelligence is now a live misrepresentation of the organization, produced at a rate no one can correct.

That is the shift. That is the reason this book is being written in this decade and not in the last one.

And — this is the second half of the claim — AI also collapsed the cost of integration. The tools that now broadcast your fragmentation are many of the same tools that, turned the other direction, make integration tractable at a fraction of the cost it used to require. The pathway out was theoretically possible twenty years ago. It is practically possible now, for the first time, at a cost most mid-sized organizations can actually bear.

So the current moment is a particular kind of moment. Fragmentation has become both more expensive to ignore and cheaper to address than at any prior point in modern organizational history. That is the forcing function this chapter is about.


Fragmentation used to be passive

I want to spend a minute on what fragmentation used to cost, before AI, because the contrast is the whole point.

Before AI, fragmentation cost an organization in the ways Chapter 1 named — memory, continuity, compounding, credibility, formation, coherence, risk exposure. All seven of those were real. None were trivial. Collectively they added up to the seven-figure annual exposure Chapter 1 estimated.

But every one of those costs was paid by the organization, inside the organization, in the organization's own operations. Wes paid the cost of not retrieving Dean's memorial conversation. Maggie paid the cost of a framework scattered across six books. Joelle paid the cost of a formation architecture that had never been built. Elias paid the cost of a seminary that could not produce a coherent account of its own positions.

The cost was real. It was also contained.

The outside world mostly could not see the fragmentation. Dean could not see that the memorial conversation was lost in a cabinet. Most readers who encountered Maggie's framework in one book did not know the same framework appeared differently in five other books. Most seminary stakeholders did not know the 2011 working group memo did not reconcile with the 2023 denominational statement. The fragmentation was the organization's private problem. It produced private costs.

This is why most organizations did not address it. The cost was annoying, but bearable, because it was internal. The organization ate the friction. It hired harder-working staff, bought another tool, commissioned another document, asked senior leaders to carry more in their heads. The fragmentation tax was paid quietly, out of general overhead, and nobody outside the organization had to see it.

That was the pre-AI equilibrium. It was not a good equilibrium. It was an equilibrium most organizations had learned to live inside, because the alternative — building the integrated foundation — was expensive and produced no visible deliverable, and the sector had no vocabulary for why the foundation mattered.

The equilibrium broke in 2023.


Fragmentation became active

I am going to use 2023 as a shorthand. The break was not a single date, and its timing varies by sector and audience. But the shape of the break is the same everywhere.

The break is this: for the first time in modern organizational history, a tool that most humans now use as their default answer layer has begun reading, paraphrasing, and publicly broadcasting the organization's fragmented intelligence, at scale, to audiences the organization has never had access to.

The tool is obviously AI — specifically, large language models deployed at consumer scale. But calling it AI is slightly wrong for the purposes of this chapter, because what matters is not the specific technology. What matters is the function the technology now performs in the world.

Before 2023, when a prospective donor wanted to understand a nonprofit, they visited the website, read an annual report, or asked someone. Now, they often ask an AI. Before 2023, when a seminary student was writing a paper about Maggie's framework, they opened one of her books and read it carefully. Now, they often ask an AI. Before 2023, when a congregant wanted to understand what their church believed about a difficult question, they asked a pastor or searched a website. Now, they often ask an AI.

The AI, in every one of those cases, is now the first, fluent, confident voice the inquirer hears about the organization. And the AI is working from whatever corpus is most structured and findable — which, in a fragmented organization, is the wrong corpus.

This is what I mean by fragmentation becoming active.

Before, fragmentation was passive. It produced internal costs and private loss. Now, fragmentation is generative — it produces external output, about your organization, at a rate and scale no human can match, sent to audiences you do not control, and often in forms you never see until long after the damage is done.

Let me walk this through the four protagonists.


What this looks like for Maggie

Maggie's case is the one I opened with, and it is the case I want to linger on, because the version of the problem she encountered is the one most leaders of ideas have already encountered, whether or not they have noticed.

The model on her laptop was not doing anything malicious. It was answering a question using the material it had been trained on. That material was the scattered public corpus of Maggie's work — plus everyone else's work in the same field. In that training corpus, three things competed for authority:

  • Maggie's six books, each with slightly different vocabulary, none designated canonical.
  • Her hundreds of podcast appearances and YouTube talks, none transcribed into structured text at the time the model was trained, and therefore barely represented in the training data.
  • Other writers — some competent, some derivative, some contrary — whose work about Maggie's field was structured, was indexed, was easy for the model to retrieve, and whose articulations therefore became the model's best guess at Maggie's position.

The model chose the clean articulation over the messy one. Because that is what models do. They choose the articulation that is most structured, most findable, and most confident-sounding. The model is optimizing for fluency. It has no mechanism to prefer the authentic-but-scattered source over the clean-but-approximate one.

The result is that Maggie's framework, as understood by the world in 2026, is increasingly not Maggie's framework but the model's best approximation of Maggie's framework, assembled from whichever sources were structured at the moment the training cut off.

This is happening to every leader whose life's work lives in scattered form. It is happening whether or not they have noticed.

The cost is specific and it compounds.

It compounds in search — the competitor's clean articulation ranks above Maggie's own work, not because the competitor's work is better but because it is retrievable. It compounds in citation — students and journalists cite what they can find, and they can find what is structured. It compounds in AI output — every downstream use of the model references the approximation, not the original. It compounds in formation — the readers who encounter Maggie's framework through the model encounter a version of it Maggie would not endorse, and they form their own practice on that version, and some of them go on to teach it, and the drift continues one generation further.

Voice dilution used to be a metaphor. Now it is an operational description of what is happening to Maggie's framework every hour of every day, at a scale beyond any intervention she could mount, by tools she has no contract with.

The diagnostic point of this chapter is not that AI is doing something wrong. The diagnostic point is that AI is doing what it is designed to do, and what it is designed to do turns fragmentation from a private cost into a public one.


What this looks like for Wes

Wes's nonprofit installed an AI-assisted development tool in early 2025. The tool promised to surface "hidden relational intelligence" by connecting the organization's CRM, email, calendar, and LinkedIn data, and to produce, on demand, donor briefings that drew from the full relational context.

Six months in, the tool is functioning as designed. It is also producing outputs Wes cannot use.

When Wes asks the tool for a briefing on Dean before a major donor meeting, it produces a clean three-page document. The document lists Dean's giving history (correct, because it is in the CRM). It summarizes recent email exchanges (partial, because half of Dean's correspondence was with the retired development officer and has never been centralized). It notes Dean's LinkedIn activity (irrelevant — Dean barely uses LinkedIn). It generates talking points (generic — "consider asking about family and recent travel" — because there is no structured record of what Dean has said about his family or his travel).

The document looks authoritative. It reads as though it has integrated the organization's relational intelligence into a coherent brief.

It has not. It has integrated the organization's structured relational intelligence — the ten percent that was in the CRM — and filled in the other ninety percent with whatever the model's best guesses supplied. The memorial conversation with Dean's brother is not in the briefing, because it is not in any structured source the tool can see. The specific program Dean has quietly cared about since 2019 is not in the briefing. The fact that Dean's daughter was just admitted to medical school, which the retired officer had noted in a handwritten file, is not in the briefing.

The briefing is fluent. The briefing is polished. The briefing is, in the most load-bearing respects, a fiction.

This is a worse failure mode than having no briefing at all, because the briefing creates false confidence. Wes reads it, walks into the meeting, asks about Dean's recent travel rather than his daughter's news, and feels like he has done his homework. Dean does not. Dean — the kind of donor who had hoped the AI-assisted development tool would finally hold together the picture he has been handing this organization, piece by piece, for fifteen years — quietly concludes what he has been concluding every year for five years: this organization is not the kind of organization that holds what I told it.

The tool has not solved the problem. The tool has upgraded the appearance of the problem, which in fundraising is worse than the problem itself.

What Wes's organization needed was an integrated relational foundation. What it bought was an integrated-looking surface bolted to the top of an unchanged scatter field. AI made the gap between the surface and the foundation more visible to the donors on the other side of the briefing, and more expensive to close.


What this looks like for Joelle

Joelle's church has not yet deployed an AI tool intentionally. It is exposed nevertheless, and in a way that is harder to see and harder to correct than the versions Maggie and Wes are experiencing.

Members of Joelle's congregation — the ones with theological questions, the ones wrestling with a passage, the ones trying to make sense of the Sunday sermon mid-week, the ones who have had a private doctrinal confusion they are not comfortable surfacing to staff — are now, routinely, asking AI models for pastoral guidance.

When they do, the model responds in whatever theological voice is most structured in its training corpus. That voice is, overwhelmingly, the voice of the most prolific theological writers on the open internet — which, depending on the question, may be in serious tension with the theology Joelle has spent fifteen years forming her congregation into.

A congregant sits with a question about suffering, asks a model, receives a lucid paragraph shaped by a theological tradition Joelle would not teach. The paragraph is not heretical. It is simply not the theology of this particular church. The congregant accepts it, integrates it, and moves on. When the topic next comes up at church, the congregant will be operating from a theology Joelle never transmitted, and the gap between Joelle's formation work and the congregant's actual formation has widened by one more increment.

This is happening every week in every congregation in the country. Most pastors are not tracking it because it is invisible to them — the inquiry happens privately, the answer arrives privately, and the formation drifts privately. Joelle is tracking it, faintly, because in the last eighteen months four separate congregants have brought up positions she can trace to particular theological writers the models favor, and she has noticed the positions are almost, but not quite, what her church teaches.

What the AI is doing here is not wrong. It is answering the question it was asked. The failure is on the other side: the church has never built a foundation that would allow an AI to answer a congregant's question in the voice of the church the congregant actually belongs to, and there is no mechanism by which the church's forty-seven months of cumulative sermon work reaches the moment a congregant opens a chat window.

Formation has always competed with the ambient theological voices of the culture. This is not new. What is new is that one of those ambient voices is now, for most congregants, the first and most authoritative source they consult — and it is, by default, not the voice of their own pastor.

Joelle is sitting with this on her sabbatical. She did not know to name it until the day-thirty-one conversation, and she has not yet solved for it. But she knows now what she did not know a year ago: the formation architecture her church needs to build must hold not only the sermons and the small groups and the discipleship paths, but also a foundation from which the church's own voice can reach the congregant at the moment of the private question, because the model is there and the model is answering and the model is answering in someone else's theology.

That is what active fragmentation looks like in a formation context.


What this looks like for Elias

Elias's version is the one we have already met. I want to finish it now, because the scale matters.

The board member's AI query. The fluent paragraph that was wrong in the direction of a position the seminary had moved away from fifteen years ago. The eleven days Elias spent assembling the answer.

I want to give you the sequel to that story, because the sequel is the thing that makes this chapter a turning point rather than a crisis.

Three weeks after Elias produced the answer for the board member, a major prospective donor asked the same model the same kind of question about the seminary. The prospective donor did not consult Elias. The prospective donor did not write to the seminary. The prospective donor read the model's answer, concluded that the seminary's position was not aligned with her own giving priorities, and moved on to the next institution on her list.

The seminary found out about this, six months later, by accident, when a mutual acquaintance mentioned it at a dinner. The seminary had no mechanism for knowing, in real time, what the model was saying about it, to whom, or with what consequence.

Elias's accreditation review is in eighteen months. The accreditors will ask questions of the seminary directly. Those questions Elias can answer, painfully, through the eleven-day reassembly process. But the accreditors are a small fraction of the people asking questions about the seminary right now. Every prospective student, every denominational partner, every adjacent scholar, every journalist, every parent of an applicant, every alumnus considering a major gift — every one of them is at this moment asking a model for its read on the seminary, and getting, by default, whatever fluent approximation the model can assemble from the most structured fragments of a fragmented corpus.

Most of those people will never tell the seminary. Most of them will simply adjust their estimate of the seminary downward or sideways, and act accordingly.

This is the new cost of institutional fragmentation at scale. Not eleven days of reassembly when the board asks. A continuous, invisible, asymmetric shaping of the institution's public reception, conducted by tools with no accountability and no contract, answering from a corpus the institution has never gathered.

The board member's question was not a one-off. It was the one instance the institution happened to find out about.


The forcing function

This is the moment I need to name what the chapter is actually doing.

I have just walked four protagonists through four specific versions of AI-exposed fragmentation — voice dilution, false briefing, formation drift, institutional misrepresentation. Each is real. Each is happening now. Each is accelerating.

But the deeper point is not about AI. The deeper point is about what the simultaneous arrival of these four dynamics does to the economics of integration.

Before AI, the cost of integrating the foundation was high and the cost of not integrating was bounded. Every leader and every organization could, and usually did, choose the second. The fragmentation tax was paid quietly, absorbed into operations, treated as the price of doing the work.

AI did two things to that equation, in the same five-year window, from opposite sides.

It raised the cost of not integrating. The voice dilution, the false briefings, the formation drift, the institutional misrepresentation — every one of those is a new cost, assessed on top of the old ones, paid in a currency (public reception, third-party perception, unrecoverable first impression) the organization cannot easily manage. The fragmentation tax used to be a line item in the internal budget. It is now a line item in the public perception budget, and the public perception budget does not have a cap.

At the same time, it lowered the cost of integrating. The tools that are broadcasting your fragmentation are, turned in the other direction, the tools that make the integration pathway practical. What used to require a six-figure content management system, a full-time knowledge manager, and a year of consulting can now be done, often, with deliberate effort and a handful of modern platforms. Transcription used to take hours per hour of audio; it now takes minutes. Structured extraction from unstructured documents used to require specialist labor; it is now a scripted workflow. Voice capture, style matching, cross-referencing, retrieval-grounded generation — every one of those foundation-building capabilities has dropped in cost by an order of magnitude in the last three years.

The pathway out was theoretically possible twenty years ago. It was priced for institutions with fifty-million-dollar endowments. It is now priced, for most readers of this book, within a range that is at least conceivable. Maggie can afford to build it. Wes's nonprofit can afford to build it, if it decides to. Joelle's church can afford a lighter version of it. Elias's seminary has been spending the money on parallel initiatives that, properly directed at the foundation, would already have produced it.

The moment is particular. The cost of not integrating is rising. The cost of integrating is falling. The two curves have crossed. The equilibrium that allowed organizations to live with fragmentation has broken, and the pathway that used to be impractical is now, for the first time, practical.

This is the forcing function. It is not that AI made fragmentation a problem. It is that AI made fragmentation the problem, and — simultaneously — made the solution reachable.

Most leaders feel only the first half of this. They feel the new pressure without seeing the new possibility. That asymmetry is part of why most organizations are currently reacting to AI with either panic or denial. The panic sees only the cost. The denial sees only the familiar structure. Neither posture produces the response the moment actually calls for, which is neither panic nor denial but the decision to build the foundation now, while the curves are crossed, because in ten years they may have crossed back.


Why not wait

Every reader will at this point be forming the thought that produces the decision to wait.

The thought is: the AI tools are still changing quickly. Should I not wait for the dust to settle? Should I not wait for a better platform, a cheaper vendor, a more mature practice?

I understand the impulse. I want to answer it directly.

The dust is not going to settle. The dust is going to keep shifting for the next decade, probably longer. There will not be a moment three years from now at which the tools are settled, the practice is mature, the cost is predictable, and the moment for integration has clearly arrived. Every year you wait, the voice dilution will be larger, the formation drift will be wider, the institutional misrepresentation will have more accumulated weight, and your competitors — the ones who did not wait — will have built integrated systems that are compounding in ways yours is not.

There is also an asymmetry that runs in your favor if you move now. The integrated foundation is largely tool-agnostic. The canonical framework, the relational graph, the voice guide, the pathways, the decision log — none of these are locked to a specific vendor. The foundation you build now will outlive the next three generations of AI platforms, because the foundation is the ontology, the structure, and the gathered content. The platforms are the surfaces. The platforms will change. The foundation, once built, is yours.

What you cannot recover is the time during which your fragmented corpus was the model's default answer. You cannot retroactively correct the misrepresentations that shaped a prospective donor's decision three years ago. You cannot unlearn the theology a congregant absorbed in the forty-seventh month because your church's foundation did not reach her. You cannot undo the framework drift that happened while your competitor's cleaner articulation was ranking above your own.

The cost of waiting compounds. The cost of starting does not.


The choice this chapter leaves you with

Chapter 1 asked you to name the currency in which you have been paying the fragmentation tax. Chapter 2 asked you to draw a line down the middle of a page and take the dual inventory of your informational and relational intelligence. Chapter 3 asked you which of the three misreadings your organization most recently attempted.

This chapter asks one thing.

Open the AI tool you already use — whichever one it is — and ask it about your own work, your own organization, or your own field. Type the question the way a journalist would type it. Type the question the way a prospective donor would type it. Type the question the way a prospective student, a curious congregant, a next-generation peer leader would type it. Read the answer.

You will see one of three things.

You will see an answer that is substantially right, and you will feel relief, and you will close the laptop and move on. If this happens, you are in a rare minority. Enjoy it. Note it. Know that the conditions that produced it are unusual, and that your task is to preserve them.

You will see an answer that is clearly, badly wrong. You will feel anger or alarm. This is useful. You will know what you are dealing with. The rest of this book will be straightforwardly relevant to you.

Or — and this is the most common case, and the most dangerous — you will see an answer that is fluent, confident, and subtly wrong in ways that a non-expert reader would not catch. You will feel the particular sensation Maggie felt on the plane: this is not quite right, but it is close enough that most people will not notice, and it is better-written than anything I have out in the world, and it is what the world is now being told about my work.

That sensation is the moment this book is written for.

The moment fragmentation stopped being tolerable was the moment the model on your laptop started answering questions about you, to audiences you cannot see, from a corpus you never gathered. That moment is now. It has been now for three years. It will keep being now for the foreseeable future.

Chapter 5 will lay out the six-stage trajectory that is the book's response to that moment. The rest of the book will walk the trajectory in detail. But the stage this chapter has been trying to reach is the one at which the reader agrees that the moment is now, that the tax is active rather than passive, that the curves have crossed, and that the pathway — however long it will take — can no longer be put off.

Ask the model about yourself. Read the answer. Then we will continue.
