The search experiment
If you lead strategy, research, or communications for a mission-driven organization—executive director, program head, movement leader with a public body of work—search your own field. Pick the three subjects your organization is most serious about, and look at what the sector is collectively surfacing on those subjects right now — in search, in feeds, in podcast charts, in conference lineups, in the answers an AI assistant gives when a colleague types the question.
Notice which voices dominate. Notice how many of them have been doing the work for fewer than five years. Notice how many of the people whose names come up when you ask your most trusted peers — the ones whose judgment your own judgment has been shaped by — are nowhere to be seen in the surfaces where the sector is now forming its opinions.
The gap between the voices your field most needs to hear and the voices it is actually hearing is not random, and it is not a temporary distortion. It is the predictable output of how sorting works in the new information environment, and it will widen unless leaders understand what has shifted. This piece is about why expertise is being buried and what restores its visibility.
The old sorting
For most of the era in which this sector's leaders formed their reading habits, expertise was sorted for by three overlapping mechanisms, each of which did a reasonable job.
The first was human gatekeeping. Editors decided what ran. Publishers decided what shipped. Conference programmers decided who got the stage. Journal committees decided what got reviewed. Gatekeeping was imperfect, and it had well-documented biases, but it reliably correlated with some kind of vetting. A reader encountering a piece in a serious venue could use the venue itself as a proxy for "someone with time and taste thought this was worth readers' attention."
The second was search infrastructure that rewarded authority markers. Inbound links from credible sites. Citations. Domain reputation. Structured metadata. Long-term indexing behavior. For roughly two decades, serious practitioners could expect that a reasonably published body of work would, over time, surface when their subject was searched. The infrastructure was calibrated, imperfectly but usefully, to the kind of evidence that expertise leaves behind.
The third was the reader's own shortcuts. A serious reader learned the names of the voices worth following, subscribed directly, and built personal reading rhythms that did not depend on what a platform happened to show them on a given day. Newsletters, RSS feeds, small discussion lists, and the long-accumulated memory of whom to trust on a given question operated outside platform sorting almost entirely.
These three mechanisms, working together, produced a rough sort in which expertise was usually findable for anyone willing to look. The sort was not fair. It had known failure modes. It was, however, coherent enough that a leader who wanted to know the serious voices on a subject could find them.
The new sorting
The sort has changed. Three forces have reshaped it, each operating by a logic different from any of the three mechanisms above.
Generative models surface what reads well. An AI assistant answering a question in the sector pulls from the sources its training and retrieval rank as most plausible. The ranking correlates with things like fluency, citation density, schema, and frequency — not with lived expertise. A voice with thirty years of practice who writes in a distinctive, non-standard form gets ranked below a voice with two years of practice who writes in the smooth convention the model was trained on. The answer the assistant produces is a function of this ranking, and the reader using the assistant never sees the voices the ranking demoted.
Algorithmic feeds reward engagement velocity. Short-term engagement — quick reads, emotionally legible takes, clean hooks — is the metric that most discovery surfaces optimize for, whether they admit it or not. Expert voices are characteristically long, careful, slow, and hedged in the right places. These are the traits least rewarded by engagement velocity. The voice that produces ninety slow minutes on a hard question loses at the discovery layer to the voice that produces ninety seconds of a fluent opinion on the same question. The slow voice is still correct. It is also not showing up.
Platform consolidation has thinned direct reader infrastructure. Newsletters have partially recovered as a surface, but the broader ecosystem that used to support direct reader relationships — personal blogs, small discussion forums, sector-specific reader communities — has thinned enough that the third mechanism of the old sort is operating at a fraction of its former capacity. Most readers, most of the time, are encountering the sector through surfaces they do not control.
The result is a new sort that correlates poorly with expertise. It correlates with produceability, fluency, and engagement legibility. A voice that scores well on those attributes can dominate discovery without having spent any time on the actual work. A voice that scores poorly on those attributes can hold a reputation among the two hundred practitioners who matter and be invisible to the broader field.
Three invisibility vectors
There are three specific ways expertise gets buried inside the new sort, and naming them is the precondition for fixing them.
The first is volume. The sheer quantity of plausible content produced in every sector has risen sharply, and expertise drowns in it. A single considered piece by a serious practitioner, published into a feed receiving hundreds of new pieces per hour, has a discoverability problem that has nothing to do with its quality. The piece is not bad; the environment is louder. Past a certain volume, even a very good voice is effectively invisible without the indexes, pathways, and cross-links that discovery systems can follow.
The second is format mismatch. Expert voices tend toward forms that the new sorting penalizes. Long essays. Books. Lectures. Extended arguments. These forms are expert-legible because they are the forms in which real thinking can be done, but the discovery surfaces reward shorter, faster, more summary-shaped output. An expert who refuses to compress their thinking into the short form will not be discovered by the short-form surfaces. An expert who does compress their thinking into the short form usually loses the thing the long form was preserving. Either way, the format is the problem.
The third is lack of structured surfaces. Experts in this sector, by and large, publish into forms that were not built for the new sort. A book comes out, sells modestly, and disappears from discovery. An essay is posted on a blog and is found only by the people who already know where to look. A lecture exists as a ninety-minute video with no transcript, no structured metadata, and no internal cross-links. The work is produced. The work is not indexed in any way the new sorting mechanisms can recognize. A discovery system cannot surface what it cannot parse.
Volume, format, surface. Three vectors of invisibility, none of which are about the quality of the expertise and all of which compound over time.
The citation economy
The citation economy is the question of what gets cited by AI systems when a reader asks a question: which bodies of work can be retrieved as evidence, and which never enter the stack, no matter how good they are on the page.
There is a further shift that most leaders in the sector have not yet registered, and it is the most consequential one for the rest of the decade.
Readers are increasingly using AI systems to answer their questions, and the answers they get are a function of which voices those systems are willing to cite. An expert who is not represented inside the training data and retrieval corpus of the major assistants is, to a growing share of readers, effectively non-existent. The reader asks the question, gets an answer that draws on other voices, and never encounters the expert at all. The old question — does the practitioner have a website — has been replaced by a new one: does the system cite them.
Being citable is not the same as being published. It requires a specific configuration of presence: structured, interlinked, parseable, persistent, and appearing across enough distinct surfaces that a retrieval system can ground an answer in the work. Most serious practitioners in this sector are nowhere near this configuration. Their work exists — sometimes in abundance — but in forms that the citation economy does not see.
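As one concrete illustration, not drawn from this chapter and only a sketch: "structured and parseable" in practice often means machine-readable metadata attached to each published piece, such as schema.org markup in JSON-LD. Every name, date, and URL below is a hypothetical placeholder.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example: a canonical essay title",
  "author": {
    "@type": "Person",
    "name": "A. Practitioner",
    "sameAs": "https://example.org/about"
  },
  "datePublished": "2024-01-15",
  "isPartOf": {
    "@type": "CreativeWorkSeries",
    "name": "Example canonical body of work"
  },
  "citation": [
    "https://example.org/essays/related-argument"
  ]
}
```

The specific vocabulary matters less than the property being named: a retrieval system can parse this, connect the piece to the rest of the body of work, and ground an answer in it, where a ninety-minute video with no transcript and no metadata cannot be parsed at all.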
This is not a marketing problem in the conventional sense. It is a stewardship problem. The readers who most need serious expertise on a subject are increasingly being answered without that expertise in the room, not because the expertise is hiding, but because the discovery layer of their inquiry has no mechanism for finding it. The reader leaves the exchange with a plausible answer that lacks the voice that would have shifted it.
What restores visibility
The remedy is not optimization in the SEO sense. The remedy is architectural—how the body of work is shaped, linked, and kept findable—and it follows from the earlier chapters of this book.
Expertise becomes visible again when a practitioner builds a canonical body of work — a specific set of arguments, frameworks, and pieces that represent what they actually stand on — and publishes it into structured, interlinked, citable surfaces. The body of work has to be coherent enough that the discovery layer can ground answers in it. It has to be specific enough that the positions are legible, not merely polished. It has to be interlinked so that a reader or a system that finds one piece can reach the rest. And it has to be persistent across enough surfaces that the citation economy can find it regardless of which entry point a reader uses.
This is the work the rest of this book will describe more concretely. It is not a small project. It is also not optional for serious practitioners who want to remain visible to the readers who most need them in the decade ahead.
Why this matters beyond the marketing question
It would be easy to read this piece as a marketing argument and miss the point. The subject is not reach. The subject is whether the people whose lives, decisions, and work would be meaningfully shifted by this sector's expertise are able to find that expertise when they go looking for it. In the old sort, they often could. In the new sort, increasingly, they cannot.
The invisible expert is not a professional inconvenience. The invisible expert is a stewardship failure with downstream costs carried by the people the expertise exists to serve. A sector that allows its serious voices to become unfindable through the default discovery paths is a sector that has failed to take seriously the responsibility of being findable.
That is the weight of this piece. The sort has changed. The old habits of publishing will not adapt on their own. The next chapter names the deeper reason for the failure: the unit of serious work has changed, and the single great book, the single great talk, the single great article — the forms the sector has been building on for decades — no longer carry the weight they once did alone.
Read next: The Death of Isolated Work — why the standalone asset no longer produces movement, and what has replaced it.

