The archive experiment
If you answer for what the organization publishes—communications lead, executive director, program head—try a small experiment. Open your organization's content archive. Pick ten pieces at random, produced in the last eighteen months. For each piece, ask a reasonable stakeholder — a long-time staff member, a senior volunteer, an engaged donor — whether they remember reading it, and if so, what they remember of its argument.
Most organizations, doing this honestly, discover a recall rate that embarrasses them but should not, because the result is near-universal. Of ten pieces, perhaps one or two are remembered in any specific form. Three or four are vaguely recognized. The rest land as something like, yes, I think I saw that, I don't recall what it said. The staff member who produced the piece often cannot describe its argument without consulting the draft.
The standard interpretation of this result is that the organization needs better content or more content or more distinctive content. None of those framings is quite right. The problem is upstream of quality. The problem is that the work was not built to move — through the reader's life, through the organization's body of work, through time — and so, predictably, it did not. This piece is about what it means for a piece of work to move, why most organizational output fails to, and what a moving piece actually has inside it.
What it means for a piece of work to move
A piece of work moves when it does four things, any one of which is rare and all four of which are the mark of a piece that will still matter in a year.
It changes how a serious reader thinks. Not adds to what they already think. Shifts some part of it. The reader finishes the piece carrying a sentence they did not carry before, and over the following weeks they find themselves using the sentence in conversations with people who never read the piece.
It gets shared in contexts that matter. Not a broad count of shares on a platform that rewards virality. A specific count of shares among readers whose own work, careers, and positions are shaped by what they encounter. A piece that is forwarded by three people whose judgment matters is worth a thousand shares by an audience that was going to scroll past it anyway.
It feeds the next piece of work. Inside the organization, the piece becomes something a colleague references in their own writing. It becomes a paragraph other staff quote in donor meetings. It becomes a citation inside the next article. Externally, the piece earns the right to be cited by someone else doing related work. A piece that feeds nothing downstream, inside or outside, is sealed off from compounding.
It compounds the organization's position. The piece leaves the organization better-positioned, by a small amount, than it was before the piece existed. Over time, the accumulated position is the organization's public standing in its field. Content that does not add to the position is, at best, neutral. Much of it is actively costly, because its existence consumes staff attention that could have been spent on content that does add.
Content that does none of these four things is static, even if it is well-written. Static content is filed, not read. The archive experiment at the beginning of this piece is a direct consequence of producing mostly static content over a long period.
Why most content is static
Most organizational content is static for specific, diagnosable reasons.
It is produced on a deadline rather than on an argument. The piece exists because the weekly or monthly slot needed filling, not because there was a thing the organization needed to say. Deadline-driven content is the single largest category of static work in the sector, and it is almost always the category staff are most relieved to ship and least proud of a month later.
It is optimized for the surface rather than for the reader. Headlines are written for click-through. Paragraphs are structured for skimmability. Lists appear where arguments should be. The piece is engineered to perform in a feed, which means the piece was engineered for the environment of its distribution rather than for the moment when a serious reader actually reads it. The engineering succeeds at the surface level and fails at every deeper level.
It is disconnected from the organization's core library. The piece does not reference the organization's anchor arguments, because the organization has not named what those anchor arguments are. The piece is, by construction, a free-floating asset, which means even the reader who liked it has nowhere to go next. A piece that leads to nothing is a piece that can only move the reader once, which is to say barely at all.
It is written to fill a slot rather than to carry an argument. The writer, reasonably, wrote what the slot asked for. What the slot asked for was a plausible-looking piece of content at a specific length, on a roughly defined topic, by a particular date. None of those parameters generate a moving piece. The slot is the enemy of the argument, and most organizational content is written from inside the slot rather than from inside the argument.
AI makes each of these four failure modes worse. It lowers the cost of hitting a deadline, which multiplies the volume of deadline-driven work. It lowers the cost of producing surface-optimized text, which multiplies the volume of feed-engineered work. It produces reasonable-seeming filler in the absence of an anchor framework, which multiplies the volume of disconnected work. And it fills slots beautifully, at the cost of the argument the slot was never designed to carry.
None of this is a comment on AI's capability. It is a comment on what happens when a capability is applied inside a production model that was already skewed toward static work. The skew gets amplified. The static multiplies. The archive grows and the recall rate falls.
Output versus work
There is a distinction the sector has mostly lost that is worth restoring here. Output is measured in volume. Work is measured in movement. They are not the same object.
Most organizations, especially under pressure, measure themselves in output. Pieces shipped per quarter. Posts per week. Newsletter cadence maintained. Campaigns launched. The measurement is legible to the board, legible to the staff, and completely uncorrelated with whether the organization is doing the thing it was founded to do. An organization can have a pristine output graph and produce no work at all, in the older sense of the word.
Work, by contrast, is measured in whether the substance the organization ships is reshaping anything. A single piece that moves the thinking of fifty serious readers, and then feeds three downstream pieces that do the same for another fifty each, has done more work than a year of weekly posts that nobody can recall. The second piece loses on output. It wins on work. In the decade ahead, the organizations that confuse output for work will be outcompeted by the ones that hold the distinction steady, not because they try harder but because their work compounds while everyone else's volume merely accumulates.
This is the argument running through this part of the book. AI is not, by itself, the cause of the problem. AI is the accelerant applied to a confusion that was already present. Organizations that measured output before AI now produce more output. Organizations that measured work before AI now produce more work. The gap between the two kinds of organization is widening, and the widening is almost entirely invisible on the scorecard that measures output.
What a moving piece actually has
A moving piece has four things inside it, by design.
It has one idea the whole piece exists to earn. One sentence, usually near the top, that the rest of the piece is built to justify. A reader who remembers nothing else from the piece should be able to carry that one sentence out of it. If the writer cannot name the sentence after finishing the draft, the piece does not have one, and no amount of polishing will introduce one.
It has a clear from-to. The reader arrives in one state and leaves in another. The piece knows which state it is moving the reader from and which state it is moving the reader to. A piece without a from-to is a piece that is producing words, not movement. The reader's mental picture is the same after as before.
It has a debt to prior work. The piece leans on prior work — inside the organization's body, or inside the tradition the organization operates within — and it pays the debt visibly. It references. It cites. It builds. A piece that pretends to stand alone is usually thinner than it thinks it is, and the thinness registers with the serious reader whether the writer notices or not.
It has a hook that survives a second reading. The first reading test is easy. Every surface-engineered piece passes it. The second reading test is the honest one. Can the piece be read twice without becoming embarrassing? Most static content fails this test immediately. Most moving content was built with it in mind from the first draft.
These four elements are available to any piece of work, in any medium, at any length. They are not a style. They are a movement test. An article, a talk, a podcast episode, a donor letter, a board memo, a flagship teaching session, a course module — all of them can pass or fail the test, and the pass rate across most organizational archives is strikingly low.
The handoff
In an AI-saturated production environment, moving content is the only content that will pay its freight. Static content will be outcompeted by static content produced more cheaply somewhere else. The organizations whose work moves — because it carries an idea, runs a from-to, earns its debts to prior anchor work, and survives a second reading — will be disproportionately visible, cited, and treated as reference inside the next decade of the sector.
But moving content, no matter how well-built, runs into a second problem that the sector has also not yet registered. The old markers that told serious readers which content was worth their time have stopped working. Polished writing is now free to produce. Well-designed surfaces are now trivial. The signals that used to filter the good from the forgettable have collapsed, and any organization that does not notice the collapse will find that even its moving work gets lost in the noise.
That collapse is the subject of the next piece.
Read next: The Collapse of Signal in the AI Age — why the old markers of credibility have stopped working, and what still cuts through.