
This Is Not a Tools Problem

By Josh Shepherd · 6 min read

The three-platform nonprofit

A mid-sized nonprofit bought licenses for three AI platforms last year. The procurement was clean. A committee reviewed options, a consultant was retained, the pricing was negotiated down, the rollout was staged, and a short training deck was produced and circulated. All of this looked responsible from a procurement standpoint, and the leadership team, reasonably, felt that they had acted decisively on something their peers were still debating.

A year later the executive director is reviewing the last three donor campaigns and finds that they sound, to her ear, like the last three campaigns from every peer organization she reads. The language is fluent. The structure is competent. Nothing is wrong in any way she can quickly describe. And yet the writing no longer sounds like her organization. The particular cadence that had always distinguished their appeals — the cadence that a long-time donor had once told her was the reason he read the letters all the way through — is gone. She cannot tell which staff member produced which paragraph. She cannot tell which paragraphs were drafted by a machine and which were not. Neither can the staff. Neither can the donor.

Three platforms. No bad actors. No obvious failure in any meeting. And the organization's distinctive voice, which took a decade to build, has been repriced to zero in a year.

This is not a tools problem. The nonprofit's procurement worked. Its training ran. Its licenses are active. Every technical piece did what it said it would do. What failed was upstream of the tools, and until leadership is willing to name the upstream failure, buying a fourth platform will not help.

What "tools problem" means, and why it is the default

"Tools problem" is the framing most organizations default to when AI enters the conversation. It sounds like this in practice. We need an AI strategy. Let's get the tech team involved. Let's pilot something. Let's pick a platform. Let's do a training. Let's write a policy once we've tried a few things. The sequence is almost universal, and it is almost universally wrong.

It feels right because it mirrors how organizations successfully adopted other technologies. Cloud migration was a tools problem. CRM adoption was a tools problem. Email marketing platform selection was a tools problem. In each of those cases, getting the tech team involved, piloting options, and picking a platform produced roughly the right outcome, roughly on schedule. The pattern worked.

AI breaks the pattern. Not because AI is more complex technically — that is a separate and overstated claim — but because the question AI is asking the organization is not a tools question. The question is what are you trying to form, protect, and compound, and how does a capability this general change the answer. Nothing in a tools-first process is designed to surface that question. Tools-first processes ask what can this do for us. They do not ask what does this do to us. For most prior technologies the distinction did not matter. For AI, it is the only distinction that matters.

The tell is universal. A tools-first AI adoption produces, inside eighteen months, some version of the three-platform nonprofit's outcome. Licenses are active. Staff are trained. Dashboards improve. The organization's distinctive voice, judgment, or craft has thinned. Leadership is surprised, because the process they ran was the process that had always worked before.

The three layers above the tools

AI does not land on an organization as a tool. It lands across three layers above the tools, and every one of those layers is owned by leadership rather than by the tech function.

The first layer is leadership itself. Who decides where AI is allowed to touch the work. Who decides what the organization's policy is, in plain language, in one paragraph a new hire can read in thirty seconds. Who adjudicates when a staff member's use of AI is in tension with a mission that predates the tool. These are not questions a tech team can answer on behalf of an organization. They are the kind of questions that, if delegated, produce a policy the senior team will later have to walk back publicly.

The second layer is formation. Every use of AI inside an organization is forming the people using it. It is shaping their judgment, their craft, their sense of what "good" looks like, their tolerance for their own first drafts, their capacity to read a piece of writing critically, and their sense of ownership over the work that leaves the building with their name on it. None of that shows up on a procurement rubric. All of it shows up, two years later, in the quality of the organization's people. Formation is a leadership concern, not a tools concern, and no tools process is instrumented to detect it.

The third layer is humanity. There are parts of the work that must not be mediated by AI, not because using AI would be inefficient but because the mediation would destroy the thing being done. Pastoral care. Hard conversations with donors. A eulogy. A supervisor's direct feedback to a staff member. A prophetic word offered inside a movement. What belongs in that category is a question the organization has to answer for itself, slowly, with its theology and its history in the room. That answer cannot be produced by a tools process. It can only be produced by leadership willing to sit with the question long enough to give the answer weight.

Leadership, formation, humanity. AI crosses all three. Tools live downstream of all three, and a procurement-first adoption skips all three. That is why the three-platform nonprofit's procurement worked and its organization thinned. The procurement was never the level of the problem.

Who owns this

Because AI adoption is a leadership, formation, and human problem rather than a tools problem, ownership cannot be delegated. The work belongs to the senior team — the executive director, the founder, the senior pastor, the principal, the managing partner — and it belongs to them in a specific way.

The test is simple. If the senior leader cannot articulate the organization's relationship to AI in one paragraph, the organization does not have a relationship to AI. It has tool sprawl. The paragraph in question is one the leader could say, unprompted, from memory, in a hallway, to a board member: one that names what AI is and is not for inside this organization, what it is allowed to touch, what it is forbidden to touch, and what posture the organization takes toward the capability. If the paragraph does not yet exist, the organization has not yet led.

This test is uncomfortable because it is answerable. Most senior leaders, asked this question honestly, will discover they cannot produce the paragraph. The reason is not incompetence. The reason is that the paragraph was delegated — to a committee, to a consultant, to a platform selection, to a training deck, to a policy document sitting in a shared drive — and delegation at that level always produces a paragraph the leader cannot repeat, which means a paragraph the organization cannot rely on.

The work of reclaiming ownership is not a big gesture. It is the senior leader sitting with the question long enough, in conversation with the people whose work is most affected, to produce a paragraph the organization can stand on. The paragraph will be revised. That is fine. A paragraph that can be revised is a paragraph that exists. A paragraph that was delegated in the first place cannot be revised by anyone; it can only be replaced.

What changes once AI is correctly located

Once AI is located as a leadership problem rather than a tools problem, the right sequence becomes visible. The tools question moves to the end of the sequence rather than the beginning. Governance, learning, and formation come first. The work gets slower in the first quarter and much faster in the third. The organization stops producing fragmented adoption stories and starts producing a coherent position.

This is the structure the rest of this book will earn the right to name. For now the piece has done one thing: it has refused the default move. We need an AI strategy is the correct sentence. Let's get the tech team involved is the wrong next move. The right next move is the senior team needs to sit with this before anything else happens. That is not a scheduling preference. It is the only sequence that does not produce the three-platform nonprofit a year from now.

Before I lay out the sequence itself, one more thing has to be named — the reason senior leaders have been reluctant to sit with this work in the first place. It is not strategic indifference. It is a specific kind of disorientation that leaders are, at the moment, too embarrassed to describe out loud. That embarrassment is itself a leadership problem, and it is the subject of the chapter that closes this section of the book.


Read next: Why This Moment Feels Disorienting — and why the shame about feeling behind is producing the worst decisions in the sector.
