
The Eight Patterns: Where Value Hides

By Josh Shepherd · 11 min read

A twenty-minute scan

The senior team sat down for a twenty-minute exercise. Eight categories, two to three minutes each, one simple question each time: where, inside our work in the last month, has something like this shown up?

Before the scan, the team's working belief was that they did not know where AI fit in their work. After the scan, they had fourteen candidates on a whiteboard. Nine of them were surprises to the executive in the room. Three were serious enough to test inside the next month.

The team had not become smarter in twenty minutes. They had borrowed the right lenses.

What this piece is

Most organizations are told to find AI use cases. The instruction is useless because they do not know where to look. An hour of open-ended brainstorming usually produces the same three ideas (write faster, summarize longer, draft first passes) and an exhausted team. Pattern recognition, the first of the three layers of sandbox work, is what replaces open-ended hunting with a disciplined scan.

There are eight canonical patterns where AI value tends to hide in real organizational work. Knowing them is the difference between "we should do something about AI" and "there are three places this might fit; let's test two of them in the next six weeks."

The patterns are not ranked, and they are not a menu. They are a catalog. Running through all eight, in turn, once a quarter, is the discipline. Each pattern has four things worth writing down: the shape of the opportunity, the kinds of examples it produces, the type of value it promises, and the typical trap the pattern lays if you take it without discernment.

A note on the frame. You are not generating ideas. You are scanning your existing work through eight lenses. The work is the work you already have. The patterns do not ask you to imagine new programs, new audiences, new products. They ask you to look at what you are already doing and notice where eight specific kinds of opportunity hide inside it.


Pattern 1. Repetition

Shape. Tasks your organization does over and over, largely similar each time, that consume real staff time.

Examples. Donor thank-you emails. Meeting recaps. Weekly internal summaries. Intake notes. Event confirmation sequences. Board report preambles. Volunteer onboarding communications. The work that gets done every week by somebody who has stopped thinking about it as work and started thinking about it as an inbox.

AI value type. Speed. Often large speed gains, because repetition is exactly what generators are good at.

Typical trap. Speed applied carelessly to repetition erodes the relational signal the repetition was quietly carrying. A donor thank-you note is repetitive for the staff and not repetitive for the donor; the donor experiences this specific note, on this specific gift, at this specific moment. The staff's experience of repetition is not the recipient's. The pattern is real and the danger is real. Repetition-category use cases almost always need to be paired with an explicit answer to the question: what relational signal is this task actually carrying, and how will we preserve it?

Repetition is named first because it is the most tempting, the most obvious, and the most often misread. Nearly every failed AI rollout in a mission-driven organization has a repetition-category use case at its center.

Pattern 2. Translation

Shape. The same idea, moving to a different audience or a different format.

Examples. A sermon becoming a small-group guide. A research report becoming a donor-facing two-pager. An internal memo becoming a board brief. A long article becoming five social posts. A complex policy becoming a staff FAQ. The source exists. The translation exists because a different audience needs the same substance in a different register.

AI value type. Scale. One source idea can now reach several audiences with effort that used to require several writers.

Typical trap. Translation that flattens the source material into a lowest-common-denominator version of itself. The nuance in the original came from the writer's judgment about what the original audience needed; translation by a general-purpose assistant tends to default to general-purpose prose for a general-purpose reader. The donor-facing two-pager reads like every nonprofit's donor-facing two-pager. The small-group guide reads like every small-group guide. The translation moved. The distinctiveness did not travel with it.

The useful question for any translation-category use case is: what from the original needs to survive the trip, and who is the person responsible for making sure it did? Without an answer, translation becomes sanding.

Pattern 3. Synthesis

Shape. Too much information, needing clarity. Multiple sources, needing one read. Long documents that must be understood but not memorized.

Examples. A thirty-page strategic plan needing a three-page brief. A stack of interview transcripts needing a read across the themes. Six months of board minutes needing a survey for a new member. A literature review needing a working summary. Due-diligence documents needing a triage read before deciding what to look at more closely.

AI value type. Cognition. Augmented reading. The assistant acts as a first pass that lets the human reader arrive at the material with a map instead of from zero.

Typical trap. Synthesis that produces a clean, confident narrative from sources that actually disagree. Generators are trained to produce coherent prose. Disagreement is not coherent. A synthesis that flattens contested material into a smooth summary has not saved the reader time; it has misled them about the material they were trying to understand.

The useful discipline is to treat synthesis outputs as drafts of the reader's own synthesis, never as the synthesis itself. The human must still read enough of the source to know whether the summary lied by omission. If they cannot, the synthesis use case is not ready to graduate; it is producing comfort, not comprehension.

Pattern 4. Generation

Shape. Blank-page problems. First-draft work. The moment where something has to exist that currently does not.

Examples. A grant proposal outline. A curriculum scaffold. A job description draft. A press release first pass. An email sequence for a campaign that does not yet exist. A staff policy on a topic the organization has not written on before.

AI value type. Acceleration. The blank page becomes a scaffolded page, often in minutes. The writer's effort shifts from creation to revision, which is a different and often easier kind of work.

Typical trap. The generated first draft becomes the whole draft, because nobody in the workflow was scoped to do the revision work the shortcut assumed. The grant proposal goes out reading like a thousand other grant proposals. The curriculum scaffold becomes the curriculum. The press release has a paragraph nobody noticed was generic. The acceleration is real, and the final output is weaker than if the human had started from scratch and written less.

Generation-category use cases almost always need an explicit reviser in the workflow and a clear standard for what revised means. Generation without revision produces drafts; generation with revision produces work. The category is valuable only when the second step is as real as the first.

Pattern 5. Transformation

Shape. Existing content that needs to be improved or reshaped. The source is fine; the form needs work.

Examples. Tone adjustment on a piece written in a voice that does not quite fit the audience. Clarity rewrites on a paragraph that says the right thing in the wrong order. Length compression on a report that is structurally sound at twenty pages and needs to be five. A style normalization pass across a team's varied writing.

AI value type. Quality lift, in the form the producer already wants.

Typical trap. Transformation that smooths out the distinctive edges the source was quietly carrying. The awkward sentence that was load-bearing. The specific phrasing that was the author's. The rhythm that signaled the piece had a human behind it. General-purpose transformation defaults to general-purpose prose. The piece gets more readable on a surface metric and less distinctive on the metric that mattered.

This is where The Three Kinds of Value, and specifically meaningful quality, earn their keep. A transformation that moves the work toward the sector median is not a quality win; it is a quality loss disguised as polish. The useful discipline is to ask, after any transformation-category edit, whether the piece reads as more specifically you or less. If the answer is less, the transformation was wrong.

Pattern 6. Structuring

Shape. Unorganized thinking that needs to become structured output.

Examples. A strategy doc built from a long conversation. A program plan built from a retreat's whiteboard. A framework extracted from a senior leader's spoken reasoning. A decision memo built from a ramble of considerations. Meeting minutes turned into a decision log.

AI value type. Coherence. The organization's existing thinking becomes legible to its own people, faster.

Typical trap. Structure imposed too early, locking the organization into a frame before the thinking is real. Generators are very good at producing five-bullet lists, three-part frameworks, and two-by-two matrices. They will produce these whether or not the underlying thought actually has that shape. An organization that trusts structuring use cases too much ends up with a closet full of elegant frames that do not describe its work.

Structuring is most valuable when the underlying thinking is already developed and just needs to be organized. It is dangerous when the underlying thinking is thin and the structure makes it look developed. The test: could the senior leader still explain the content without the frame? If not, the frame is doing the thinking, which means the thinking has not happened yet.

Pattern 7. Decision Support

Shape. Weighing options. Thinking through scenarios. Structuring the considerations around a real choice.

Examples. Program planning with multiple possible shapes. Staffing scenarios under budget constraints. Risk-surface reads on a new partnership. Pricing or naming decisions where several plausible answers exist and the trade-offs are not obvious.

AI value type. Reasoning augmentation. The assistant helps surface considerations the leader might have missed, articulate the trade-offs, and stress-test the preferred option.

Typical trap. The assistant's framing substitutes for the leader's. Decision support becomes decision outsourcing. The leader asks the assistant what to do. The assistant, trained to be helpful, answers. The leader finds the answer plausible and adopts it. The organization now has a decision made by an instrument that does not know the organization and cannot carry the consequences.

The honest use of decision-support use cases is narrow. The assistant is allowed to help structure the considerations, surface options the leader did not think of, and articulate trade-offs in plain language. The assistant is not allowed to make the call or to suggest one with enough confidence that the leader defers. The discipline is the same as the one a good advisor keeps around a young executive: I can help you think, but I cannot do the deciding; if I start doing the deciding, you should fire me.

This is also the pattern most in conversation with canon #16, Skills as Formation, Not Training. Decision-support use cases are the single largest formation risk in a sandbox. They either accelerate the development of a leader's judgment or they quietly outsource it. The difference depends entirely on how the leader uses the instrument, and that difference is rarely visible until years later.

Pattern 8. Personalization

Shape. The same content, tailored to individuals. One message, many custom versions.

Examples. Donor follow-ups that reference the specific history of each donor. Coaching responses shaped to the individual being coached. Member outreach tuned to each member's situation. Congregational communications that speak to individual contexts.

AI value type. Relational scaling. High risk, high reward. This pattern is flagged.

Typical trap. Personalization produces the feeling of care without the fact of care, and the recipient eventually notices. A note that is personalized on the surface and generic in substance is a specific kind of modern violence, and recipients become fluent at recognizing it faster than producers become fluent at hiding it. The revenue lift from scaled personalization is real in the short term and corrosive in the long term, because the trust being spent down is the kind that does not refill on a quarterly timeline.

Personalization is the only one of the eight patterns that requires a higher bar of senior review before it enters the sandbox at all. Every other pattern can be tested on appropriate non-critical work without deep theological or relational review. Personalization cannot. It is the pattern most likely to produce use cases that score green on every structural metric and red on every human one. The discipline here is specifically the work of The Ethical & Relational Flag, and personalization-category use cases should not move through the sandbox without it.


Using the eight patterns

Once a quarter, the senior team sits down for an hour and runs the catalog. Eight patterns, eight passes. For each pattern, the single question: where, inside our real work in the last three months, has this shown up?

The answers accumulate on a shared page. Not yet experiments. Not yet scored. Candidate use cases, specific enough that someone could, in principle, write a one-paragraph description of what the work is and why it fits the pattern. The scan ends with a list of candidates, usually somewhere between a dozen and three dozen.

From that list, a smaller group picks the three to five candidates worth testing first. The basis for picking is the work of The Three Kinds of Value: which candidates look most like they could save time wisely, earn revenue legitimately, or improve quality meaningfully. The filter is applied before the experiments begin, which is how you avoid spending six weeks testing a use case that, had you paused, you would not have wanted to graduate even if it worked.

The scan is meant to be quarterly because the organization's work changes, and because pattern recognition itself is a trained eye that gets sharper with use. A team's first quarterly scan produces mostly surface candidates. The third scan produces subtler, better-sourced candidates. By the fifth scan, the team has internalized the lenses and begins noticing candidates in the normal course of their work, which is the point. The catalog becomes a habit of seeing rather than an exercise.

What pattern recognition actually is

Pattern recognition is not a talent. It is a trained eye. Some leaders seem to have it naturally; most of those leaders are running an informal version of a disciplined scan they learned from someone else and cannot quite articulate.

The eight patterns externalize the discipline. They are not the only possible catalog; they are a catalog honest enough to use. A team that scans through them regularly will develop the eye. A team that never scans will go on believing it does not know where AI fits in its work, when in fact the opportunities have been sitting inside its ordinary weeks the whole time.

The next piece turns one candidate into an experiment. Pattern recognition tells you where to look. Recipes Are Not Cooking: Structured Experiments tells you how to test what you found.

You are not generating ideas. You are scanning your existing work through eight lenses.


