Finding a Guide (Not an Expert)
I want to start this chapter by acknowledging something that might feel uncomfortable: I don't know if you need an AI expert. I'm not even sure what that means anymore.
This chapter exists because of something I've been wrestling with for the last three years. When ChatGPT was released publicly in November 2022, I started on day one. I have not stopped. I've obsessively charted what's possible and terrifying about AI. I've worked closely with Alan Hirsch and Brad Briscoe to build something that serves movement leaders. And in many ways, I've been going down this trail my whole life—thinking about technology, communication, community, and how tools shape us.
But here's what I've learned: even with all that intensity, even with all that focus, I'm still learning. I'm still wrong about things. I'm still discovering what I don't know. And that's not a failure—it's the reality of what we're facing.
This chapter exists because the situation with AI expertise is historically unprecedented. With the exception of AI scientists and science fiction writers—neither of whom is generally available to churches or movemental organizations—no one has more than about three years of lived experience with modern AI. Every person claiming to be an "AI expert" is therefore operating within a radically compressed, discontinuous timeline.
This is not a critique. It's a diagnostic reality. And it changes everything about how we think about expertise, guidance, and what we actually need.
The Three-Year Reality
Let me be direct about the timeline, because I think we need to sit with this: AI, as we're experiencing it now, is just over three years old. For everyone except AI scientists and science fiction writers, this is brand new territory. The rest of us never saw this coming. Period.
This matters because it means we're all learning in real time. There's no established playbook. There's no generation of elders who've navigated this before us. We're the first generation to face this particular challenge, and we're doing it while the technology is still evolving at breakneck speed.
No one has decades of AI wisdom. Anyone claiming mastery is either exaggerating, redefining "expert," or unaware of what they don't know.
And I need to say this: I have not been immune to this. I've overestimated my understanding. I've been overconfident. I've thought I had things figured out, only to realize I was wrong. The Dunning-Kruger effect is real, and it affects all of us—including me.
Who Are Today's "AI Experts," Really?
This question matters because if we're going to find guidance, we need to understand who's offering it. And I want to be clear: I'm not mocking anyone here. I'm trying to understand what we're actually dealing with.
When I look at who's presenting as AI experts today, I see a range of backgrounds:
Consultants who've pivoted to AI consulting, bringing frameworks from other domains.
Coaches who've added AI to their toolkit, applying coaching methodologies to AI adoption.
Executives who've led AI initiatives in their organizations, bringing business leadership experience.
Leadership speakers who've integrated AI into their talks, bringing communication and influence skills.
Technologists with narrow scopes—people who understand specific AI tools or applications, but may not understand the broader implications for human formation, community, or discipleship.
And here's the question I keep coming back to: Do these backgrounds prepare someone for adaptive human change, or merely for tactical optimization?
I don't have a definitive answer. But I think the question matters. Because if AI is discontinuous—if it represents a break from prior patterns rather than an extension of them—then the markers of credible guidance might need to change.
Is AI Discontinuous?
This is the question we need to explore explicitly: Is AI a discontinuous technology—so discontinuous that prior markers of success, leadership, consulting, or expertise do not reliably translate into AI wisdom?
Let me think through this with you.
What discontinuous change means: Discontinuous change is different from incremental change. Incremental change builds on what came before. You can use prior experience, established frameworks, proven methodologies. Discontinuous change breaks from prior patterns. Prior experience may not apply. Established frameworks may break down. Proven methodologies may become irrelevant.
Why this matters for leadership: If AI is discontinuous, then leadership in an AI age requires different skills than leadership in a pre-AI age. The ability to lead through incremental change—to optimize, to improve, to scale—may not prepare someone for leading through discontinuous change, which requires learning, experimentation, and adaptation.
Why this matters for organizations: Organizations that succeed through incremental change—through optimization, efficiency, scaling—may struggle with discontinuous change, which requires different capacities: learning, experimentation, willingness to discard what worked before.
Why this matters for discipleship: If AI is discontinuous, then discipleship in an AI age may require different approaches than discipleship in a pre-AI age. The ability to form people through established patterns may not prepare us for forming people through rapid, uncertain change.
What kinds of prior expertise do not automatically transfer: If AI is discontinuous, then expertise in any of the following may not automatically prepare someone for guiding others through AI's discontinuous implications:
- Traditional consulting frameworks
- Business optimization
- Technology implementation (when it assumes incremental change)
- Leadership development (when it assumes stable contexts)
- Content creation (when it assumes pre-AI patterns)
I don't have a definitive answer about whether AI is fully discontinuous. But I suspect it is, at least in significant ways. And if that's true, then we need different markers of credible guidance.
If AI Is Discontinuous, What Kind of Expertise Actually Matters?
This is the heart of the chapter. If AI is discontinuous, then the markers of credible guidance change. And I want to share with you what I've been looking for—not as a definitive list, but as emerging criteria that I'm still refining.
Evidence of intense grappling over the last three years: Not just fluency, but wrestling. Not just using AI, but struggling with it. Not just adopting it, but questioning it. I'm looking for signs that someone has been in the tension, not just on one side of it.
Evidence of building: Systems, tools, workflows—not just commentary. I'm looking for people who've created something, who've tried to solve real problems, who've learned through doing, not just through thinking or talking.
Evidence of rapid evolution: Multiple paradigm shifts, rewrites, discarded approaches. I'm looking for people who've changed their minds, who've abandoned what they thought they knew, who've been willing to start over.
Visible discomfort paired with clarity: Confidence without certainty. I'm looking for people who are clear about what they know, but uncomfortable with claiming more than they know. People who can say "I don't know" without losing credibility.
Resistance to totalizing answers: Absence of "I've figured it out" energy. I'm looking for people who resist the temptation to have all the answers, who maintain curiosity, who stay in the questions.
A maturity map: Some coherent vision of human formation in light of AI. I'm looking for people who've thought about what it means to be human, to form people, to build community, in an AI age. Not just tactical optimization, but formation-level thinking.
Personalized, specific vision: Not generic frameworks. I'm looking for people who've thought deeply about specific contexts, specific challenges, specific communities. People who can speak to particular situations, not just universal principles.
Textual intelligence: Deep comfort thinking and reasoning in language. I'm looking for people who can think clearly, communicate precisely, reason carefully. People who understand that words matter, that language shapes thought, that how we talk about AI affects how we use it.
Love: Genuine care for people over opportunity. I'm looking for people who care more about serving others than about building their platform, who prioritize people's formation over their own advancement, who demonstrate love in how they show up.
Look for love.
From Expert to Guide
I want to end this chapter by pivoting, not resolving. Because I don't think the question is whether you need an AI expert. I think the question is whether you need a guide.
Expert = someone with answers: An expert is someone who knows, who has figured it out, who can tell you what to do. An expert operates from certainty, from mastery, from having arrived.
Guide = someone who accompanies amid uncertainty: A guide is someone who walks with you, who learns with you, who helps you navigate uncertainty without pretending to have all the answers. A guide operates from experience, from wisdom, from being on the journey.
The difference is existential, not merely semantic. If AI is discontinuous, if we're all learning in real time, if no one has decades of wisdom, then what we need is not someone who claims to have arrived, but someone who's willing to walk with us as we figure it out together.
You may not need an AI expert. But you may need a guide. And the people who can guide you through this moment may not look like traditional experts. They may look like people who've been grappling, who've been building, who've been learning, who've been wrong, who've been willing to change their minds, who care more about your formation than their platform.
Reflection Questions:
1. What have you been looking for in AI guidance? Has this chapter changed what you're looking for?
2. Which of the criteria resonates most with you? Which challenges you?
3. What's the difference between needing an expert and needing a guide? What do you actually need right now?