AI as Both Problem and Solution
I want to start this chapter by acknowledging something that might feel confusing: we just spent a whole chapter talking about how AI is creating a credibility crisis. And now I'm going to tell you that AI is also part of the solution.
That might sound contradictory. It might even sound like I'm trying to have it both ways. But here's what I want you to understand: this isn't about contradiction. This is about complexity. And learning to navigate complexity well is part of what it means to lead in this moment.
So let me be direct with you: AI is both the problem and the solution. It's creating the credibility crisis we talked about in Chapter 1. And it's also providing tools that can help us navigate that crisis. That tension—using the tool that created the problem—is something we need to think through carefully.
But I want you to know going in: we're not stuck. There's a way forward that honors both the opportunities and the challenges of this moment. And that's what we're going to explore together.
The Problem AI Creates
Let's start by being honest about the problem. AI can generate infinite content on any topic. It can produce articles, books, social media posts, emails, sermons—you name it. And it can do it quickly, consistently, and at scale.
This creates what we might call the "infinite content problem." When anyone can generate unlimited amounts of content on any topic, the internet becomes flooded with material. Some of it's helpful. Some of it's accurate. But much of it is just noise—content created not because someone has something to say, but because they can.
I know this might feel frustrating. You've spent years developing your expertise, learning to communicate clearly, investing in your voice. And now AI can replicate much of that in minutes. That's not fair. But it's where we are.
Here's what makes this particularly challenging: AI-generated content often looks credible. It's well-formatted. It's grammatically correct. It follows logical patterns. It sounds authoritative. But it doesn't have the foundation—the years of study, the real experience, the hard-won wisdom—that makes your content valuable.
So when people are searching for information, when they're looking for leaders to follow, when they're trying to learn and grow—they're wading through an ocean of AI-generated content, trying to find what's real. And that's exhausting. It's overwhelming. It's contributing to the trust collapse we talked about.
The Solution AI Provides
But here's the thing: AI is also providing tools that can help us navigate this crisis. And I want you to understand what those tools actually do, because there's a lot of confusion about this.
AI can amplify your voice. It can help you communicate more clearly, more consistently, more effectively. It can help you reach more people, engage more deeply, multiply your impact. But here's the key: it amplifies your voice. It doesn't replace it.
Think about it like this: if you're a pastor who preaches on Sunday, AI can help you turn that sermon into a blog post, a social media series, a newsletter article. It can help you adapt your message for different audiences, different contexts, different formats. But the message is still yours. The voice is still yours. The expertise is still yours.
AI can help you be more efficient. It can handle the logistics—the formatting, the structure, the variation—so you can focus on what only you can do: the unique insight, the personal story, the theological depth, the credibility that comes from lived experience.
I know this might sound too good to be true. And I want to be honest with you: it's not automatic. It requires intention. It requires boundaries. It requires understanding what AI does well and what it can't do. But it's possible. And it's worth figuring out.
The Tension We're Navigating
Here's where it gets complicated: we're being asked to use the tool that created the problem. And that creates a real tension.
On one hand, ignoring AI isn't realistic. It's here. It's being used. It's changing how content is created and consumed. If you try to ignore it, you'll find yourself increasingly invisible in a landscape that's being shaped by it.
On the other hand, uncritical adoption is dangerous. If you just start using AI without thinking through the implications, you risk losing what makes your voice distinctive. You risk contributing to the very problem you're trying to solve.
So we're in this tension: we need to engage with AI, but we need to do it thoughtfully. We need to use it, but we need to use it in ways that preserve and amplify our authentic voice, not replace it.
I know this tension might feel uncomfortable. You might be thinking, "Can't we just avoid this? Can't we just keep doing things the way we've always done them?"
I understand that desire. But here's what I want you to see: the landscape has already changed. Whether you use AI or not, you're operating in a world where AI exists. And that changes things. The question isn't whether to engage with AI. The question is how to engage with it well.
Why Ignoring AI Isn't an Option
Let me be direct with you: ignoring AI isn't an option. And I want to explain why, because I think it's important to be clear about this.
First, AI is already shaping the landscape. Even if you never use AI yourself, you're operating in a world where AI-generated content is everywhere. The search results you get, the social media feeds you see, the articles you read—much of it is AI-generated or AI-assisted. So you're already affected by AI, whether you use it or not.
Second, the mechanisms for discovery are changing. The ways people find content, the ways they evaluate credibility, the ways they discover leaders—all of this is being shaped by AI. Search engines are using AI. Social media platforms are using AI. Content recommendation systems are using AI. If you want to be discoverable, you need to understand how these systems work.
Third, your audience is using AI. The people you're trying to reach, the people you're trying to serve—they're using AI tools. They're asking AI questions. They're getting AI-generated answers. If you want to be part of that conversation, you need to understand what they're experiencing.
Fourth, efficiency matters. I know this might sound crass, but it's true: if you're spending hours on tasks that AI can help with—formatting, structure, variation—you have less time for the things that only you can do. And those things—your unique insight, your personal story, your theological depth—those are what make your work valuable.
I'm not saying you have to use AI for everything. I'm not saying you have to become an AI expert. But I am saying that understanding AI, engaging with it thoughtfully, using it strategically—this isn't optional anymore. It's necessary.
Why Uncritical Adoption Is Dangerous
But here's the other side: uncritical adoption is dangerous. And I want to be very clear about why, because I think this is where a lot of leaders get into trouble.
First, uncritical adoption can erode your voice. If you just start using AI to generate content without thinking about how it affects your voice, you risk losing what makes you distinctive. Your voice is your most valuable asset. It's what makes your work recognizable, trustworthy, valuable. And if you're not careful, AI can homogenize it.
Second, uncritical adoption can undermine your credibility. If you're using AI in ways that aren't transparent, if you're presenting AI-generated content as fully your own, if you're not being honest about how AI is involved—you risk damaging the trust you've worked so hard to build. And in a world where trust is already fragile, that's dangerous.
Third, uncritical adoption can replace what only you can do. AI is really good at some things: structure, formatting, variation, expansion. But it's not good at other things: your unique insight, your personal story, your theological depth, the credibility that comes from lived experience. If you let AI do everything, you're not leveraging its strengths—you're replacing your strengths.
Fourth, uncritical adoption can contribute to the problem. Remember, the credibility crisis is partly caused by too much AI-generated content that's hard to distinguish from human-created content. If you're using AI uncritically, you're contributing to that problem, not solving it.
I know this might sound like I'm being contradictory. "Use AI, but don't use it too much. Engage with it, but be careful. It's necessary, but it's dangerous." But here's what I want you to see: this is the complexity we're navigating. And navigating complexity well requires both engagement and boundaries.
Finding the Way Forward
So how do we navigate this tension? How do we use AI in ways that amplify rather than replace, that build credibility rather than undermine it, that solve the problem rather than contribute to it?
Here's what I want you to know: there's a way forward. It's not simple. It's not automatic. But it's possible. And it starts with understanding a few key principles.
First, AI amplifies; it doesn't replace. The goal isn't to have AI do everything. The goal is to have AI handle what it does well (structure, formatting, variation) so you can focus on what only you can do: insight, story, depth, credibility.
Second, voice preservation is non-negotiable. Your voice is your most valuable asset. It's what makes your work distinctive, trustworthy, valuable. And preserving that voice—maintaining what makes you you—is more important than efficiency or scale.
Third, transparency builds trust. In a world where trust is fragile, being honest about how you're using AI actually builds credibility. People want to know what's real. And when you're transparent about AI usage, you're showing them what's real.
Fourth, boundaries are essential. Not everything should be done with AI. Some things—formation work, personal stories, theological depth—require human presence. And understanding where those boundaries are is crucial.
Fifth, network verification matters. In a world where individual credibility signals are breaking down, credibility through networks—through relationships, through mutual vouching, through scenius (the collective genius of a creative community)—becomes essential. And that's something AI can't easily replicate.
We'll explore all of this in more detail in the chapters ahead. But for now, I want you to understand: the tension is real. The complexity is real. But there's a way forward that honors both the opportunities and the challenges of this moment.
A Word of Encouragement
I know this chapter has been heavy. We've talked about problems and tensions and complexity. And I want to make sure you hear this: there's hope here. There's a way forward.
AI is both the problem and the solution. That's complicated. But it's also an opportunity. Because if we can figure out how to use AI well—how to amplify rather than replace, how to build credibility rather than undermine it, how to solve the problem rather than contribute to it—we can actually create something better than what we had before.
The old mechanisms for credibility are breaking down. But maybe that's okay. Maybe we can build something better. Maybe we can create credibility through networks, through relationships, through human verification that AI can't easily replicate.
I don't know exactly what that looks like yet. But I know it's possible. And I know that leaders like you—thoughtful, committed, authentic leaders—are the ones who will figure it out.
So take a breath. Process what we've talked about. And when you're ready, we'll move forward together.
Reflection Questions:
1. How do you feel about the tension of using the tool that created the problem? What concerns does this raise?
2. In what ways have you already been affected by AI, even if you haven't used it yourself?
3. What's your biggest fear about engaging with AI? What's your biggest hope?