Leadership in the Age of AI
- The Next 100
- Jan 17
- 4 min read
Updated: Jan 19
From Managing Tasks to Curating Judgment
For decades, leadership was defined by expertise - the person in the room who knew the most. That advantage is evaporating. AI can now out-read, out-summarize and out-scan any human. With that shift, the Information Age is fading, replaced by something more demanding: the Age of Discernment. Today, leadership is shaped by the capacity to evaluate information, place it in context and act with purpose.

The Shift
Where leaders once owned the “how” and the “what,” AI now executes much of that work.
According to the 2025 Microsoft Work Trend Index, "power users" of AI save an average of 30–45 minutes per day, yet 60% of leaders say they have no plan for how to reallocate saved time into high-value strategy.
The Plan Gap and the BYOA Era
To lead effectively with AI as a partner, research suggests acknowledging three friction points:
From "Shadow AI" to BYOA (Bring Your Own Agent): While organizations debate enterprise AI security, employees are moving beyond chatbots and embracing personal, agentic AI - tools that browse, draft and execute workloads that were previously out of reach. There is an upside to the volume of information these agents can manage, but IT professionals warn it also creates data-leak risk.
The Expertise Paradox: Emerging leaders are using AI to mirror senior-level outputs, marking a shift in how expertise is built. In the past, career progression meant years of manual data synthesis and formatting. Increasingly, AI handles task-based work, allowing junior talent to spend more time making decisions.
While some worry about thinning "muscle memory," the opportunity here is a mentorship upgrade. Academics point out that we're accelerating past teaching how to do the task and toward teaching how to weigh the outcome. By bypassing the mechanical, the next generation can develop "judgment memory" much earlier in their careers - focusing on the nuance and strategy that used to take decades to master.
A Catalyst for Critical Thinking
AI-generated data, without human oversight, isn't always correct, triggering a return to inquiry. Instead of passive consumption, we're forced to engage in adversarial thinking. This lack of 100% certainty in AI's output becomes a leadership opportunity: It prevents us from operating on autopilot. When the machine is mostly right, humans must be critically present.
This might be best explained by Ethan Mollick and the Boston Consulting Group (BCG) through a concept they popularized as the "Jagged Frontier." What they found is that AI is brilliantly capable in some complex areas and confidently wrong in others, often within the same task.
The "frontier" is not a straight line; it is a jagged edge where leaders must test where the machine's capability ends and human oversight begins.
The Reality: Most organizations are "playing at AI" rather than "designing for AI." We are layering 2026 tools onto 20th-century processes.
Architecting the Human-in-the-Loop
To move from AI-adjacent to AI-integrated, leaders might consider these three practices:
1. The Judgment Audit
Stop asking "Where can we use AI?" and start asking "Where is human judgment non-negotiable?" Identify the three most critical decisions your team makes each week. For each, determine:
What data does AI provide?
What context does only the human have? (e.g., client history, internal politics, long-term brand values).
Who is the moral backstop if AI's recommendation is biased or incorrect?
2. Move from "Prompting" to "Intent-Setting"
Instead of giving your team (and their AI tools) a list of tasks, consider a Statement of Intent.
Old Way: "Write a report on Q3 sales."
AI Way: "Our goal for Q3 is to identify why our churn in the mid-market is rising. Use the AI to synthesize the last 100 sales calls, but I need you to find the human story - the specific emotional reason customers are leaving that the data might miss."
3. The "Cyborg" Mentorship Model
Since AI now handles many of the tasks emerging leaders once relied on to learn the ropes, create new ways for teams to gain experience.
Reverse Mentoring: Have early-career staffers show how they are using agentic tools to accelerate workflows.
The "Double-Check" Drill: Invite emerging leaders to critique an AI-generated output against company values, helping align the AI work with organizational goals.
How to Know If It’s Working
AI leadership success varies by organization. Here are some common measures:
Leading Indicator: Your team is reporting fewer "low-value" hours (meetings that could have been emails, data entry, basic drafting).
Lagging Indicator: "Judgment Accuracy." Are your team's final outputs more aligned with long-term strategy than they were a year ago?
The Anti-Metric: Don't measure "Lines of Code" or "Word Count." AI can generate infinite noise. Measure the reduction in noise.
The Conversation Starter
Bring these to your next team huddle to break the "Shadow AI" ice:
"What is one task you've started using AI for that you'd be willing to share?"
"If we could automate 40% of our 'business as usual' work tomorrow, what 'Big Project' we've been ignoring would we finally have time to tackle?"
"Where do you feel the AI is currently making us less creative or more generic?"
"Doing the work" looks different today, but "owning the result" feels the same. As AI agents begin to handle the heavy lifting of synthesis and drafting, the space that opens up is yours to fill. Whether you use that time to stress-test strategy or to focus on the high-touch mentorship that AI can’t replicate, the value you add is less in the volume of your output, and more in the confidence of your direction.