If AI Can Do the Work, What's Left for Leaders?
AI is making output cheap. Leadership is still judgment, trust, and responsibility.
I read a Twitter thread this week from Matt Shumer about AI, written in a tone I’m seeing more often. The author argues that we’re entering a period of rapid disruption, and he anchors that with a concrete prediction: AI could eliminate 50% of entry-level white-collar jobs within one to five years, with many insiders thinking even that is conservative. He puts it bluntly: “I think the honest answer is that nothing that can be done on a computer is safe in the medium term.”
I’m not approaching that argument from a distance. I use AI every day in my work, and I can tell you from lived experience that it lowers the cost of producing competent first-pass work and helps me analyze, think, and produce both more and better. The thread is right that this is not confined to a narrow slice of tasks. It names fields where AI already handles substantive work, including legal research, drafting, and financial analysis. The author also pushes against a common comfort statement, the idea that AI will handle grunt work but never touch judgment, strategic thinking, or empathy, and says he no longer believes that.
So the question for me isn’t whether the capability is real. It is. The question is what counts as “the job” in a real organization, and whether we’re being precise about what AI is replacing versus what AI is accelerating.
The category error leaders have to resist
In most organizations, the deliverable is not the real work. The deliverable is the residue of the work, the visible record that something happened: a memo, a budget summary, a strategic plan, an enrollment forecast, a risk assessment. Those outputs matter, and AI will make them easier to produce. But leadership is rarely defined by whether the artifact exists. Leadership is defined by whether the artifact is wise, accurate, timed appropriately, and owned by someone who will take responsibility for what it sets in motion.
This is the thesis I keep coming back to: AI is making output cheap, but it is not making responsibility cheap.
To his credit, the author nods at this. He explicitly raises the question of whether AI can replace trust built over years of relationship. Later, he lists “relationships and trust built over years” and institutional inertia, compliance, and liability as factors that slow displacement, at least for a time. My point is that these aren’t footnotes. In most institutions, they are the work leaders spend their lives doing.
Imagine an accountant asked by her boss to analyze the most recent financial statement and explain what it means for next quarter’s budget. On paper, this is a technical task, and AI will be excellent at parts of it. The thread even names financial analysis directly and describes model-building, report generation, and memo writing as work AI can handle competently.
But the accountant’s job is never only the analysis. It’s analysis plus interpretation plus communication under constraint. Which numbers matter, and which are just getting in the way? Is that variance a warning or normal fluctuation? Do you frame it as “we need to tighten immediately” or “we can absorb this,” knowing one framing triggers cuts and the other invites complacency? What’s the backstory behind that spike, that dip, that line item everyone notices because it has history? Who is the real audience here, the boss, the board, or the department that will take the hit?
The final deliverable might be four bullet points, but those bullet points are decisions. They embed assumptions. They allocate attention. They set expectations. They quietly move power and resources.
AI can help with the surface. It can detect patterns, draft summaries, and generate plausible explanations. What it cannot do is own the consequences of which explanation is chosen, how it is framed, and what happens downstream when people with incentives and histories react to it. If “the job” is whatever happens on a screen, then the author’s conclusion follows. If “the job” includes judgment, timing, trust, and accountability, then the story is more complicated.
How AI removes one bottleneck and exposes another
Five years ago, there were several bottlenecks that routinely slowed down change. Days were full of meetings, and the space to sit and think, then sit and write, was getting thinner. One bottleneck was simply producing the thing. Getting the idea, the change, the recommendation, the analysis onto paper in a form that other people could react to.
AI has eliminated a lot of that friction. In the past you might have needed three hours to produce a memo, a proposal, or a high-stakes email. Now you can talk through what you want to say in the car on the way to work, walk into a meeting with a clearer sense of the argument, and then in a 30-minute break turn it into something polished and usable. The thread explicitly encourages this posture, even describing the person who can walk into a meeting and say, “I used AI to do this analysis in an hour instead of three days,” as the one who will win right now.
Don’t get me wrong, that is real-world leverage. It changes the pace of leadership work. It also changes the sequencing. Before, you might have had to wait for a day when you could block out enough time to write. Now you can get the idea down, let it sit, come back the next day, refine it, and have something ready to socialize before the next wave of meetings takes over.
But the remaining bottlenecks become clearer, and this is where leadership, sometimes stubbornly, stays human. Even when the draft is easy, the human side still takes time. The idea still has to be discussed. It still has to be stress-tested relationally. People still have to agree on who owns it. Someone still has to decide whether they are willing to carry responsibility for the decision. Someone still owns the cost if it fails, if it upsets a constituency, if the rollout goes sideways, if it creates second-order problems you did not anticipate.
AI reduces the bottleneck of producing the artifact. It does not reduce the bottleneck of building alignment and taking responsibility.
Jobs will shrink and new roles will emerge
I do think AI will eliminate jobs. Not as a sensational claim, but as a basic economic reality. When output becomes cheaper, organizations need fewer people whose primary function is producing routine work products on a screen: first-pass analysis, report writing, coordination, data analysis, and standard memos. The thread is explicit about this trajectory, and I largely agree with its direction of travel.
But it’s also likely that AI creates new categories of work, and this is not just a comforting “it all balances out” story. It’s a practical necessity of deployment. Once output is cheap, someone has to design workflows, govern data, audit results, train teams, manage risk, and decide what “good” means in this organization. Someone has to be accountable when a tool produces something plausible and wrong. Output becomes abundant, judgment becomes scarce, and leaders have to restructure teams around what remains scarce.
AI as a force multiplier for experts
This is why the most compelling AI stories are usually expert stories. A lawyer uses AI to accelerate research and drafting, but the gain is not that the lawyer stopped being a lawyer. The gain is that a trained mind can cover more ground, explore more options, and move faster without losing standards. The thread explicitly makes this point with legal work, describing AI doing tasks “at a level that rivals junior associates,” and linking that to senior leaders using it because it outperforms associates on many tasks. It also describes a managing partner using AI “like having a team of associates.”
This matters because it clarifies what AI is doing at its best. It’s not just replacing labor. It’s multiplying the productivity of people who can evaluate outputs, spot nonsense, correct assumptions, and take responsibility. Those are leadership-adjacent skills even when the title isn’t “leader.”
Why higher education leaders can’t treat this as a workplace-only issue
I work in higher education, and this pushes the conversation into a different category. In the workplace, the question is often productivity and efficiency. In education, the question is formation. We are not simply trying to produce outputs. We are trying to form people who can make judgments in a world where outputs are cheap.
That reframes the AI question in the classroom. What is authentic student work now? If a student can generate competent prose on demand, what does a polished paper demonstrate? For a long time, the artifact was a decent proxy for learning because the artifact was costly to produce. Now polish is no longer evidence of understanding. A student can submit something that sounds like mastery while bypassing the slow internal work that normally precedes mastery.
This is not just an integrity issue. It is a formation issue, and from a Christian perspective it presses on categories we already know matter. Competence is not character. Technical brilliance is not moral clarity. Stewardship is not optimization.
AI can generate competent output. It cannot be virtuous. It cannot repent. It cannot tell the truth because it loves the truth. It cannot take responsibility because it is not responsible for anything. If we let output replace formation, we will graduate students who look more prepared than they are, and we will have trained them into a kind of unreality.
What leaders need to see
The thread is right about acceleration, and it’s probably right that many people are underestimating the pace of change. Where I resist it is the flattening move that treats the job as whatever happens on a screen. In real organizations, the screen work is often the easy part. The hard part is interpretation under constraint, communication with integrity, stewardship that considers people and not just efficiency, and responsibility that cannot be delegated to a tool.
If AI is making drafts cheaper, then the enduring work becomes more visible, not less. The draft isn’t the job. The job is being the kind of leader who can tell the truth wisely, earn trust over time, and carry the weight of decisions when they cost something.