Cracks in the Ladder: How AI Is Disrupting the Traditional Workplace Apprenticeship Model
- Aparajita Sihag
- Jan 3
- 5 min read
And why L&OD needs to architect a new model for a sustainable future
For decades, organizations have operated on a quiet but powerful model of talent development: the apprenticeship ladder.
People entered organizations as juniors, learned through experience, and earned judgment through repetition. They became managers by mastering the review layer of the very work they once did as juniors, and eventually evolved into leaders who could strategize and steer. This invisible ladder - the “learn - do - manage - lead” progression - doubled as the talent pipeline, both shaping people and being shaped by them.
But with the rise of AI, this ladder is starting to crack at multiple fault lines, thanks to subtle, structural, and strategic changes. If left unaddressed, they could reshape how organizations build capability, how people grow, and how work itself is organized.
Over the past few weeks, I’ve been writing a series unpacking these changes on my LinkedIn account. Here’s a synthesis of the five fault lines I believe leaders need to pay close attention to.
The Disruption of Skill Formation
Once upon a time, juniors built their craft the hard way: through messy briefs, ambiguous data, real deadlines, and repeated reps. You didn’t just learn what to do - you learned why something worked.
This messy, unglamorous work forged tacit knowledge - the gut sense that tells you when a number feels off, when a brief is hiding a bigger issue, or when something looks right but isn’t.
Today, that phase is being short-circuited. Instead of wrestling with problems, many now simply “ask the model.” AI delivers decent rough drafts with speed and polish, but without the hard-earned judgment underneath.
This results in confidence curves rising faster than competence curves.
People look productive sooner, but often, they haven’t built the underlying scaffolding of skill. And when edge cases or ambiguous problems arise - the places where real value lies - they’re underprepared.
The Rise of Review Theatre
The second crack appears in the managerial review layer.
Traditionally, managers stress-tested work. They checked logic, reran numbers, asked hard questions, and caught the tiny cracks before they widened. This layer was both a quality filter and a learning loop.
But AI-generated work is tricky. It looks polished. It can cite sources, structure arguments, and mimic expertise. Yet underneath, errors lurk quietly: stale or mis-scoped context, unreliable (or worse, non-existent, ghost) sources, and hallucinated facts that sound real. The resulting output has unclear provenance - we don’t know what was done by a human versus the model.
And honestly, managers are sometimes hesitant to ask, because the polish of AI-generated work can easily trigger imposter syndrome. Many are still building the skill to review this kind of work rigorously. In the meantime, reviews often focus on surface polish rather than deep structure - a kind of “review theatre.”
The danger: unchecked errors flow through the system, and the feedback loop that once sharpened thinking weakens.
The Blurring of Expertise Signals
The third fault line shows up in how skills and seniority are signaled - to recruiters, peers, and clients.
AI can make junior work look senior.
Portfolios can be polished with AI. Deliverables can look flawless even if the underlying craft is shallow.
This creates three challenges:
Hiring: Resumes and portfolios may reflect AI polish rather than real capability.
Internal credibility: Seniors who are slower on tools but stronger in judgment may be misread as less capable by peers who excel at “AI tricks.”
Client value: Clients may question pricing and value if they perceive that AI is “doing the work.”
Traditional signals of expertise - polish, speed, presentation - are no longer reliable proxies for actual skill. Slickness ≠ expertise. So how does one even signal expertise anymore?
The Hollowing of the Pyramid
Organizations have long relied on a pyramid structure:
A broad base of juniors executing work, who are supervised by a middle layer of managers reviewing and coordinating, who are led by a yet smaller layer of leaders setting direction.
This structure wasn’t arbitrary - it was a talent pipeline. Juniors built the skills to grow into managers; managers learned to get work done through their teams; and through sustained expertise, some eventually became leaders.
AI is quietly disrupting this model.
If AI can handle 70–80% of junior work with reasonable accuracy, fewer juniors are hired. But if you hire fewer juniors, who becomes your managers in 5–10 years?
And what exactly will managers manage - people or AI agents? As roles and responsibilities blur, the middle of the pyramid risks hollowing out, just when organizations need strong managers the most.
Over time, this could push structures to become flatter, change how workforce planning happens (skills + AI agents instead of headcount alone), and reshape the leadership pipeline fundamentally.
The Breakdown of the Performance-Learning Link
The final fault line is about incentives - perhaps the most powerful behaviour-shaping force at work.
Historically, organizational performance metrics were tightly coupled with skill-building. To perform well, you had to learn deeply. KPIs and OKRs acted as indirect measures of judgment and craft.
AI breaks this alignment.
Now, employees can meet performance metrics by producing deliverables with AI’s help - often without building underlying capability. Time to productivity has collapsed. That sounds good, right?
The problem emerges at performance reviews, promotions, and merit cycles. If someone meets KPIs through AI-enabled outputs but hasn’t truly developed skill, what are we rewarding?
Are we shifting from “learning to perform” to “performing to please the metric”?
This doesn’t mean abandoning performance metrics. It means renewing our focus on incentivizing capability alongside performance. Resilient organizations will reward judgment, rigor, and risk-awareness - not just speed and surface.
Re-Architecting Apprenticeship for the AI Era
This is not a diatribe against AI. These fault lines call for redesigning the apprenticeship model deliberately:
Make tacit knowledge explicit. Build mechanisms to teach and transfer judgment, not just outputs.
Evolve review practices. Teach managers to review thinking, not just polish.
Rethink hiring and promotion signals. Go beyond slick portfolios and outputs.
Redesign structures. Plan for pipelines and manager roles in an AI-augmented workforce.
Realign incentives. Reward capability and judgment, not just output velocity.
The apprenticeship ladder has served organizations well for decades. But if we don’t reinforce it for the AI era, we risk building organizations that are fast - but fragile. If ever there was a time for L&OD to move in lock-step with the business, this is it. For the first time, L&OD is being asked to step out of the training ground and onto the battlefield - co-piloting live manoeuvres with leadership, rather than running rehearsals in the safety of peacetime.