
Your Competency Framework Was Designed to Fail

  • Writer: Aparajita Sihag
  • Apr 7
  • 6 min read

Why most competency frameworks collapse - and why the fix isn't what you think


In twelve years of building competency frameworks across consulting, the public sector, and corporate environments, I have rarely seen one sustain cross-functional use. Frameworks get approved, socialised, and documented - but they do not get used. They exist as artefacts, not as operating systems.


By "success," I mean something specific: a framework that is actively used across hiring, performance management, learning, and succession - and that meaningfully shapes talent decisions beyond the team that created it. Not one that lives in a shared drive. One that changes how a hiring manager evaluates a candidate or how a business leader makes a succession call.


Most do not meet that bar. And after building and implementing these things across five distinct organisational archetypes - offshore consulting, a Maharatna PSU, a central government ministry, domestic consulting, and a PE-backed multinational - I'm increasingly convinced the failure isn't in how we implement frameworks. It's in how we design them.


A Beautiful Theory, a Broken System


The intellectual foundations of competency models are strong. David McClelland's work in the 1970s challenged the dominance of IQ-based selection, arguing that behavioural competencies were better predictors of job performance. Richard Boyatzis extended this into a systematic model - identify the attributes that predict superior performance, cluster them, and build talent systems around them.


In theory, this creates alignment across hiring, development, and performance. In practice, it rarely does.


This isn't a new observation. Scholars like Ron Zemke noted as early as the 1980s that competency definitions had become so elastic as to be nearly meaningless. More recently, critiques in industrial-organisational psychology have pointed to the same conclusion: competency models frequently suffer from definitional ambiguity, construct overlap, and weak measurement rigour. The field has acknowledged these issues for decades.


It has not resolved them. It has rebranded them.


Every few years, competency frameworks get a new skin - "capability frameworks," "skills architecture," "talent dimensions." The vocabulary changes. The underlying architecture doesn't. And the same failure patterns repeat.


The Six Stages of Competency Framework Failure


Across organisations, failure is not random - it is systemic. It unfolds across six predictable stages. I've seen each of these at close range, often in the same project.


Stage 1: Definitional confusion. The project begins without settling a fundamental question: what exactly is a competency? Practitioners routinely blur the distinction between skills, competencies, and capabilities - three constructs with different levels of abstraction, different measurement properties, and different implications for development. The result is a model that mixes behavioural indicators, knowledge domains, and personality traits into a single taxonomy, treating them as if they're equivalent. They are not.


Stage 2: Weak sourcing. Frameworks are supposed to be grounded in what actually differentiates top performers. In practice, they are frequently built from generic competency libraries, stakeholder opinion workshops, or "best practice" borrowings from other organisations. Rigorous approaches - behavioural event interviews, critical incident analysis, role-level decision mapping - are expensive and time-consuming. They get scoped out. The framework that survives reflects what the room believes drives performance, not what the evidence shows.


Stage 3: Arbitrary proficiency levels. Once competencies are defined, proficiency scales are layered on - typically three to five levels, from "foundational" to "expert" or "leading." These scales are almost always templated, not empirically derived. The difference between a Level 3 and a Level 4 is described in language ("applies the competency in complex situations" vs. "applies the competency in novel and ambiguous situations") that no manager can reliably use to rate a direct report. The scales create the illusion of measurement precision without actual discriminant validity.


Stage 4: Overlap and overload. Most frameworks contain too many competencies with blurred boundaries. When "collaboration" and "stakeholder management" and "influencing" all appear as separate competencies - with overlapping behavioural indicators - the system fails a basic test of construct clarity. Assessors can't distinguish between them. Ratings regress to halo effects. In psychometric terms, the framework violates the MECE principle: its categories are neither mutually exclusive nor collectively exhaustive. In practical terms, it becomes noise.


Stage 5: Translation failure. Even if the framework is well-built on paper, it must be translated into tools managers actually use - interview guides, performance review criteria, development plans. This is where most frameworks die. Managers do not think in competencies. They think in decisions, actions, and outcomes. A framework that asks a manager to rate someone on "strategic thinking" without anchoring that label in the specific judgment calls the role requires is asking for abstraction in a context that demands concreteness.


Stage 6: Temporal decay. Roles change. Business contexts shift. The competencies that mattered when the framework was built erode in relevance within eighteen to thirty-six months. But frameworks are expensive to rebuild and politically difficult to retire. So they persist - increasingly detached from the reality of the roles they claim to describe.


These failures compound. Each stage introduces distortion that amplifies downstream. By the time a framework reaches implementation, it is already too abstract, too bloated, and too disconnected from actual work to be useful.



The Consulting Incentive Problem


The way competency frameworks are built reinforces the very problems they're supposed to solve. Consulting firms - and I say this as someone who has worked inside two of them - are paid for deliverables, not outcomes. The engagement model rewards comprehensiveness: more competencies, more proficiency levels, more supporting documentation. A 40-competency framework with five proficiency levels and behavioural indicators for each represents significant billable effort. Whether it gets used is outside the scope of work.


Clients, meanwhile, often lack the technical expertise to distinguish between a rigorously built model and a templated one dressed in bespoke language. The asymmetry is structural. George Akerlof described this dynamic in his work on information asymmetry: when buyers cannot assess quality, lower-quality products drive out higher-quality ones. The competency framework market operates under similar conditions.


The organisational response is entirely rational: optimise for the path of least administrative burden. A poorly designed, poorly communicated framework that adds no value to a manager’s daily decisions is structurally guaranteed to die. Managers are not resisting change. They are making an efficient decision about how to allocate their limited time and attention. The framework gets reduced to a checkbox in performance reviews or a reference list for talent acquisition. These are useful functions, but they are a fraction of what a competency framework was supposed to achieve.


The post-mortem is predictable: "It was a change management problem." The organisation didn't embed it properly. Managers weren't trained. Leadership didn't sponsor it consistently.


Sometimes that's true. But more often, it's a deflection. The framework wasn't a good product. Change management became the explanation that protected the design from scrutiny.


The question I am trying to answer is not whether competency frameworks are theoretically sound. They are. McClelland’s insight was profound. Boyatzis’s extension was logical. The idea that organisations should define what predicts superior performance and then hire, develop, and evaluate against those definitions is elegant. The theoretical foundation is strong.


The question is whether a tool that requires I-O Psychology-level rigour to build, continuous organisational commitment to maintain, and cross-functional literacy to use can ever be practical at scale. Consider what is required for a competency framework to succeed: a practitioner who can distinguish competencies from skills and capabilities; a sourcing methodology grounded in rigorous job analysis; a principled approach to proficiency architecture; a MECE-compliant structure of no more than roughly twelve competencies; the ability to translate the framework for every function that will use it; and a governance mechanism that ensures continuous updating.


Each requirement is individually demanding. Together, they describe a level of investment and maturity that most organisations do not possess.


So What Actually Works?


I have an answer to this - one I've been developing across both practitioner work and academic writing. But this post is the diagnostic, not the prescription. The failure patterns above need to be understood on their own terms before we rush to solutions, because the solutions most organisations reach for (simplify the framework, reduce the number of competencies, improve training) are themselves symptoms of the same design logic that created the problem.


The question isn't how to build a better competency framework. It's whether the unit of analysis - attributes - is the right foundation for talent systems at all.

More on that soon.

