Do 70% of change initiatives really fail?
If you’ve spent any time in consulting or corporate strategy, you’ve heard that “70% of change initiatives fail.” It shows up in pitch decks, keynote speeches, HBR articles, and LinkedIn posts. It’s treated as settled fact, the kind of number that needs no citation because everyone already knows it. It’s less a fact than a meme, one that flows into people’s awareness without anyone thinking to check it.
I’ve put this claim on plenty of slides myself. It was part of the standard pitch at McKinsey and BCG: cite the grim odds, then present our own unique analysis of why we, of course, can help you beat them.
The problem is that nobody can trace the 70% number back to any rigorous data. Despite this, many firms still anchor on it and run their own deep dives trying to recreate the analysis.
I decided to go down the rabbit hole and see where it leads.
Seeds planted in the early 90s
Hammer and Champy (1993): the original “unscientific estimate”
The earliest traceable version appears in Michael Hammer and James Champy’s 1993 book Reengineering the Corporation. They wrote:
“Our unscientific estimate is that as many as 50 per cent to 70 per cent of the organizations that undertake a reengineering effort do not achieve the dramatic results they intended.”
Two things about this. First, they explicitly called it “unscientific.” Second, it was about reengineering specifically, not all organizational change.
Two years later, Hammer walked it back. In The Reengineering Revolution (1995), he wrote that his observation had been “widely misrepresented and transmogrified and distorted into a normative statement” and added: “There is no inherent success or failure rate for reengineering.”
The originator disowned his own number within two years. But by then it was already spreading.
Kotter (1995, 2008): from observation to estimate
John Kotter, probably the most widely cited name in change management, took a more gradual path to the number. In his 1995 HBR article “Leading Change: Why Transformation Efforts Fail,” he wrote that he had watched more than 100 companies try to transform themselves. His assessment: “A few of these corporate change efforts have been very successful. A few have been utter failures. Most fall somewhere in between, with a distinct tilt toward the lower end of the scale.” This was an observation from his consulting experience.
By 2008, in A Sense of Urgency, Kotter had landed on a specific number: “From years of study, I estimate today more than 70 per cent of needed change either fails to be launched, even though some people clearly see the need, fails to be completed even though some people exhaust themselves trying, or finishes over budget, late and with initial aspirations unmet.”
That definition is worth reading twice. By Kotter’s standard, a change that finishes late or over budget counts as a failure. A change that was never even launched counts as a failure. By this definition, almost any complex initiative in any field would qualify.
Beer and Nohria (2000): the quote that went viral
The version most people cite comes from Michael Beer and Nitin Nohria’s 2000 HBR article “Cracking the Code of Change”. Their opening line:
“The brutal fact is that about 70% of all change initiatives fail.”
No footnote or source for this claim. Just a “brutal fact,” sounding like a modern hustle bro delivering “harsh truths.”
From there, it spread like a powerful meme.
The citation chain: “academic matryoshka dolls”
Mark Hughes, an academic at Brighton Business School, has spent over a decade tracing how this number propagates. In a 2011 paper in the Journal of Change Management, he examined the five most prominent published sources for the 70% claim: Hammer and Champy, Beer and Nohria, a Bain & Company article, a McKinsey article, and Kotter.
His conclusion:
“Whilst the existence of a popular narrative of 70 per cent organizational change failure is acknowledged, the absence of valid and reliable empirical evidence to support such a narrative is highlighted.”
In every case, Hughes found the same pattern. Each source either stated the number without evidence or cited another source that stated it without evidence. The citations are like nesting dolls: each one opens to reveal another citation inside, never leading anywhere.
The Bain trail is a good example. Senturia, Flees, and Maceda wrote in a 2008 Bain article: “People have been writing about change management for decades and still the statistics haven’t improved. With each survey, 70 per cent of change initiatives still fail.” Hughes traced their supporting references back to a 2002 article by Pace and Mulvin, which opened with “Seventy percent of change programs fail” and cited Beer and Nohria’s HBR article as its source. The loop never ends.
Hughes also looked at Beer and Nohria’s own follow-up. Harvard Business School Press published Breaking the Code of Change in 2000, based on a 1998 conference Beer and Nohria organized. The book brought together respected change scholars around their Theory E and Theory O framework. But as Hughes found, “despite the impressive guest list, the book failed to provide empirical evidence to support the assertion that ‘about 70% of all change initiatives fail.’” A review in Administrative Science Quarterly was direct: “Taken as a whole, the volume illustrates the ideological nature of organizational change.”
The deeper problem: can you even measure this?
Beyond the citation trail, Hughes raised a question that I think is more interesting than the number itself: can you even assign an inherent failure rate to something as ambiguous as “organizational change”?
His argument is that change is ambiguous (the stated rationale may differ from the real one), context-dependent (the same initiative produces different results at different companies), perceived differently depending on who you ask, measured at arbitrary points in time, and nearly impossible to isolate from other initiatives running simultaneously. Any single number flattens all of that into a false precision.
Two of his examples are worth quoting. In a study of restructuring at 11 American hospitals, Walston and Chadwick (2003) found something counterintuitive:
“Employees are often not aware of positive effects of their restructuring efforts and, contrary to reality, may believe that both cost and quality have deteriorated when it has improved.”
If the people living through a change can’t accurately assess whether it succeeded, how much weight should we give executive survey responses about transformation outcomes?
The timing question is equally revealing. BusinessWeek ran a cover story in 1984 declaring a third of Peters and Waterman’s “excellent” companies as failures, just two years after In Search of Excellence. Two decades later, Ackman (2002) in Forbes found those same companies had “easily outperformed the market averages any way you slice it.” Whether a transformation succeeded depends on when you check.
Hughes also cited Doyle et al. (2000), who surveyed managers and found that 67% agreed that “the change process cannot be evaluated effectively because there are too many overlapping initiatives running at one time.” If you can’t isolate a single initiative from everything else the organization is doing, you can’t assign it a pass/fail grade.
Hughes connected these measurement problems to what Charles Handy called the McNamara Fallacy:
“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.”
The 70% number is a case study in this fallacy. In the absence of empirical evidence, an arbitrary quantitative value was assigned to something that resists quantification. Then it was repeated until it became assumed fact.
Hughes followed up in 2016 and again in 2022. In his most recent paper, he wrote: “The enduring prevalence of this flawed assumption illustrates a failure of scholarship rather than practice.” He also noted that the number serves a specific function: “In certain instances reviewed here, opportunistic business consultants may have deliberately promoted a 70 per cent organizational-change failure rate, which could be rectified through their consultancy services.”
The circular chain is visible even in the footnotes of serious research. McKinsey’s own 2021 report, “Losing from Day One,” includes a footnote citing Kotter: “More than 70 percent of needed change either fails to be launched… [or] to be completed.” McKinsey cites Kotter. Kotter cited his own estimate. The number loops back on itself.
What the actual research found
The 70% number may have no empirical foundation, but that doesn’t mean the consulting firms stopped thinking about the problem. Over the past 15 years, McKinsey, BCG, and Bain have all run large-scale surveys trying to understand what actually happens during transformations. Their findings are more interesting than a single failure rate.
When I was at BCG, we started tracking our own data on large-scale transformation projects. The problem with this is obvious: every incentive pushes BCG to make its own projects look successful and everyone else’s look like failures. The same is true of McKinsey and Bain. But the research is real, the sample sizes are large, and the findings are consistent enough across competing firms to take seriously.
McKinsey: “Losing from day one”
McKinsey has surveyed executives on transformation outcomes for 15 years. Their 2021 report, “Losing from Day One,” is the most detailed.
The headline number is familiar: less than one-third of respondents say their transformations succeeded at both improving performance and sustaining those improvements over time. As the report puts it, “the 30 percent success rate hasn’t budged after many years of research.”
But the more interesting finding is where the value gets lost. McKinsey broke the transformation lifecycle into phases and asked respondents when things went wrong:
- 22% of value loss happens during target-setting
- 23% during planning
- 35% during implementation
- 20% after implementation, once initiatives have been fully executed
Nearly a quarter of the potential value is gone before the transformation even starts, because companies set their sights too low or fail to do a rigorous assessment of what’s actually possible. As the report puts it, “the full potential might be compromised before companies’ transformations even get started.”
The report also found that when companies commit to ambitious targets, they deliver on average 2.7 times more value than their senior executives initially thought possible. That suggests the target-setting problem isn’t about being unrealistic. It’s about being too cautious.
The most striking finding in the McKinsey data is about comprehensiveness. They tracked 24 distinct transformation actions across three phases (goal-setting, design, and implementation). The differentiator between success and failure wasn’t which subset of actions companies took. It was how many. Companies that implemented all 24 actions achieved a 78% success rate. The overall average was 31%. There are no shortcuts to success, but there’s also no mystery about what the actions are. The problem is that most companies don’t do all of them.
McKinsey also found a perception gap that anyone who’s worked on a transformation will recognize: senior leaders are nearly 20% more likely than people in other roles to believe that the transformation’s goals have been adapted for relevant employees across the organization. Leadership thinks the message has landed. The people doing the work aren’t so sure.
BCG: flipping the odds from 30% to 80%
BCG’s 2020 report, “Flipping the Odds of Digital Transformation Success,” studied 800 senior executives and 70 digital transformations in detail. Their headline finding mirrors McKinsey’s: only about 30% of digital transformations succeed overall.
But BCG went further and identified six factors that, when all present, flip the success rate from 30% to roughly 80%:
- An integrated strategy with clear transformation goals
- Leadership commitment from the CEO through middle management
- Deploying high-caliber talent on the most important initiatives
- An agile governance mindset that drives broader adoption
- Effective monitoring of progress toward defined outcomes
- A business-led, modular technology and data platform
BCG’s framing is clear about something the 70% stat obscures: “The technology is important, but the people dimension (organization, operating model, processes, and culture) is usually the determining factor.” The transformations that fail aren’t usually failing because the technology doesn’t work. They’re failing because the organization can’t absorb the change.
But this is simply human nature. Most people don’t like change. Most employees aren’t incentivized to care about marginal shareholder value. And most leaders don’t care that much about implementing yet another initiative. I’ve been in all of these situations, and it’s striking how widespread doing as little as possible can be in big organizations.
From 12% to 88% Success Rate!
Here are the headline success rates reported by research from several firms:
| Source | Year | Sample | Success rate |
|---|---|---|---|
| McKinsey Global Survey | 2008 | 3,199 executives | ~33% |
| McKinsey Global Survey | 2021 | 1,034 participants | 26-30% |
| BCG Transformation Study | 2020 | 800+ executives | 30% |
| Bain Transformation Survey | 2024 | 400+ executives | 12% |
| Gartner CIO Survey | 2024 | 3,100+ CIOs | 48% |
| Prosci (excellent change mgmt only) | 2023 | 10,800+ professionals | 88% |
The range from 12% to 88% tells you more about the definitions than the phenomenon. Bain measures against original ambition in full. Gartner measures against business outcomes. Prosci measures projects with excellent change management specifically. All of these can be simultaneously true.
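The definitional point can be made concrete with a toy calculation. The ten-project portfolio below is entirely invented for illustration (it is not any firm’s actual data); it just shows how one and the same set of projects yields three very different “success rates” under three different definitions.

```python
# Toy illustration: the same portfolio, three definitions, three "success rates".
# Each project is scored on three invented dimensions:
# (met_full_original_ambition, delivered_business_outcome, had_excellent_change_mgmt)
projects = [
    (True,  True,  True),
    (False, True,  True),
    (False, True,  True),
    (False, False, True),
    (False, True,  False),
    (False, True,  False),
    (False, False, False),
    (False, False, False),
    (False, False, False),
    (False, False, False),
]

def rate(successes):
    """Percentage of a list of booleans that are True."""
    return 100 * sum(successes) / len(successes) if successes else 0.0

# Bain-style definition: success = full original ambition achieved
bain_style = rate([ambition for ambition, _, _ in projects])

# Gartner-style definition: success = business outcomes delivered
gartner_style = rate([outcome for _, outcome, _ in projects])

# Prosci-style definition: outcomes among projects with excellent change management
prosci_style = rate([outcome for _, outcome, cm in projects if cm])

print(bain_style, gartner_style, prosci_style)  # 10.0 50.0 75.0
```

Same ten projects, and the “failure rate” is anywhere from 25% to 90% depending on which question you ask, which is exactly why the table above can span 12% to 88% without any of the studies being wrong.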
Why the number persists (and why it’s not entirely wrong)
The 70% claim has no empirical foundation. The people who created it have either disowned it or provided no evidence. So why does it keep showing up?
Part of it is commercial. Every consulting firm that cites the number then offers their methodology as the solution. As Dr. Jen Frahm of the Agile Change Leadership Institute puts it, the statistic “is only ever used to sell the importance of change management or to get people’s attention in an article.”
But there’s a deeper reason: it resonates with experience. Anyone who has lived through a large-scale organizational transformation knows that most of them feel like they’re failing, most of the time. The middle of a transformation, what Jeanie Duck calls the “determination” phase, is exhausting, ambiguous, and full of doubt. The 70% stat gives language to that feeling.
The real data, when you look at it carefully, tells a more useful story than the made-up stat. The McKinsey and BCG research doesn’t just confirm that change is hard. It identifies why it’s hard and what differentiates the companies that succeed: comprehensiveness in execution, ambition in target-setting, leadership commitment that extends through middle management, talent deployed to the highest-value initiatives, and face-to-face communication rather than email memos. These are things you can act on. A blanket failure rate is not.
A better frame
The honest answer to “do 70% of change initiatives fail?” is: the number was made up, but change is genuinely hard, and somewhere between half and two-thirds of transformations produce disappointing results by most definitions of success.
The more useful question is what separates the ones that succeed. The research from McKinsey, BCG, and Prosci converges on a consistent answer: the companies that succeed do more, not less. They commit more completely, set more ambitious targets, involve more of the organization, and sustain the effort longer than feels comfortable. The ones that fail usually cut corners, declare victory too early, or assume the strategy deck is the hard part and execution will take care of itself.
