Making Change Happen in Large Organizations: Top Down or Bottom Up?

A classic problem in organizational change theory is how to increase the odds of success when planning and implementing significant changes.  One way to proceed is the “cascade” or “waterfall” model, starting with the CEO and executive team and then moving progressively “down” the organizational hierarchy.  Another approach is to begin with front-line workers, then move up and out; this is a “bottom up” or “grassroots” strategy.  There are pros and cons to each approach: the top-down strategy can take a long time to trickle down to the front lines, and by the time it gets there, a great deal can be “lost in translation.”  The bottom-up, grassroots approach can lack significant executive sponsorship and can falter because it is not viewed as connected to the organization’s key strategic goals.  A third option is to combine elements of both, thus attempting to reap the benefits of each approach while mitigating its downsides.  But is there a formula, or way of proceeding, that has the greatest odds of success?  In working with many large organizations and change projects over the last twenty-five years, I have found that the most reliable path to success is to plan that a change effort will be “top-down, bottom-up, middle out.”  Let’s take an example to illustrate why.

About three years ago in a large Canadian hospital, the Vice-President of Patient Services became concerned that there was a growing lack of respect in the workplace.   She formed this impression by observing, or hearing about, multiple conversations from different parts of the organization.  Some involved doctors and nurses, others involved allied health professionals, others involved issues of race, ethnicity, and gender.   The Vice-President (we will call her Karen) was concerned about a growing environment in which people were not encouraged to bring their whole selves, including their ideas, concerns, and passions, to the task at hand: “We were silo’ed.  We were not working well across functions; we were isolated rather than interdependent.  We needed to shift our thinking; we needed a fundamental cultural change.”

Karen decided to launch a diversity initiative, but not for the usual reasons: “We had diversity-related complaints, but they were symptoms of the broader issue.”   Some comments were racial, she said, but more indicated a lack of appreciation, sensitivity, and awareness of others’ contributions that was broader and more fundamental.  Karen took a “grassroots approach” to the issue, starting with a broad definition of mutual respect in the work environment.  She went around and solicited participation from unions, staff, and front-line leaders.  She created a Council and made people apply to be on it: “it was important to get the right people: those who were leaders and influencers and had no preconceived agendas.”

The Council began with sensitization, using an organization-wide survey to identify the biggest concerns and challenges.  There was a “phenomenal” response rate; “multiple things came out that guided our plan.”  The underpinning was an emphasis on mutual respect; this led the Council to create a task force that would develop a Code of Conduct.  The task force had representation from across the organization, including leaders, front-line workers, and community representatives.  The task force conducted a literature review and looked into what other organizations were doing.  The resulting Code of Conduct applies to “anybody and everybody” who comes into the hospital.  At the end of the process, the task force held a symbolic “signing ceremony” as a powerful sign that people were committing themselves to following the Code of Conduct:

The Hospital is committed to providing a safe, inclusive, and caring environment where we:

* Treat one another with compassion, kindness, courtesy, respect, and dignity;
* Recognize the unique role and contribution of each individual;
* Work together;
* Listen and communicate responsibly;
* Take responsibility for our actions;
* Act with integrity and fairness; and
* Resolve differences and concerns in a sensitive and timely way.

At this point, the change had become what Karen calls a “bottom up, top down, middle out” initiative.  Front-line workers and staff were mobilized by Karen to create, shape, and endorse the Code of Conduct.  Senior management sponsored the initiative and became active supporters of it along the way.  Because of support from both levels, middle managers felt empowered and authorized to use the Code to create a healthier, more respectful work environment.  The result was both a “groundswell” of support for the initiative and a transfer of momentum to the clinical units, supported by the respective managers.  As it happened, Karen was promoted from VP of Patient Services to interim CEO after the Diversity/Code of Conduct initiative had been running for two years.  She was unable to continue as champion of the effort and, when we reviewed the effort nine months later, commented that she felt that the effort had lost momentum.  Nonetheless, when asked what impact the Code of Conduct had had in the workplace, several managers were quick to respond with stories of how they had used the Code of Conduct to improve behavior and attitudes in their own areas.  One said, “It helps.  Just this week, I told one of the physicians that ‘you’re not behaving in line with the Code of Conduct; I can’t have you talking with my techs that way.’”

Changes that are solely top-down or bottom-up tend not to have this kind of staying power.  We find that a critical mass of middle managers will use things like the Code of Conduct to positive effect if there is both a “push” from senior management and a “pull” from staff in support of the effort.  In this case, strong leadership from Karen got things started, and then the Council and Code of Conduct task force carried the effort forward.  It is also clear that by following several commitment-creating strategies, including consistent and widespread outreach and the signing ceremony, Karen created conditions whereby middle managers were using the Code as a tool well after the “Launch” phase of the initiative.  In effect, Karen ran a campaign for change which drew on a synthesis of the “bottom up” and “top down” models of change, and used a focus on middle management (or “middle out” model of change) to create a transfer of momentum throughout the organization.

Author: Tom Bigda-Peyton

Self-Interest and the Common Good

I have been working in Canada lately on healthcare reform and system integration. The part of Canada where I am working has regional health authorities; despite statutory authority, these must compete for loyalty with local hospitals and their boards. One regional health authority convened a large group of constituents to form working teams and collectively form a vision of the system of the future. As part of this effort, they held a meeting of local boards of healthcare agencies, including hospitals, community health centers, and other care providers. The meeting began with a description of the project, with exhortations to seize upon this rare opportunity to create a new design for the healthcare system. As well, the “governors” were asked to think beyond their local interests, to think in terms of the larger system.

After the opening presentation, the facilitators opened the meeting for comments. Three in particular stood out for me:

• Two Board members from the Aboriginal community: “We have had a Board of Boards for the past two or three years. It is working for us. For instance, last year one of our Boards took a 10% cut in programs to maintain a different program that we agreed was in the best interest of the community.” (Comment A)

• A local hospital Board member: “I have to be candid: my primary interest is my local hospital and community. I know there is a system as well, but for me it is secondary.” (Comment B)

• From the Board Chair of a hospice: “I think we need to learn from those who are better than we are at working as a community (turning toward the first speakers); we need to work better together; and if we do, I am convinced we will save money.” (Comment C)

For me, these comments highlighted the perennial tension between self-interest and the common good. In this case, the tension is played out in terms of “system” interests versus “local” interests. The comment from the Aboriginal Board members is striking because they have already been working together as a Board of Boards; they have already sacrificed local interests for the common good; and they proclaim that this is working for them. This is occurring at a time when others, like the second speaker, believe that local interests are paramount and that upholding them is the right thing to do.

How do we decide which approach works best? It depends on our goals. If we want to optimize the whole, and we believe that will serve the best interest of the parts, Comment A is the way to go. If we want to optimize the parts, and we assume that will lead to optimization of the whole, Comment B makes sense. If we think that some combination of both makes sense, Comment C marks the way forward. However, this is only the beginning of the discussion. The first and second speakers agree that we need to include both parts (local Boards) and wholes (the “system”) in our analysis. If we have intermediaries whose job is to connect, align, and even integrate the two (such as a Board of Boards), the task of the intermediary is to determine when, and how, to honor the interests of each.

Are there rules of thumb? If we are to create a better healthcare system, what principles of governance should we follow? Various approaches to systems thinking and system dynamics contend that we need to “bring the whole system into the room” and create solutions that place system priorities first, local priorities second. The governors in the meeting we are discussing were advised to follow this path. The cultural and political tendency in Canada and the U.S., though, is to place more faith in the individual, local option as a path to breakthrough. The assumption is that invention does not happen collectively, and that major initiatives often start from the efforts of a few individuals. Even apart from the politics of representing one’s local constituency, there is a strongly held view that the best, smartest way to solve tough problems is to rely on talented individuals to come up with solutions.

Does the third speaker pose a synthesis? He implies that both local and system interests are valid and need to be reconciled. Further, he says that if we do include both in our solutions, “we will save money,” thus fulfilling pragmatic criteria for solutions. But the weight of the majority tradition, history, and culture in Canada tilts toward the views of the second speaker: when in doubt, trust individual, local solutions. This is why the Aboriginal response is striking: it comes from a different tradition and belief system, one that places the parts in service to the whole. When I asked my contact from the regional health authority how many governors would agree with the first statement, she said “Ten per cent.” We estimated that at least 60% would agree with the third statement.

If the systems theorists are right, this poses a significant challenge for us. Many of our most complex, difficult, and compelling problems appear to be systemic in nature: healthcare redesign, environmental improvement, and reform of our financial systems. In dealing with these problems, the systems view goes, we must follow the lead of our Aboriginal neighbors. However, our dominant culture (at least in Canada and the U.S.) suggests the opposite: when in doubt, support individual solutions and trust that the system will be improved as a consequence. How do we test these models? How do we recognize systemic problems as such, and use appropriate methods to solve them? How do we align the principles of emerging system sciences with our political, cultural, and social systems? It may be that our fate rests in the balance.

Author: Tom Bigda-Peyton

Improving Safety By Taking More Risks: Lessons from High Consequence Industries

Can safety be compromised by taking too few risks?  That’s the surprising finding of our work at Safety Across High-Consequence Industries (SAHI), a conference and collaborative network I have been a part of since 2004 as a researcher and conference steering committee member.  This interdisciplinary community is enriched by diverse viewpoints like those represented within the Center for Adaptive Solutions (CAS), which is why we’ve chosen Safety Across High-Consequence Industries as our theme for the CAS blog this month.

Safety Across High-Consequence Industries

SAHI was founded by Jeff Brown of Klein Associates and Manoj Patankar of St. Louis University in 2002 to transfer knowledge of safety improvement theory and practice across industries, for instance from aviation to healthcare.  In addition to a conference, there is a SAHI learning network, a Federal Aviation Administration-funded Center for Aviation Safety Research at St. Louis University, and a body of scholarship, including our comparative review of 13 safety cultures across the nuclear power, aviation, chemical, pharmaceutical, and construction industries,[i] and our forthcoming safety culture book.[ii]

SAHI Defined

So, what do we mean by Safety Across High Consequence Industries?

* “Safety” refers primarily to consumer safety (i.e. patients in healthcare; passengers in aviation) rather than workplace safety (as in the enforcement of OSHA regulations and preventing accidents in the workplace).  Other definitions of “safety” include psychological safety, an issue which emerges in research on the relationship between safety and risk that I will describe shortly.

* “High-consequence industries” refers to industries in which accidents can be catastrophic, causing loss of life (aviation, chemical, nuclear, healthcare), disruption to society (oil and rail), and risks or threats to consumer safety (healthcare, food production).  Other industries where accidents can have disastrous results include financial services and government.  So far, though, we have focused primarily on aviation and healthcare.

* “Across” means we emphasize systemic solutions to safety problems: solutions with high leverage across multiple areas of an enterprise, network, and social system, and across industries.  We also focus on how best to transfer knowledge, experience, and practice from one sector (or area, or discipline) to another.

* Finally, we focus on socio-technical solutions to safety problems.  For example, improving consumer safety by improving both the “technical” system (such as software platforms and information exchange through technology) and the “social” system (through training, learning, and development programs for workers).

A Central Problem:  Transfer and Adoption of Best Practices

At first glance, the transfer and adoption of best practices would seem to be a relatively straightforward issue.  After all, who wouldn’t want to follow “best practices”?  It turns out that there are many impediments to the adoption, spread, and wider dissemination of practices which have been shown to improve results, even “evidence-based” best practices within one sector (or discipline), let alone across sectors.

In healthcare, “bench”-to-“bedside” adoption of best practices typically takes about 14 years to reach the mainstream (e.g., Peter Angood, Joint Commission, SAHI Advisory Group meeting, September 2007).  In interviews my colleagues and I conducted with seven of Canada’s leading health researchers and practitioners in 2008, widely acknowledged best practices (such as the Ottawa Ankle Rules) had achieved only 30% penetration, and this at the home institutions where the practices were developed and tested!

Unless we are willing to live with a 14-year lag time and 30% adoption rates in healthcare, we must understand what accounts for this and what we can do about it.

An Enterprise-Wide Safety Improvement Roadmap

Over the last ten years, my colleagues and I have developed a roadmap for enterprise-wide safety improvement and have tested it against case studies from high-consequence industries in aviation, healthcare, nuclear, chemical, and oil.  We have found that accident levels in these industries are correlated with safety culture, organizational effectiveness and efficiency, and overall performance.

We found a continuum from Normal to Reliable to Highly Reliable to Ultrasafe safety performance, which also correlates with other predictable outcomes, such as financial results.  This scale coincides with the defect-free rates associated with 2, 4, 6, and 7-9 Sigma performance levels, and with safety culture stages that we call Secretive, Blame, Reporting, and Just Culture.[iii]

View of Risk

* Avoid risk at all cost by reducing variability in practice and enforcing compliance with standard procedures and protocols.

* Understand and learn from risk by differentiating between productive and unproductive variations in practice (3 Sigma), making these distinctions explicit and reviewing them together (4 Sigma), and testing and refining findings to date (5 Sigma).

* Systematize and embed learning about risk by formally incorporating this learning into ongoing processes of incident review, and continuously refining, testing, and learning.  (6 Sigma)

* Accept risk as normal and expected; anticipate risk and use it to innovate by maintaining the overall “safety envelope” (no one is harmed by trials of new practices and procedures).
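The defect-free rates behind these Sigma levels come directly from the normal distribution.  As a minimal sketch (not part of the SAHI roadmap itself), the following Python snippet computes the long-term defect-free rate for each level, assuming the conventional 1.5-sigma shift used in Six Sigma practice; the function name and shift parameter are illustrative:

```python
from math import erf, sqrt

def defect_free_rate(sigma_level, shift=1.5):
    """Approximate long-term defect-free rate for a given Sigma level,
    using the conventional 1.5-sigma shift from Six Sigma practice."""
    z = sigma_level - shift
    # Standard normal CDF: fraction of outcomes within the spec limit
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

for level in (2, 3, 4, 5, 6):
    dpmo = (1.0 - defect_free_rate(level)) * 1_000_000
    print(f"{level} Sigma: {defect_free_rate(level):.5%} defect-free "
          f"(~{dpmo:,.0f} defects per million opportunities)")
```

Under this convention, 3 Sigma corresponds to roughly 66,800 defects per million opportunities, while 6 Sigma corresponds to about 3.4, which is why the jump from Reliable to Highly Reliable performance is so demanding.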

At the lower levels of performance (and safety culture maturity), the emphasis is on reducing variability in practice and enforcing compliance with standard procedures and protocols.  In this environment, “risk” is to be avoided at all cost.  At the upper levels of performance, though, risk is accepted as normal and expected; it is even invited in the service of anticipation and innovation, provided that the overall “safety envelope” is maintained (that is, no one is harmed by trials of new practices and procedures).  This is the world of resilience engineering as defined by Erik Hollnagel and his colleagues, in which we examine what goes well and what is normal in an organization, as well as what goes wrong, in order to be better prepared to deal with surprising, risky situations when (not if) they come.

In the transition from a Reliable to a Highly Reliable enterprise (or system), it turns out that it is necessary to take more risk, not less.  This is counterintuitive to many leaders, managers, and safety officers of high-consequence industries (such as healthcare) who are striving mightily to achieve and maintain Reliable performance levels.

In order to reach Reliable performance levels, safety leaders must rely on compliance, standardization, and rules which apply to everyone (with no exceptions).  But in order to achieve system-level safety performance at 6-9 Sigma, it is essential to develop two additional competencies throughout the organization:

* The ability to take more risk in the service of discovering and adapting new and more effective practices and adapting to dynamic uncertainty; and

* Situational awareness, or the ability to distinguish when to take less risk, and when to take more, without compromising the enterprise safety envelope.

The Key to Improving Safety:  Link it to Quality

The key to accomplishing this transition is linking the “safety” agenda to the “quality” agenda.  In brief, this means that at about the 3 Sigma level, enterprise managers should start telling the organization’s influencers to differentiate between productive and unproductive variations in practice.

At 4 Sigma, they should begin making these distinctions explicit and review them together.  At 5 Sigma, they should test and refine their findings to date.  At 6 Sigma, they need to formally embed this distinction in ongoing processes of incident review, refinement, testing, and learning.

Improving Safety Requires Taking On New Kinds of Risks

Navigating this kind of transition requires enterprise managers themselves to take new kinds of risks.  For instance, in the SAHI circle we have been looking at ways of transferring knowledge and experience of Crew Resource Management (CRM) from aviation to healthcare.  Early adopters have included national surgical leaders who have been active in the SAHI conference and learning network.

CRM is a form of team training and development which has been widely used with pilots and co-pilots and (to a lesser extent) air traffic control in the aviation sector.  To date, it has also been used with promising early results to improve communication in healthcare (see Nemeth, 2008), including hospitals in Boston and Pennsylvania.[iv]

These demonstration projects have required the surgical and hospital leaders to take several risks: introducing the idea of cross-disciplinary “team” training in environments where this has not been done before; asking physicians to make changes in practice (such as schedule changes to accommodate the needs of the wider team); and persisting with these changes over 12-18 months, despite the lack of compelling quantitative evidence that the changes are working.

The Biggest Risk: Shifting Ingrained Mind Sets and Behaviors

Perhaps the biggest risk in both the Boston and Pennsylvania cases was starting the process itself, which required the surgical lead to pull the team together and make sense of problems and opportunities which might motivate the team to make improvements:

“I remember sitting in the room and, as we all talked, I realized that everyone had a valid gripe; everyone in the room had valid concerns.  The real issue was we didn’t have a way for the team to function with multiple personnel substitutions during the procedure…this was the root of conflict within the team; as participants verbalized their frustrations they recognized that the problems experienced by each role were interrelated.” (“Safety Culture in Aviation and Healthcare”, by Patankar, Brown, Sabin, and Bigda-Peyton, ch. 6, in press).

This is the human side of transferring “best practices” across high-consequence industries, such as aviation and healthcare.  To move ahead with such projects, leaders must take more risk, not less.

This runs counter to their deeply-learned habits and assumptions, such as “in order to be safe, do not take risks.”  Instead, they must adopt a different view: “in order to be safe, accept that risk is normal and to be expected.  Welcome it, understand the difference between productive and unproductive risk-taking, and conduct real-time experiments and rehearsals in which the chances of harm are low and the chances of team and organizational learning are high.”

Conclusion: Shift from Avoiding Risk to Anticipating, Responding to, and Learning from Risk

Shifting from a focus on Reliability to an emphasis on Resilience paves the way for a wider shift in the team, organization, or enterprise to higher levels of safety performance under greater degrees of stress, turbulence, and dynamic uncertainty.

[i] Patankar, M. S., Bigda-Peyton, T., Sabin, E., Brown, J., & Kelly, T. (2005).  A Comparative Review of Safety Cultures.  Federal Aviation Administration, ATO-P Human Factors Research and Engineering Division.  (Grant No. 5-G-009)

[ii] Patankar, M. S., Brown, J. P., Sabin, E. J., & Bigda-Peyton, T. G. (In press).  Safety Culture: Building and Sustaining a Cultural Change in Aviation.

Author: Tom Bigda-Peyton
Photo courtesy of U.S. Army

Oh What a Story Your Story Can Tell

In my last blog post, I said that the best way to solve a problem is not to focus on the problem. Instead, find an example of a successful solution to that problem, or a similar problem. Describe the solution in the form of a story or anecdote, review the story for clues to the solution, apply the clues to your problem as an action experiment, and see what happens. Finally, use what you learn to change your solution until you have solved the problem.

It’s Not Just A Story
Why does this work? We tend to think of stories as “just” stories or anecdotes, not as valid data containing causal information about how things work. But in fact, stories are personal narratives about action within a particular context or situation. Personal narratives are a form of qualitative data (words, images, meanings, and behavior, as opposed to numbers). These narratives reveal perceptions (beliefs, attitudes, opinions, feelings) about what’s going on in the situation being described. Narratives or stories also reveal the causal relationships, or logic-in-use, between beliefs, actions, and the consequences of those beliefs and actions.

Stories Reveal Patterns of Practice
When understood as a form of qualitative data about causal relationships between beliefs, actions, and their consequences, stories can be systematically gathered and analyzed to understand the practices people use to do their work. The kinds of practices you can learn about include typical practices, whether they are “best” practices, “average” practices, or “worst” practices. Stories about rare events—whether rare failures or breakthrough successes—are another valuable source of information about practices.

Stories Show the Logic-in-Use of Action
Why are stories worth treating as data? We know from much research that a lot of what people “know” is tacit or implicit. People “know” a lot more than they can put into words explicitly. Stories are a way to draw someone’s tacit knowledge out, and make it visible. Not what they think they did, but what they actually did, as revealed by the logic-in-use of the story. So the next time you want to understand what someone knows, ask them to tell you a story about it. Ask “Can you think of a time when…” or “Can you think of an example of that?” In my next blog post, I will share more about ways to gather and analyze stories as data.

Author: Clarissa Sawyer

Want to Solve a Problem? Tell a Story.

Most people try to solve a problem by focusing on the problem and analyzing it to discover root causes.  The best way to solve a problem is not to focus on the problem.   Instead, find a time when you, or someone else, successfully solved a problem that is similar to the one you want to solve.  Tell the story of that success.  Then ask, “What do I notice, what stands out?”   Next ask, “Given what I notice, what does this suggest about what I could or should do?”   Record your answers to these questions.  Finally, use what you’ve uncovered from examining the success example to design an action experiment.  Take the ideas about what you could or should do and try them out on the problem you wish to solve.

How It Works: Solving a Problem at the FAA

I used this method with a team of maintenance technicians in the Federal Aviation Administration. The team needed to train and certify several new, inexperienced technicians.  The new technicians had been through extensive classroom training the year before, and had received some on-the-job training, but they still weren’t trained on some of the more complex radar and navigational equipment, or certified to  maintain it.  This meant they couldn’t help with certain kinds of equipment, and work was backing up.

The problem:  How to get the new technicians trained, which would take time, while keeping up with already backed-up maintenance?

Tell A Success Story

Instead of focusing on this problem, I asked the team to tell me about a time when they were successful in getting someone trained, in spite of a demanding work load.  It took a few minutes for the team to recall an example, but eventually someone did.  After telling the story, which I recorded on a flip chart, I asked the team, “What do you notice?  What stands out?”

Review the Story for Clues

They pointed out that the training process was a mini-apprenticeship.  A senior technician would take the new technician under his wing and show him the ropes, bit by bit.  First, the new technician would spend time watching the senior technician perform maintenance on the equipment.  The senior technician would explain what he was doing and why, and the new technician would ask questions.  Next, when the senior technician and the new technician felt ready, the new technician would try doing some of the maintenance while being watched and coached by the senior technician.  As the new technician became more confident over several weeks or months, the senior technician would give less and less coaching.  After several months, the new technician was usually ready to take the certifying exam, given by a technician from a different maintenance team.  When the new technician passed the certifying exam, they were given responsibility for maintaining that type of equipment.

The Role of the Supervisor

The team noticed another element in the success story: their supervisor.  The supervisor secured permission from air traffic control to release the radar equipment for training purposes, taking it off-line from managing air traffic.  This took advance coordination and planning.  Sometimes, due to weather or outages elsewhere in the air traffic system, the equipment release was rescheduled at the last minute.  Another thing the supervisor did was make sure that training sessions were not postponed or interrupted by other, competing demands.  Getting the equipment released for training was a big deal, so it was important to take advantage of the scheduled time.

Apply the Clues to the Problem and Do an Experiment

With the elements of the success story now made visible, I suggested that the team apply the same elements to their current situation and design an action experiment.  Together, we identified who needed training and who would oversee and provide training, estimated how long the training period would take, and decided when we would meet again to review the results of the experiment.  I agreed to tell the supervisor what we had come up with, and to get the supervisor’s commitment to arrange the equipment release and protect the training sessions from interruptions or delays.

Review the Results

We met again 8 weeks later to review the experiment.  Results?  Success!  The training sequence had gone as planned, with support from the senior technicians and the supervisor.  The new technicians had their certifying exams scheduled, and soon after that, they passed their exams and were given responsibility for maintaining that specific equipment, with backup support from the senior technicians.

Author: Clarissa Sawyer