Podcasts

Root Cause Analysis

This podcast episode features Andrew Henry and Beth Steenwyk discussing root cause analysis in special education.

Note: This podcast is the first in a five-part series on how state education officials can improve special education outcomes by focusing on Results-Driven Accountability.

Transcript

INTRODUCTION:

Title 20 of the United States Code: Congress finds the following:
(1) Disability is a natural part of the human experience and in no way diminishes the right of individuals to participate in or contribute to society. Improving educational results for children with disabilities is an essential element of our national policy of ensuring equality of opportunity, full participation, independent living, and economic self-sufficiency for individuals with disabilities.

ANDREW: Good morning, and welcome to the first in our series of podcasts on how state education officials can improve special education outcomes by focusing on Results-Driven Accountability. My name is Andrew Henry, and I am the founder of Red Cedar Solutions Group, creators of Stepwell, a web-based platform that helps drive continuous improvement in special education with automated best practices and visibility into special education systems, processes, and results.

Over the next six months, we will be exploring the implementation of special education programs. Our goal is to provide you with real stories and share some of the collective experiences of special education administrators, parents, students, and others around the system. Our hope is to humanize that system, and to help you think about your own systems. But first, let’s step back for a few moments and think about what brought us here in the first place…

BETH: You know Andrew, when I think of the special education system I don’t think of it in terms of SPP indicators or data points; I think of it in terms of a specialized purpose and a specialized vision, and also the privilege of educating learners who have complex needs. From an educator’s standpoint, I see that as a prime challenge, and I see working with students with disabilities as a professional privilege. To do anything less than that diminishes our profession.

ANDREW: That’s Beth Steenwyk. Beth is a former teacher, state special education administrator, innovator, consultant, and relentless pursuer of improvement. When I asked her, “What do you see as the key characteristics of a quality special education system?” here’s what she told me:

BETH: You know, in general education administration, we have a lot of direction on how to define quality. We spend a TON of time focusing on educational leadership, curriculum aligned to standards, educational performance of students, data-driven continuous improvement, and evidence-based practices. Whether you liked or hated the No Child Left Behind Act (since replaced by the Every Student Succeeds Act, or ESSA), it was instrumental in prompting a systematic review of what was happening in your district and buildings, and in helping us pay attention to the picture that review painted.

In special education we’ve been somewhat adjunct to those conversations—maybe for good reason, because kids with disabilities pose some unique learning challenges that aren’t as easy to understand or capture systematically. Many educators would tell you that they can’t take the time to investigate and focus on implementing improvement because our measures of success are heavily weighted towards compliance and the never-ending stream of paperwork and reporting that goes along with it.

So what are the pieces of a healthy special education system? In my opinion they’re no different from general ed’s. They are:

  • Strong leadership;
  • High-quality instructional practices and implementation of evidence-based practices;
  • Ongoing professional development, technical assistance, and coaching;
  • Data analysis, monitoring, and evaluation for the purposes of continuous improvement.

In order to move from compliance to results, we need to place a greater focus on the underlying issues that impede our ability to deliver quality services across these domains. We need to use monitoring as a tool, a window into our ongoing program and system performance, rather than a checklist we discard when the monitoring activity is finished. Only then can we tell the story of how we’ve done, where we have challenges, and how we can improve. This is how you get to Results-Driven Accountability.

ANDREW: Results-Driven Accountability requires more than just a shift in thinking. It requires a new approach to how state education departments and their local implementing agencies collect and analyze student information. The process must go well beyond the “what” that is being measured and intentionally, and sometimes uncomfortably, ask the question, “Why did this happen?”

BETH: You know, as adults we forget that we create the systems that produce the trends that produce the results we get for our programs. Often those results are not what we want. Sometimes we have a tendency to blame those results on a myriad of actors—you can figure out who those actors are in your neck of the woods—yet in my experience, there are a lot of things well within our control to change and influence without assigning blame. Committing to looking in the mirror, and honestly and regularly assessing what we see, is critical. After 42 years in this business I still have a lot to learn.

ANDREW: Beth’s talking about conducting a root cause analysis. It’s the most important element of any improvement planning process: gathering information, studying it, and making a plan based on underlying issues instead of jumping to conclusions based on presumed causes or preconceived notions. Put simply, it’s the act of being curious, and of examining and challenging what is within your ability to change. I asked Beth to share some practical experience with using root cause analysis to change institutional behaviors.

BETH: You know Andrew, that concept of root cause analysis is really just about digging deeper and asking that “why” question until we get to the bottom of the issue. You know, I had an experience with a district that had a dress code requiring kids to wear uniforms. This happened to be a Title I building, and ostensibly the purpose behind the dress code was to relieve the pressure on families who couldn’t keep up with the unending expense of fashionable clothing.

Unfortunately, the district underestimated the kids’ desire to end-run the dress code. Kids would come to school not following the guidelines and then be sent to the office and processed for violations. Infractions were always identified in the first- and second-hour classes, which also happened to be when the school scheduled its core curriculum, like math and ELA. Well, it didn’t take long for academic performance to drop and discipline issues to go off the chart (because of course, on the way to the office, the kids were lingering, fighting, skipping out, being kids). So when they contacted me, they had concluded that their positive behavior intervention and support systems weren’t working. I wasn’t so sure, so we sat down and looked at the data. If we had relied on only that data, it would have been a reasonable conclusion. But then we started digging a little deeper, asking: “Where are all these discipline referrals coming from? Why are kids going to the office in the first place?” Well, as we dug, it became a little clearer—it was the dress code. We took this a step further and asked ourselves, “So what are our beliefs about the dress code? What are our beliefs about the kids? Are there ways we can address violations without making office referrals?” What we concluded was that as adults we were creating our own problems with the dress code! While the purpose was laudable, we had reinforced a system that was in turn reinforcing an unfortunate perception that the kids were discipline problems.

ANDREW: So you might assume that this story ends with the district dropping the dress code. But that’s not what happened. After consideration and debate, they determined that the core intent was still reasonable and benefited many families. Instead, they changed their procedural approach to morning transition time and began greeting kids at the door. In this way, they were able to support the kids as they began their day, and they diverted students who did not meet the dress code to a uniform closet. The additional supervision also helped kids get to class on time. The school was able to dramatically reduce its discipline referrals, and its math and ELA scores went up significantly because kids were in class and ready to learn.

One of the difficulties of Results-Driven Accountability is that we’re often measuring our success based on what happened a year or two ago. It’s difficult to shift our thinking from looking backward to looking forward, but it’s absolutely essential in order to learn to identify trends in data. Often, we are dealing with what is above the waterline—what we can see. What we need to do is go below the line and uncover why the system is producing these trends. Sometimes this requires more than just reviewing available data. It might require qualitative inputs and probing questions to help determine whether our perceptions, as administrators, of what kids can and can’t do and should and shouldn’t do are in keeping with cultural norms, and whether changes need to be made to strengthen, update, or eliminate policies, practices, and procedures that fail to produce quality outcomes.

BETH: You know Andrew, that process of root cause analysis—I talk about it as the iceberg method: let’s see the whole iceberg below the waterline—is something I’ve learned in the consulting work I do. Entering a situation like that school district I was describing teaches you how to get under the waterline. I used that experience recently with another leadership team, whom I consider to be very innovative and willing to tackle any problem that comes along. They had a particular challenge with their kindergarten and first-grade kids: the kids were acting out, struggling with resiliency and self-control, and showing an inability to navigate problems and problem-solve. By the end of the first semester, everybody was fatigued. They had so many office discipline referrals it was making their heads spin.

We took a look at these referrals and asked questions like, “When are the discipline issues coming up?” and “Where in the building are they coming up?” It turns out the majority were occurring during transition times—specifically first thing in the morning and again at lunchtime—and typically in the early part of the semester. This particular building used a pod system very successfully, but that pod system requires many transitions throughout the day. While this was fine for the older kids who had gotten used to the system, some of these transitions—hallways and common areas in particular—were very challenging and overwhelming for the younger kids.

The next question we asked was, “How are we implementing our positive behavior intervention and support systems?” We reviewed their PBIS practices and learned that while they did a very good job at the beginning of the year of explicitly teaching, in all parts of the building, what was expected of kids, that instruction fell by the wayside as the year progressed and wasn’t consistently reinforced. In addition, the reinforcement system for the kids was not implemented consistently. The result: mad chaos and super-fatigued adults. So we brainstormed ways to improve outcomes for kids while working within the building’s infrastructure. We agreed that specials teachers, whenever possible, would come to the classroom to walk the kids down to specials. We talked about eating lunch in the classroom during the first few weeks of school, or using older kids as “lunchtime mentors” to the younger kids. It ended up being a win-win for everybody.
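The digging Beth describes boils down to tallying incident records along a few dimensions (when, where, for whom) and looking for concentrations. For listeners who work with referral data directly, here is a minimal sketch of that tallying in Python; the records and field names are made up for illustration and are not from any district mentioned in the episode:

```python
from collections import Counter

# Hypothetical office-referral records; fields are illustrative only.
referrals = [
    {"grade": "K", "period": "morning arrival", "location": "hallway"},
    {"grade": "1", "period": "lunch", "location": "cafeteria"},
    {"grade": "K", "period": "morning arrival", "location": "hallway"},
    {"grade": "1", "period": "morning arrival", "location": "common area"},
    {"grade": "K", "period": "lunch", "location": "hallway"},
]

# Ask the "when" and "where" questions by counting each dimension.
by_period = Counter(r["period"] for r in referrals)
by_location = Counter(r["location"] for r in referrals)

# In this toy data, transitions and hallways dominate the counts.
print(by_period.most_common())
print(by_location.most_common())
```

The counts themselves are only the "above the waterline" view; the point of the exercise is that a concentration (here, morning transitions in hallways) prompts the next "why" question rather than ending the analysis.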

ANDREW: So, to recap today’s thinking: Compliance is a small piece of the quality pie, but one that can be used to shed light on what we know and to inspire our curiosity about why we are seeing what we are seeing. We can use tools—data systems and software like those we produce here at Red Cedar—to give us that window and to help drive the questions behind our root cause analysis. We can bolster our data with experiential information, like surveys and inputs about beliefs and norms, to help assess whether policies, practices, and procedures are supporting or impeding performance. And we can use this information to drive improvements that lead to strengthening leadership, curriculum, professional preparation, and student performance.

BETH: Hey Andrew?

ANDREW: Yes Beth?

BETH: Let’s not forget that while we don’t control what students come to us with, we are capable of rising to challenges with positive responses. From my perspective that’s what being an educator is all about, and it’s a privilege to serve our communities in this endeavor.

ANDREW: And there you have it for today. Thanks for joining and be sure to listen to our discussion on Empowering Teams to Evaluate Special Education Data. I’m Andrew Henry and this podcast and content were created by Red Cedar Solutions Group, developers of Stepwell, a software solution that empowers users to collaborate for results.