Podcasts

Evaluating Data

This podcast episode features Andrew Henry and Dr. Jacque Thompson discussing data evaluation in special education.

Note: This podcast is the second in a five-part series on how state education officials can improve special education outcomes by focusing on Results-Driven Accountability.

Transcript

INTRODUCTION:

20 United States Code § 1400(c): Congress finds the following:
(1) Disability is a natural part of the human experience and in no way diminishes the right of individuals to participate in or contribute to society. Improving educational results for children with disabilities is an essential element of our national policy of ensuring equality of opportunity, full participation, independent living, and economic self-sufficiency for individuals with disabilities.

ANDREW: Good morning and welcome to the second installment in our series of podcasts on how state education officials can improve special education outcomes by focusing on Results-Driven Accountability. My name is Andrew Henry, and I am founder of Red Cedar Solutions Group and the creator of Stepwell, a web-based platform that helps drive continuous improvement in special education with automated best practices and visibility into special education systems, processes, and results.

In our last podcast, we talked about quality special education systems. We discussed compliance—and the fact that although it is a small piece of the quality pie, it is an important one that can be used to shed light on what we currently know and can inspire our curiosity about why we are seeing what we are seeing. We also discussed tools—like the data systems and software we produce here at RCSG that help give us that window and help us formulate the right questions to ask when conducting activities like root cause analysis. We also discussed the importance of experiential information: surveys and inputs about beliefs and norms that help assess whether policies, practices, and procedures are supporting or impeding performance. My friend and colleague Beth Steenwyk gave us some excellent examples of how the quest for improvement can lead to meaningful change, strengthened leadership, and improved student performance.

Today, we are going to delve into how to streamline the process of data collection and allow special education administrators to focus more on data evaluation.

Joining me today is former Michigan State Special Education Director and NASDSE past president, Dr. Jacque Thompson. Jacque has had a long and distinguished career in education, including early intervention, K-12 special education, and special education program review consultation with districts across the country. I tracked Jacque down last week to talk with her about her tenure in Michigan and what inspired her to invest in data-driven decision-making tools while she was at the helm in Lansing. Jacque, when you first became state director, what motivated you to start using data to improve the system?

JACQUE: Thinking back to the resources we had at our disposal, I think I was struck by the fact that we had data, but nobody really paid much attention to it. Collecting it was something people had to do to meet a deadline and check a box. What we needed was a way to help them interpret their data. We needed easier, more user-friendly ways to visualize data and make it real, in order to get people to pay attention to it.

ANDREW: It doesn’t surprise me to hear this. At the time that you are referring to, I was director of Michigan’s Center for Educational Performance and Information (CEPI). We were wrestling with the same problem in general education. We had a lot of data and a lot of demands from the field for information, but very few tools or resources to help us make that data useful to the people who were responsible for making changes based on what it said.

JACQUE: In special ed, our approach was to develop what we referred to as a ‘data portrait’. It aggregated local data based on a number of priorities we assigned, like suspension/expulsion, disproportionality, and child find. This was before the first State Performance Plan requirements, or even the invention of KPIs. I remember the first time we presented these data portraits at a summer conference of special education directors. I was new to my position—and I was a woman—and frankly, a number of people were remarkably up-front about their opinion that I didn’t belong there.

I can remember one of my colleagues from a local ISD who said to me, “You know, Jacque, I am actually impressed with the work that you’ve done, but, oh, you and your ‘datas’…” As if I had somehow conjured the whole data portrait thing up! But that was EXACTLY WHY I did what I did, because in Michigan we had evaluated our whole system based on the notion that we were very good compared to other states. That we had better regulations, more robust teacher prep standards, and better procedures. But in reality, we didn’t back it up with data. We were just riding a wave of self-assessment and, maybe, self-aggrandizement.

So, the move to data-driven decision making was an effort to be introspective. To allow us the opportunity to reflect, but to back it up with data. Some people really “glommed” onto it. In particular, I’m thinking about one colleague who went so far as to corral and engage her peers in presenting about how and what their local districts did with their “datas” and the knowledge they acquired through the improvement process. She became a leader in turning compliance into improvement and making meaningful change as a result of what was learned from data reporting.

ANDREW: Jacque, could you talk a little about the ‘Aha’ moment you had with root cause analysis from a state perspective? What did this do for people on a statewide scale?

JACQUE: I remember the first time I showcased a trend in autism eligibility and realized that there had been a huge increase over the past decade—people couldn’t believe it. They were floored! Similarly, the first time we handed out those Data Portraits to the field, again, in a large audience setting, we planned for a pause from the audience—because the data included, among other things, graduation and drop-out data for special education students. What I didn’t expect was an audible reaction—or gasp from the crowd… which we got… it was… revealing. This was because, among other things, no one had ever considered whether their data was submitted accurately or inaccurately—and whether it reflected their actual program performance. We started collectively asking ourselves: ‘Why?’ Why are we seeing what we are seeing, or not seeing? Again, you can operate on opinion and jump to conclusions, but you can’t address practices or system design if you don’t dig into and really look at your data.

ANDREW: Do you think that compiling and presenting data helped you to foster better conversations with your locals? We would hope that it would lead to collaboration between programs.

JACQUE: So, there’s this old story about the chicken and the pig—the chicken cooperates with breakfast by providing the egg; the pig collaborates with breakfast by supplying the bacon. I think it is a good metaphor for how our locals responded to the availability of data and the information it provided. Some acted as cooperators, some as collaborators.

We used to theorize that there were three stages of data-driven decision making. Stage (1) is data grieving. It is where you are presented with your data and you take that <gasp> moment to realize it doesn’t really align with what you believed to be true about your program. You might refute the ‘source’ of your data, or the calculation of your data, and at this stage you might try to ‘appeal’ your data. It’s where you come to the realization that the data is in fact what you entered or submitted. Stage (2) is a form of acceptance about your data. You either come to terms with the fact that you can’t change your data due to reporting timeframes—or that the data you submitted was indeed accurate, yours, and a reflection of your program. Stage (3) is where you become resolute and responsive to your data; you accept it for what it is and make a plan to make sure that whatever caused the data to be what it was in the first place never happens again.

Getting through the stages of this process requires collaboration. In order to change, someone has to give something up: change a process, compromise on a policy, or reflect on a procedure. It often requires that a member of the team, and by team I mean everyone from the data collector all the way to the District Superintendent, approach the process with the ability to see something new—even if it challenges their own point of view, requires them to be more empathetic, or asks them to give something up to effect change and produce better solutions. Some programs are more willing and more able to be this introspective.

At the State Education Agency level, we were hyper-cognizant of the need to be that introspective about our own processes. Once we started reviewing data, we implemented a statewide data referent group—to provide input for us on the data collection process, the accuracy of the data collected, and the opportunities and limitations of its use. We had to be willing to be introspective if we were asking our locals to do the same.

ANDREW: Clearly, quality reporting and planning for improvement require collaboration. Data is just data. It’s neutral information until it is given context, and it is the context that is terrifically important. Training data stakeholders to be data evaluators, to really explore why we give certain data weight or use it to confirm or refute a position, is critical to program improvement.

I’m thinking back to the observation of my colleague Beth Steenwyk in our first podcast. She was adamant that we need to use data to inspire curiosity—and to let it challenge our preconceived notions and beliefs. She’s said many times that when we use data to “ding” systems instead of helping them examine the data and analyze what we can learn from it, we do a disservice to the entire continuous improvement process. That’s because numbers are only as accurate as the input, and collaboration means we look into the efficacy of our data to understand whether we are all doing our part to be accurate.

So, how do we make our collaborators better data evaluators and better data investigators?

1) We can improve the data collection process so that we have more robust, more accurate, and, most importantly, more up-to-date data at our disposal.

2) We can provide data collection teams with tools that help inspire their curiosity. We can provide them with probe questions, context, and communities of practice that help them understand, interpret, and act constructively on the data they have in front of them.

3) Finally, we can help foster collaboration. Our educational systems and their measures of quality often exist in silos. We can use data to help break down those silos and bridge the most difficult conversations, like:

  • Do we have policies that ensure a systemic approach to high quality leadership for all learners?
  • Do we have policies, practices and procedures that ensure high-quality instructional practices for all learners?
  • Do we have a system of ongoing Professional Development/Technical Assistance/Coaching that is differentiated in its approach and effective for all learners?
  • Do we use data analysis and monitoring for continuous improvement in a way that improves outcomes for all learners?

I’m Andrew Henry and this podcast and content were created by Red Cedar Solutions Group, developers of Stepwell, a software solution that empowers users to collaborate for results.