When I took on the duties of coordinating research, evaluation, and assessment at the St. Croix River Education District (SCRED), fresh out of graduate school, one of the first tasks I was confronted with was helping our consortium of small, rural Minnesota districts make sense of these new tests (the Minnesota Comprehensive Assessments, a.k.a. MCAs) and this brand new legislation (No Child Left Behind). As part of our work, I suggested that we lead groups of teachers through a “deep dive” into the assessment, dissecting the test by the types of questions asked and the weight given to each strand area, and matching that up with the scope and sequence of their curriculum.
I was working with a group of 3rd-grade teachers on the MCA math test, reviewing how the test included a significant number of questions on predicting the outcomes of spinners, dice, and other “games.” Through this review, we discovered that those topics were well matched to the scope of their curriculum, but two years prior, the teachers had purposefully moved the units covering them to the end of the school year, after testing was over.
They had felt at the time that these topics were less rigorous (more “fun”) and would be less effective at preparing students for the MCA. Needless to say, they promptly moved these units to immediately before the test, and the following year third-grade math test scores jumped considerably.
The students’ overall learning didn’t change. The quality of their education was just as high. The socioeconomic status of the students arriving into the district hadn’t shifted. But suddenly, this school moved from near the middle of the pack to near the top of the state in 3rd-grade math. From an accountability perspective, they were rock stars. What lessons are to be learned from this work, which was playing out in schools across the country?
As the adage goes, “what gets measured gets done,” and in this case, we have an example of a system adapting to the measurement pressures placed upon it. What’s broken about this system is not the act of measurement itself, but the extreme overfocus on summative, accountability-focused assessments that don’t provide timely information to educators.
The legislature recently commissioned a report on our state’s MCAs, and, to paraphrase, it found that we spend a lot of money on these tests, but no one finds them useful. The irony is thick here: you might have said the same thing about this commissioned report. I’ve been working with districts across the state for the past 15 years on topics related to data and testing. I frequently ask large groups at presentations, “How many of you find the MCA useful to help you plan instruction?” I haven’t seen a hand raised yet. I think the collective response by educators to this legislative report has been, “We already knew that!”
My hope is that our state legislature will make informed decisions and listen to what educators already know: Assessments have value, and each has a purpose, but no assessment meets all purposes. The reason the MCA isn’t valuable to educators is that its only purpose is accountability. Accountability isn’t a bad thing, but it isn’t everything.
The cognitive dissonance of policymakers contributed to bad assessment policy: if we spent $50 million on a test, the reasoning went, it must be amazing. Our state education department leaders, under pressure to make these tests as valuable as policymakers thought they were, then made the fatal mistake of trying to make the MCAs a “silver bullet” – useful for every possible purpose. (It’s a formative, it’s a summative, it’s a Supertest!) Now we see the backlash, and assessment as a whole is at risk, as the pendulum swings back.
What we need is policy that understands the need to balance different types of formative assessment (screening, diagnostic, and progress-monitoring assessments) with the need for summative accountability and outcomes assessments. We aren’t overtesting our students; we are overtesting for just one purpose, and not providing enough support to districts for other purposes.
We need a wider range of assessments, and our resources (both money and time) should be redistributed across these purposes. This redistribution requires more flexibility for schools to implement research-based assessments that match their needs, more support to help schools make these choices, and support for using these assessments effectively once they are in place. We can do all of this, but school staff continue to be hamstrung by outdated state-mandated assessments, pushed onto districts that are left to pay the extensive “hidden” costs of administration, coordination, professional development, and lost instructional time.
It’s time for a more sensible approach to testing in our state. Minnesota can continue its reputation as a trailblazer in education by doing what’s right for kids and supporting a balanced approach to assessment – one that focuses on the needs of educators, giving us the tools we need to do what we do best.