When the Bush Foundation launched its partnership with 14 higher education institutions to change the way teachers are recruited, prepared, placed and supported, it knew many types of reform would be called for, including changes to the way schools assess teacher effectiveness.
That’s why the Foundation has partnered with the Value-Added Research Center at the University of Wisconsin-Madison, where until recently I was the associate director. The Foundation asked VARC to develop an assessment system that looked beyond defining success as coaching a student through a particular test. Instead, it asked the VARC team to identify the value a teacher adds in ensuring that a student learns a year’s worth of knowledge in a year (the Foundation’s definition of an “effective teacher”), regardless of other school- and home-based challenges that may affect the student.
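To make that definition concrete, here is a deliberately simplified sketch of the basic idea behind a value-added estimate: predict each student's year-end score from the prior-year score, then average how far each teacher's students land above or below that prediction. The teacher names and score values are invented for illustration; VARC's actual models are far more elaborate, adjusting for many student- and school-level factors.

```python
# Hypothetical sketch of a simple value-added calculation (NOT VARC's actual model).
# Step 1: fit a least-squares line predicting year-end scores from prior scores.
# Step 2: average each teacher's student residuals (actual - predicted).
from collections import defaultdict

# (teacher, prior_score, actual_score) -- illustrative data only
records = [
    ("A", 200, 212), ("A", 215, 226), ("A", 190, 205),
    ("B", 205, 210), ("B", 220, 224), ("B", 195, 199),
]

xs = [prior for _, prior, _ in records]
ys = [actual for _, _, actual in records]
n = len(records)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares slope and intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

# Collect each teacher's residuals: how much students beat (or missed) the prediction
residuals = defaultdict(list)
for teacher, prior, actual in records:
    predicted = intercept + slope * prior
    residuals[teacher].append(actual - predicted)

value_added = {t: sum(rs) / len(rs) for t, rs in residuals.items()}
# A positive estimate means a teacher's students grew more than students
# with similar starting scores typically did.
```

Because the comparison is against predicted growth rather than raw scores, a teacher whose students start behind can still show strong value-added, which is the sense in which the measure controls for where students begin the year.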
We knew we didn’t need to reinvent the wheel. Many districts in the states the NExT program serves already administer assessments, including NWEA’s Measures of Academic Progress (MAP) and the DACS Performance Series (in South Dakota). That didn’t mean, however, that all the problems inherent in how teachers and districts use the results of those assessments had been solved.
A good case in point is the October 2011 report by Lorrie A. Shepard and colleagues, How Middle School Mathematics Teachers Use Interim and Benchmark Assessment Data. It looks at how teachers use the results of formative assessments—mid-year tests that are designed to help teachers improve their instruction methods and thereby improve their students’ ability to grasp and apply concepts (not just pass year-end assessments).
Some of the study results are troubling to those of us working to provide teachers and school leaders with assessment data that really makes a difference in their ability to educate all children. The study said, "While teachers' uses of assessment information varied, few gained substantive insights about students' mathematical understanding. Instead, teachers most frequently retaught standards or items with the lowest scores and focused on procedural competence."
The authors also noted that teachers could have used these mid-year assessments both to give students at a high level of mastery a deeper understanding and to provide additional support to students who struggle. Instead, teachers who participated in the study reported that they tended to focus on improving procedures and strategies for taking the year-end test.
Most school districts that implement formative testing do so with the intention that they’ll end up with data that will help teachers change their instruction methods to become more effective by better understanding individual student needs. But, like the authors of this study, I have found in my own work very few instances in which teachers actually use the data to engage with students more deeply.
One factor that may contribute to this outcome is that a prime purpose of providing value-added feedback on student performance on the mid-year assessments in this study was to measure progress toward the year-end assessments, or what VARC staffers call “high-stakes assessment goals,” even though that’s not what the assessments were ideally suited for. It’s easy to understand how linking an assessment tool meant to give one type of feedback with an outcome for which the tool was not specifically designed can detract from the effectiveness of the assessment for its intended use.
The good news is that these study results confirm we have an opportunity to reform not only how we assess teachers but also how we support them in using what they learn from such assessments to make their instruction more effective.
As a result, researchers at VARC are working with MAP data from multiple states to help NExT partners address some of these challenges around data use. Soon VARC staff will be providing NExT partners with sample reports of value-added metrics built from formative assessments such as MAP. Although the value-added results we’ll be providing still link the mid-year assessment to the high-stakes, year-end test, we believe these refinements can help both teacher-preparation institutions and school district staff think through how best to leverage the results to improve student-teaching experiences and mentoring for new teachers in their first years on the job.
Rather than providing data in a vacuum or only for such narrow accountability as passing a year-end test, this value-added feedback is intended to be immediately relevant to the practice of teaching, although whether it is ultimately used that way will depend on school leadership. While using a value-added method on the formative assessment will allow for a fairer comparison of teacher and student performance, there is nothing inherent in that use that guarantees any more sophisticated use of the test data.
Formative assessments could provide important feedback to the many actors engaged in making the NExT program work, but Shepard and her colleagues remind us that it will take dedicated effort to make that possibility a reality.
Chris Thorn is director of the Center for Data Quality and Systems Innovation at the University of Wisconsin-Madison, a new research group focusing on how multiple sources of educational effectiveness (including value-added measures) can be combined and used to make better decisions for adults and children.
You can read more about the challenge of how to use data in Using Student Achievement Data to Support Instructional Decision Making, a report from the National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences.