Student Learning Objective (SLO) Examples and Processes around the U.S.
NOVEMBER 2013
By Liz Barkowski
During the summer, Nandita Gawade and I, both researchers at the Value-Added Research Center, presented at the South Carolina Association of School Administrators’ Innovative Ideas Institute in Myrtle Beach, South Carolina. The Institute provided professional development for school administrators, arming them with tools and resources to help improve their schools.
Our presentations focused on new evaluation systems, which states and school districts have developed as part of ESEA waiver applications and state legislation around educator accountability. Similar to many other states, South Carolina’s ESEA waiver requires the state to implement a new evaluation system for teachers, part of which will be based on student performance. While the state is only in its pilot phase, it is still important for all school districts to understand the ways in which teachers might be evaluated based on student performance outcomes.
To that end, I presented “Student Learning Objectives: One Option for Measuring Growth in Non-Tested Grades and Subjects.” The presentation defined and provided a brief overview of the Student Learning Objective (SLO) process. In my research across the U.S., I found that over 30 states have passed laws to change their educator evaluation systems, many placing “significant emphasis” on student growth compared to past educator evaluations. However, No Child Left Behind mandated annual testing only in reading and math in grades 3-8 and once in high school, so many teachers lack the standardized tests used to calculate student growth in tested grades and subjects.
SLOs are one process by which states and districts measure growth for these teachers. SLOs, as defined in my presentation, are “detailed, measurable goals for student academic growth to be achieved in a specific period of time based on prior student learning data, and developed collaboratively by educators and their supervisors.” SLOs can give teachers the autonomy to develop their own growth goals for students; however, growth targets set through this process may lack the rigor, reliability, and validity that other measures demonstrate. The presentation highlights some of the difficulties in using the SLO process as part of educator evaluation, from ensuring that goals are comparable in rigor across teachers and schools to maintaining consistency in the scoring process used to determine whether an educator has met his or her SLO. As states continue to revamp their educator evaluation systems, SLOs will remain a topic of discussion as one way to measure student growth for teachers not covered by standardized tests. See the presentation on the VARC web site for more details.
In addition to my presentation on SLOs, Nandita Gawade presented “Value-Added: A Brief Overview and Implications for Educator Evaluation” at the conference. Her presentation provided basic information on value-added models and described how these models attempt to isolate the impact of teachers and principals on student learning. She then described how such models could be used as a measure of student growth within educator evaluation systems.