Re-evaluating student evaluation of teaching

October 31, 2014

Let’s face it: nobody likes student evaluation of teaching (SET). Faculty say it's unfair and that it's biased toward easy assignments, and they get frustrated when they receive more comments back on their shoes than on their pedagogy. Students feel like their feedback is disappearing into a black box, and that their thoughtful comments are ultimately meaningless. Administrators, meanwhile, don’t know how to act on the data they get, and can’t separate the signal from the noise. Academica Group has been working with partners at Centennial College, Conestoga College, and Durham College to design a better evaluation instrument, and we wanted to share a little bit about what we've found out in the process.

Our review of student feedback forms from 15 Ontario colleges revealed that while most evaluations hit on similar themes, there’s little common ground beyond that. Questions were often worded very differently, and we identified more than 20 variations in the response scale being used. What’s more, once the forms are filled out, neither faculty nor administrators are getting the kind of information that they need. There was little in the way of data consolidation, comparison, or trend analysis, making it difficult for professors, chairs, or deans to get a good sense of faculty strengths or weaknesses over time.

[Figure: Student Evaluations Cycle]

This is a big missed opportunity. Although SET is just one component of a more comprehensive teacher evaluation framework, it provides a very useful perspective. For example, SET can help identify ways to improve teaching and provide one form of accountability and quality assurance for students, as well as help ensure that faculty members who are stellar educators are recognized and rewarded for the work they put in.

To provide truly meaningful data, a well-designed teaching evaluation form needs, first of all, to be portable. A strong evaluation can be adopted across different faculties and departments, whether in the sciences or the fine arts. It should be just as useful for traditional, online, and hybrid classrooms, and for full-time and part-time faculty members alike. Only then can administrators and deans get a strong, big-picture sense of their faculty or program’s performance, and only then can faculty members identify their own strengths and weaknesses regardless of the range of courses they might be teaching.

Additionally, rather than passing out paperwork and golf pencils at the end of the term, institutions should look to implement electronic forms. An electronic evaluation form is far less labour-intensive, saving countless hours of data transcription and processing time. Just as importantly, an electronic form can be used to quickly generate on-demand, customizable reports tailored to the user’s needs. These reports can help facilitate constructive feedback, as well as provide data that can be used to identify focus areas for faculty development at the individual level or across entire programs.

With our college partners, we’ve developed a new instrument based on a comprehensive review of the literature, broad consultation across faculty and administration, and extensive instrument development and statistical testing. The instrument includes five core dimensions: 1) organization and clarity, 2) expertise and enthusiasm, 3) rapport, 4) group interaction, and 5) assessment and grading. And stay tuned: we’ve received funding from the Ontario Human Capital Research and Innovation Fund to proceed with the next phase of our research, which will let us refine our instrument, develop benchmarks, and finalize our reporting templates. Academica President Rod Skinkle presented our findings so far at this week’s CIRPA conference in Hamilton, but if you weren’t able to attend, feel free to get in touch to find out more.