What Evaluations Measure: Part II

October 17, 2013

"If you can’t prove what you want to prove, demonstrate something else and pretend that they are the same thing. In the daze that follows the collision of statistics with the human mind, hardly anybody will notice the difference." 

-D. Huff (1954)

To a great extent, this is what the academy does with student evaluations of teaching effectiveness. We don’t measure teaching effectiveness.  We measure what students say, and pretend it’s the same thing. We dress up the responses by taking averages to one or two decimal places, and call it a day.

But what is effective teaching? Presumably, it has something to do with learning.  An effective teacher is skillful at creating conditions that are conducive to learning. What is to be learned varies by discipline and by course: It might be a combination of facts, skills, understanding, ways of thinking, habits of mind, a maturing of perspective, or something else.  Regardless, some learning will happen no matter what the instructor does. Some students will not learn much no matter what the instructor does. How can we tell how much the instructor helped or hindered learning in a particular class? 

What can we measure?

Measuring learning is not simple: Course grades and exam scores are poor proxies, because courses and exams can be easy or hard.[1] If exams were set by someone other than the instructor—as they are in some universities—we might be able to use exam scores to measure learning.[2] But that’s not how our university works, and there would still be a risk of “teaching to the test.”

Performance in follow-on courses and career success may be better measures of learning, but time must pass to make such measurements, and it is difficult to track students over time. Moreover, relying on long-term performance measures can complicate causal inference.  How much of someone’s career success can be attributed to a single course?

There is a large literature on student teaching evaluations. Most of the research addresses reliability: Do different students give the same instructor similar marks?[3] Would the same student give the same instructor a similar mark at a different time, e.g., a year after the course ends?[4]

These questions have little to do with whether the evaluations measure effectiveness.  A hundred bathroom scales might all report your weight to be the same. That doesn’t mean the readings are accurate measures of your height (or even your weight, for that matter).

Moreover, inter-rater reliability strikes us as an odd thing to worry about, in part because it’s easy to report the full distribution of student ratings—as we advocated in part I of this blog. Scatter matters, and it can be measured in situ in every course.

Observational Studies v. Randomized Experiments

Most of the research on student teaching evaluations is based on observational studies. Students take whatever courses they choose from whomever they choose.  The researchers watch and report.  In the entire history of Science, there are few observational studies that justify inferences about causes.[5]

In general, to infer causal relationships (e.g., to determine whether effective teaching generally leads to positive student teaching evaluations) requires a controlled, randomized experiment rather than an observational study.  In a controlled, randomized experiment, individuals are assigned to groups at random; the groups get different treatments; the outcomes are compared across groups to test whether the treatments have different effects and to estimate the sizes of those differences. 

“Random” is not the same as “haphazard.” In a randomized experiment, the experimenter deliberately uses a blind, non-discretionary chance mechanism to assign individuals to treatment groups.  Randomization tends to mix individuals across groups in a balanced way. Differences in outcomes among the groups then can be attributed to a combination of chance and differences in the treatments.  The contribution of chance to those differences can be taken into account rigorously, allowing scientific inferences about the effects of the treatments. Absent randomization, differences among the groups other than the treatment can be confounded with the effect of the treatment, and there is generally no way to tell how much the confounding contributes to the observed differences.[6]

For instance, suppose that some students choose which section of a course to take by finding the professor reputed to be the most lenient grader. Those students might then rate that professor highly for meeting their expectations of an “easy A.”  If those students perform similar research to decide which section of a sequel course to take, they are likely to get good (but easy) grades in that course as well.  This would tend to “prove” that the high ratings the first professor received were justified, because students who take the class from him or her tend to do well in the sequel.

The best way to reduce confounding is to assign students at random to sections of the first and second courses.  This will tend to mix students with different abilities and from easy and hard sections of the prequel across sections of the sequel. Such randomization isn’t possible at Berkeley: We let students choose their own sections (within the constraints of enrollment limits). 
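Although we cannot run that experiment here, a toy simulation can make the contrast concrete. The sketch below uses entirely invented numbers (a latent student “ability,” plus assumed learning gains for a hypothetical lenient grader and a hypothetical demanding one); it is not drawn from any study cited here. Under self-selection, weaker students gravitate toward the lenient section, so a comparison of follow-on exam scores confounds ability with teaching; under random assignment, the same comparison recovers the true difference in learning gains.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                        # hypothetical number of students

# Latent ability (unobserved by the analyst) and invented learning gains.
ability = rng.normal(0, 1, n)
TRUE_GAIN = {"lenient": 0.2, "demanding": 0.8}    # assumed, for illustration only

def sequel_score(section, ability, rng):
    """Follow-on exam score: ability + learning gain + noise."""
    gain = np.where(section == "demanding",
                    TRUE_GAIN["demanding"], TRUE_GAIN["lenient"])
    return ability + gain + rng.normal(0, 0.5, len(ability))

def demanding_minus_lenient(section, score):
    """Observed gap in mean follow-on exam scores between the two sections."""
    return score[section == "demanding"].mean() - score[section == "lenient"].mean()

# Observational regime: weaker students tend to choose the lenient grader.
p_lenient = 1 / (1 + np.exp(2 * ability))          # low ability -> high probability
self_selected = np.where(rng.random(n) < p_lenient, "lenient", "demanding")

# Experimental regime: a blind coin flip assigns each student to a section.
randomized = np.where(rng.random(n) < 0.5, "lenient", "demanding")

print("true difference in learning gains: ",
      round(TRUE_GAIN["demanding"] - TRUE_GAIN["lenient"], 2))
print("estimate under self-selection:     ",
      round(demanding_minus_lenient(self_selected,
                                    sequel_score(self_selected, ability, rng)), 2))
print("estimate under random assignment:  ",
      round(demanding_minus_lenient(randomized,
                                    sequel_score(randomized, ability, rng)), 2))
```

The particular numbers are made up; the point is that the self-selected comparison can be off by more than the effect it is trying to measure, while the randomized comparison is not.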

However, this experiment has been done elsewhere: at the U.S. Air Force Academy[7] and at Bocconi University in Milan, Italy.[8]

These studies confirm the common belief that good teachers can get bad evaluations: Teaching effectiveness, as measured by subsequent performance and career success, is negatively associated with student teaching evaluations. While one should be cautious in generalizing the conclusions because the two student populations might not be representative of students at large (or at least of Berkeley students), these are by far the best studies we know of. They are the only controlled, randomized experiments; they are from different continents and cultures; and their findings are concordant.

What do student teaching evaluations measure?

There is evidence that student teaching evaluations are reliable, in the sense that students generally agree.[9]  But homogeneity of ratings is an odd thing to focus on.  We think it would be a truly rare instructor who was equally effective (or equally ineffective) at facilitating learning across a spectrum of students with different backgrounds, preparation, skills, dispositions, maturity, and ‘learning styles.’ That in itself suggests that if ratings are indeed extremely consistent, as various studies assert, then perhaps ratings measure something other than teaching effectiveness.  If a laboratory instrument always gives the same reading when its inputs vary substantially, it’s probably broken.

If evaluations don’t measure teaching effectiveness, what do they measure? While we do not vouch for the methodology in any of the studies cited below, their conclusions indicate that there is conflicting evidence and little consensus:

●      student teaching evaluation scores are highly correlated with students’ grade expectations[10]

●      effectiveness scores and enjoyment scores are related[11]

●      students’ ratings of instructors can be predicted from the students’ reaction to 30 seconds of silent video of the instructor: first impressions may dictate end-of-course evaluation scores, and physical attractiveness matters[12]

●      the genders and ethnicities of the instructor and student matter, as does the age of the instructor[13]

Worthington (2002, p.13) also makes the troubling claim, “the questions in student evaluations of teaching concerning curriculum design, subject aims and objectives, and overall teaching performance appear most influenced by variables that are unrelated to effective teaching.” We as a campus hang our hats on just such a question about overall teaching performance.

What are student evaluations of teaching good for?

Students are arguably in the best position to judge certain aspects of teaching that contribute to effectiveness, such as clarity, pace, legibility, and audibility.  We can use surveys to get a picture of these things; of course, the statistical issues raised in part I of this blog still matter (esp. response rates, inappropriate use of averages, false numerical precision, and scatter).
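As a concrete illustration of that kind of reporting, here is a minimal sketch that tabulates invented 7-point ratings for one imaginary section, showing the response rate and the full distribution rather than a lone average carried to two decimal places. The ratings and the enrollment figure are made up for the example.

```python
from collections import Counter

# Invented 7-point ratings for one hypothetical section (not real data).
ratings = [7, 6, 6, 7, 3, 5, 7, 2, 6, 4, 6, 7, 1, 5, 6]
enrolled = 40                      # assumed enrollment for the example

counts = Counter(ratings)
response_rate = len(ratings) / enrolled

print(f"responses: {len(ratings)} of {enrolled} enrolled ({response_rate:.0%})")
print("distribution of ratings (1 = lowest, 7 = highest):")
for score in range(1, 8):
    k = counts.get(score, 0)
    print(f"  {score}: {'#' * k}  ({k})")

# A single mean reported to two decimals (here, about 5.20) would hide the scatter above.
```

Nothing here requires special software; the point is that the distribution and the response rate carry the information that a two-decimal average throws away.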

Trouble ensues when we ask students to rate teaching effectiveness per se. On the whole, students then answer a rather different set of questions from those they are asked, regardless of their intentions.  Calling the result a measure of teaching effectiveness does not make it so, any more than you can make a bathroom scale measure height by relabeling its dial “height.” Calculating precise averages of “height” measurements made with 100 different scales would not help.  And comparing two individuals’ average “height” measurements would not reveal who was in fact taller.

Summary

●      Teaching effectiveness ratings might be consistent across students; this can be assessed in every class in every semester. But consistency is a red herring. The real question is whether ratings measure instructors’ ability to facilitate learning, not whether all students rate an instructor similarly. Does better teaching earn better ratings?

●      Controlled, randomized experiments are the gold standard for reliable inference about cause and effect. The only controlled randomized experiments on student teaching evaluations have found that student evaluations of teaching effectiveness are negatively associated with direct measures of effectiveness: Evaluations do not seem to measure teaching effectiveness. There are only two such experiments, so caution is in order, but they do suggest that better teaching causes students to give worse ratings, at least in some circumstances.

●      Student teaching evaluations may be influenced by factors that have nothing to do with effectiveness, such as the gender, ethnicity, and attractiveness of the instructor.  Students seem to make snap judgments about instructors that have nothing to do with teaching effectiveness, and to rely on those judgments when asked about teaching effectiveness.

●      The survey questions apparently most influenced by extraneous factors are exactly of the form we ask on campus: overall teaching effectiveness.

●      Treating student ratings of overall teaching effectiveness as if they measured teaching effectiveness is misleading:  Relabeling a package does not change its contents.

We think student teaching evaluations—especially student comments—contain information useful for assessing and improving teaching.  But they need to be used cautiously and appropriately as part of a comprehensive review.

It’s time for Berkeley to revisit the wisdom of asking students to rate the overall teaching effectiveness of instructors, of considering those ratings to be a measure of actual teaching effectiveness, of reporting the ratings numerically and computing and comparing averages, and of relying on those averages for high-stakes decisions such as merit cases and promotions.

In the third installment of this blog, we discuss a pilot conducted in the Department of Statistics in 2012–2013 to augment student teaching evaluations with other sources of information. The additional sources still do not measure effectiveness directly, but they complement student teaching evaluations and provide formative feedback and touchstones.  We believe that the combination paints a more complete picture of teaching and will promote better teaching in the long run.

[1] According to Beleche, Fairris & Marks (2012), “It is not clear that higher course grades necessarily reflect more learning. The positive association between grades and course evaluations may also reflect initial student ability and preferences, instructor grading leniency, or even a favorable meeting time, all of which may translate into higher grades and greater student satisfaction with the course, but not necessarily to greater learning” (p. 1).

[2] See, e.g., http://xkcd.com/135/

[3] See, e.g., Ahmadi et al., 2001; Braskamp and Ory, 1994; Centra, 2003; Ory, 2001; Wachtel, 1998; Marsh and Roche, 1997.

[4]  See, e.g., Braskamp and Ory, 1994; Centra, 1993; Marsh, 2007; Marsh and Dunkin, 1992; Overall and Marsh, 1980.

[5] A notable exception is John Snow’s research on the cause of cholera; his study amounts to a “natural experiment.” See http://www.stat.berkeley.edu/~stark/SticiGui/Text/experiments.htm#cholera for a discussion.

[6] See, e.g., http://xkcd.com/552/

[7] Carrell and West, 2008.

[8] Braga, Paccagnella, and Pellizzari, 2011.

[9] Braskamp and Ory, 1994; Centra, 1993; Marsh, 2007; Marsh and Dunkin, 1992; Overall and Marsh, 1980.

[10] Marsh and Cooper, 1980; Short et al., 2008; Worthington, 2002.

[11] In a pilot of online course evaluations in the Department of Statistics in fall 2012, among the 1486 students who rated the instructor’s overall effectiveness and their enjoyment of the course on a 7-point scale, the correlation between instructor effectiveness and course enjoyment was 0.75, and the correlation between course effectiveness and course enjoyment was 0.8.

[12] Ambady and Rosenthal, 1993.

[13] Anderson and Miller, 1997; Basow, 1995; Cramer and Alexitch, 2000; Marsh and Dunkin, 1992; Wachtel, 1998; Weinberg et al., 2007; Worthington, 2002.

References

Ahmadi, M., Helms, M.M., & Raiszadeh, F. (2001). Business students’ perceptions of faculty evaluations. The International Journal of Educational Management, 15(1), 12–22.

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431.

Anderson, K., & Miller, E. D. (1997). Gender and student evaluations of teaching. PS: Political Science and Politics, 30(2), 216-219.

Basow, S.A. (1995). Student evaluations of college professors: When gender matters. Journal of Educational Psychology, 87(4), 656-665.

Beleche, T., Fairris, D., & Marks, M. (2012). Do course evaluations truly reflect student learning? Evidence from an objectively graded post-test. Economics of Education Review, 31(5), 709-719.

Braga, M., Paccagnella, M., & Pellizzari, M. (2011). Evaluating students' evaluations of professors. Bank of Italy Temi di Discussione (Working Paper) No. 825.

Braskamp, L.A., & Ory, J.C. (1994). Assessing Faculty Work: Enhancing Individual and Institutional Performance. San Francisco: Jossey-Bass.

Carrell, S. E., & West, J. E. (2008). Does professor quality matter? Evidence from random assignment of students to professors (No. w14081). National Bureau of Economic Research.

Centra, J.A. (1993). Reflective faculty evaluation: Enhancing teaching and determining faculty effectiveness. San Francisco: Jossey-Bass.

Centra, J. A. (2003). Will teachers receive higher student evaluations by giving higher grades and less coursework? Research in Higher Education, 44(5), 495-518.

Cramer, K.M. & Alexitch, L.R. (2000). Student evaluations of college professors: identifying sources of bias. Canadian Journal of Higher Education, 30(2), 143-64.

Huff, D. (1954). How To Lie With Statistics, New York: W.W. Norton.

Lowman, J. (1984). Mastering the techniques of teaching. San Francisco: Jossey-Bass.

Marsh, H.W. (2007). Students’ evaluations of university teaching: Dimensionality, reliability, validity, potential biases and usefulness. In R. P. Perry & J. C. Smart (Eds.), The Scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 319-383). Dordrecht, The Netherlands: Springer.

Marsh, H.W., & Cooper, T. (1980). Prior subject interest, students’ evaluations, and instructional effectiveness. Paper presented at the annual meeting of the American Educational Research Association.

Marsh, H.W., & Dunkin, M. J. (1992). Students’ evaluations of university teaching: A multidimensional perspective. In J. C. Smart (Ed.), Higher education: Handbook of theory and research, Vol. 8. New York: Agathon Press.

Marsh, H.W., & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness effective. American Psychologist, 52, 1187-1197.

Ory, J.C. (2001). Faculty thoughts and concerns about student ratings. In K.G. Lewis (ed.), Techniques and strategies for interpreting student evaluations [Special issue]. New Directions for Teaching and Learning, 87, 3-15.

Overall, J. U., & Marsh, H. W. (1980). Students’ evaluations of instruction: A longitudinal study of their stability. Journal of Educational Psychology, 72, 321-325.

Short, H., Boyle, R., Braithwaite, R., Brookes, M., Mustard, J., & Saundage, D. (2008). A comparison of student evaluation of teaching with student performance. In OZCOTS 2008: Proceedings of the 6th Australian Conference on Teaching Statistics (pp. 1-10).

Wachtel, H.K. (1998). Student evaluation of college teaching effectiveness: A brief review. Assessment & Evaluation in Higher Education, 23(2), 191-211.

Weinberg, B. A., Fleisher, B. M., & Hashimoto, M. (2007). Evaluating methods for evaluating instruction: The case of higher education (NBER Working Paper No. 12844). Retrieved August 5, 2013, from http://www.nber.org/papers/w12844

Worthington, A.C. (2002). The impact of student perceptions and characteristics on teaching evaluations: A case study in finance education. Assessment and Evaluation in Higher Education, 27(1), 49–64.