Butch says:
Well, I just left a company where up until recently one of the questions on the self-evaluation – and let me stress that this is the evaluation you complete of your own performance – was "would you work with this person again?"
Also my given name is Butch, and a couple of years ago the system rejected my self-evaluation because (I'm quoting) "the word 'butch' may have lesbian connotations." I changed my name to Loretta and submitted it. At least at the corporate level, most of the performance reviews are worse than useless, and I don't have a solution.
jim says:
HS teacher evals are basically useless too….I've received evaluations ranging from glowing to harsh, mainly due to how well I got along with the department head at the time.
Mo says:
Recently read David Graeber's observations on bureaucracy: The Utopia of Rules.
Twice. My Kindle pages now look like I was highlighting on acid.
Whoa, I just used yellow. How about orange for this paragraph. Then do red for the next one, a sort of tequila sunrise effect.
Ixak says:
There is a solution, of course. Embedded ethnographers in every department.
OtherAndrew says:
Why not outcomes-based, but with no one marking their own class? They just provide a marking rubric and the exams to another member of the faculty, or to another member's graduate students.
Yaron says:
Generally you're right, and no argument with almost anything in your post. Except, well, the topic of grades:
The problem is that, well, the students themselves are graded based on those, ahem, grades. Students with higher grades are considered to be "better" students and to have understood the material better. Officially, by the way the whole system works, by the university giving the degrees based on those grades (and deciding who passes or fails based on them and who can or can't carry on to a more advanced degree…), and by any potential future employer that wants to measure academic success/knowledge.
So as long as the system officially holds students with higher grades as having studied/learned/understood better, it's extremely hypocritical to state that teachers who give higher grades (so teachers whose students are officially accepted as ones who did learn better) aren't better teachers.
The criteria of how good someone is as a teacher can't be entirely unrelated to the criteria of how well their students managed to learn from them.
And related:
But that can often be true because the teacher is really better. If the teacher manages to teach the material better, in a way that is easier for me to understand, then I learned better. So I'm both more likely to get a better grade (since I learned better), and more likely to think the teacher is better (because they taught me better).
Though, again, as in the previous point, even if the teacher didn't actually teach any differently, just raised grades, well… If you ask students to rate how well a teacher has taught them, then of course a teacher who managed to get them to get better grades (accepted sign that they learned better) was a better teacher. That's not a problem with the way teachers are rated by students, it's a problem with the way students are rated.
Kaleberg says:
Student evaluations were something new in the 1970s. They were originally just something a bunch of students put together and published, sort of an underground press project. I'm not sure when they became something official, but I doubt it will make a big difference if they go away. It's not as if they made teacher evaluation better or worse with their arrival.
There is never going to be a reliable and fair system for evaluating teaching. At research schools, teaching is job two, so if a professor can bring in research money and produce PhDs, that professor is solid gold.
Evaluation is easier at science and engineering schools. There, teaching to the test actually makes sense. So many courses are cumulative that if a prerequisite isn't taught well, it will cause problems somewhere down the pipe. Here, the skills can be tested fairly objectively since they usually involve specific and measurable knowledge and techniques.
Otherwise, you are stuck, and that means a lot of it is going to be subjective. That means the final decision is based on the choice of perceived stakeholders who might be students, peers, outside evaluators, voters or just people driving along in cars. (Honk if you think Bill is a great teacher.) The only way to make it a bit fairer is to add layers and checks and balances, things like faculty committees, advocacy groups, an appeals process, union representation and so on. It's ugly. It would be far from perfect, but it might be made better than most alternatives.
It is interesting to compare faculty evaluation and administrator evaluation. Maybe it is time to add administrators to that evaluation process. The comic possibilities are endless.
Sixth Column says:
". . .some kind of self-certification system wherein faculty take turns declaring one another Outstanding Teachers." This is how Silicon Valley company boards historically (I've retired) pick their members; the president of Xdyne goes to the president of Spacebook and says, "I'll put you on our board if you'll put me on yours and we'll all give each other raises and get rich!" And aren't they the smartest guys in the room?
blahedo says:
Oh man. I think about this *a lot*—I'm a college professor, got burned by a bad tenure process at one school and eventually got tenure at another. Everything in this post is entirely true.
The thing I'd like to add is that it's hard to even propose any reasonable evaluation because I don't think there's even any broad agreement as to what our job even is! Are we a business, that students (and parents) are paying to come out the other end knowing specific things? Are we a civic institution, that transforms children into critical-thinking adults? Is our primary obligation to the student who wants a credential, to the student who wants to learn how to do things, or to the student who wants to learn how to think? Or to the employers who need trained workers? Or to the employers who need a credentialing and certification system? Or maybe to the general population, who in some abstract way doesn't need any specific person to be well-educated but does need a broad supply of well-educated people?
Most people who have thought about higher education would produce a mix-and-match of stuff on that list as our "mission" (and some other things too), but with verrrrry different priorities—and you can't possibly come up with a legitimate evaluation until you know what you're trying to measure.
Leading Edge Boomer says:
When I began to teach college students (as a PhD student myself), I went to faculty members with earned reputations for good teaching, and asked for guidance. I also emulated the professors I especially appreciated in the classrooms. It is also a fact that there are 100 ways of subtly telling students that you are a good teacher; then at the end of the term they will tell you that. I saw many of my own evaluations over time.
As a department chair I received copies of student evaluation summaries, and as chairs we advised the dean on packages for promotion and tenure. So by then I had seen even more evaluations. My conclusion was that there are only three teaching levels: great teacher, adequate, and failed teacher. (I actually fired untenured faculty members who consistently failed at teaching.) There is a lot of variance depending on the course, the time available to spend in preparation, the sample of the student space in the classroom, etc. In processing grad school applications, I came to the same conclusion about GRE scores, which also exhibit variance when re-taken by the same student.
Rather than relying too much on just one measure, evaluate the entire package.
buster says:
I would think the University's HR department would be full of people who have some ideas on how to do professional assessment…
c'mon… that's funny… there is no HR department…
Safety Man! says:
It's a symptom of a greater problem, namely that as we move to a data-driven culture, we have to shoehorn in metrics for things that don't lend themselves to them. Writing an algorithm for determining the best-quality pineapple is easy. It's far less easy to quantify the best philosophy class, or piece of artwork, or funniest joke, or the best essay on civic responsibility.
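A toy sketch of that contrast, with invented fields and weights rather than any real grading standard: the pineapple score reduces to arithmetic because every input is physically measurable, and nothing analogous exists for a philosophy class.

```python
# Hypothetical pineapple-quality score: every input is measurable,
# so "best" reduces to arithmetic. Fields and weights are made up.
from dataclasses import dataclass

@dataclass
class Pineapple:
    sugar_brix: float        # sweetness, from a refractometer
    firmness_kgf: float      # penetrometer reading
    shell_yellow_pct: float  # how much of the shell has turned

def quality_score(p: Pineapple) -> float:
    """Weighted sum of measurable traits; higher is better."""
    return (0.5 * p.sugar_brix
            - 0.3 * abs(p.firmness_kgf - 1.2)  # penalize over/under-ripe
            + 0.02 * p.shell_yellow_pct)

print(quality_score(Pineapple(14.0, 1.1, 80.0)))  # ~8.57
```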
Don’t get me wrong, data metrics are a superior way of doing things, but they have limitations.
PhatB says:
I think Yaron is assuming that students are objective (or are at least capable of being objective) about teacher effectiveness with no relation to the grade said student received. I think that assumption is bad. Not a lot of students give a shit about "fair" in that context.
I took plenty of classes where I did well grade-wise but rated the teacher poorly. I have never, though, received a grade lower than I thought I deserved and rated the teacher highly.
anotherbozo says:
I remember once researching Ed on Rate Your Professor!, or some such, and discovering what seemed like credible notes from students: that he was (a) funny, sometimes hilarious, and (b) strict and no-nonsense about course requirements. If I'd read those as an undergraduate, I would have pre-enrolled if possible, but been prepared to work.
In my own case, as a one-term-only visiting lecturer at Princeton, with evaluations of absolutely no use one way or the other, I had to absent myself for practically an entire class period (1/13th of the total contact time, I recall) so that I could be properly appraised. Student self-expression for its own sake.
I could see a service to other, prospective students in the case of a friend, probably a decent professor of beginning sculpture, in that there were many complaints about the cost of the materials that she insisted on. Not all her colleagues visited such financial punishment on starving undergraduates, so it might have been good to be forewarned.
Anon Prof says:
Actually, my experience has been that the system does exactly what it is supposed to.
The purpose of a modern university is to milk the students for tuition money.
To do that, you need to keep them happy.
If canceling the final keeps them happy, then mission accomplished. The students will give you an evaluation that accurately reflects the fact that you convinced them to continue giving tuition money to the institution.
I have been repeatedly told, at multiple universities, by my superiors, that if I don't coddle the students they will just go take the equivalent class at the local community college.
Please understand- the system is totally broken. Modern higher education is not even SUPPOSED to teach anything anymore.
I was fired from my last position because my chair told me that my students would be doing online homework, of his choosing. I would be evaluated on how well the students did on his problems. But the problem is that his homework exercises didn't reflect real chemistry. Some of his materials were wrong in ways that are downright dangerous (like claiming that sodium metal is soluble in water.) Oftentimes the problems for the first half of the chapter would contradict the second half of the chapter- i.e. "classify these compounds using this chart," but the chart keeps changing for each problem.
I insisted on teaching real chemistry, and was fired for it.
The Dean insisted that the Chair's chemistry problems reflected real chemistry. Why? Because he didn't write the problems- they were all provided by the publisher. Therefore, as far as he was concerned, they were all accurate.
Please understand that this is not just an isolated incident. At the job before that, I was told that I couldn't contradict what the students had been taught in chemistry class at the local high school- because then the high school would get pissed and not send students our way. Never mind that the local high school was teaching stuff that was flat-out wrong.
Greg says:
Below the college level, you work with some meaningful number of colleagues who have both experience and training as teachers. The university level by and large does not care whether you train as a teacher. My experience in high school teaching was that I had some colleagues who were excellent and thoughtful evaluators. By contrast, a relative taught in a school evaluated under a Gates Foundation grant. The methodology was laughable, to say the least. So I'm with you that peer review is not something universities can get with. Though I think it's something that COULD be of value…in a world other than the one in which we live.
democommie says:
@ Anon Prof:
Excuse me, but you are very cynical. Not cynical enough, but headed in the right direction.
Ed:
I din't never finnisch kolidge but I'm still pretty shur that with allathat recording devices out there its' not two hard to LISTEN to a lecture, as an informed person, and know if it's any good. Relying on the "votes" of indignoramufucks is how we got Trumpligylamygdala–just sayin'.
Robert V Walker-Smith says:
This reminded me of the one upper division English class I took at UC Berkeley. Short version: I went to the professor's office hours a few weeks in and requested to drop, as I really didn't understand what we were doing. He assured me that I really did and should stay.
After the midterm, he apologized quite sincerely. I had been right, he had been wrong. I don't think he had previously encountered a student who was simultaneously highly intelligent, widely read, and ignorant regarding literature.
This was a well respected, tenured senior professor. I really don't fault him; he just wasn't prepared for a student like me. If he had been contemptuous of my ignorance, it would have negatively affected my evaluation.
I did have a lively conversation with my English major friend who had recommended the class.
tech bro says:
Soooo, I mean, if there are *no* better alternatives and this is the best of bad options… the problem is that it's noisy data. Is there a way to get results granular enough to remove the noise?
I would definitely continue to collect the following from the students:
1. Scoring on the professor/class (probably the more granular the better).
2. The student's grade, to account for bias based on a good grade.
3. The demographics of the professor, to account for racism/sexism in how professors are rated.
4. The student's opinion of the professor, i.e., "Did you think the professor was a fine person?"
If we know that generally these are the things that these surveys are telling you, then it feels like you should be able to collect those biases in order to suss out any signal in the noise.
I suspect that the reason the evaluations are terrible is that the people reading the numbers (read: administrators) are ignoring that the evaluations are terrible; none of those problems with the data seem that difficult to overcome.
I even wonder how difficult it would be to come up with a machine learning model that takes in aggregate evaluations for a demographic of professor, and gives out real worthwhile information based on all of the biases that you've collected.
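A minimal sketch of the adjustment being proposed here, run on entirely synthetic data; the covariates, weights, and the idea of treating the regression residual as the de-biased teaching signal are illustrative assumptions, not a validated method.

```python
# Sketch: regress raw evaluation scores on known bias covariates
# (student grade, instructor demographics), then keep the residual
# as the "de-biased" teaching signal. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
grade = rng.uniform(0, 4, n)        # student's course grade, 0-4 scale
female = rng.integers(0, 2, n)      # instructor demographic indicators
intl = rng.integers(0, 2, n)
quality = rng.normal(0, 0.5, n)     # the signal we actually care about
# Synthetic ratings: biased by grade and demographics, plus quality.
score = 3 + 0.4 * grade - 0.3 * female - 0.2 * intl + quality

# Least-squares fit of score on the bias covariates plus an intercept.
X = np.column_stack([np.ones(n), grade, female, intl])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

debiased = score - X @ beta         # ratings minus the modeled biases
print(np.corrcoef(debiased, quality)[0, 1])  # close to 1 in this toy
```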
Anon Prof says:
@tech bro:
Maybe this is unfair of me, but I think the problem with your solution is that it's basically applying fancy math to solve a meaningless question. You talk about the need for more granular data, etc., but what exactly are you looking for? The "quality" of the professor? What does that mean?
Let me give you some more examples from my own experience.
At one of my previous jobs, the Dean would haul me into his office roughly every three weeks to scream at me for an hour that I was doing a terrible job. Why?
Because yet another female student was turning in blank exams in my class, and telling me that she wanted an A for that, and if I didn't give it to her, she'd go to the Dean and he would MAKE me give her an A for a blank exam.
If the administration used your system, they wouldn't get to fuck the students.
Year after year, the Chair would call me into the annual evaluation meeting to review my end of course surveys. He would always tell me, hmmmm, these numbers need to be improved, and if you can't bring them up, we'll have to fire you. Bear in mind that in my first year at that job, he insisted that I have a "faculty mentor" who would teach me the ropes- and he chose someone who ultimately lost her job because the students hated her so much. The students hated me, too, my first semester, because I did what she did- and the Chair held that over my head for five years, always telling me that I was making progress, but still hadn't overcome that rocky start.
At my most recent job, the students hated me for a similar reason. My Chair (who believes that the Earth is 6,000 years old) told me not to prepare labs for any of my classes, because he wanted me to use the labs he would provide- and then he didn't provide them. This wrecked my semester, and my survey results were horrible for the first semester. When I refused to teach his pseudoscience, he used the bad surveys to get me fired- even though my survey results for the second semester were overwhelmingly and unanimously positive, because I wasn't teaching labs then.
At another job, I was fired for being- and this is a direct quote from the Dean- "too intellectual."
If the administration used your system, the administration wouldn't get to fuck over the faculty.
The funny thing is that after I lost that job, some of my students called me up to tell me that I was the only person who even so much as TRIED to teach them anything real. They thanked me for getting them into med school. And a lot of good that did me- because AT THE TIME, they never spoke to the Dean or the Chair.
The fact is that nobody wants the students to learn anything. We have to waste a huge amount of time on "assessment," mapping CLO's to PLO's which were written with Bloom's taxonomy and all that bullshit, but nobody ever does anything with that data. How could they, when fundamentally nobody is trying to teach the students anything real in the first place? If the system functioned at all, effectively all the chemistry textbooks out there would be laughed out of existence. Ditto for college algebra textbooks, from what I've seen. Because the textbooks are bullshit- they're not real chemistry, and not real algebra.
What kind of system would we get out of your analytics? This professor did a great job at teaching the students how to do PV=nRT problems? His students memorized "sodium is soluble in water" for long enough to get the right answer on the online homework, and promptly forgot it?
Anon Prof says:
Let me give another example of textbook bullshit, because most people seem unaware of this problem.
My chemistry students are always totally non-functional with math. They can't even do arithmetic, and yet they've all had a semester of college algebra.
And I thought, Jesus, how hard could it be? They spend an entire semester on this stuff, and can't solve 2x + 4 = 12??? What the hell are they doing in that class?
So I decided to look at some of the textbooks.
I found a page of about 100 problems like this:
Give the range of values for x for which the following expressions are true:
1.) | x + 3 | <= 7
2.) | x + 4 | <= 2
3.) | x - 1 | <= 9
4.) | x + 7 | <= 6
.
.
.
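For reference, every one of these drills reduces to the same mechanical step; a worked instance of the first one:

$$|x+3| \le 7 \;\Longleftrightarrow\; -7 \le x+3 \le 7 \;\Longleftrightarrow\; -10 \le x \le 4$$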
And then I looked later in the textbook, and found this gem:
"The fuel efficiency of a Toyota corrolla is given by the function f(x) = 0.219x^2 + 0.113x + 32.7, where x is in mph and f(x) is mpg. How could a Toyota owner use this formula to save money?"
And then there's the problem that starts from the premise that a coffee shop sells coffee at $2 a cup, and therefore the gross receipts can be calculated using f(x)= -0.003x^2 + 2x. Obviously it should really be f(x) = ($2/cup) x, but they just present this absurd formula, without units, and the students are expected to accept it as true.
Typically textbooks will VERY TECHNICALLY, in the most narrow sense possible, teach something "real." The textbook author can claim that he really is teaching students how to solve these problems. The problem is that none of these problems have any relevance to reality, and some of them are perversely unreal.
Once the students are marched through problems where gross receipts depend on the square of the number of cups of coffee sold, they are unable to function at all in my chemistry class.
LarsMacomb says:
I am finishing up my final grades for students as I set out to give my response here… so all of this is stewing in my mind as I register my agreement with the general cynicism which motivates this thread.
I teach at a private university. It is a very good school. It's not an R1 research institution, but our students place well into graduate programs and professional schools. Having said that, it is also clear that the system of higher education is broken. Educating students is not a leading institutional priority. Keeping the students happy is.
My thoughts are best organized, I believe, by a heuristic that comes from the late Neil Postman: the distinction between enlightenment and entertainment. In the former, the emphasis is on the development of a discipline of thinking based on extensive reading, the learning of critical terms (i.e., learning how to "think like and talk like," say, a sociologist means understanding the language of the sociology field), an awareness of competing perspectives (i.e., developing a critical mind), and the capacity to apply what is learned to emerging problems in the world (which should mean learning appropriate research methods).
The "entertainment" aspect, following Postman, entails the question of whether the student LIKES or DISLIKES the experience of what is going on in the classroom. For Postman, it is a very much a matter of mediated experience. Televisual, cinematic, and, generally mass-mediated experience lacks immediate elements of falsifiability. That is, when a person sees a TV show, she or he assesses whether or not it is any good on the basis of whether they "like" or "dislike" it. Postman was putting his thoughts together in the mid 1980s, when cable TV was becoming standard. It made TV, which had already changed the way in which we thought about the world for decades, universal and ubiquitous.
In 2018, the experience of the world is so heavily mediated that it has seriously disrupted the process of sustained reading and focused thought. As such, EVERYTHING is filtered through the fundamental paradigm of "like" versus "dislike." Postman saw how TV affected the world of politics—transforming questions of ideology and practical policy into questions of likability and personal comfort. It now applies to everything, which, of course, now includes the college classroom.
Do my students "like" my class? Were they entertained while learning? Did they "enjoy" the reading? Would they "like" taking a class from me again? Was it a meaningful "experience" for them? These are, in one way or another, the questions concerning "student perceptions of teaching."
Are they stupid questions? Well, no. They actually do have their place. A professor who is a source of mind-fucking pain for a majority of students in the classroom is a problem, and this instrument makes clear if that person should be let go or, at a minimum, called into the Chair's office and offered some sort of assistance. They also help us recognize teaching excellence. When students are raving about a professor and her or his students are performing well and it's clearly NOT a matter of candy and grade inflation, these evaluations are quite helpful.
The sad reality is that most of the faculty at any university are not at either end of this continuum. They are right in the middle. Some students will connect with them. Some students will, for various reasons, register some "dislikes." What has become especially problematic is that University Administrations are so entirely SOLD on the "student perception of teaching" instrument that they really do not want to look at anything else.
Over ten years ago, during a conversation with my Academic Dean when I was chairing my department, he went on and on about "Rate My Professor dot.com." He acknowledged that it had no real worth in a "scientific" sense (he smirked as he said this) but that it was a very useful "quick and dirty" gauge to see how his faculty was performing in the classroom! Ten years later, my entire university has gone toward a system of online student evaluations that enables comparative assessments of faculty both within and outside of academic departments.
The new regime of Deans, Provosts, Associate Deans, and Associate Provosts are having fun playing with this data like SABRmetricians at a conference of baseball writers. They are encouraging each of us to make comparative and longitudinal assessments of how effective we are from course to course, from semester to semester, etc. Soon, I imagine, they will gain enough low-level statistical know-how to set up WAR percentages and career assessment data.
They like working with this information because it SEEMS SCIENTIFIC. They shrug when reminded that they cannot realistically compare a math professor to an art professor, a theatre professor to a political science professor. They shrug when they are reminded that a professor in a large lecture hall cannot be compared with a professor in a small seminar. "It's data," they proclaim. "You people helped generate it, so, as such, you need to work with it."
They want faculty to set up matrices so that they can present their longitudinal data from year to year, especially when they are going up for tenure and promotion. "It's numbers," they say, "and numbers are objective." For years now, I've begged anyone who will listen to grasp that it is just descriptive data that enables the faculty member to know whether a class was well-received by the student. That's all. It's plenty (for what it is), but that's all.
In my experience, peer evaluation, for all of its problems, serves as the ONLY institutional rejoinder to the student evaluation as a way of evaluating the effectiveness of instruction for a faculty member who finds himself or herself in the "middle"—that is, that broad group of teachers who are neither the university's most exciting nor the university's most dreadful.
Increasingly, what we really need are unions. Scientifically/statistically literate administrators would be nice, but, honestly, we are probably MORE likely to get unions. Which, of course, we are never going to get.
H-Bob says:
I always find it amusing how people whose jobs involve grading/evaluating people (especially teachers, professors and judges) complain so vociferously about the unfairness and arbitrariness of the evaluation process that gets applied to them. Plus the hint/assertion that they should be exempt from any evaluation due to such unfairness and arbitrariness. Those jobs have far more insulation from the evaluation process than most jobs, not to mention the people they're grading/evaluating (students and litigants).
Karl R says:
I am all for being evaluated by the students in my classes, but like most college professors, I find the numbers reported on the ten or twelve Likert-type responses to questions on the evaluation supremely useless. First off, the questions they ask are so generic that they reveal almost nothing about my job of directing the students' understanding. Then there is the whole issue of using the mean of responses based on a Likert scale: the tiny, tiny differences in those means (most college teaching evaluations have small standard deviations) lead to vast differences in the percentile results. Also there's the issue of not correcting for generally high means across comparable classes, so that if you have most of your students giving you 4s and 5s, but several disgruntled students giving you 1s, the 1s have an undue effect on the class mean (they have more power than the 5s).
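A made-up numeric illustration of that last point: three 1s knock half a point off the mean of an otherwise well-rated class, while the median does not move at all.

```python
# Made-up ratings: mostly 4s and 5s, plus three disgruntled 1s.
# The 1s drag the mean down; the median barely notices them.
from statistics import mean, median

ratings = [5] * 12 + [4] * 6
with_ones = ratings + [1, 1, 1]

print(mean(ratings), median(ratings))      # ~4.67  5
print(mean(with_ones), median(with_ones))  # ~4.14  5
```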
I think the real solution would be to ask the students to write narratives and then have a set of agreed-upon criteria that are then used to code their narratives. But for my workplace, that would be a whole lot of work for a situation in which nobody at my institution has gotten any raise (other than one based on cost-of-living adjustments – and I am talking about 3-4% total over seven or eight years). Other than for promotion, why evaluate anybody if there is hardly any money to reward good performance and hiring someone else would cost more money?
Jane Lonely says:
I would like to be evaluated by tracking my students & observing their collective performance in the class for which mine is a prerequisite. But my community college informs me this isn't possible.
April says:
During my college-teaching years I got generally pretty good student reviews, but I remember at one school one of the questions was "How well do you like the facilities?" It just so happened that the chairs in this school were particularly uncomfortable, and everyone HATED them. So there were always a bunch of 1s for that question, which were always factored into my total score. Fortunately my Dean understood I had no control over the comfort of the chairs.
Having watched and evaluated numerous teachers over the years, I find it easy to tell the good from the bad, whatever the discipline. So it would seem to me that the answer is having evals from established profs from different departments be part of the total eval process, but again, who wants to do more work for no extra money?
It's a conundrum.
quixote says:
Student evals are useful for showing whether a prof causes more pain than necessary in the learning process. Other than that, they don't know enough to give an informative evaluation. They're *students*. They're there because they don't know the subject yet. It's a logical impossibility for them to fathom how well their prof is teaching the subject. No matter how much administrators love all the free data they get. GIGO.
The UK used to have a peer review system, maybe they still do, that had one important added wrinkle. Instead of grumpy senior faculty in your own department, reviewers had to come from outside the district in question and not be personally acquainted with the evaluee. They had to be teachers themselves in the relevant discipline but could be retired. And there were two independent ones for each review.
No, it wasn't perfect. It was a lot better than anything I've seen anywhere else. But I can hear administrators' eyes rolling back in their heads at the thought of what it would cost (ZOMG, the cost!) to get real evaluations.
Barbara Piper says:
@Kaleberg: my impression is that student evaluations started getting institutionalized when higher education started adopting the neoliberal, corporate/business model, and student course evaluations were a way of fooling students into thinking that their opinions made a difference. And sometimes they might, at the extremes. I think of them as comparable to Yelp or Amazon reviews. My frustration is with the student who contacts me 10 years later to say that they realized that they got a lot out of the course they had from me. That's great, but it highlights the fact that an evaluation during the last week of class is meaningless.
As for teaching itself, I adopted a strategy years ago of asking students every two weeks to dash off a couple of suggestions on note cards, anonymously, so I can get real-time feedback on the course when it would actually make a difference to the students concerned, or so I can address an issue in class long before a student writes a misguided complaint on the official course evaluation.
democommie says:
"The UK used to have a peer review system,"
That's the problem, in a nutshell. If they do it, why should we–everything else they do in Ol'Blighty and passe europe are wrong–what else is new.
Besides the piers here are made of good ol' MurKKKan white oak and cypress. They don't need no steenkeen reviews!
swampprof says:
Eleventy years in higher education and a lifetime in school taught me re: this topic –
If faculty want to maintain any autonomy and still be able to remove crappy teachers, then the system of triangulation we have now probably works best. Student evals are crap, but in twelve years at directional state, they do capture my weaknesses as a teacher quite well – I always grade papers late, and students always mention that. Otherwise, my evals are fine. I've had faculty observations (pre-tenure), and lots of informal feedback. I've fired (not-rehired) adjuncts with a combination of student evals, observations, and review of their syllabi. As a white male (plus) who is gay (minus), students generally expect me to be authoritative; my female and international colleagues always get lower means than I do, and not because they're less gooder at teachering.
Maybe looking for The Single Solution is the problem here. I Peter Griffin-angry-laugh at any colleague who suggests that higher education is inexorably headed toward the standardized assessments that are brining K-12; that will be the ruin of higher education in the US. The primary problem I've noticed at Directional State is that all of our teaching evals are now online, so I can't hand out paper forms at the end of the semester when everyone is in class; now it's up to them to decide whether to do it. So I usually get less than 50% eval completion, which means it skews toward the ones who didn't like me/the class/their lives. Our chair has flatly stated, and our CBA confirms, that student evals cannot be used (alone) to make any personnel decisions. That seems rational given the limits of the format.