A few weeks ago I was eating in my favorite Ann Arbor pizza joint (New York Pizza Depot), when a friend of mine walked by the front window. Elaine peered in and decided to have some pizza herself. We started chatting about work and, despite trying to avoid the details, I got talking about assessment.
As I finished my elevator pitch of what we do for assessment, she said, “ugh, so you must all be evaluated based on the results of the assessment?” The feeling I had must have been how Jimi Hendrix felt when he realized that, as a lefty, he could just string a righty guitar upside down. Until that point I had shied away from connecting our assessment effort with individual advisors, on the grounds that our assessment results could not accurately reflect the efforts of an individual advisor. But in that moment, I simply could not muster any justification for not separating certain assessment results by advisor.
Pausing for a moment, I don’t want to get into a discussion on the appropriate evaluation of advisors. My short response to that is: no, advisor performance should not solely be measured by a few surveys for which we’re ecstatic if we receive a 30% return rate. The evaluation ought to balance a handful of factors (and perhaps use more reliable data).
It hit me that we’re using our assessment to inform our delivery of information. Are students unsure what the Bulletin is? Let’s discuss it at orientation a bit more, or perhaps we’ll refer to it when we’re in our advising appointments or responding to e-mail. But there’s more to higher education than supplying information. Much of the value of advising comes out of our conversations with students. Since we each have our own style for advising, wouldn’t it be helpful to know if Steve’s students are getting involved at a higher rate, or if they’re more likely to find a position in the first few months after graduation? And isn’t that a great opportunity for us to have a conversation with Steve about how he approaches his student conversations, allowing us to reflect on our own practice?
My sense is that many of us, on some level, are afraid that our conversations with students are not always helpful and certainly not transformative. Helping a student sculpt a meaningful path for the near and far future is not at all like tutoring him for a math exam. It’s not uncommon for a student to leave my office while I’m thinking, “Did I help that student at all?” Couple that insecurity with the fact that the outcomes we shoot for are often complicated, and you’ve created an environment that pushes assessment to the periphery.
If you’re reading this post (especially if you made it this far), you probably believe higher education is more than a set of information loaded into the minds of our students — that higher education can help individuals see themselves and the world in a more complex way. Central to this transformation are the conversations students have with faculty and staff. Though difficult to assess, it’s important that we improve the effectiveness of those conversations. While the assessment won’t spit out a number quantifying the quality of our conversations, it does give us a good opportunity to reflect and re-tool.