Survey Results? 5’s All Over The Place

A Friday post?!? Yeah. Who knows how this happened.

A year ago, I created a short program assessment for our Peer Advisor program. The idea: to capture the extent to which our Peer Advisors (PAs) learned what we wanted them to learn. Some of the learning outcomes were related to office function, for example: “How comfortable do you feel answering the phone?” Others were a bit more about whether they’re drinking the advising Kool-Aid (apparently Kool-Aid is spelled with a “K”), things like: “To what extent do you agree that professional development is important to you?” Of course, these were all Likert-scale questions (pronounced LICK-ert. I learned that at a conference once. I’ll tell you the story later).

The results, you ask? 5’s. 5’s all over the place (out of 5). Half of the not-5’s came from one peer advisor who must (…oh god I hope) have chosen 1’s but meant 5’s. What do I make of this? With such a small sample size (we hire 10 PAs per year), a 4.78 and a 4.89 sound near-identical to me, and I’d be hesitant to conclude one is meaningfully different from the other. Deep within me is a fool shouting “this must mean everything is perfect!” My brain says otherwise. Maybe this assessment needs some work.
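To see why the small n makes me nervous, here’s a quick back-of-the-envelope sketch (in Python, with made-up scores, not our actual data) of how far a single respondent can drag a mean when you only have 10 of them:

```python
# Ten hypothetical 1-5 ratings (made up, not our actual survey data).
scores = [5] * 10
print(sum(scores) / len(scores))  # 5.0, everyone picked 5

scores[0] = 4                     # one PA slips to a 4...
print(sum(scores) / len(scores))  # 4.9

scores[0] = 1                     # ...or fat-fingers a 1 when they meant 5
print(sum(scores) / len(scores))  # 4.6
```

One keystroke moves the mean by 0.4 points, which is exactly why I can’t take second-decimal differences seriously at this sample size.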

As a quick side note: I’m a firm believer in the Strongly Agree to Strongly Disagree scale. When you use a standard scale of measurement, you can compare one question to another — which can be useful.

So what’s a man/woman/assessmenteur to do? I have a few ideas:

Avoid the whole situation in the first place. When designing your surveys, consider how students are likely to respond to a given question. If your gut (trust your gut!) tells you almost every student will respond 4 or 5, raise the bar. Look for easy ways to bring down your scores. For example, replace words like “familiar with” and “aware of” with “confident in” and “command of.” If your question is quantitative or involves a frequency, up the numbers: ask about weekly behavior, not monthly.

Continue avoiding the situation in the first place! That is, avoid leading questions. If I’m asked a true-or-false question like “Do you know that Michigan has the winningest softball coach in the country?” well, I don’t think I knew until you asked, and I don’t want to be wrong, so… TRUE!

Lump your responses. Focus on the number of students who select “agree” or “strongly agree.” For example: “90% of students agreed or strongly agreed with statement x.” By lumping scores together, you blur some of the noise created by so many response options, and the data is simpler to look at. You can also check out the negative side. Let’s say you want to know whether students are adjusting socially. You might want to see how many students disagreed or strongly disagreed with the statement “I have made friends on campus.”
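If you want to see the lumping in code, here’s a minimal sketch in Python (the responses are made up, not from our survey):

```python
# Hypothetical 1-5 responses to "I have made friends on campus."
responses = [5, 4, 5, 3, 5, 4, 2, 5, 4, 5]

# Top-2 box: share who agreed (4) or strongly agreed (5).
agreed = sum(1 for r in responses if r >= 4)
print(f"{agreed / len(responses):.0%} agreed or strongly agreed")          # 80%

# Bottom-2 box: share who disagreed (2) or strongly disagreed (1).
disagreed = sum(1 for r in responses if r <= 2)
print(f"{disagreed / len(responses):.0%} disagreed or strongly disagreed") # 10%
```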

Break down your demographics. If you have a large enough sample size, try looking at how the men responded in comparison to the women, or how the history majors responded versus how the engineers responded. While I don’t recommend breaking down the data just for the sake of breaking it down — unless you have bundles of time on your hands — this might yield insights you otherwise would have missed.
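And if your responses live in a spreadsheet, a breakdown like this is only a couple of lines with pandas. A sketch, assuming hypothetical columns like “major” and “q1” (none of this is our actual data):

```python
import pandas as pd

# Made-up survey data; the column names are hypothetical.
df = pd.DataFrame({
    "major":  ["History", "Engineering", "History", "Engineering", "History"],
    "gender": ["F", "M", "M", "F", "F"],
    "q1":     [5, 4, 5, 3, 4],  # 1-5 agreement with statement x
})

# Mean response per group, plus a count so you know how thin each slice is.
print(df.groupby("major")["q1"].agg(["mean", "count"]))
print(df.groupby("gender")["q1"].agg(["mean", "count"]))
```

The count column matters: if a slice has three students in it, treat its mean with the same suspicion as my 4.78.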

ELSEWHERE IN HIGHEREDLAND:
Tax on higher ed endowments? Higher ed funding, you’re going the wrong way!