Survey Results? 5’s All Over The Place

A Friday post?!? Yeah. Who knows how this happened.

A year ago, I created a short program assessment for our Peer Advisor program. The idea: to capture the extent to which our Peer Advisors (PAs) learned what we wanted them to learn. Some of the learning outcomes were related to office function, for example: How comfortable do you feel answering the phone? Others were a bit more about whether they are drinking the advising Kool-Aid (apparently Kool-Aid is spelled with a “K”), things like: To what extent do you agree that professional development is important to you? Of course, these were Likert-scale questions (pronounced LICK-ert. I learned that at a conference once. I’ll tell you the story later).

The results you ask? 5’s. 5’s all over the place (out of 5). Half of the not-5’s were the result of 1 peer advisor who must (…oh god I hope) have chosen 1’s but meant 5’s. What do I make of this? With such a small sample size (we hire 10 PAs per year), a 4.78 and a 4.89 sound near-identical to me. I’d certainly be hesitant to conclude that the 4.78 is all that different from the 4.89. Deep within me is a fool shouting this must mean everything is perfect! My brain says otherwise. Maybe this assessment needs some work.
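For the numerically inclined, here’s a minimal sanity check (with made-up responses, since these aren’t our real data) of why a 4.8 and a 4.9 are basically indistinguishable when you only have about 10 respondents:

```python
# Hypothetical 1-5 Likert responses from 10 peer advisors (invented numbers).
import statistics

responses_q1 = [5, 5, 5, 5, 5, 4, 5, 5, 5, 5]   # mean 4.9
responses_q2 = [5, 5, 4, 5, 5, 4, 5, 5, 5, 5]   # mean 4.8

for name, r in [("Q1", responses_q1), ("Q2", responses_q2)]:
    mean = statistics.mean(r)
    sem = statistics.stdev(r) / len(r) ** 0.5    # standard error of the mean
    print(f"{name}: mean = {mean:.2f}, rough 95% interval = ±{1.96 * sem:.2f}")

# The two intervals overlap almost completely, so the 4.8 vs. 4.9 "difference"
# is noise, not a finding.
```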

As a quick side note: I’m a firm believer in the Strongly Agree to Strongly Disagree scale. When you use a standard scale of measurement, you can compare one question to another — which can be useful.

So what’s a man/woman/assessmenteur to do?  I have a few ideas:

Avoid the whole situation in the first place. When designing your surveys, consider how students are likely to respond to a given question. If your gut (trust your gut!) tells you almost every student will respond 4 or 5, raise the bar. Look for easy ways to bring down your scores. For example, replace words like familiar with, aware of, etc. with confident, command of, etc. If your question is quantitative or involves a frequency, up the numbers — for example, weekly not monthly.

Continue avoiding the situation in the first place! That is, avoid leading questions. If I’m asked the true-or-false question “Do you know that Michigan has the winningest softball coach in the country?” Well, I don’t think I knew until you asked, and I don’t want to be wrong, so…. TRUE!
Lump your responses. Focus on the number of students who select “agree” or “strongly agree.” For example, “90% of students agreed or strongly agreed with statement x.” By lumping scores together, you blur some of the noise that’s created with so many response options, and the data is simpler to look at. You can also check out the negative side. Let’s say you want to know whether students are adjusting socially. You might want to see how many students disagreed or strongly disagreed with the statement “I have made friends on campus.”
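If you like seeing the lumping idea in code, here’s a minimal sketch with an invented column name and made-up responses (a 1-5 scale where 4 = agree, 5 = strongly agree):

```python
# "Top-box" lumping of Likert responses; data and column name are hypothetical.
import pandas as pd

df = pd.DataFrame({"made_friends": [5, 4, 5, 3, 5, 2, 4, 5, 1, 4]})

top_box = (df["made_friends"] >= 4).mean() * 100      # agreed or strongly agreed
bottom_box = (df["made_friends"] <= 2).mean() * 100   # disagreed or strongly disagreed
print(f"{top_box:.0f}% agreed or strongly agreed; "
      f"{bottom_box:.0f}% disagreed or strongly disagreed")
```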

Break down your demographics. If you have a large enough sample size, try looking at how the men responded in comparison to the women, or how the history majors responded versus how the engineers responded. While I don’t recommend breaking down the data just for the sake of breaking it down — unless you have bundles of time on your hands — this might yield insights you otherwise would have missed.
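And here’s an equally hedged sketch of a demographic breakdown, again with invented columns and data, using a simple group-by:

```python
# Compare a 1-5 Likert item across demographic groups (all values invented).
import pandas as pd

df = pd.DataFrame({
    "major":        ["History", "Engineering", "History", "Engineering", "History"],
    "gender":       ["F", "M", "M", "F", "F"],
    "made_friends": [5, 3, 4, 4, 5],
})

# Mean and respondent count per group; the count reminds you how thin the slices are.
print(df.groupby("major")["made_friends"].agg(["mean", "count"]))
print(df.groupby("gender")["made_friends"].agg(["mean", "count"]))
```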

ELSEWHERE IN HIGHEREDLAND:
Tax on higher ed endowments? Higher ed funding, you’re going the wrong way!

I collected data! Now what?!

We’re coming to the close of yet another academic year and you did it! You surveyed students, or tracked who did (and didn’t!) visit your office, or measured the student learning outcomes from a program, or whatever we keep preaching about on this blog. But, now what???? If you read any assessment book, at this point there are common next steps that include things like “post-test” and “close the loop” and a bunch of other common (and good!) assessment wisdom. But sometimes that common assessment wisdom isn’t actually helping any of us professionals DO something with all this data. Here are a few things I do with my data before I do something with my data:

  1. Share the data in a staff meeting: Your colleagues may or may not be formally involved in the specific program you assessed but they work with the same students, so they’ll be able to make connections within student learning and to other programs/services that you’re missing. Ask them about the themes they’re seeing (or not seeing!) within the data. It’ll help you clarify the outcomes of your data, bring more people into the assessment efforts in your office (more heads are better than one!), and it’s a nice professional development exercise for the whole team. Teamwork makes the dream work!
  2. Talk to peer colleagues about their version of the same data: Take your data* to a conference, set up a phone date with a colleague at a peer school, or read other schools’ websites. Yes, you’ll likely run into several situations that aren’t directly applicable to yours, but listen for the bits that can inspire action within your own context.
  3. Take your data to the campus experts: Know anyone in Institutional Research? Or the head of a curriculum committee? Or others in these types of roles? These types of people work with the assessment process quite a bit. Perhaps take them to coffee, make a new friend, and get their take.
  4. Show your data* to student staff in your office: Your student staff understand the inner workings of your office AND the student experience, so they’re a perfect cross section of the perspective that will breathe life into the patterns in your data. What do they see? What data patterns would their peers find interesting? What does it mean to them?

WOW, can you tell I’m an extrovert?! All of my steps include talking. Hopefully these ideas will help you to not only see the stories of student learning and programmatic impact in your data, but also to make the connections needed to progress toward closing the loop.

* This goes without saying, but a reminder is always good: make sure to anonymize the data you show students and those outside of your office/school!
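If your data lives in a spreadsheet or dataframe, one hedged way to do that anonymizing (the column names here are hypothetical) is to drop direct identifiers and replace student IDs with salted hashes before sharing:

```python
# Minimal anonymization sketch; "name", "email", and "student_id" are assumed columns.
import hashlib
import pandas as pd

def anonymize(df: pd.DataFrame, salt: str = "change-me-each-term") -> pd.DataFrame:
    out = df.drop(columns=["name", "email"], errors="ignore")  # drop direct identifiers
    out["student_id"] = out["student_id"].astype(str).map(
        lambda s: hashlib.sha256((salt + s).encode()).hexdigest()[:10]
    )
    return out
```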

Guest Blogger: When Assessment and Data Are Too Much of a Good Thing


I love coffee. Strong and black. Coffee does something to my soul when I drink it. The smell of roasted coffee beans and steam coming from my favorite mug brings a smile to my face. Beginning my morning with a hot cup of coffee is sure to set a positive tone for the rest of the day. I love coffee. Then something happens. It’s 3 o’clock and I realize I’m on my fourth cup of the day. In that moment, I begin to notice my elevated heart rate, the funny feeling in my stomach, and that I’m a bit more energized than is good for me. The truth is I had too much of a good thing.

 


What is your “too much of a good thing”? I’m convinced we all have it, whether we want to admit it or not. Nowadays it seems assessment and data have become one of higher education’s good things that we have too much of. I want to be clear: assessment and data are necessary in higher education. Both assessment and data are good, just like coffee. However, when we have too much of it and do not use it effectively, this good thing turns into something bad. I see this show up most often in three ways, which I describe below.

  • Quality over quantity. “Assess more and have more data” has been the message given to many higher education professionals. More isn’t inherently bad, but it also isn’t always necessary. When we expect professionals to assess more, are we equipping them to build effective assessment tools? Are we being thoughtful about targeting what we assess instead of assessing everything? Do we consider survey fatigue? We must consider these questions. Creating fewer, more effective assessment tools that provide rich data, instead of conceding to the pressure to assess everything, will serve professionals well. Switching the focus to quality over quantity is a shift higher education must consider.
  • Dust-filled data. When we leave something in a corner and don’t attend to it, dust will collect. The same happens with data in higher education. When we conduct multiple assessments, we end up with data that is covered in dust because we do not do anything with it. Because most of our data is stored electronically we don’t see the dust, but it’s there. It’s not enough to say we did an assessment. We must go a step further and use the data! We must analyze the information we’ve collected, share it with folks who need to know, and adopt a plan for how the data will be used. When we do this, our assessment becomes purposeful. When we do this, our investment in that specific assessment is justified. When we do this, our colleagues and students are best served. What timeline can you set yourself to avoid dust getting on your data? What data currently needs dusting off?
  • Over our heads. Some higher education professionals have done a great job assessing in effective ways and utilizing the data collected. However, the dissemination of data is over our heads. The pressure professionals feel has turned into the need to create 30-page analyses of data. What happened to one-page summaries? When will we use technology to disseminate our data? How can we make the analysis and presentation of the data interesting, so people want to read and use it? These are all questions we should be asking when considering the dissemination of data. I have found infographics to be a highly effective way to disseminate information in an accessible way (a small sketch of the one-chart idea follows this list). Making changes to better share our stories is beneficial and necessary.
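As one hedged example of the one-page-summary idea above, here is a small sketch that turns made-up top-box percentages into a single chart (the items and numbers are placeholders, not real results):

```python
# One-chart summary of survey items; everything here is illustrative dummy data.
import matplotlib.pyplot as plt

items = ["Felt welcomed", "Made friends", "Knows campus resources", "Would recommend"]
pct_agree = [92, 81, 68, 88]   # % who agreed or strongly agreed (made up)

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(items, pct_agree)
ax.set_xlim(0, 100)
ax.set_xlabel("% agree or strongly agree")
ax.set_title("Program survey: one-page summary")
fig.tight_layout()
fig.savefig("one_page_summary.png", dpi=200)
```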

Assessment is a good thing. Making data-driven decisions is a good thing. We know this to be true. To ensure it doesn’t continue as too much of a good thing, professionals must consider the implications of the current way we do assessment in higher education. The survey fatigue students experience, the pressure to have data when making decisions of any size, and the expectation that we assess everything under the sun have clouded the goodness of assessment. How are you doing with quality over quantity? What data needs dusting off in your office? How can you make data accessible to all? Considering these questions will get you one step closer to keeping assessment good. Because remember, like my fourth cup of coffee in the afternoon, you want to steer clear of having too much of a good thing.

Mika Karikari is currently a doctoral student in Student Affairs in Higher Education at Miami University as well as an Associate in Career Services. Her professional background also includes academic support, residence life, and new student programs. You can follow Mika on Twitter @MikaKarikari or email her at johns263@miamioh.edu.
  

For the Love of Counting: The Response Rate Rat Race

I’m in the midst of our annual summer experiences survey – my office’s push to understand: What do students do over the summer? And is it meaningful? We know that getting ALL students to respond to our 3-12* question survey would be near impossible, but, as the assessment person on the team, it’s my job to always chase that dream (let’s be real, it’s my obsession to chase that dream!). And at a small institution like where I work, getting a response rate of 100% (~1500 students) is seemingly an attainable goal. But this raises so many questions for me.

A little bit of context about the survey. Students do many valuable things over the summer that add meaning to their college experience; the particular subsection of this data that chiefly interests me (as I rep the Career Center) is the number of students who intern.

Common statistical wisdom would tell me that if I am indeed going to report on how many students intern over the summer, then I need a certain response rate in order to make an accurate, broader statement about what percentage of Carleton students intern. This stats wisdom is based on a few factors: my population size, my sample size, and the margin of error with which I’m comfortable (I know, I know…ugh, statistics terms. Or maybe some of you are saying YAY! Statistics terms! Don’t let me stereotype you):

Population size = 1500 (all the upperclass students)

Sample size = 1275 (well…this is the goal…which is an 85% response rate…but do I need this # to be accurate and broad?? Hmm…better look at what margin of error I’m comfortable with…)

Margin of error = um…no error??? Baaaahhhh statistics! Sorry readers, I’m not a stats maven. But that’s ok, because SurveyMonkey greatly helped me determine this:

Margin of error calculator (SurveyMonkey screenshot)

Ok, so if I want to be SUPER confident (99%), then my goal of 1,275 students (or an 85% response rate) will get me a VERY small margin of error (read: this is good). But it turns out that if I look at this from the angle of sample size, I could have the same small margin of error if only 1,103 students responded (a 74% response rate).

Sample size calculator (SurveyMonkey screenshot)

So, at this point, I could ask: Why the heck am I busting my butt to get those extra 11% of respondents??? YARG! And statistically, that would be a valid question.
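For anyone who wants to poke at the numbers themselves, here is my own back-of-the-envelope version of what those SurveyMonkey calculators are doing (not necessarily their exact formula): the margin of error for a proportion at 99% confidence, worst-case p = 0.5, with a finite population correction.

```python
# Margin of error with a finite population correction; z = 2.576 for 99% confidence.
import math

def margin_of_error(n, population, z=2.576, p=0.5):
    se = math.sqrt(p * (1 - p) / n)                         # standard error of a proportion
    fpc = math.sqrt((population - n) / (population - 1))    # finite population correction
    return z * se * fpc

N = 1500
for n in (1275, 1103):
    print(f"n = {n}: margin of error ≈ ±{margin_of_error(n, N) * 100:.1f}%")

# Prints roughly ±1.4% for 1,275 respondents and ±2.0% for 1,103: both tiny,
# which is exactly why the extra 11% of responses barely moves the statistics.
```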

But I don’t ask that question. I know I chase the 85% and 100% response rate dream because I aim to serve ALL students. And even if, statistically, the responses after the first 1,103 look consistent, there is likely an outlier in there: one or a few student stories that tell me something the first 1,103 couldn’t, and that help shape a better student experience for all.

So to all of you, regardless of whether you have a relatively small population size (like me) or a much larger one (hint, Mark, Michigan Engineering, hint), I say keep up the good work trying to reach and understand the stories of 100% of your students. It may be an impossible dream, but that doesn’t make it any less worthy a pursuit.

*3-12 question survey based on what the student did over the summer - skip logic, woot woot!