For the Love of Counting: The Response Rate Rat Race

I’m in the midst of our annual summer experiences survey – my office’s push to understand what students do over the summer, and whether it’s meaningful. We know that getting ALL students to respond to our 3-12* question survey would be near impossible, but, as the assessment person on the team, it’s my job to always chase that dream (let’s be real, it’s my obsession to chase that dream!). And at a small institution like where I work, getting a response rate of 100% (~1500 students) seems like an attainable goal. But this raises so many questions for me.

A little bit of context about the survey. Students do many valuable things over the summer that add meaning to their college experience; the particular subsection of this data that chiefly interests me (as I rep the Career Center) is the number of students who intern.

Common statistical wisdom would tell me that if I am indeed going to report on how many students intern over the summer, then I need a certain response rate in order to make an accurate, broader statement about what percentage of Carleton students intern. This stats wisdom is based on a few factors: my population size, my sample size, and the margin of error with which I’m comfortable (I know, I know…ugh, statistics terms. Or maybe some of you are saying YAY! Statistics terms! Don’t let me stereotype you):

Population size = 1500 (all the upperclass students)

Sample size = 1275 (well…this is the goal…which is an 85% response rate…but do I need this # to be accurate and broad?? Hmm…better look at what margin of error I’m comfortable with…)

Margin of error = um…no error??? Baaaahhhh statistics! Sorry readers, I’m not a stats maven. But that’s ok, because SurveyMonkey greatly helped me determine this:

[Screenshot: SurveyMonkey margin of error calculator]

Ok, so if I want to be SUPER confident (99%) then my goal of 1,275 students (or an 85% response rate) will get me a VERY small margin of error (read: this is good). But, turns out if I look at this from the angle of sample size, I could have the same small margin of error if I only had 1,103 students respond (74% response rate).

[Screenshot: SurveyMonkey sample size calculator]

So, at this point, I could ask: Why the heck am I busting my butt to get those extra 11% of respondents??? YARG! And statistically, that would be a valid question.
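For the curious (or skeptical), the arithmetic behind those calculator numbers can be sketched in a few lines. SurveyMonkey doesn’t show its formula, but the standard approach for a proportion is the normal-approximation margin of error with a finite population correction – a rough sketch, assuming a 99% confidence level (z ≈ 2.576) and the most conservative proportion guess (p = 0.5):

```python
import math

def margin_of_error(n, N, z=2.576, p=0.5):
    """Margin of error for a proportion from a sample of n out of N.

    z = 2.576 corresponds to 99% confidence; p = 0.5 is the most
    conservative (widest) assumption about the true proportion.
    """
    se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

N = 1500  # all upperclass students
print(round(margin_of_error(1275, N) * 100, 1))  # 85% response rate -> ~1.4
print(round(margin_of_error(1103, N) * 100, 1))  # 74% response rate -> ~2.0
```

In other words, those last 172 respondents only buy about half a percentage point of precision – which is exactly why the next question is fair to ask.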

But I don’t ask that question. I know I chase the 85% and 100% response rate dream because I aim to serve ALL students. And even if, statistically, every student after the first 1,103 responds consistently, there is likely an outlier – one or a few student stories that tell me something the first 1,103 couldn’t, and that shape a better student experience for all.

So to all of you, regardless of whether you have a relatively small population size (like me) or a much larger one (hint, Mark, Michigan Engineering, hint), I say keep up the good work trying to reach and understand the stories of 100% of your students. It may be an impossible dream, but that doesn’t make it any less worthy a pursuit.

*3-12 question survey based on what the student did over the summer - skip logic, woot woot!

Is what I’m doing assessment or evaluation?

I got into assessment via a love of learning outcomes (as opposed to Mark, who got into it via a love of SPSS…Mark + math = true love). Training as a French high school teacher (très chic!) meant taking quite a few classroom assessment courses studying concepts such as: assessment, evaluation, goals, outcomes, etc. I spent years seeking to understand the nuanced differences and relationships between these terms – hence the flowchart I put together (see below – feel free to use it; just throw ‘Oh no’ some love):

[Image: Learning Goal definitions chart]

I’ll get into all of these terms in later posts.

During the month of April, we’re focusing on assessment in the bigger picture, laying groundwork to really dive into detailed assessment topics. Today, I’m looking into the difference between assessment and evaluation. I know varying definitions exist, and what I’m about to embark upon may be controversial, but here’s my effort at making the assessment vs. evaluation distinction a little clearer.

Assessment is…
  • about LEARNING – what did the students learn?
  • finding out if what you intended to teach students is what the students actually learned AND to what degree they learned it
  • (ideally) driven by your institution, department, and/or office mission and/or goals
  • the tool you use to inform how you improve learning – if you’re not using the information taken from your assessment, then I’m not sure why you’re assessing – I mean, I know you all love assessment so much…HA!
  • not counting how many students were present nor whether they were satisfied (counting and satisfaction are NOT assessment)

Evaluation is…
  • about value – did students think it was worth their time? (which is what you might be asking yourself about reading this blog ;-D)
  • gathering facts about the initiative/service/program (e.g., number of attendees, were they satisfied, was the event space good, was there enough food, how much did it cost, etc…AND the results of your assessment) in order to answer any number of questions, such as:
    • who are we serving?
    • who are we not serving?
    • did students think it added value?
    • was it worth our staff time and resources?
  • (ideally) driven by your institution, department, and/or office mission and/or goals
  • the tool you use to determine if you should change the logistics, format, structure, or other elements of the initiative/service/program itself
  • count your heart out! Go ahead, ask if they’d suggest this to their friends. Find out if they wanted foie gras instead of pizza at the event. Ask away!

Both assessment and evaluation are needed to best serve students, but decide (before you get started) what YOU are seeking to know in order to ask the “right” questions. Most likely, you’ll use a little bit of both.

There’s so much more to all of this! Stay tuned for equally stimulating posts about:

  • goals vs. outcomes
  • how to create effective learning outcomes (or are they goals?!)
    • and how to align them with your institution, department, or office goals (or are they objectives?!)
  • data analytics vs. data mining (WHOA…wait a minute! – I promise you know more than you think you know)

Try to contain your excitement, people. This is a blog, not an N’Sync reunion concert.

Did I get it right? What would you add to or change about my definitions? What other common assessment/evaluation terms should we explore?