For the Love of Counting: The Response Rate Rat Race

I’m in the midst of our annual summer experiences survey – my office’s push to understand what students do over the summer, and whether it’s meaningful. We know that getting ALL students to respond to our 3-12* question survey would be near impossible, but, as the assessment person on the team, it’s my job to always chase that dream (let’s be real, it’s my obsession to chase that dream!). And at a small institution like mine, a 100% response rate (~1500 students) seems like an attainable goal. But this raises so many questions for me.

A little bit of context about the survey. Students do many valuable things over the summer that add meaning to their college experience; the particular subsection of this data that chiefly interests me (as I rep the Career Center) is the number of students who intern.

Common statistical wisdom tells me that if I am indeed going to report on how many students intern over the summer, then I need a certain response rate in order to make an accurate, broader statement about what percentage of Carleton students intern. This stats wisdom is based on a few factors: my population size, my sample size, and the margin of error with which I’m comfortable (I know, I know…ugh, statistics terms. Or maybe some of you are saying YAY! Statistics terms! Don’t let me stereotype you):

Population size = 1500 (all the upperclass students)

Sample size = 1275 (well…this is the goal…which is an 85% response rate…but do I need this # to be accurate and broad?? Hmm…better look at what margin of error I’m comfortable with…)

Margin of error = um…no error??? Baaaahhhh statistics! Sorry readers, I’m not a stats maven. But that’s ok, because SurveyMonkey helped me determine this:

[Screenshot: SurveyMonkey margin of error calculator]

Ok, so if I want to be SUPER confident (99%), then my goal of 1,275 students (an 85% response rate) will get me a VERY small margin of error (read: this is good). But it turns out that if I look at this from the angle of sample size, I could have essentially the same small margin of error with only 1,103 students responding (a 74% response rate).

[Screenshot: SurveyMonkey sample size calculator]
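If you want to sanity-check the calculator’s numbers yourself, here’s a minimal Python sketch. It assumes the standard formula – a normal approximation with a finite population correction and the worst-case proportion p = 0.5 – which is likely close to what SurveyMonkey’s calculator does, though that’s my assumption rather than a documented fact.

```python
import math

def margin_of_error(n, N, z=2.576, p=0.5):
    """Margin of error for a sample of n drawn from a finite population of N.

    Normal approximation with a finite population correction; z = 2.576
    corresponds to 99% confidence, and p = 0.5 is the most conservative choice.
    """
    standard_error = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * standard_error * fpc

N = 1500  # all upperclass students
for n in (1275, 1103):
    print(f"n = {n} ({n / N:.0%} response rate): margin of error ~ {margin_of_error(n, N):.1%}")
# At 99% confidence both samples land in the 1-2% margin-of-error range,
# which is why those extra ~170 respondents barely move the needle.
```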

So, at this point, I could ask: Why the heck am I busting my butt to get those extra 11% of respondents??? YARG! And statistically, that would be a valid question.

But I don’t ask that question. I know I chase the 85% and 100% response rate dreams because I aim to serve ALL students. And even if, statistically, all the students after the first 1,103 respond consistently, there is likely an outlier…one or a few student stories that tell me something the first 1,103 couldn’t – something that shapes a better student experience for all.

So to all of you, regardless of whether you have a relatively small population size (like me) or a much larger one (hint, Mark, Michigan Engineering, hint), I say keep up the good work trying to reach and understand the stories of 100% of your students. It may be an impossible dream, but that doesn’t make it any less worthy a pursuit.

*The survey runs 3-12 questions depending on what the student did over the summer – skip logic, woot woot!

High Impact Practices: Resources

Hands-on learning, experiential education, engaged learning – whatever you may call it, student affairs professionals can agree that creating an environment in which students test, reflect upon, and reapply their learning results in better outcomes (read: more bang for your higher education buck). We know this anecdotally, but the High Impact Practices (HIP) research out there provides the data to support the impact HIPs have on the collegiate experience, as well as ideas and steps for how to enact all of this goodness (or, more likely, maximize what you already have). What is clear in all of the research is that the next level of this engaged learning is not the mere existence of experiential education, but rather that students have multiple opportunities to engage in high impact learning and that we properly assess these efforts and students’ level of learning.

Provided today at Oh no are resources for you to dive in deeper…

According to George Kuh via NSSE, high impact practices:

  • demand considerable time and effort,
  • facilitate learning outside of the classroom,
  • require meaningful interactions with faculty and students,
  • encourage collaboration with diverse others, and
  • provide frequent and substantive feedback

Below are the most widely recognized examples of HIPs from AAC&U:

[Table: AAC&U high-impact practice examples]

On the NSSE website, you can build your own report with the data they’ve collected in 2013 and 2014 – so fun!! Give it a try and sift through it to review the fun findings. Have I mentioned FUN!

Ashley Finley (on behalf of the AAC&U) provides some brief (though important) thoughts on proper execution of HIPs:

Other Videos to Watch (or more likely, just listen to in the background while you work on something else and occasionally look at):

  • George Kuh presentation about HIPs:

  • Ashley Finley’s plenary presentation about integrative learning:

What high impact practices are you working within? Where have you found success?

Writing Assessment Questions: Keep It Simple

The month of May at Oh no is focused on making assessment easy. I mean, that’s always our focus, but we’re really homing in on it this month. Today I want to bring all of Mark’s and my theoretical musings on assessment and its purpose down to some tangible basics.

In my office (Career Center! woot! woot!) we have nine career learning outcomes toward which we hope students, in working with us, make significant learning progress by the end of their four years at Carleton. And our purpose for having these learning outcomes is fourfold:

[1] to be transparent with students (and families and the College community too) about what students will learn/be able to do by interacting with the Career Center, so that students can be partners with us in driving their learning,

[2] to hold ourselves accountable to offer programs and services that help students progress in this learning, being intentional to make sure our programs and services serve the purpose of helping students learn,

[3] to hold students accountable to be the drivers of their learning and career development because I can’t get the job for you and more than that, I can’t decide for you which career is going to help you pursue your meaningful life, and

[4] to show the value/impact of working with the Career Center.

Here’s a sampling of some of the learning outcomes:

  1. Understand how to work through a process of self-assessment and be able to identify their transferable and work-related skills.
  2. Learn about a wide variety of career fields and opportunities.
  3. Be able to market themselves through written communication to prospective employers and networks.

So if you’ve written some learning outcomes (hooray!), then at this point, how do you write your assessment questions to gauge whether students learned them or not???

An easy way is…brace yourself…to just ask them if they learned it. Revolutionary, I know. Well worth reading this blog, huh? ;-P Here’s what I mean. If my learning outcome is:

Be able to market themselves through written communication to prospective employers and networks.

Then ask:

Indicate how much you [i.e., the student] agree or disagree with the following statements about the [insert program/service].

After attending the [insert program/service], I am able to market myself through written communication to prospective employers and networks.

Strongly Agree | Agree | Neither Agree nor Disagree | Disagree | Strongly Disagree

Done!

Now, some of you assessment scholars (who I’m sure have nothing better to do than read this blog full of Sailor Moon gifs and Mark’s jokes ;-P) might say that using solely this kind of assessment is too shallow and wouldn’t hold up in a journal article. And to that I say, this would be only one part of a more rigorous methodology for publishing an article. BUT, I’m guessing most of you aren’t reading Oh no because you’re trying to publish, but rather to better your current practice/office initiatives. And as we’ve mentioned before, there is value in using student-reported data (especially when it’s benchmarked in one year and then measured again the next). Assessing your services will not only keep your initiatives in line with the mission of your office/institution and the learning environment, it’ll also give greater purpose and structure to the work you do.
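And if you do benchmark this self-reported data from one year to the next, the tally itself is easy to script. Here’s a minimal sketch with made-up response counts (the numbers below are hypothetical, not Carleton data) that reports the share of students agreeing or strongly agreeing with the outcome statement each year.

```python
# Hypothetical Likert tallies for the "market myself through written
# communication" question, collected one year apart (illustrative data only).
responses = {
    "Year 1": {"Strongly Agree": 40, "Agree": 85, "Neither Agree nor Disagree": 30,
               "Disagree": 15, "Strongly Disagree": 5},
    "Year 2": {"Strongly Agree": 55, "Agree": 90, "Neither Agree nor Disagree": 22,
               "Disagree": 10, "Strongly Disagree": 3},
}

for year, counts in responses.items():
    total = sum(counts.values())
    agree = counts["Strongly Agree"] + counts["Agree"]
    print(f"{year}: {agree / total:.0%} of {total} respondents agreed or strongly agreed")
```

Tracking that one percentage over time is a low-effort way to see whether a program is actually moving the needle on the outcome.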

There is so much more to say on this topic, but for now, I wanted to give a very specific, practical, simple, “you could implement this minute” tip. What are some tips you might have for keeping assessment questions simple?

From Fit to Benefit: the New Role of Learning Outcomes?

Learning outcomes, goals, objectives – however you swing it – are increasingly a topic of conversation on campuses. The Obama administration (and, more saliently, families and students at our admission and orientation events) want to know what students get for enrolling. The conversation about selecting a college has shifted from focusing on fit to focusing on benefit. As Mark’s post pointed out, tuition is expensive. So colleges and universities need to better communicate their worth before students and families will shell out serious ca$h to attend.

To many of us in higher ed, this can feel overly transactional at times because we know that learning is more than inputs and outputs. But students, families, and the community at large have a right to ask why they should commit their money and time to such a large investment. So why not be transparent about why college matters and about the impact of the college experience? We know why the experience is fulfilling and life changing, so let’s just tell people that instead of getting exasperated by their not understanding (because we’re not telling them).

The Chronicle published an article (see a subscription-free version on the Augustana College website) about one college (Augustana College) attempting to convey its benefit to students by instituting learning outcomes in most outside-the-classroom experiences from athletics to student clubs and much in between. In the article, the college staff beautifully articulated the value of making extracurricular learning transparent to students. The director of advising comments that a major is only one part (though a very important part!) of what helps students find success when he says, “…it’s not what a student studies. It’s how they go about constructing an undergraduate education.” AGREED! Another staff member at a different institution comments on the importance of explicitly articulating these learning outcomes, “Until you make [students] say [their learning] out loud and prompt them to reflect on it, they may not make that connection at all.” AGREED!

I did a lot of AGREE-ing while reading this article. So, I highly suggest reading the entire piece.

But there was one quote that really got my goat: “But how much integration is too much? Advisers shouldn’t force students to fit their experiences into a neat package, and they should promote some degree of exploration.”

BAAAAAHHHH!!!

Or do I mean “BAAAAA”??

Ok, ok…bad goat joke.

When an institution or office constructs a list of learning outcomes, those learning outcomes do not (or at least should not) seek to dictate or narrow the capacity or diversity of any individual student’s learning. Instead, a list of learning outcomes provides an intentional foundation for learning – a foundation we hope students continue to build upon using their unique interests, values, and experiences. Learning outcomes aren’t trying to “fit [student] experiences into a neat package” and stifle exploration, as the article wonders. Rather, learning outcomes mark points on a map of students’ learning, so that students can explore and then, as a result, look back to see where they’ve been and what tools they have to use moving forward.

The Narrative to the Numbers: Focus Groups

Assessment may draw on many sources of data (e.g., surveys, pre/post-tests) – especially when you’re assessing learning (due to its blessed messiness). Conducting a focus group can be a good way to collect the narrative that complements or explains the quantitative data; focus groups breathe life into an otherwise typically survey-based methodology (read: fancy way to say “the ways I plan to collect data”).

Not to mention, focus groups also tend to provide a flare in the data that surveys and other written methods have a harder time conveying: inspiration. I can collect all the survey data I want and find great trends and solid longitudinal results. But the minute I actually hear the same thing out of one student’s mouth, WHOA – talk about a call to action!

[Sailor Venus gif]
Ok, for those who don’t know, I used to be a pretty…um…dedicated (read: obsessed 13-year-old) Sailor Moon fan. But anyway…

Focus groups are great (hooray for actually hearing the student voice!) but my goodness can they be a BEAR to plan, coordinate, collect the information, synthesize the information, follow up with participants, assign compensation (when applicable), triangulate the findings with other data, etc. Again, WHOA – talk about exhausting.

[Sailor Moon gif]

I think some of this comes from the dichotomy you’re trying to achieve with a focus group: structured, yet open. You want to construct an environment beforehand that gets your participants in the right mindset to give feedback on the specific topic you’re after (because you don’t want their feedback on anything and everything; you are seeking their feedback about a certain kind of thing), while also leaving the environment open enough to get genuine feedback (because you don’t want to structure it so much that their feedback is just a regurgitation of what you – the facilitator – already know or think). Finding the balance between structured and open in a focus group seems more like an ideal you’re always striving toward.

I recently conducted a focus group to gauge student perspectives on our office’s learning outcomes. I still have a few steps to go in the process, but here are a few early reflections about preparing for the focus group and creating the environment with the group, and then some questions I need to think about for next time:

Things I’m glad I considered beforehand:

  1. Clarify the aim of the focus group.
    • For me, I wanted student feedback on: [1] their usage of and engagement with our learning outcomes, [2] the benefit of the learning outcomes to students, and [3] how to clarify #1 and communicate #2 to students and the College community.
  2. Understand more precisely what I hoped to get out of their feedback.
    • I had them work through eight discussion questions drawn from the intended aim of the group.
  3. Anticipate what will get the group off track, and account for that.
    • I thought they might want to talk about lots of amazing ideas that, with our time and resources, could never be done. So I addressed that with them prior to starting.
  4. Decide the amount of context to give without overly directing their feedback.
    • I erred on the side of giving them little background because I wanted to hear as many new and different ideas as possible. So I shared our larger office vision for these student learning outcomes, but did not explain HOW (in detail) we hoped students would achieve them.
  5. Use different methods to engage everyone.
    • I had the students give feedback to me in the large group and then break into smaller groups to discuss the questions, accommodating different learning and communication styles.

But next time, I need to ask myself:

  • What visuals should I provide to convey our aim with the focus group and the topic they’re focusing on?
  • How should I use facilitation to encourage innovation and mitigate tangents?
  • How should I handle conversation-dominators? How can I better engage introverts?
  • Which other facilitators should I bring into the focus group to provide greater perspective on the participant feedback?
  • And, just overall, how can I do this better next time?

I’m still at the early stages with this focus group. Next up for me will be the fun part – analyzing the feedback! Woot woot! But for now, I want to hear from you. Does any of this resonate with your experiences with setting up focus groups? What were some of your successes? What are things that you would change next time? I want to hear from you! Pretty please comment below.