Focus Groups: How Do We Get Students to Participate?

I’m back conducting another focus group. You may remember that I ran a focus group earlier last year to get feedback and ideas for the design of our online learning goals tool (“Career Tracks”). The student voice was influential in Career Tracks’ design: thanks to the great feedback we received, it now has a whole “Planning” component where students can place career development tasks, services, and programs on a calendar and add their own deadlines. It’s probably not surprising that student input positively shaped a college initiative, but it’s a good reminder of the power that even one student voice can have in creating effective, student-centered initiatives.

But my question lately is, how the heck do I get students to show up and offer their voice??? Recently I’ve been working on a research project focusing on what students learn from their internships about being an employee (a project that could not be done without the power of collaboration!). To collect data we held two focus groups and several one-on-one research interviews. To find participants, I reached out to a number of interns, provided lunch, held the sessions during a time of day when no classes are offered, and, beyond RSVPing, asked nothing extra of students to participate (a very low barrier to entry). Sounds perfect, right? I’m guessing you know better; there is no perfect method for reaching students (but if you’ve figured one out, patent the idea and become a millionaire – or, better yet, comment below with your revelations!).

I know many of us struggle with student participation in different forms, whether it be getting students to complete surveys, vote in student council elections, attend open forums for on-campus faculty and staff candidates, or weigh in at other moments when the student view is imperative. But how do we get them to complete the things or show up at the stuff (outside of paying them or tying participation to things like course registration)? And how do we proceed if they don’t?

At the NEEAN fall forum in November, I attended a presentation on a topic related to student participation in surveys, focus groups, and the like. A woman in the audience had herself been a participant in a longitudinal research project (over 15 years). She offered some advice on how to get and keep students engaged with research and data-collection projects that I will keep with me and share with you:

  • Show the project’s importance in the big picture – Communicate to students how their voice will shape and be an important part of the future of these initiatives for their future peers and colleagues.
  • But also keep it relevant to the present – Share with students how their participation contributes to college initiatives becoming more beneficial to them. Their voice will help make things better/more effective in their time at the college, not just in some nebulous future time.
  • Make it a mutual investment – In the case of a focus group, where you know your participants and they’re sharing much of their time for your project, make the time and effort to remember or attend one of their events. This of course isn’t always applicable (or in cases of confidentiality, appropriate) but if students are giving you their time, give them yours. Send a birthday card, attend their on-campus presentation, go to their orchestra concert, etc. The participant is investing in your project, so invest in theirs.
  • Follow up with the results and check in – Depending on the timeline and scope of your project, (briefly) check in with your student participants on the research’s progress and give them access to the results. Not only does this help with transparency but also keeps students engaged in the process, and, potentially, creates early adopters of the findings.
  • Preach and model ‘pay it forward’ – Whether you’re a student, faculty, or staff member, there will come a time when you will need other people to complete something for you (e.g., a survey, research questionnaire, etc.), so for this and other reasons, we should all probably be thoughtful about doing the same for others. This concept is larger than the bounds of one person’s project, so how do we as a college-wide community communicate this to students?? (Also, there’s got to be a term for this out there already – Data Stewardship? Civic Participation? Academic Responsibility? Survey Karma? – …ideas???)

I’m working on a few of these already, but the “pay it forward in data collection” is a concept I want to keep thinking about. I haven’t hit a millionaire-level idea with it yet but I’ll keep you all updated. You do the same. What have you done to get the student voice?


For the Love of Counting: The Response Rate Rat Race

I’m in the midst of our annual summer experiences survey – my office’s push to understand what students do over the summer, and whether it’s meaningful. We know that getting ALL students to respond to our 3-12* question survey would be near impossible, but, as the assessment person on the team, it’s my job to always chase that dream (let’s be real, it’s my obsession to chase that dream!). And at a small institution like mine, a 100% response rate (~1,500 students) seems like an attainable goal. But this raises so many questions for me.

A little bit of context about the survey. Students do many valuable things over the summer that add meaning to their college experience; the particular subsection of this data that chiefly interests me (as I rep the Career Center) is the number of students who intern.

Common statistical wisdom tells me that if I am indeed going to report on how many students intern over the summer, then I need a certain response rate in order to make an accurate, broader statement about what percentage of Carleton students intern. That wisdom rests on a few factors: my population size, my sample size, and the margin of error with which I’m comfortable (I know, I know…ugh, statistics terms. Or maybe some of you are saying YAY! Statistics terms! Don’t let me stereotype you):

Population size = 1,500 (all the upperclass students)

Sample size = 1275 (well…this is the goal…which is an 85% response rate…but do I need this # to be accurate and broad?? Hmm…better look at what margin of error I’m comfortable with…)

Margin of error = um…no error??? Baaaahhhh statistics! Sorry readers, I’m not a stats maven. But that’s ok, because SurveyMonkey greatly helped me determine this:

Margin of Error

Ok, so if I want to be SUPER confident (99%), then my goal of 1,275 students (an 85% response rate) gets me a VERY small margin of error (read: this is good). But it turns out that if I look at this from the angle of sample size, I could have roughly the same small margin of error with only 1,103 respondents (a 74% response rate).

Sample Size
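Those SurveyMonkey numbers come from standard finite-population survey math, and it’s worth seeing the arithmetic laid bare. Here’s a minimal Python sketch of it (the function names are my own, and I’m assuming the usual 99% confidence z-score of 2.576 and a worst-case proportion of 0.5 – the most conservative choice):

```python
import math

def margin_of_error(n, N, z=2.576, p=0.5):
    """Margin of error for a simple random sample of n from a population of N.

    z=2.576 corresponds to 99% confidence; p=0.5 gives the widest
    (most conservative) margin.
    """
    se = z * math.sqrt(p * (1 - p) / n)        # standard error term
    fpc = math.sqrt((N - n) / (N - 1))         # finite population correction
    return se * fpc

def required_sample(N, e, z=2.576, p=0.5):
    """Sample size needed for a margin of error e in a population of N."""
    n0 = z ** 2 * p * (1 - p) / e ** 2         # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # shrink for a finite population

N = 1500
print(f"1,275 responses: ±{margin_of_error(1275, N):.1%}")  # about ±1.4%
print(f"1,103 responses: ±{margin_of_error(1103, N):.1%}")  # about ±2.0%
print(f"needed for ±2%: {required_sample(N, 0.02)}")        # about 1,100
```

So the jump from 1,103 to 1,275 respondents only tightens the margin from roughly ±2% to roughly ±1.4% (online calculators round slightly differently, which is why this sketch and SurveyMonkey can disagree by a student or two).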

So, at this point, I could ask: Why the heck am I busting my butt to get those extra 11% of respondents??? YARG! And statistically, that would be a valid question.

But I don’t ask that question. I know I chase the 85% and 100% response rate dreams because I aim to serve ALL students. Even if, statistically, the responses after the first 1,103 add little, there is likely an outlier in there: one or a few student stories that tell me something the first 1,103 couldn’t, and that shape a better student experience for all.

So to all of you, regardless of whether you have a relatively small population size (like me) or a much larger one (hint, Mark, Michigan Engineering, hint), I say: keep up the good work trying to reach and understand the stories of 100% of your students. It may be an impossible dream, but that doesn’t make it any less worthy a pursuit.

*The 3-12 question count depends on what the student did over the summer – skip logic, woot woot!

Stay the Course: Reminders for When Assessment Gets Messy

My friends for the assessment revolution! My office is gearing up to take the next step in our learning outcomes assessment efforts. I’m VERY excited! It’s going to be fun, intellectually and professionally fulfilling, and (most importantly and hopefully) provide meaningful insight into the student experience. But in addition to excitement, I am also a bit nervous, because, as you’ve likely noticed, measuring learning is messy – which is the largest part of its difficulty, but also its beauty. In my research about student learning and assessment over the past few years, I’ve come to learn that it’s not just me who’s feeling this way:

In watching videos like the above and reading anything I can get my hands on, I’m hearing a few common themes (some old, some new) that I’m keeping in mind during this big year for our assessment efforts in the Career Center:

  1. Assess learning not just once, but at multiple different points and from different parts of the student experience. (read: Learning is happening all over campus, thus, assessing learning all over campus is not just a good idea, but needed.)
  2. Give students multiple opportunities to practice their learning in high-touch, intentional, reflection-centric ways. (read: It’s going to take a lot of time, there’s no quick fix, so settle in for the long haul and love the process.)
  3. Assessment tells the story of student learning, but let the student be the narrator. (read: Ask students to narrate their learning and they will tell you! Their story IS your assessment data. Now use that data to tell the larger story of student learning at large.)
  4. Set up assessment to do double duty for you – it can be a learning tool in and of itself, in addition to a data collection method. 

    “…a really interesting kind of analytics should reveal to the learner even more possibilities for their own connected learning. The analytics shouldn’t simply be a kind of a diagnosis of what’s happening now but analytics at their best can be a doorway that suggests what else is possible.” -Gardner Campbell, Vice Provost for Learning Innovation and Student Success at Virginia Commonwealth University

  5. Follow best practices in assessment while also breaking the mold, because learning’s blessed messiness means it’ll always need more than the gold standard. (read: Follow and break the “rules” of assessment at the same time – simple, right????)

It might be a messy year in assessment, but that’s ok, because it’s a worthwhile pursuit. And as my supervisor reminded me when I was wigging out about it recently: remember, nothing ventured, nothing gained.

So commit to the adventure and just do it.

Top Tier vs Lower Tier Engineering Programs

Greetings everyone! In the past few weeks, I purchased a house. The buying process took up much of my free time (though it didn’t seem to diminish my blog ambition). Anyway, with summer in the home stretch, it’s time to get back into it. Abby and I will resume our regular weekly posts soon. In the meantime, my brain is in the depths of “big thinking” mode. As a former engineer and an advisor of first-year engineering students, I often find myself thinking about how we educate them.

Modern society poses complex problems. No longer is it enough for engineers to create a widget, present it to the world, and say “go forth, use this widget to make your life easier!” Beyond the question of making things faster or more efficient, we want to know whether a widget intended for a developing nation can be manufactured with local materials at low cost. Will the locals even want to use the widget?

In addition, we’re increasingly reliant on our engineers’ moral decision making. So you want to build a car that drives itself? How much will one of these cost? Can it share the road with human-driven automobiles? If the car finds itself in a certain-impact situation, should it rear-end the car in front of it or veer off onto the sidewalk? Our engineers need both the technical toolbelt to design the widget and the personal development required to decide how to use those tools.

This takes me to the state of today’s engineering education. Each engineering degree has a set of technical content expected of a person with that degree. The technical content takes up most of what can be covered in four years. This leaves little room for personal/moral/ethical development in the curricular portion of a student’s experience, and has us (student affairs folks) urging students to get involved outside of the classroom.

Students arrive at college with varying levels of ability and background. The most desirable applicants tend to have credit from Advanced Placement courses or college courses they took while in high school. Assuming they plan to stay for four years, completing college requirements while in high school leaves students with more “wiggle room” in their schedule to pursue interests beyond the bachelor’s degree. This may mean a minor, or even taking fewer credits each term to allow for more extracurricular time.

But what about the students who do not arrive with AP credit — the students who have the grit and determination, but have not yet studied calculus and may not have stellar standardized test scores? These students are often overlooked by the more competitive institutions. When you’re looking at thousands of applications, you need a means of sorting through them quickly.

We’re left with two tiers of institutions. Students accepted into the lower tier of institutions are not coming in with much college-level coursework completed. Since those students will be expected to have a certain toolbelt upon graduation, those programs are forced to put their resources into the classroom experience. Students accepted into the upper tier of institutions – already having college credit – have the room to take a minor in philosophy or spend time working on research in a professor’s lab.

What’s the difference? Top-tier students have time for extracurriculars, which force them to juggle more complex questions and often lead to the personal/moral/ethical development required of today’s engineers. Lower-tier students are forced to put their time into learning technical content. They’re acquiring the tools but are not challenged to think about how to use them.

Both of these groups finish with bachelor’s degrees, the signal to employers that they’re ready for the workforce. What happens next? My gut tells me that top-tier students, equipped with experience in how to use the tools, are moving into leadership positions at a higher rate than their lower tier colleagues.

The questions I keep coming back to: Is the personal/moral/ethical development part of what is (or should be) a part of the bachelor’s degree? With the high cost of higher education, are we limiting the potential of some students by not allowing them the opportunity for that development? Does our current educational system, which nudges affluent students toward leadership, exacerbate the gap between the upper and lower socioeconomic classes?