From Fit to Benefit: the New Role of Learning Outcomes?

Learning outcomes, goals, objectives – however you swing it – are increasingly a topic of conversation on campuses. The Obama administration (and, more saliently, families and students at our admission and orientation events) wants to know what students get for enrolling. The conversation about selecting a college has shifted from fit to benefit. As Mark’s post pointed out, tuition is expensive. So colleges and universities need to better communicate their worth before students and families will shell out serious ca$h to attend.

To many of us in higher ed, this can feel overly transactional at times because we know that learning is more than inputs and outputs. But students, families, and the community at large have a right to ask why they should commit their money and time to such a large investment. So why not be transparent about why college matters and the impact of the college experience? We know why the experience is fulfilling and life changing, so let’s just tell people that instead of getting exasperated when they don’t understand (because we’re not telling them).

The Chronicle published an article (see a subscription-free version on the Augustana College website) about one college (Augustana College) attempting to convey its benefit to students by instituting learning outcomes in most outside-the-classroom experiences from athletics to student clubs and much in between. In the article, the college staff beautifully articulated the value of making extracurricular learning transparent to students. The director of advising comments that a major is only one part (though a very important part!) of what helps students find success when he says, “…it’s not what a student studies. It’s how they go about constructing an undergraduate education.” AGREED! Another staff member at a different institution comments on the importance of explicitly articulating these learning outcomes, “Until you make [students] say [their learning] out loud and prompt them to reflect on it, they may not make that connection at all.” AGREED!

I did a lot of AGREE-ing while reading this article. So, I highly suggest reading the entire piece.

But there was one quote that really got my goat: “But how much integration is too much? Advisers shouldn’t force students to fit their experiences into a neat package, and they should promote some degree of exploration.”

BAAAAAHHHH!!!

Or do I mean “BAAAAA”??

Ok, ok…bad goat joke.

When an institution or office constructs a list of learning outcomes, those outcomes do not (or at least should not) seek to dictate or narrow the capacity or diversity of any individual student’s learning. Instead, the list sets an intention for a learning foundation, one we hope students continue to build upon using their unique interests, values, and experiences. Learning outcomes aren’t trying to “fit [student] experiences into a neat package” and stifle exploration, as the article wonders. Rather, learning outcomes mark points on a map of students’ learning, so that they can explore and then look back to see where they’ve been and what tools they have to use moving forward.


The Narrative to the Numbers: Focus Groups

Assessment may draw on many sources of data (e.g., surveys, pre/post-tests, etc.) – especially when you’re assessing learning (due to its blessed messiness). Conducting a focus group can be a good way to collect the narrative that complements or explains the quantitative data; focus groups breathe life into an otherwise typically survey-based methodology (read: fancy way to say “the ways I plan to collect data”).

Not to mention, focus groups also tend to give the data a flair that surveys and other written methods have a harder time conveying: inspiration. I can collect all the survey data I want and find great trends and solid longitudinal results. But the minute I actually hear the same thing out of one student’s mouth, WHOA – talk about a call to action!

[Sailor Venus GIF]
Ok, for those who don’t know, I used to be a pretty…um…dedicated (read: obsessed 13-year-old) Sailor Moon fan. But anyway…

Focus groups are great (hooray for actually hearing the student voice!) but my goodness can they be a BEAR to plan, coordinate, collect the information, synthesize the information, follow up with participants, assign compensation (when applicable), triangulate the findings with other data, etc. Again, WHOA – talk about exhausting.

[Sailor Moon GIF]

I think some of this comes from the dichotomy you’re trying to achieve with a focus group: structured, yet open. You want to construct an environment beforehand that puts your participants in the right mindset to give you feedback on the specific topic you’re after (because you don’t want their feedback on anything and everything; you’re seeking their feedback about a certain kind of thing), while also leaving the environment open enough to get genuine feedback (because you don’t want to structure it so much that their feedback is just a regurgitation of what you, the facilitator, already know or think). Finding the balance between structured and open in a focus group seems more like an ideal you’re always working toward.

I recently conducted a focus group to gauge student perspectives on our office’s learning outcomes. I still have a few steps to go in the process, but here are a few early reflections on preparing for the focus group and creating the environment with the group, followed by questions I need to think about for next time:

Things I’m glad I considered beforehand:

  1. Clarify the aim of the focus group.
    • For me, I wanted student feedback on: [1] their usage and engagement with our learning outcomes, [2] the benefit of the learning outcomes to students, and [3] clarifying #1 and communicating #2 to students and the College community.
  2. Understand more precisely what I hoped to get out of their feedback.
    • I had eight discussion questions, drawn from the intended aim of the group, for them to work through.
  3. Anticipate what will get the group off track, and account for that.
    • I thought they might want to talk about lots of amazing ideas that, with our time and resources, could never be done. So I addressed that with them prior to starting.
  4. Decide the amount of context to give without overly directing their feedback.
    • I erred on the side of giving them little background because I wanted to hear as many new and different ideas as possible. So I shared our larger office vision for these student learning outcomes, but did not explain HOW (in detail) we hoped students would achieve them.
  5. Use different methods to engage everyone.
    • I had the students provide feedback to me in the large group, and then break into smaller groups to discuss the discussion questions, which accommodated different learning and communication styles.

But next time, I need to ask myself:

  • What visuals should I provide to convey our aim with the focus group and the topic they’re focusing on?
  • How should I use facilitation to encourage innovation and mitigate tangents?
  • How should I handle the conversation-dominators? How to better engage introverts?
  • Which other facilitators should I bring into the focus group to provide greater perspective on the participant feedback?
  • And, just overall, how can I do this better next time?

I’m still at the early stages with this focus group. Next up for me will be the fun part – analyzing the feedback! Woot woot! But for now, I want to hear from you. Does any of this resonate with your experiences with setting up focus groups? What were some of your successes? What are things that you would change next time? I want to hear from you! Pretty please comment below.

Giggle or Think?

Friday’s here, which means MORE THAN USUAL FUN!

If you’re looking to giggle, check out this video:

Want to think? Here’s an interesting article:
http://www.bbc.com/future/story/20150415-the-buttons-that-do-nothing

So, if fake buttons can make people more satisfied crossing the street, maybe we need a “click here to make assessment more fun” button…

Why College Tuition Continues to Rise

Have you heard? Tuition is expensive! College Board (the group behind high school AP courses) reports the average numbers — including tuition, fees, and room and board — for 2013-14: private ($42.4k), public in-state ($18.9k), public 2-year ($11k).

I recently stumbled upon a few articles on the topic. One blames the expansion of administration, especially top-administrator salaries. Another blames… well… the boom of administration. Full disclosure: I’m one of the staff members these folks believe there are too many of. I decided to look into this myself and see what I could find. The content below was originally an e-mail to my fiancée (sorry Kate, I just couldn’t stop). About halfway through, I decided to make it into a post.

Federal money goes into two main areas: loans/scholarships and research grants. It also appears to be split rather evenly between them (check it out). The research grants do nothing for the price of tuition, as they fund research. The scholarships don’t change the price of college either; they just offer a more accessible way to pay it.

This leaves the states with the responsibility of keeping their higher ed tuition cheap. But more and more students are going to college: enrollment grew from roughly 15 million to 20 million between 2000 and 2012, an increase of about 2.5% per year. And inflation has run about 3% per year over the past 10 years. Assuming schools aren’t adding services (which would raise costs further), state funding would need to rise about 5.5% annually over that period just to keep per-student cost and education quality the same — and I don’t think that’s been happening. If this report is reputable, per the bottom-right-most cell on page 27, it looks like, on average, states spent 23% less per student over the last 5 years — roughly a 5% decrease each year. All this while colleges are asked, and sometimes required, to provide more support.
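To make that back-of-the-envelope math concrete, here’s a minimal sketch (in Python) of the growth rates I’m juggling above. The figures are the rough approximations from this paragraph, not official statistics, so treat the output as an illustration of the logic rather than a finding.

```python
# Back-of-the-envelope math for the paragraph above.
# All figures are rough approximations, not official statistics.

enrollment_growth = 0.025   # ~2.5% more students per year (2000-2012)
inflation = 0.03            # ~3% inflation per year over the past decade

# To hold per-student, inflation-adjusted funding steady, total state funding
# would need to grow by roughly the two rates combined:
required_growth = (1 + enrollment_growth) * (1 + inflation) - 1
print(f"Required annual funding growth: {required_growth:.1%}")   # ~5.6%

# Instead, the report cited above suggests states spent ~23% less per student
# over five years, which works out to roughly this annual change:
actual_change = (1 - 0.23) ** (1 / 5) - 1
print(f"Implied annual change in per-student funding: {actual_change:.1%}")  # ~-5.1%
```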

Now consider that colleges need to compete for their students. What do students want? High rankings (prestige), sports, and fancy dining halls/gyms/facilities. Nobody is wowed by your tutoring program, your counseling office, or your student conduct office because nobody plans to use them. Strong students want to get into the highest ranked college they can for their program of interest.

So colleges (and their funding sources) need to choose: Do we want to bring in strong students OR focus on access? But hold on, what if our funding is tied to student performance? If so, why would we bring in students who we know are likely to struggle? The best way to bring up retention numbers is to bring in stronger students. How do we bring in stronger students? Sports, fancy buildings, etcetera.

The point? The current funding system makes it difficult for low and mid-level schools to exist. We want more students attending college, but don’t want to fund the support required for those less-talented students. Treating schools as businesses where the (financially) strong thrive and the weak fail is a poor strategy for keeping tuition down. I don’t know if there’s a secret model that allows for an inexpensive great education.

The bigger point? For us to move forward, the states and federal government need to sort out a big question: Are we committed to a system that allows a college education for all? Without some sort of consensus, funding correlates strongly with the economy and becomes unpredictable. The strong (e.g., Ivies and public flagships) weather the storm and the weak (e.g., community colleges) fail. If we’re not careful, we might end up with… uh oh… For U.S. Universities, the Rich Get Richer Faster

Of course, this is a complicated issue. The rising cost of tuition does not boil down to just one cause. As my choice of profession suggests, I believe in the value of a college education. I think the experience improves who we are as people. More than simply a set of coursework, it requires students to make decisions about their values and start uncovering their identity. I’m worried that in the search for an efficient education system, we’re squeezing the diversity out of the post-secondary options.

What Should Assessment Measure?

When starting an assessment — which, to me, is the moment you identify learning outcomes — I tend to back my way into the learning outcomes. I ask myself: what do we want students to gain from their whole college experience? I narrow that down to the outcomes we hope our office provides, then to outcomes for students at this particular time in their college experience, and then to the level of outcomes targeted by a specific effort — that is, what we do in our office.

Often, some lofty outcomes duck and dodge their way through every revision. I’m referring to outcomes along the lines of “student takes responsibility for their education and development.” Is that important? Definitely! …but what the… heck… does it mean? And when have students met this outcome? When they wake up and go to class? Or when they’ve decided on an interest and pursued information about that interest without the prodding of an advisor?

This leads me to the question: what should assessment measure? Do we reach for those lofty outcomes or aim for more measurable ones (e.g., student met with advisor*)? I’ve come to a conclusion on this. We need to aim for the measurable ones; then, when presenting the data, explain the implications for the lofty outcomes.

Here’s why:

I spent the first two years of my first advising job creating the ultimate assessment tool. A tool that would put Nate Silver’s presidential election result models to shame. The tool featured a set of “indicators” for each outcome. The idea: each outcome is complicated, so let’s take several different measurements that, together, would tell us the extent to which students meet the outcomes. I created an MS Word document to lay out the learning outcomes, then another to indicate which indicators told us about which outcomes. Finally, I created a PowerPoint presentation to clarify the overall process and indicate which measurements should be taken when.

Problem 1: Too many pieces! If you’re collecting data from 15 different sources each year (surveys, student data, focus groups, etc.), how will you keep all of that up? As my role within the office developed, I had less time for collecting data.

Problem 2: Try explaining to someone why this group of 7-8 indicators means that students are (or are not) able to assess and improve their study strategies. In time, I had two years of data and could not explain (or even understand) it in a way that we could use to improve our office services.

My suggestion to you? Keep it simple (for a rough sketch of what this might look like in practice, see the example after this list):

  1. Limit the number of learning outcomes you create.
  2. Don’t use more than 3 measurements (triangulation) to capture student achievement of an outcome.
  3. Focus on outcomes people (your office, your administration, your students) care about.
  4. Focus on outcomes for which your office is responsible. For example, establishing open communication with your roommate may be a good outcome for a residence life office but probably not for an advising office.
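
As promised above, here’s a minimal sketch of how an office might lay out a simple assessment plan along these lines. The outcomes and measures are made-up examples for illustration, not a real office’s list; the point is just that each outcome gets a small, explainable set of measures.

```python
# A minimal sketch of a "keep it simple" assessment plan.
# The outcomes and measures below are made-up examples, not a real office's list.

MAX_MEASURES_PER_OUTCOME = 3  # triangulate, but don't pile on indicators

assessment_plan = {
    "Students can identify campus resources relevant to their goals": [
        "end-of-semester survey item",
        "advising appointment notes",
    ],
    "Students can describe how their courses connect to their intended major": [
        "orientation pre/post question",
        "focus group prompt",
        "advisor rubric rating",
    ],
}

for outcome, measures in assessment_plan.items():
    # Enforce the "no more than 3 measurements per outcome" suggestion.
    assert len(measures) <= MAX_MEASURES_PER_OUTCOME, f"Too many measures for: {outcome}"
    print(outcome)
    print(f"  measured by: {', '.join(measures)}")
```

Nothing fancy, but keeping the whole plan in one small, readable structure like this also helps with the problem below: someone else in your office can pick it up and see exactly what gets measured and why.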

It’s easy to get caught up in the details and for your assessment strategy to become a monster. Just remember, if you’re hit by a bus**, you need a system that someone else in your office can pick up relatively easily.

*If you’re thinking “Mark, that’s an action, not a learning outcome,” bottle up that thought, I’m sure we’ll address the makings of a good learning outcome soon. In the meantime, feel free to browse this article from the NACADA website.

**Why is this phrase so popular? Are professionals particularly prone to bus accidents? If so, why is this not in the news?

Assessment + Humor (yes, they can coexist)


Friday is FUNday at Oh no! Time for a more-than-usual post…

Assessment, it’s easier than you think (what are the tubes telling us about the users? ;-D): 


What I justify to myself when I’m being a cranky-pants “know-it-all” (any other ESTJs in the house????):


What my writing process was like this week (Mark’s inbox can attest to this :-P):

How I felt when so many people tweeted at us, favorited our tweets, and/or retweeted our posts:


YOU GUYS! We cannot thank you enough for your support of Oh no. You have been the highlight of our week! Please continue reading, and please click “Subscribe” so you can get Oh no delivered right to your inbox. What luxury!

See you Monday!

Assessing FoMO: the Value of Self-Reported Data

Fear of Missing Out (FoMO) is a term any millennial knows. That deep, almost painful feeling of wanting to be where all the fun is being had. Where is that magical place? It’s wherever you’re not. I hear this in student conversations all the time, usually in relation to evening and weekend activities. When did “Jealous” become an appropriate response to a friend regaling you with tales of Saturday night?? Since FoMO.

Since Fear of Missing Out has such a strong hold on so many college-age people, maybe FoMO can provide some glimpses into student motivation. In student affairs, understanding topics like student motivation, development, leadership, and other similar concepts is important and impactful to our work. But how do you measure them? Capturing these concepts is messy and (mostly) requires data that comes from students’ own perceptions of themselves (i.e., self-reported). Numerous data people in my life look down upon self-reported data, saying, “How do we know (definitively and without flaw) that what a student reports is ‘accurate data’?” First of all, to that I say self-reported information IS data; and second, I think this fosters a different kind of fear in student affairs professionals. We fear that we cannot “do data” because much of it is self-reported. In order to quell the fear that some of you may be feeling, I want to look at one example of a study done using self-reported data.

A few researchers, Andrew K. Przybylski, Kou Murayama, Cody R. DeHaan, and Valerie Gladwell (2013), studied the FoMO phenomenon. I’m no statistician, and I imagine you may not be either, so I suggest you start reading at the summary (pp. 1846-1847). But in case you’re not going to read it at all, here’s a rough nutshell: the researchers built a FoMO scale (wow, how can I take THAT quiz??) and used it with three groups of people in three studies. The results are ones you probably would have guessed (e.g., people experiencing higher levels of FoMO tend to be younger, have less satisfaction in their own abilities and connections with others, and engage more frequently with Facebook), but doesn’t it feel good when your thoughts are backed by data?! :::le sigh:::

Now that you’re interested in the study (hopefully?), jog back to the methods and measures portions of the article, and you’ll find that these researchers tapped into something many in student affairs have been wondering: how do I take all the stuff I know to be true about student motivation, development, and out-of-the-classroom learning and test it? And then how do I capture those findings in a format that has that data-strength je ne sais quoi that can give out-of-the-classroom learning the credibility it deserves?

I know there were some scary words in this study, like “regression model” and “confidence interval”, but if you push through that and look at the bigger picture, I hope you’ll delight in the fact that the researchers used participant self-reported data to show impact (you know, similar to when we ask students whether they learned anything about themselves at our programs).

So, hooray! Ask your students to report their learning back to you in some sort of systematic way, and work the results into your next staff meeting. It doesn’t have to be hard; you don’t want to miss out, right?
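
If you want a starting point, here’s a minimal sketch of what that “systematic way” could look like once the responses are in. The survey items and numbers are hypothetical placeholders; the idea is just to summarize self-reported ratings consistently so you can bring something concrete to that staff meeting.

```python
# A minimal sketch of summarizing self-reported learning data.
# The survey items and responses below are hypothetical placeholders.

from statistics import mean

# Likert ratings: 1 = strongly disagree ... 5 = strongly agree
responses = {
    "I can name one campus resource I plan to use next semester": [4, 5, 3, 4, 5],
    "This program taught me something new about my own strengths": [5, 4, 4, 5, 3],
}

for item, ratings in responses.items():
    percent_agree = sum(r >= 4 for r in ratings) / len(ratings)
    print(item)
    print(f"  mean = {mean(ratings):.1f}, agree or strongly agree = {percent_agree:.0%}")
```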

How do you feel about asking students to self-report their learning, development, etc.? Have you had the same experience with self-reported student data?

Thanks to Computers in Human Behavior and Andrew K. Przybylski, Kou Murayama, Cody R. DeHaan, and Valerie Gladwell (2013) for the great article, and to College Humor for the great video.

Is what I’m doing assessment or evaluation?

I got into assessment via a love of learning outcomes (as opposed to Mark, who got into it via a love of SPSS…Mark + math = true love). Training as a French high school teacher (très chic!) meant taking quite a few classroom assessment courses studying concepts such as assessment, evaluation, goals, and outcomes. I spent years seeking to understand the nuanced differences and relationships between these terms – hence the flowchart I put together (see below – feel free to use it; just throw ‘Oh no’ some love):

[Flowchart: learning goal definitions]

I’ll get into all of these terms in later posts.

During the month of April, we’re focusing on assessment in the bigger picture, laying groundwork to really dive into detailed assessment topics. Today, I’m looking into the difference between assessment and evaluation. I know varying definitions exist, and what I’m about to embark upon may be controversial, but here’s my effort at making the assessment vs. evaluation distinction a little clearer.

Assessment is…

  • about LEARNING – what did the students learn?
  • finding out if what you intended to teach students is what they actually learned, AND to what degree they learned it
  • (ideally) driven by your institution, department, and/or office mission and/or goals
  • the tool you use to inform how you improve learning – if you’re not using the information taken from your assessment, then I’m not sure why you’re assessing – I mean, I know you all love assessment so much…HA!
  • not counting how many students were present or whether they were satisfied (counting and satisfaction are NOT assessment)

Evaluation is…

  • about value – did students think it was worth their time? (which is what you might be asking yourself about reading this blog ;-D)
  • gathering facts about the initiative/service/program (e.g., number of attendees, were they satisfied, was the event space good, was there enough food, how much did it cost, etc.…AND the results of your assessment) in order to answer any number of questions, such as:
    • who are we serving?
    • who are we not serving?
    • did students think it added value?
    • was it worth our staff time and resources?
  • (ideally) driven by your institution, department, and/or office mission and/or goals
  • the tool you use to determine if you should change the logistics, format, structure, or other elements of the initiative/service/program itself
  • count your heart out! Go ahead, ask if they’d suggest this to their friends. Find out if they wanted foie gras instead of pizza at the event. Ask away!

Both assessment and evaluation are needed to best serve students, but decide (before you get started) what YOU are seeking to know in order to ask the “right” questions. Most likely, you’ll use a little bit of both.

There’s so much more to all of this! Stay tuned for equally stimulating posts about:

  • goals vs. outcomes
  • how to create effective learning outcomes (or are they goals?!)
    • and how to align them with your institution, department, or office goals (or are they objectives?!)
  • data analytics vs. data mining (WHOA…wait a minute! – I promise you know more than you think you know)

Try to contain your excitement, people. This is a blog, not an N’Sync reunion concert.

Did I get it right? What would you add to or change about my definitions? What other common assessment/evaluation terms should we explore?

Ben and Jerry’s Free Cone Day

It’s Friday! Time for a more-than-usual fun post.


Did you know that this Tuesday, April 14th, is Ben and Jerry’s Free Cone Day?

This event usually comes with a line. At the 2014 Free Cone Day in Ann Arbor, Michigan, the line took about 30 minutes and looked like this…

[Photo: the 2014 Free Cone Day line in Ann Arbor]

One student shouted from the other side of the street: “Is this worth three dollars?”

Yes. Yes it is.

Why Won’t Students Listen?

I recently gave a presentation in front of a slew of prospective students. They had a variety of questions ranging from “how will my AP credit count?” to “what’s the difference between a co-op and an internship?” To be honest, those questions were from the parents. The students were in varied states of sleeping or looking at the floor.

Was the presentation that bad? Are you just so uninterested that you can’t manage to pay attention to a 45-minute presentation? Wait, is this my fault? Should we change the presentation? Should we have known that students in a group have a limited attention span? Should I have been more engaging in my responses to their questions? It’s almost as if they didn’t care about the research opportunities in robotics that we offer!?!

It’s unfortunate that the easiest means of communication are often the least effective. It’s fairly easy to plop some information into an e-mail or a presentation and put it out there. But it can be exhausting to pay attention for long periods of time, or even short periods if the information is boring or repetitive. Raise your hand if you’ve ever ignored the flight safety speech right before takeoff. Bonus points if you held your book or newspaper up as though to say, “No. I’m not listening. I’m much more interested in this crossword puzzle.” It’s not that the information is unimportant; it’s that we don’t think we’ll need it.

Think about all of the advice our students get. US News and World Report conveniently ranks institutions and gives median test scores — the students don’t even have to work to find their reaches and their safety schools! Their parents, afraid their student might move back in, are nudging them toward certain job-oriented majors.

Then students get to campus for orientation and we shout at them:

GET INVOLVED, BUT NOT TOO INVOLVED!

SPEND 30-45 HOURS PER WEEK STUDYING! (more than they spent on any single activity in high school)

USE OUR CAMPUS TUTORING RESOURCES, AND IF THIS STRESSES YOU OUT, IT’S NORMAL (AND VISIT OUR COUNSELING OFFICE!)

Our students don’t need more information. They don’t need automated e-mails with lists of the things they should be thinking about. They need the right information at the right time and they need help processing that information. This is where we come in. Students need help unpacking all of the information sent their way. They need support connecting what we have to offer with what’s right for them.

The support they need is not simple, it’s not easy, and it sure isn’t efficient. If you haven’t picked up on it yet, I’m hinting at mentorship: someone who knows a student and can help them craft their college experience. Through these relationships, students build trust with an individual who knows them. Someone who’s not disappointed when they’re not student body president or not getting straight A’s.

Most of our offices are not set up for mentorship. We have many students to serve and not much time for each one individually. I think the first step toward these relationships is to personalize an interaction whenever possible. Refer to something they said in an e-mail. Use their name frequently in the conversation. Find ways to communicate that they’re not just one of many students you work with. The more students feel like we know them, the more likely they are to hear the things we want them to hear.