Guest Blogger: Incentivizing the Residential Curriculum

A new paradigm has emerged in residential education in recent years. This paradigm, generally referred to as the “residential curriculum” approach, flips the script on how we view the role of residence halls in the life of the university. While the curriculum approach is newer, the concept of learning happening outside the classroom is not. The curriculum approach extends outside-the-classroom learning and uses the residence hall setting as a laboratory for student learning.

The traditional approach to learning in the residence halls can be summarized as the programmatic approach, in which student staff (e.g., Resident Assistants or RAs) lean on active programs that may be educational in nature but are, in many cases, purely social. Now hear me out—I am not against social programming. Social programming is necessary to build relationships among residents and develop a community of depth.

The new curriculum approach takes some cues from our friends in academic affairs by starting primarily with learning outcomes (i.e., what we want residents to learn). Once the learning outcomes are determined, the next task is to identify strategies that could help you achieve those outcomes. After a strategy is carried out comes the last, and most perplexing, step: assessing whether the learning outcome has been met, and thereby whether student learning has taken place.

Now in my eighth year working in residence education, I’ve been grateful to work with learning-outcomes-based models at three different institutions. At all three institutions I’ve observed the development of learning outcomes and strategies, but I’ve found that the last step, assessment, continually perplexes student affairs professionals in these contexts. We’ve mastered assessment of student satisfaction, but have found it much harder to quantify, or qualify, student learning. Why is that?

There are several dilemmas or barriers I’ve encountered in attempting to gauge student learning. First, many times our student staff are the ones carrying out our curriculum. In their role as peer educators, can they effectively assess the learning of their peers when this is something student affairs professionals have been trained to do? Second, time. It is hard enough to get students to attend an educational program, and adding an additional “last step” of an assessment survey can be asking a lot of students who have volunteered their own time to attend our program. What’s the incentive to complete the survey? For many students, there isn’t one. Third, many of our attempts at assessment fail and lean more toward anecdotes than valid evidence of learning. Wouldn’t it be great if, in a perfect world, we could have students stick around for a focus group and ask pointed questions that would help us elicit whether, in fact, learning took place?

Recently, I’ve been excited and grateful to join the Housing & Residence Life team at the University of Dayton in Dayton, Ohio. One of the decisions that played into my interest in applying for, and eventually accepting, my current position is the work my office is doing to promote student learning in our on-campus communities. Dayton is a highly residential campus (78%), and where students live is a central, even defining, part of the student experience. In an effort to better leverage this affinity for living on campus, we set out to incentivize student learning in August 2014, a year before I started working there. We incentivize student learning by making each strategy of our residential curriculum (i.e., community meetings, one-on-ones, roommate agreements, programs) worth one point. The more points students accumulate, the greater their chances of receiving favorable housing for the next academic year. We no longer have a housing lottery. Instead, we track the points students earn by participating in opportunities for student learning and apply those points to our housing assignments process. This puts students in the driver’s seat of their housing assignment process.

The main dilemma this could pose is something I thought about immediately when interviewing for my current job: are students really going to programs and “engaging in learning” because they want to, or because they just want the points? What I began to realize is that the student’s motivation for attendance isn’t really what we care about. What we care about is the experience and, hopefully, the learning that takes place through their attendance and participation in our learning opportunities. BUT how do we gauge if that learning is actually happening? Hence the dilemma.

What we may have on our side is the dangling carrot of attaining points. Could we ask students to participate in their learning experience AND then complete a short “assessment of their learning” in order for them to attain their points? This may enable us to collect some valid data which could help us demonstrate that students are learning in their residential environment, regardless of why they are there.
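Tracking a system like this is mostly bookkeeping. Here is a minimal sketch, in Python, of what assessment-gated points and a points-based housing order could look like; the names, point values, and data structures are hypothetical illustrations, not Dayton’s actual system.

```python
# Hypothetical sketch of assessment-gated points (names, values, and data
# structures are illustrative assumptions, not an actual campus system).
from collections import defaultdict

POINT_VALUE = 1                  # each curricular strategy is worth one point

points = defaultdict(int)        # student_id -> total points earned
responses = []                   # stored short-assessment answers

def record_participation(student_id, strategy, assessment_answers):
    """Log attendance, capture the short learning assessment, then award the point."""
    if not assessment_answers:   # no assessment completed, no point earned
        return False
    responses.append({"student": student_id,
                      "strategy": strategy,
                      "answers": assessment_answers})
    points[student_id] += POINT_VALUE
    return True

def housing_priority(student_ids):
    """Order students for room selection by points earned, highest first."""
    return sorted(student_ids, key=lambda sid: points[sid], reverse=True)

# Example: a resident attends a community meeting and answers two questions.
record_participation("s123", "community meeting",
                     {"I can name two campus resources": "Agree",
                      "I met a neighbor on my floor": "Strongly Agree"})
print(housing_priority(["s123", "s456"]))   # -> ['s123', 's456']
```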

Matt Kwiatkowski is the Assistant Director of Residence Life at the University of Dayton in Dayton, Ohio. Please leave your comments below or feel free to contact Matt via email at mkwiatkowski1@udayton.edu. 

Assessment Conferences

Assessment heads – happy almost November! There is so much assessment in November that I’m looking forward to, most specifically:

  1. Higher Education Assessment Friendship Summit (i.e., Mark + Abby + our incredible group of friends = meeting of the minds in the same time zone!), and
  2. my group presentation at the New England Educational Assessment Network (NEEAN – try saying it 3 times fast) fall forum at the College of the Holy Cross.
Stay tuned next week for Oh no, Friendship Summit edition. For this week, NEEAN!

I wrote last time about great assessment collaborations, and the NEEAN presentation is one result of those. You’ve all read our many, many, many (did I mention many?) posts about learning goals/outcomes. My office ties several of our programs and services to nine student learning goals, and we’re gearing up to do that on an even broader scale. The Associate Director of Institutional Research and Assessment at Carleton (Carol Trosset) has been invaluable as we move into this next phase.

At NEEAN, we’ll be exploring my office’s learning goals through one example of this expansion: our internship program. Students create learning goals and strategies prior to their summer internship. They write reflections during and after the internship about their learning, in order to capture their outcomes. Carol has been helping us code students’ goals so that we may understand on a larger scale what students intend and seek to learn prior to their internship. At NEEAN, we’ll be comparing Carleton’s process to the great process at Bennington College, where Carol examined the outcomes of student experiential learning. I can’t wait to learn more about the Bennington process and get inspired by so many of my fellow assessment professionals.

Next up on the home campus front, we’ll conduct focus groups with student interns about their learning outcomes. Over the next few weeks, I’ll have pre-interviews with student interns to start structuring the focus groups. So much great assessment happening – stay tuned for more assessment fun!

Stay the Course: Reminders for When Assessment Gets Messy

My friends for the assessment revolution! My office is gearing up to take the next step in our learning outcomes assessment efforts. I’m VERY excited! It’s going to be fun, intellectually and professionally fulfilling, and (most importantly and hopefully) provide meaningful insight into the student experience. But in addition to excitement, I am also a bit nervous because, as you’ve likely noticed, measuring learning is messy – which is the largest part of its difficulty, but also its beauty. In my research about student learning and assessment over the past few years, I’ve come to learn that it’s not just me who’s feeling this way:

In watching videos like the above and reading anything I can get my hands on, I’m hearing a few common themes (some old, some new) that I’m keeping in mind during this big year for our assessment efforts in the Career Center:

  1. Assess learning not just once, but at multiple different points and from different parts of the student experience. (read: Learning is happening all over campus, thus, assessing learning all over campus is not just a good idea, but needed.)
  2. Give students multiple opportunities to practice their learning in high-touch, intentional, reflection-centric ways. (read: It’s going to take a lot of time, there’s no quick fix, so settle in for the long haul and love the process.)
  3. Assessment tells the story of student learning, but let the student be the narrator. (read: Ask students to narrate their learning and they will tell you! Their story IS your assessment data. Now use that data to tell the story of student learning at large.)
  4. Set up assessment to do double duty for you – it can be a learning tool in and of itself, in addition to a data collection tool.

    “…a really interesting kind of analytics should reveal to the learner even more possibilities for their own connected learning. The analytics shouldn’t simply be a kind of a diagnosis of what’s happening now but analytics at their best can be a doorway that suggests what else is possible.” -Gardner Campbell, Vice Provost for Learning Innovation and Student Success at Virginia Commonwealth University

  5. Follow best practices in assessment while also breaking the mold, because learning’s blessed messiness means it’ll always need more than the gold standard. (read: Follow and break the “rules” of assessment at the same time – simple, right????)

It might be a messy year in assessment, but that’s ok, because it’s a worthwhile pursuit. And as my supervisor reminded me when I was wigging out about it recently: remember, nothing ventured, nothing gained.

So commit to the adventure and just do it.

Assessment on the Road: Boise State

Images of Idaho

My summer of travel is sadly coming to an end. I just got back from one of my last destinations: Idaho! (Boise, to be exact) It was my inaugural trip and I’m happy to report back what so many already know: Idaho is beautiful. (Thanks Wear Boise for the great beard image!)

Being the lover of colleges and campuses that I am, I had to visit Boise State University while I was there. Judging by all of the extra signage around campus, I was clearly there on a summer orientation day (which I love, because I love orientation).

Boise Career Center welcome

Once again, the Career Center of this new-to-me campus grabbed my attention. I REALLY loved their graphic representation of their career learning goals. What a great way to be transparent about the goals the Career Center has for students and to engage students in those goals. I think that transparency is a pivotal step toward students actually learning and achieving those goals!

CC make it count detail

Another data visualization from Boise State that I loved came from their admissions office. Here’s what I love about it:

It’s simple. Easy graphics, consistent color palette. Clean.

BSU data brochure5

It has numbers AND text. They bring life and strength to each other.

BSU data brochure

There are lots of different kinds of data. I think, more and more, the public wants LOTS of data all at once so that they can quickly skim and find what is most meaningful to them.

BSU data brochure2

It’s a nice size. Folds up as a small brochure (6″x3″) and folds out as a small poster (18″x12″).

BSU data brochure3

It’s a nice paper weight. I know, this sounds weird, but if you’re going to be printing nice graphics, you want to have nice paper on which to put them.

Thanks for the great visit Boise State U! Until next time!

High Impact Practices: Resources

Hands-on learning, experiential education, engaged learning, whatever you may call it, student affairs professionals can agree that creating an environment in which students test, reflect upon, and reapply their learning will result in better outcomes (read: more bang for your higher education buck). We know this anecdotally, but the high-impact practices (HIP) research out there provides the data to support the impact HIPs have on the collegiate experience, as well as gives professionals ideas and steps for how to enact all of this goodness (or, more likely, maximize what you already have). What is clear in all of the research is that the next level of this engaged learning is not the mere existence of experiential education, but rather that students have multiple opportunities to engage in high-impact learning and that we properly assess these efforts and students’ level of learning.

Provided today at Oh no are resources for you to dive in more…

According to George Kuh, via NSSE, high-impact practices:

  • demand considerable time and effort,
  • facilitate learning outside of the classroom,
  • require meaningful interactions with faculty and students,
  • encourage collaboration with diverse others, and
  • provide frequent and substantive feedback.

Below are the most widely recognized examples of HIPs, from AAC&U:

AAC&U table of high-impact practices

On the NSSE website, you can build your own report with the data they’ve collected in 2013 and 2014 – so fun!! Give it a try and sift through it to review the fun findings. Have I mentioned FUN!

Ashley Finley (on behalf of the AAC&U) provides some brief (though important) thoughts on proper execution of HIPs:

Other Videos to Watch (or more likely, just listen to in the background while you work on something else and occasionally look at):

  • George Kuh presentation about HIPs:

  • Ashley Finley’s plenary presentation about integrative learning:

Which high-impact practices are you working with? Where have you found success?

Zero to Assessment

As you know, it’s Make Assessment Easy Month here at Oh No. In the Engineering Advising Center, we recently (last year) revamped our office assessments, and I’ve learned oodles in the process. Whether you’re creating an office-wide strategy or a strategy to measure the success of a specific program owned by your office, these four steps (which I picked up from NACADA’s 2014 Assessment Institute) can help you get from nothing to a simple, focused, and effective strategy. Most of the links I reference come from NACADA, though the concepts are applicable to more than just advising.

Step 1, Create Learning Outcomes: NACADA recommends that learning outcomes focus on what we want students to know, do, and value (see last paragraph in Concept of Academic Advising). It’s good to keep this list short. We have 8 outcomes we focus on in our office. The longer your list, the longer (and more boring) your report of results. If your colleagues fall asleep while you’re discussing the results, you may have too many outcomes.

Step 2, Opportunities for Students to Achieve the Outcome: It’s good to have a plan for where we want students to achieve our desired outcomes. This portion might include workshops, advising appointments, tutorials, et cetera. In most cases, this is what you’re already doing! Hopefully.

Step 3, By What Time Should Learning Occur? This step helps you indicate when you’d like students to achieve your outcomes. For example, if you’re a career services office and you want students to have created a resume, you probably want that to happen sometime before they’re job searching. We often use student academic years/terms for this. For the resume example, your deadline might be by the end of their first year*.

*Originally I put “junior year” here. Abby’s response gave me the sense that career services folks would riot in the streets if this didn’t happen until the junior year. My sincere apologies! Feel free to pretend this deadline is anytime you see fit…

Step 4, How Will You Know if the Outcome Has Been Met? We use this step to determine when we’re going to make a measurement. It helps to limit yourself to just a few surveys or queries a year — this keeps your process sustainable. Common times to collect data are at the end of orientation, fall, and spring term.

In the end, you will have a table, with the learning outcomes as rows and each step as a column.
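If it helps to see the shape of that table, here is a minimal sketch with one made-up row; the outcome, opportunities, deadline, and measure below are hypothetical examples, not our office’s actual plan.

```python
# Hypothetical sketch of the four-column assessment plan described above.
# The single row below is a made-up example.
assessment_plan = [
    {
        "learning_outcome": "Students can identify the degree requirements for their major",
        "opportunities": ["orientation advising appointment", "degree-planning workshop"],
        "achieve_by": "end of the first year",
        "measure": "end-of-spring advising survey",
    },
    # ...one row per learning outcome (keep the list short!)
]

# Print the plan as a simple table: one learning outcome per row,
# with a column for each of the remaining steps.
for row in assessment_plan:
    print(row["learning_outcome"])
    print("  Opportunities:", ", ".join(row["opportunities"]))
    print("  Achieve by:   ", row["achieve_by"])
    print("  Measured via: ", row["measure"])
```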


This system works whether you’re creating an assessment for the entire office or if you’re just trying to assess one program. I’m using this process to assess our training and development of our orientation leaders this summer.

I hope you find this table structure useful. As you start to dive into the process of creating an assessment, you will come across questions that the table does not address (e.g., should we use surveys or focus groups or some combination of the two? Is our data valid? etc.). Just remember the KISS rule of thumb: Keep It Simple, Steve. You may want to replace “Steve” with your name. The assessment does not have to be perfect. It should be simple enough for you (or someone else) to explain and follow through on.

Writing Assessment Questions: Keep It Simple

The month of May at Oh no is focusing on making assessment easy. I mean, that’s always our focus, but we’re really honing in this month. Today I want to bring all of Mark’s and my theoretical musings on assessment and its purpose down to some tangible basics.

In my office (Career Center! woot! woot!) we have nine career learning outcomes toward which we hope students, in working with us, make significant learning progress by the end of their four years at Carleton. And our purpose for having these learning outcomes is fourfold:

[1] to be transparent with students (and families and the College community too) about what students will learn/be able to do by interacting with the Career Center, so that students can be partners with us in driving their learning,

[2] to hold ourselves accountable to offer programs and services that help students progress in this learning, being intentional to make sure our programs and services serve the purpose of helping students learn,

[3] to hold students accountable to be the drivers of their learning and career development because I can’t get the job for you and more than that, I can’t decide for you which career is going to help you pursue your meaningful life, and

[4] to show the value/impact of working with the Career Center.

Here’s a sampling of some of the learning outcomes:

  1. Understand how to work through a process of self-assessment and be able to identify their transferable and work-related skills.
  2. Learn about a wide variety of career fields and opportunities.
  3. Be able to market themselves through written communication to prospective employers and networks.

So if you’ve written some learning outcomes (hooray!) then at this point, how do you write your assessment questions to assess if the students learned them or not???

An easy way is…brace yourself…to just ask them if they learned it. Revolutionary, I know. Well worth reading this blog, huh? ;-P Here’s what I mean. If my learning outcome is:

Be able to market themselves through written communication to prospective employers and networks.

Then ask:

Indicate how much you [i.e., the student] agree or disagree with the following statements about the [insert program/service].

After attending the [insert program/service], I am able to market myself through written communication to prospective employers and networks.

Strongly Agree | Agree | Neither Agree nor Disagree | Disagree | Strongly Disagree

Done!
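And once the responses come in, tallying them is quick. Here’s a minimal sketch, with made-up responses, of computing the share of students who agree or strongly agree; that number is what you could benchmark against next year.

```python
# Hypothetical sketch: tally Likert responses for one outcome question.
# The response data below are made up for illustration.
from collections import Counter

responses = ["Strongly Agree", "Agree", "Agree",
             "Neither Agree nor Disagree", "Disagree"]

counts = Counter(responses)
agree_share = (counts["Strongly Agree"] + counts["Agree"]) / len(responses)

print(counts)
print(f"{agree_share:.0%} agree or strongly agree")  # benchmark this year to year
```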

Now, some of you assessment scholars (who I’m sure have nothing better to do than read this blog full of Sailor Moon gifs and Mark’s jokes ;-P) might say that using solely this kind of assessment is too shallow and wouldn’t hold up in a journal article. And to that I say, this would be only one part of a more rigorous methodology for publishing an article. BUT, I’m guessing most of you aren’t reading Oh no because you’re trying to publish, but rather to better your current practice/office initiatives. And as we’ve mentioned before, there is value in using student-reported data (especially when it’s benchmarked in one year and then measured again the next). Assessing your services will not only keep your initiatives in line with the mission of your office/institution and the learning environment, it’ll also give greater purpose and structure to the work you do.

There is so much more to say on this topic, but for now, I wanted to give a very specific, practical, simple, “you could implement this minute” tip. What are some tips you might have for keeping assessment questions simple?

From Fit to Benefit: the New Role of Learning Outcomes?

Learning outcomes, goals, objectives – however you swing it – are increasingly a topic of conversation on campuses. The Obama administration (and, more saliently, families and students at our admission and orientation events) wants to know what students get for enrolling. The conversation about selecting a college has gone from focusing on fit to benefit. As Mark’s post pointed out, tuition is expensive. So colleges and universities need to better communicate their worth before students and families will shell out serious ca$h to attend.

To many of us in higher ed, this can feel overly transactional at times, because we know that learning is more than inputs and outputs. But students, families, and the community at large have a right to ask why they should make such a large investment of money and time. So why not be transparent about why college matters and the impact of the college experience? We know why the experience is fulfilling and life changing, so let’s just tell people that instead of getting exasperated by their not understanding (because we’re not telling them).

The Chronicle published an article (see a subscription-free version on the Augustana College website) about one college (Augustana College) attempting to convey its benefit to students by instituting learning outcomes in most outside-the-classroom experiences from athletics to student clubs and much in between. In the article, the college staff beautifully articulated the value of making extracurricular learning transparent to students. The director of advising comments that a major is only one part (though a very important part!) of what helps students find success when he says, “…it’s not what a student studies. It’s how they go about constructing an undergraduate education.” AGREED! Another staff member at a different institution comments on the importance of explicitly articulating these learning outcomes, “Until you make [students] say [their learning] out loud and prompt them to reflect on it, they may not make that connection at all.” AGREED!

I did a lot of AGREE-ing while reading this article. So, I highly suggest reading the entire piece.

But there was one quote that really got my goat: “But how much integration is too much? Advisers shouldn’t force students to fit their experiences into a neat package, and they should promote some degree of exploration.”

BAAAAAHHHH!!!

Or do I mean “BAAAAA”??

Ok, ok…bad goat joke.

When an institution or office constructs a list of learning outcomes, those learning outcomes do not (or at least should not) seek to dictate or narrow the capacity or diversity of any individual student’s learning. Instead, when an institution constructs a list of learning outcomes, it provides an intentional foundation for learning, a foundation we hope students continue to build upon using their unique interests, values, and experiences. Learning outcomes aren’t trying to “fit [student] experiences into a neat package” and stifle exploration, as the article wonders. Rather, learning outcomes mark points on a map of students’ learning, so that they can explore and then, as a result, look back and see where they’ve been and what tools they have to use moving forward.

What Should Assessment Measure?

When starting an assessment — which, to me, is the moment you identify learning outcomes — I tend to back my way into the learning outcomes. I ask myself: what do we want students to gain from their whole college experience? I narrow that down to the outcomes we hope our office provides, then to outcomes for students at this particular time in their college experience, and then to the level of outcomes targeted by a specific effort — that is, what we do in our office.

Often, some lofty outcomes duck and dodge their way through every revision. I’m referring to outcomes along the lines of “student takes responsibility for their education and development.” Is that important? Definitely! …but what the… heck… does it mean? And when have students met this outcome? When they wake up and go to class? Or when they’ve decided on an interest and pursued information about that interest without the prodding of an advisor?

This leads me to the question: what should assessment measure? Do we reach for those lofty outcomes or aim for the more measurable ones (e.g., student met with advisor*)? I’ve come to a conclusion on this. We need to aim for the measurable ones; then, when presenting the data, explain the implications for the lofty outcomes.

Here’s why:

I spent the first two years of my first advising job creating the ultimate assessment tool. A tool that would put Nate Silver’s presidential election models to shame. The tool featured a set of “indicators” for each outcome. The idea: each outcome is complicated, so let’s take several different measurements that, together, would tell us the extent to which students meet the outcomes. I created an MS Word document to lay out the learning outcomes, then another to indicate which indicators told us about which outcomes. Finally, I created a PowerPoint presentation to clarify the overall process and indicate which measurements should be taken when.

Problem 1: Too many pieces! If you’re collecting data from 15 different sources each year (surveys, student data, focus groups, etc.), how will you keep all of that up? As my role within the office developed, I had less time for collecting data.

Problem 2: Try explaining to someone why this group of 7-8 indicators means that students are (or are not) able to assess and improve their study strategies. In time, I had two years of data and could not explain it (or understand it) in a way that we could use to improve our office services.

My suggestion to you? Keep it simple.

  1. Limit the number of learning outcomes you create.
  2. Don’t use more than 3 measurements (triangulation) to capture student achievement of an outcome (see the sketch after this list).
  3. Focus on outcomes people (your office, your administration, your students) care about.
  4. Focus on outcomes for which your office is responsible. For example, establishing open communication with your roommate may be a good outcome for a residence life office but probably not for an advising office.
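To make those limits concrete, here’s a minimal sketch of a keep-it-simple check; the outcomes, measures, and limits are hypothetical examples, not our office’s actual plan.

```python
# Hypothetical sketch of a keep-it-simple check for an assessment plan.
# The outcomes, measures, and limits below are made-up examples.
plan = {
    "Students can explain the degree requirements for their major": [
        "fall advising survey", "degree audit completion rate"],
    "Students identify one campus resource relevant to their goals": [
        "post-workshop survey"],
}

MAX_OUTCOMES = 8   # keep the list of outcomes short
MAX_MEASURES = 3   # no more than three measurements (triangulation) per outcome

assert len(plan) <= MAX_OUTCOMES, "Too many outcomes: trim the list"
for outcome, measures in plan.items():
    assert len(measures) <= MAX_MEASURES, f"Too many measurements for: {outcome}"

print("Plan passes the keep-it-simple check.")
```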

It’s easy to get caught up in the details and for your assessment strategy to become a monster. Just remember, if you’re hit by a bus** you need a system that someone else in your office can pick up relatively easily.

*If you’re thinking “Mark, that’s an action, not a learning outcome,” bottle up that thought, I’m sure we’ll address the makings of a good learning outcome soon. In the meantime, feel free to browse this article from the NACADA website.

**Why is this phrase so popular? Are professionals particularly prone to bus accidents? If so, why is this not in the news?