When starting an assessment — which, to me, begins the moment you identify learning outcomes — I tend to back my way into them. I ask myself: what do we want students to gain from their whole college experience? I narrow that down to the outcomes we hope our office provides, then to outcomes for students at this particular point in their college experience, and finally to the outcomes targeted by a specific effort — that is, what we do in our office.
Often, some lofty outcomes duck and dodge their way through every revision. I’m referring to outcomes along the lines of “student takes responsibility for their education and development.” Is that important? Definitely! …but what the… heck… does it mean? And when have students met this outcome? When they wake up and go to class? Or when they’ve decided on an interest and pursued information about that interest without the prodding of an advisor?
This leads me to the question: What should assessment measure? Do we reach for those lofty outcomes or aim for those more measurable (e.g., student met with advisor*)? I’ve come to a conclusion on this. We need to aim for the measurable ones; then when presenting the data, explain the implications on the lofty outcomes.
I spent the first two years of my first advising job creating the ultimate assessment tool. A tool that would put Nate Silver’s presidential election models to shame. The tool featured a set of “indicators” for each outcome. The idea: each outcome is complicated, so let’s take several different measurements that, together, would tell us the extent to which students meet the outcomes. I created an MS Word document to lay out the learning outcomes, then another to indicate which indicators told us about which outcomes. Finally, I created a PowerPoint presentation to clarify the overall process and indicate which measurements should be taken when.
Problem 1: Too many pieces! If you’re collecting data from 15 different sources each year (surveys, student data, focus groups, etc.), how will you keep all of that up? As my role within the office developed, I had less time for collecting data.
Problem 2: Try explaining to someone why this group of 7-8 indicators means that students are (or are not) able to assess and improve their study strategies. In time, I had two years of data and could not explain (or understand) it in a way we could use to improve our office services.
My suggestion to you? Keep it simple.
- Limit the number of learning outcomes you create.
- Don’t use more than 3 measurements (triangulation) to capture student achievement of an outcome.
- Focus on outcomes people (your office, your administration, your students) care about.
- Focus on outcomes for which your office is responsible. For example, establishing open communication with your roommate may be a good outcome for a residence life office but probably not for an advising office.
It’s easy to get caught up in the details and let your assessment strategy become a monster. Just remember: if you’re hit by a bus,** you need a system that someone else in your office can pick up relatively easily.
*If you’re thinking “Mark, that’s an action, not a learning outcome,” bottle up that thought; I’m sure we’ll address the makings of a good learning outcome soon. In the meantime, feel free to browse this article from the NACADA website.
**Why is this phrase so popular? Are professionals particularly prone to bus accidents? If so, why is this not in the news?