Survey Results? 5’s All Over The Place

A Friday post?!? Yeah. Who knows how this happened.

A year ago, I created a short program assessment for our Peer Advisor program. The idea: to capture the extent to which our Peer Advisors (PAs) learned what we wanted them to learn. Some of the learning outcomes were related to office function, for example: How comfortable do you feel answering the phone? Others were a bit more about whether they are drinking the advising Kool-Aid — apparently Kool-Aid is spelled with a “K” — things like: To what extent do you agree that professional development is important to you? Of course, the questions used a Likert scale (pronounced LICK-ert. I learned that at a conference once. I’ll tell you the story later).

The results, you ask? 5’s. 5’s all over the place (out of 5). Half of the not-5’s were the result of one peer advisor who must (…oh god I hope) have chosen 1’s but meant 5’s. What do I make of this? With such a small sample size (we hire 10 PAs per year), a 4.78 and a 4.89 sound near-identical to me. I’d certainly be hesitant to conclude that the 4.78 is all that different from the 4.89. Deep within me is a fool shouting this must mean everything is perfect! My brain says otherwise. Maybe this assessment needs some work.
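To see why I shrug at that gap, here is a minimal sketch (Python, with made-up scores for ten PAs): the rough 95% intervals around two very similar means overlap almost completely, so the difference could easily be noise.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical 1-5 Likert scores from ten Peer Advisors (made up for illustration)
question_a = [5, 5, 5, 5, 5, 5, 5, 5, 5, 4]  # mean 4.9
question_b = [5, 5, 5, 5, 5, 5, 5, 4, 4, 4]  # mean 4.7

for name, scores in [("Question A", question_a), ("Question B", question_b)]:
    se = stdev(scores) / sqrt(len(scores))   # standard error of the mean
    print(f"{name}: {mean(scores):.2f} +/- {1.96 * se:.2f} (rough 95% interval)")
```

With ten responses, those intervals are a few tenths of a point wide, which is bigger than the gap between the means themselves.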

As a quick side note: I’m a firm believer in the Strongly Agree to Strongly Disagree scale. When you use a standard scale of measurement, you can compare one question to another — which can be useful.

So what’s a man/woman/assessmenteur to do?  I have a few ideas:

Avoid the whole situation in the first place. When designing your surveys, consider how students are likely to respond to a given question. If your gut (trust your gut!) tells you almost every student will respond 4 or 5, raise the bar. Look for easy ways to bring down your scores. For example, replace words like familiar with, aware of, etc. with confident, command of, etc. If your question is quantitative or involves a frequency, up the numbers — for example, weekly not monthly.

Continue avoiding the situation in the first place! That is, avoid leading questions. Take the true-or-false question “Do you know that Michigan has the winningest softball coach in the country?” Well, I don’t think I knew until you asked, and I don’t want to be wrong, so… TRUE!
Lump your responses. Focus on the number of students who select “agree” or “strongly agree.” For example, “90% of students agreed or strongly agreed with statement x.” By lumping scores together, you blur some of the noise created by so many response options, and the data is simpler to look at. You can also check the negative side. Let’s say you want to know if students are adjusting socially. You might want to see how many students disagreed or strongly disagreed with the statement “I have made friends on campus.”
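Here is what that lumping looks like in practice, as a minimal sketch in Python with made-up responses:

```python
# Made-up responses to the statement "I have made friends on campus."
responses = ["Strongly Agree", "Agree", "Agree", "Neutral", "Agree",
             "Strongly Agree", "Disagree", "Agree", "Strongly Agree", "Agree"]

agreed = sum(r in ("Agree", "Strongly Agree") for r in responses)
disagreed = sum(r in ("Disagree", "Strongly Disagree") for r in responses)

print(f"{agreed / len(responses):.0%} agreed or strongly agreed")           # 80%
print(f"{disagreed / len(responses):.0%} disagreed or strongly disagreed")  # 10%
```

A single headline number (“80% agreed or strongly agreed”) is usually easier for an audience to digest than a five-bar distribution.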

Break down your demographics. If you have a large enough sample size, try looking at how the men responded in comparison to the women, or how the history majors responded versus how the engineers responded. While I don’t recommend breaking down the data just for the sake of breaking it down — unless you have bundles of time on your hands — this might yield insights you otherwise would have missed.
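If your responses live in a spreadsheet, the breakdown itself is quick. A minimal sketch, assuming pandas and made-up data (and, again, only worth reading into if each group is reasonably large):

```python
import pandas as pd

# Made-up survey rows: one row per student
df = pd.DataFrame({
    "major":    ["History", "History", "History", "Engineering", "Engineering", "Engineering"],
    "response": ["Strongly Agree", "Agree", "Agree", "Agree", "Neutral", "Disagree"],
})

df["agreed"] = df["response"].isin(["Agree", "Strongly Agree"])

# Share of each major that agreed or strongly agreed
print(df.groupby("major")["agreed"].mean().map("{:.0%}".format))
```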

ELSEWHERE IN HIGHEREDLAND:
Tax on higher ed endowments? Higher ed funding, you’re going the wrong way!


Assessment of Academic Probation

As our office continues to develop, we’re starting to implement smaller assessment pieces to pair with larger programs. Two years ago, we created an assessment of our academic advising (probably a good place for an advising office to start). Since then, we’ve initiated assessments for our Peer Advising program (student staff who support our office) and our Peer Mentor program (a mentorship program we offer the students we serve).

This week, we discussed adding a new assessment for our probation program. We’ve established a structured process for students who are not performing well academically, but we do not yet have a means of evaluating that effort. I thought I’d share some of my thoughts on this assessment, as disconnected and undeveloped as they may be.

This is a difficult group to assess. For one thing, we put a lot of work into developing relationships with our students. I don’t want to jeopardize that relationship with an end-of-term You failed your classes. Here, take this survey and become a row on a spreadsheet. These students need to feel like their advisor knows them. That’s an assumption, but I’d like to hear from those who disagree. I can’t help but feel that collecting data from them directly treats them like mail on a conveyor belt.

Beyond that, we make a lot of assumptions about what’s “good” or “right” for this group. For example, that they’re attending tutoring or meeting with a counselor. It’s quite possible for a student to completely follow the plan we lay out in September and still find themselves struggling. If we decide Steve should be using a planner, and he uses the planner dutifully all semester and still struggles, does that mean our process is working (he followed it) or failing (he’s still struggling)? Most can agree that a solid GPA is an important outcome, but if we look solely at GPA, we might miss a lot of relevant progress a student is making. Would a lighter/easier class schedule then skew the assessment — making it look like a student made more progress than they really did?

What are we assessing here? Clearly, GPA is important, but can’t a student make progress personally without doing well academically? In that case, should our assessment conclude that our process is not succeeding?

Just who are we assessing here? How much impact can we make on a student? Though we’re charged with supporting the student, we’re not the ones taking the exams. Should our assessment measure a student’s progress or how well they adhere to our support program? If the former, it seems we’re measuring a group’s natural talents — that is, a good group might skew the data to make it look like our process is working. If the latter, we’re assuming that we know what’s right for the student and perhaps ignoring what’s really important. Yes, that was vague on purpose.

The question-to-statement ratio in this post is a bit out of control; I apologize for that. I’ll keep thinking and perhaps put together a post with some practical thoughts rather than the broad ones I pose above.

Keep on keeping on and if you have any thoughts, please reply.

Sharing Data Effectively

One of the challenges we assessment-ites have is what data to share and how to share it. When sharing data, you want it to be both interesting and appropriate to the intended audience. For data to have impact, it must be interesting. But not all data should be shared. Because I don’t have a better word for it, I’ll call that the “appropriateness” of the data. If the data is detrimental to your mission, it may not be appropriate to share.

It all starts with your intended audience. Is the audience your director? The dean? Students? Once you have the intended audience, it’s helpful to visualize your data with the chart below:

[Chart: a 2x2 grid plotting data by how interesting and how appropriate it is for the intended audience]

I created this chart from the perspective of the students; if your audience is the math department, Dr. A’s calculus class becomes more appropriate. Similarly, if I’m the intended audience, how many bagels am I eating each week? The idea is for all of your reported data to fall in the upper-right quadrant.

And now, more on the appropriateness of the data…. 

At our last advisor meeting in the fall term, we discussed a new tool available to our students. This tool, integrated into the registration system, gives students information about the courses in which they might enroll. The system tells them which courses past students often took after a given class and the degrees those students sought. It even shows them the grade distribution of the class over the past few years. I had the requisite student affairs knee-jerk reaction: but do we want students to see grade data? Will they then avoid the “hard” classes and lean toward the “easy” ones? I put quotes around “hard” and “easy” because, you know, there are no easy or hard classes — every student’s experience is different.

After learning about the student interface, we were introduced to the staff interface. What we see has MUCH more information. The system allows us to drill down and look at specific groups of students (sophomores, engineering students only, underrepresented groups, etc.). It’s a powerful tool I found myself lost in for about 45 minutes that afternoon. It’s the Netflix of work; once opened, who knows how long you’ll be in there.

My thoughts bounced around like a ping pong ball in a box of mousetraps. From Students should not be able to see this! They’ll simply take the easier courses! to Students should have access to EVERYTHING! They need to learn to make data-driven decisions! Then I started to settle down.

It’s good for us to share information with our students — especially information that interests them. They’ll make a ton of decisions in their lifetime and need to navigate the information that’s out there. Sure, some of them will choose the high average GPA classes, but would they have been better served if we stuck to the usual “Nope. I can’t give you any guidance on this. You need to surf the thousand-course course guide and find a class that interests you.“?

But some data shouldn’t be widely available. If you’re trying to increase the number of women in your institution’s math major, it might be counterproductive to let undeclared students see “Oh, this major is 90% men. I don’t know if that’s a place I want to be.” It seems to me that kind of information sustains the imbalances we already have and are trying to mitigate.

To conclude…

It’s easy to get pulled into the “oh, can we add a question on ______ in the survey?” cycle. If you’re not careful, you end up with an oversized Excel spreadsheet and a bored audience. When you feel the survey creep happening, get back to the questions: Who is this for? Is this interesting to them? Is it appropriate for them?

Now go YouTube some other videos of ping pong balls and mousetraps.

Where do I begin?

 

It starts when you notice the local construction projects winding down. Then you’re cut off by a Ford Escape with New Jersey plates and a back full of clothes, books, a pink rug, and one of those chairs made solely of bungee cords. That’s right, it’s back to school season.

Abby and I met for a pre-season re-vamp of Oh No, and you can look forward to two posts a week this year — Mondays and Thursdays. We’re trimming a bit because those Friday posts came upon us awful fast and we want to keep this thing valuable. So without further ado…

I met with a colleague from across campus this week. She works for a newer (and smaller) office just starting to wrap its mind around how to capture its value to the students it serves. The office focuses on developing an entrepreneurial mindset in our students and supporting student ideas from conceptualization to implementation. To further complicate its assessment process, the office is not yet on permanent funding and is therefore under pressure to justify its existence.

I’ve already covered starting over in my Zero to Assessment post; however, this conversation yielded a few new questions I wanted to chew on a bit.

What if I don’t know what students are learning from this experience? In the old, dusty textbooks of assessment you’ll find a flow chart looking something like this…

[Flow chart: the standard assessment cycle]

oh, well hello smart art…

This flow chart is helpful if you have clear and measurable learning outcomes, but it leaves out instructions for when your outcomes are a bit cloudy. My colleague proposed measuring this through a series of qualitative questions — which, despite my aversion to the labor-intensive nature of properly analyzing qualitative responses, seemed appropriate given the situation. And you know what, old dusty textbook I made up to illustrate my point: if an office centered on innovation can’t build the plane while flying it, can any office? That is, if we can’t get an initiative started until we have every detail (e.g., assessment) ironed out, we’ll miss out on a good number of valuable initiatives.

While I’m complaining about the rigidity of fictitious textbooks, it’s worth acknowledging that neither she nor I was all too sure of how she would analyze the data she’s collecting. It would be great if she had the labor to code each response, but that doesn’t seem likely. I think this is okay. It takes a few cycles to get an assessment process ironed out. Even by simply reading through the responses, she’ll get a feel for what her students are learning and how to better support them.
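If she ever wants something between “just reading the responses” and full-blown coding, a rough keyword tally can give a first look at the themes. A minimal sketch with made-up responses and made-up theme keywords (this is not a substitute for proper qualitative coding):

```python
# Made-up open-ended responses and theme keywords; a rough first pass,
# not a substitute for proper qualitative coding.
responses = [
    "I learned how to pitch my idea to people outside my team",
    "The mentors helped me turn a prototype into an actual plan",
    "Mostly I learned to pitch better and to take feedback",
]

themes = {
    "pitching":    ["pitch"],
    "prototyping": ["prototype", "build"],
    "feedback":    ["feedback"],
}

counts = {
    theme: sum(any(word in r.lower() for word in keywords) for r in responses)
    for theme, keywords in themes.items()
}
print(counts)  # {'pitching': 2, 'prototyping': 1, 'feedback': 1}
```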

How do I get students to reply to my surveys? If I ever figure this out, I’m leaving the profession of higher education to hang out with the inventor of Post-it notes on an island covered in boats, flat-screen TVs, and Tesla convertibles. And, I guess, charging stations for the convertibles.

Very few people know how to do this well; however, I’ve come across a few strategies that seem to be working.

-Make it personal. More than half of my job is forming relationships with students. Surveys are one of the times I leverage those relationships. I’ll often send the survey link in an e-mail noting (very briefly) the importance of the data collected in this survey, letting them know that every response is a favor to me (for all of the time I spend e-mailing them with answers to questions I’ve already answered in previous e-mails, this is the least they could do). If you’re sending a survey out to thousands, you can expect a very low return rate.

-Get ‘em while they’re captive. Do you have advising meetings with students at the start of these programs? Is there an application to get into your program? Can you (easily) tie the survey in as a requirement for completing the program? I don’t mean to hint that surveys are the only means of collecting assessment data — but they’re direct, effective, and tend to be less labor-intensive than other means.

Countdown to College football: 3 DAYS!

Zero to Assessment

As you know, it’s Make Assessment Easy Month here at Oh No. In the Engineering Advising Center, we recently (last year) re-vamped our office assessment(s), and I’ve learned oodles in the process. Whether you’re creating an office-wide strategy or a strategy to measure the success of a specific program owned by your office, these four steps (which I picked up from NACADA’s 2014 Assessment Institute) can help you get from nothing to a simple, focused, and effective strategy. Most of the links I reference come from NACADA, though the concepts are applicable to more than just advising.

Step 1, Create Learning Outcomes: NACADA recommends that learning outcomes focus on what we want students to know, do, and value (see last paragraph in Concept of Academic Advising). It’s good to keep this list short. We have 8 outcomes we focus on in our office. The longer your list, the longer (and more boring) your report of results. If your colleagues fall asleep while you’re discussing the results, you may have too many outcomes.

Step 2, Opportunities for Students to Achieve the Outcome: It’s good to have a plan for where and when we want students to achieve our desired outcomes. This portion might include workshops, advising appointments, tutorials, etcetera. In most cases, this is what you’re already doing! Hopefully.

Step 3, By What Time Should Learning Occur? This step helps you indicate when you’d like students to achieve your outcomes. For example, if you’re a career services office and you want students to have created a resume, you probably want that to happen sometime before they’re job searching. We often use student academic years/terms for this. For the resume example, your deadline might be by the end of their first year*.

*Originally I put “junior year” here. Abby’s response gave me the sense that career services folks would riot in the streets if this didn’t happen until the junior year. My sincere apologies! Feel free to pretend this deadline is anytime you see fit…

Step 4, How Will You Know if the Outcome Has Been Met? We use this step to determine when we’re going to make a measurement. It helps to limit yourself to just a few surveys or queries a year — this keeps your process sustainable. Common times to collect data are at the end of orientation, fall, and spring term.

In the end, you will have a table, with the learning outcomes as rows and each step as a column.

[Table: learning outcomes as rows; Steps 1 through 4 as columns]
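As an illustration, one row of that table might read something like this (the outcome and details here are made up, not pulled from our actual plan):

  Outcome (Step 1): Students can build a balanced course schedule.
  Opportunities (Step 2): Orientation advising appointment; registration workshop.
  By when (Step 3): End of the first term.
  Measurement (Step 4): One question on the fall survey, plus a quick registration-system query.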

This system works whether you’re creating an assessment for the entire office or if you’re just trying to assess one program. I’m using this process to assess our training and development of our orientation leaders this summer.

I hope you found this table useful. As you start to dive into the process of creating an assessment, you will come across questions that the table does not address (e.g., should we use surveys, focus groups, or some combination of the two? Is our data valid? etc.). Just remember the KISS rule of thumb: Keep It Simple, Steve. You may want to replace “Steve” with your name. The assessment does not have to be perfect. It should be simple enough for you (or someone else) to explain and follow through on.

Giggle or Think?

Friday’s here, which means MORE FUN THAN USUAL!

If you’re looking to giggle, check out this video:

Want to think? Here’s an interesting article:
http://www.bbc.com/future/story/20150415-the-buttons-that-do-nothing

So, if fake buttons can make people more satisfied while crossing the street, maybe we need a “click here to make assessment more fun” button…

Why College Tuition Continues to Rise

Have you heard? Tuition is expensive! College Board (the group behind high school AP courses) reports the average numbers — including tuition, fees, and room and board — for 2013-14: private ($42.4k), public in-state ($18.9k), public 2-year ($11k).

I recently stumbled upon a few articles on the topic. One blames the expansion of administration, especially top-administrator salaries. Another blames… well… the boom of administration. Full disclosure: I’m one of the staff members these folks believe there are too many of. I decided to look into this myself and see what I could find. The content below was originally an e-mail to my fiancée (sorry Kate, I just couldn’t stop). About halfway through, I decided to make it into a post.

Federal money goes into two main areas: loans/scholarships and research grants. It also appears to be split rather evenly between them (check it out). The research grants do nothing for the price of tuition, as they fund research. The scholarships do nothing about the price of college aside from offering a more accessible way to pay for it.

This leaves the states with the responsibility of keeping their higher ed tuition cheap. But more and more students are going to college: from 15 to 20 million between 2000 and 2012, an increase of about 2.5% per year. And inflation has run about 3% per year over the past 10 years. Assuming schools are not offering more services (which would raise costs further), state funding would need to rise about 5.5% annually over that period to keep per-student cost and education quality the same — and I don’t think that’s been happening. If this report is reputable, per the bottom-right-most cell on page 27, it looks like, on average, states spent 23% less per student over the last 5 years — roughly a 5% decrease each year. All this while colleges are asked, and sometimes required, to provide more support.
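For the curious, here is that arithmetic as a quick sketch, using the rough figures cited above:

```python
# Back-of-the-envelope check of the numbers above
enrollment_growth = (20 / 15) ** (1 / 12) - 1           # 15M -> 20M students, 2000-2012
inflation = 0.03                                        # ~3% per year
needed = (1 + enrollment_growth) * (1 + inflation) - 1  # growth needed to hold per-student funding flat

print(f"Enrollment growth: {enrollment_growth:.1%} per year")  # ~2.4%
print(f"Funding growth needed: {needed:.1%} per year")         # ~5.5%

# The reported 23% cut in per-student spending over 5 years, annualized:
annual_cut = 1 - 0.77 ** (1 / 5)
print(f"Per-student spending cut: {annual_cut:.1%} per year")  # ~5.1%
```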

Now consider that colleges need to compete for their students. What do students want? High rankings (prestige), sports, and fancy dining halls/gyms/facilities. Nobody is wowed by your tutoring program, your counseling office, or your student conduct office because nobody plans to use them. Strong students want to get into the highest ranked college they can for their program of interest.

So colleges (and their funding sources) need to choose: Do we want to bring in strong students OR focus on access? But hold on, what if our funding is tied to student performance? If so, why would we bring in students who we know are likely to struggle? The best way to bring up retention numbers is to bring in stronger students. How do we bring in stronger students? Sports, fancy buildings, etcetera.

The point? The current funding system makes it difficult for low- and mid-level schools to exist. We want more students attending college, but we don’t want to fund the support those less-talented students require. Treating schools as businesses where the (financially) strong thrive and the weak fail is a poor strategy for keeping tuition down. I don’t know if there’s a secret model that allows for a great, inexpensive education.

The bigger point? For us to move forward, the states and federal government need to sort out a big question: Are we committed to a system that allows a college education for all? Without some sort of consensus, funding correlates strongly with the economy and becomes unpredictable. The strong (e.g., the Ivies and public flagships) weather the storm and the weak (e.g., community colleges) fail. If we’re not careful, we might end up with… uh oh… For U.S. Universities, the Rich Get Richer Faster

Of course, this is a complicated issue. The rising cost of tuition does not boil down to just one cause. As my choice of profession makes evident, I believe in the value of a college education. I think the experience improves who we are as people. More than simply a set of coursework, it requires students to make decisions about their values and start uncovering their identity. I’m worried that in the search for an efficient education system, we’re squeezing the diversity out of our post-secondary options.

What Should Assessment Measure?

When starting an assessment — which, to me, is the moment you identify learning outcomes — I tend to back my way into the learning outcomes. I ask myself: what do we want students to gain from their whole college experience? I narrow that down to the outcomes we hope our office provides, then to outcomes for students at this particular time in their college experience, and then to the outcomes targeted by a specific effort — that is, what we do in our office.

Often, some lofty outcomes duck and dodge their way through every revision. I’m referring to outcomes along the lines of “student takes responsibility for their education and development.” Is that important? Definitely! …but what the… heck… does it mean? And when have students met this outcome? When they wake up and go to class? Or when they’ve decided on an interest and pursued information about that interest without the prodding of an advisor?

This leads me to the question: What should assessment measure? Do we reach for those lofty outcomes or aim for the more measurable ones (e.g., student met with advisor*)? I’ve come to a conclusion on this. We need to aim for the measurable ones; then, when presenting the data, explain the implications for the lofty outcomes.

Here’s why:

I spent the first two years of my first advising job creating the ultimate assessment tool. A tool that would put Nate Silver’s presidential election models to shame. The tool featured a set of “indicators” for each outcome. The idea: each outcome is complicated, so let’s take several different measurements that, together, would tell us the extent to which students meet the outcomes. I created an MS Word document to lay out the learning outcomes, then another to indicate which indicators told us about which outcomes. Finally, I created a PowerPoint presentation to clarify the overall process and indicate which measurements should be taken when.

Problem 1: Too many pieces! If you’re collecting data from 15 different sources each year (surveys, student data, focus groups, etc.), how will you keep all of that up? As my role within the office developed, I had less time for collecting data.

Problem 2: Try explaining to someone why this group of 7-8 indicators means that students are (or are not) able to assess and improve their study strategies. In time, I had two years of data and could not explain it (or even understand it) in a way we could use to improve our office services.

My suggestion to you? Keep it simple.

  1. Limit the number of learning outcomes you create.
  2. Don’t use more than 3 measurements (triangulation) to capture student achievement of an outcome.
  3. Focus on outcomes people (your office, your administration, your students) care about.
  4. Focus on outcomes for which your office is responsible. For example, establishing open communication with your roommate may be a good outcome for a residence life office but probably not for an advising office.

It’s easy to get caught up in the details and for your assessment strategy to become a monster. Just remember: if you’re hit by a bus,** you need a system that someone else in your office can pick up relatively easily.

*If you’re thinking “Mark, that’s an action, not a learning outcome,” bottle up that thought; I’m sure we’ll address the makings of a good learning outcome soon. In the meantime, feel free to browse this article from the NACADA website.

**Why is this phrase so popular? Are professionals particularly prone to bus accidents? If so, why is this not in the news?