Survey Results? 5’s All Over The Place

A Friday post?!? Yeah. Who knows how this happened.

A year ago, I created a short program assessment for our Peer Advisor program. The idea: to capture the extent to which our Peer Advisors (PAs) learned what we wanted them to learn. Some of the learning outcomes were related to office function, for example: How comfortable do you feel answering the phone? Others were a bit more about whether they're drinking the advising Kool-Aid — apparently Kool-Aid is spelled with a "K" — things like: To what extent do you agree that professional development is important to you? Of course, the questions used Likert scales (pronounced LICK-ert. I learned that at a conference once. I'll tell you the story later).

The results, you ask? 5's. 5's all over the place (out of 5). Half of the not-5's were the result of one peer advisor who must (oh god, I hope) have chosen 1's but meant 5's. What do I make of this? With such a small sample size (we hire 10 PAs per year), a 4.78 and a 4.89 sound near-identical to me. I'd certainly be hesitant to conclude that the 4.78 is all that different from the 4.89. Deep within me is a fool shouting this must mean everything is perfect! My brain says otherwise. Maybe this assessment needs some work.

As a quick side note: I’m a firm believer in the Strongly Agree to Strongly Disagree scale. When you use a standard scale of measurement, you can compare one question to another — which can be useful.

So what's a man/woman/assessmenteur to do? I have a few ideas:

Avoid the whole situation in the first place. When designing your surveys, consider how students are likely to respond to a given question. If your gut (trust your gut!) tells you almost every student will respond 4 or 5, raise the bar. Look for easy ways to bring down your scores. For example, replace words like "familiar with" and "aware of" with "confident in" and "command of." If your question is quantitative or involves a frequency, up the numbers — for example, ask about weekly rather than monthly.

Continue avoiding the situation in the first place! That is, avoid leading questions. If I'm asked the true-or-false question "Do you know that Michigan has the winningest softball coach in the country?", well, I don't think I knew until you asked, and I don't want to be wrong, so… TRUE!
Lump your responses. Focus on the number of students who select "agree" or "strongly agree." For example, "90% of students agreed or strongly agreed with statement x." By lumping scores together, you blur some of the noise created by so many response options, and the data is simpler to look at. You can also check out the negative side. Let's say you want to know if students are adjusting socially. You might want to see how many students disagreed or strongly disagreed with the statement "I have made friends on campus."
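If you keep the raw responses around, lumping is a quick calculation. Here's a minimal sketch in Python, assuming responses are coded 1 (strongly disagree) through 5 (strongly agree); the numbers themselves are invented:

```python
# Lumping Likert responses, assuming 1 = strongly disagree ... 5 = strongly agree.
# These responses are invented for illustration.
responses = [5, 4, 5, 3, 5, 4, 2, 5, 5, 4]

agreed = sum(1 for r in responses if r >= 4)     # "agree" or "strongly agree"
disagreed = sum(1 for r in responses if r <= 2)  # the negative side

print(f"{agreed / len(responses):.0%} agreed or strongly agreed")
print(f"{disagreed / len(responses):.0%} disagreed or strongly disagreed")
```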

Break down your demographics. If you have a large enough sample size, try looking at how the men responded in comparison to the women, or how the history majors responded versus how the engineers responded. While I don’t recommend breaking down the data just for the sake of breaking it down — unless you have bundles of time on your hands — this might yield insights you otherwise would have missed.
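If your survey lives in a spreadsheet, pandas makes these breakdowns quick. A sketch with hypothetical column names and made-up data:

```python
import pandas as pd

# Hypothetical survey data; swap in your own columns and values.
df = pd.DataFrame({
    "major":  ["History", "Engineering", "History", "Engineering", "History"],
    "gender": ["F", "M", "M", "F", "F"],
    "score":  [5, 4, 5, 3, 4],
})

# Mean score broken down by major, then by gender.
print(df.groupby("major")["score"].mean())
print(df.groupby("gender")["score"].mean())
```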

ELSEWHERE IN HIGHEREDLAND:
Tax on higher ed endowments? Higher ed funding, you’re going the wrong way!


One Whole Year of Assessment

The blog turned one year old last week. Our first post went out on April Fools' Day. I spent a week convincing my friends that I really was co-starting a blog. No really. It was not a joke. Yes, we probably could have come up with a better launch date, but who doesn't love April Fools' Day? Should I be capitalizing April Fools' Day? Who knows. Wait. Yes, it's a proper noun. Never mind that last question.

Some people say having a dog is like having a baby. I think having a blog is like having a baby. Perhaps you didn't come up with the idea, but once the wheels were in motion, it was hard to turn around. After a while, all my friends knew I had a blog. Most of them probably didn't want me to remind them weekly with a Facebook post, but heck, those posts are not going to read themselves. I think that's where the metaphor ends.

When we started this thing, we had no idea what it would turn into. I didn’t even know what I wanted it to turn into. Of course, I wanted it to be read. To get other people thinking about assessment. To start a conversation about assessment. Deep down, part of me just wanted it to exist as a place I could go to write things out. Much like a journal (but, at times, significantly less interesting), the typing helps me organize the cluster of assessment thoughts passing through my mind.

<<<TIMEOUT: This is starting to feel like a blog-ending post. It is not. I repeat, not, the last post. This is merely a “one year in thoughts” post. Proceed. >>>

Generating content is tough. We went from 3 posts per week, to a short summer break, to 2 posts per week, and I think we've now settled into a sturdy (yet approximate) 1 post per week. It was easy to come up with ideas at first, but as we've worked through them, I find myself starting with single-sentence assessment-related statements and wondering "can that be stretched into a post?"

Marketing a blog is equally difficult. Why have a blog if I don't want people to read it? If I don't try to publicize the blog, it feels like I'm not even trying. So, onward we go, collecting a few followers at a time. On the upside, WordPress gives us site statistics, and it seems folks are indeed reading this — enough folks, in fact, that they cannot all be family members.

Assessment, man/woman/you. Sometimes I wonder if assessment is inherently boring. While I’m always getting better at it, and I’m often interested in the results, discussing assessment in a universal way is difficult. I find myself often thinking “will anyone care to read this?” But here you are, paragraph 6 and still reading.

Onward we go. I will continue to generate content as long as I feel it adds value (to myself or others). I will continue to spend time staring at a blinking cursor trying to get started. I will try to make assessment sexy. I will not continue putting off blog entries. How do I know this? I'm almost out of new Mad Men episodes to watch.

Keep on assessin’!

Assessment of Academic Probation

As our office continues to develop, we're starting to implement smaller assessment pieces to pair with larger programs. Two years ago, we created an assessment of our academic advising; probably a good place for an advising office to start. Since then, we've initiated assessments for our Peer Advising program (student staff who support our office) and our Peer Mentor program (a mentorship program we offer the students we serve).

This week, we discussed the addition of a new assessment for our probation program. We've established a structured process for students who are not performing well academically, but we don't yet have a means of evaluating that effort. I thought I'd share some of my thoughts on this assessment, as disconnected and undeveloped as they may be.

This is a difficult group to assess. For one thing, we put a lot of work into developing relationships with our students. I don't want to jeopardize that relationship with an end-of-term You failed your classes. Here, take this survey and become a row on a spreadsheet. These students need to feel like their advisor knows them. That's an assumption, but I'd like to hear from those who disagree. I can't help but feel that collecting data from them directly treats them like mail on a conveyor belt.

Beyond that, we make a lot of assumptions about what's "good" or "right" for this group. For example, that they're attending tutoring or meeting with a counselor. It's quite possible for a student to completely follow the plan we lay out in September and still find themselves struggling. If we decide Steve should be using a planner, and he uses the planner dutifully all semester and struggles anyway, does that mean our process is working? Most can agree that a successful GPA is an important outcome, but if we look solely at GPA, we might miss a lot of relevant progress a student is making. Would a lighter/easier class schedule then skew the assessment — making it look like a student made more progress than they really did?

What are we assessing here? Clearly, GPA is important; but can't a student make progress personally without doing well academically? In that case, should our assessment conclude that our process is not succeeding?

Just who are we assessing here? How much impact can we make on a student? Though we’re charged with supporting the student, we’re not the ones taking the exams. Should our assessment measure a student’s progress or how well they adhere to our support program? If the former, it seems we’re measuring a group’s natural talents — that is, a good group might skew the data to make it look like our process is working. If the latter, we’re assuming that we know what’s right for the student and perhaps ignoring what’s really important. Yes, that was vague on purpose.

The question-to-statement ratio in this post is a bit out of control; I apologize for that. I'll keep thinking and perhaps put together a post with some practical thoughts rather than the broad ones I pose above.

Keep on keeping on and if you have any thoughts, please reply.

Sharing Data Effectively

One of the challenges we assessment-ites have is what data to share and how to share it. When sharing data, you want it to be both interesting and appropriate to the intended audience. For data to have impact, it must be interesting. But not all data should be shared. Because I don't have a better word for it, I'll call that the "appropriateness" of the data. If the data is detrimental to your mission, it may not be appropriate to share.

It all starts with your intended audience. Is the audience your director? the dean? students? Once you have the intended audience, it’s helpful to visualize with the table below:

(Chart: a two-by-two grid with "interesting" on one axis and "appropriate" on the other.) I created this chart from the perspective of the students; if your audience is the math department, Dr. A's calculus class becomes more appropriate. Similarly, if I'm the intended audience, the number of bagels I eat each week becomes more interesting. The idea is for all of your reported data to fall in the upper-right quadrant.

And now, more on the appropriateness of the data…

At our last advisor meeting in the fall term, we discussed a new tool available to our students. This tool, integrated into the registration system, gives students information about the courses in which they might enroll. The system tells them what past students often took after this class and the degrees they sought. It even shows them the grade distribution of the class over the past few years. I had the requisite student affairs knee-jerk reaction: but do we want students to see grade data? Will they then avoid the "hard" classes and lean toward the "easy" ones? I put quotes around "hard" and "easy" because, you know, there are no easy or hard classes — every student's experience is different.

After learning about the student interface, we were introduced to the staff interface. What we see has MUCH more information. The system allows us to drill down and look at specific groups of students (sophomores, engineering students only, underrepresented groups, etc.). It’s a powerful tool I found myself lost in for about 45 minutes that afternoon. It’s the Netflix of work; once opened, who knows how long you’ll be in there.

My thoughts bounced around like a ping pong ball in a box of mouse traps. From Students should not be able to see this! They'll simply take the easier courses! to Students should have access to EVERYTHING! They need to learn to make data-driven decisions! Then I started to settle down.

It's good for us to share information with our students — especially information that interests them. They'll make a ton of decisions in their lifetime and need to navigate the information that's out there. Sure, some of them will choose the high-average-GPA classes, but would they have been better served if we stuck to the usual "Nope. I can't give you any guidance on this. You need to surf the thousand-course course guide and find a class that interests you"?

But some data shouldn't be widely available. If you're trying to increase the number of women in your institution's math major, it might be counterproductive to let undeclared students see that the major is 90% men and think, "I don't know if that's a place I want to be." It seems to me that kind of information sustains the imbalances we already have and are trying to mitigate.

To conclude…

It's easy to get pulled into the "oh, can we add a question on ______ in the survey?" cycle. If you're not careful, you end up with an oversized Excel spreadsheet and a bored audience. When you feel the survey creep happening, get back to the questions: Who is this for? Is this interesting to them? Is it appropriate for them?

Now go YouTube other videos of ping pong balls and mousetraps.

What Pitch Perfect 2 Says About College

I found myself watching Pitch Perfect 2 last night while ironing. I'm not embarrassed — those shirts wouldn't de-wrinkle themselves.

Most of the time, I’m in a world where student affairs is an accepted norm. Several of my local friends are higher ed professionals, many of my more distant friends are student affairs pros. After a while, I stopped noticing how my perspective on higher education might not line up with society’s view.

Oftentimes, what we see in movies or on TV has clearly been adjusted with television in mind. Remember Saved by the Bell: The College Years? Nobody lives in a space that large their first year in college. How about the entire National Lampoon's Animal House movie? Sure, there are elements of a "typical" college experience in there, but for the most part, it's a parody. Most viewers know these examples are not what college looks like today, and that some adjustments were made to make the show more viewable, or more humorous.

Then I saw Pitch Perfect 2. Yes, it's a comedy, and many scenes fall into the pattern of "we're stretching reality quite a bit here, but doing it in the name of comedy!" But the movie also made some subtler hints about how we view higher education. Consider these examples.

The Welcome to College scene. Early in the movie, the main character attends a convocation-style welcome event in a large lecture hall, the type of event designed for new students that happens right as a semester is beginning. In this scene, one administrator is the host, bringing different student groups out to perform. The scene resembles a high school pep rally for a homecoming football game. Where were the common themes of you will be challenged! or expand your horizons!? Nowhere. The implication was that the students in the audience were in a new environment with a specific image of what's expected of them — that they join an a cappella group. Success means conforming to the expectations of the college. The growth those students can expect has a very narrow definition.

The Student Affairs Discipline scene. In this scene, Anna Kendrick's a cappella group meets with a Student Affairs dean and the announcers from an a cappella competition after a wardrobe malfunction at a nationally televised event. The role of the announcers is a bit unclear; let's consider them representatives from the a cappella league. The Dean of Student Affairs in this scene acts as a strict disciplinarian. The group members walk into his office and stand there while he tells them of their punishment. The members have little opportunity to talk — their fate has already been decided.

These scenes reminded me that in many ways, institutions are viewed as gatekeepers of your future. If you get in, you'll succeed. If you can keep up with the rigor, you've made it. Your success depends on whether you do what they ask of you. Of course, we view the experience as a mutual effort — the institutions provide opportunities to grow, and the students choose the opportunities in which to engage (and the extent to which they engage).

It’s movies like this that remind me why students are so focused on high exam scores, and the “right” set of extra-curriculars. Between the movies and the countless articles on top money-earning majors (etc.), college seems much more a place where you collect merit badges than a place of growth. I got this badge because I attended [insert prestigious institution] University, and this badge because I got that minor in ______. This one for the dean’s list. Oh, and I got this one as captain of the _____ club.

This view of the working world completely ignores the idea that students can craft their own future from their values and the talents they develop, and instead assumes an environment where their future employers hold all the cards and must be impressed if students have any hope of a job.

Somehow, I've meandered from the view that high school students have of college to career preparation. I suppose my point here is that there is a misalignment between how we (higher ed professionals) view the role/purpose of college and how the general public views college — and that the difference severely impedes a student's ability to get the full value out of college.

Will You Be a Guest Blogger?

When we started Oh No, our hope was to have one LARGE conversation about assessment. Thus far, it's mainly been us talking to ourselves – which is fun but not achieving our goal.

We want to expand the conversation about assessment in higher education, and the best way to do that is to invite creative, innovative professionals to help take the conversation further. We have lots of smart professionals in our lives already who are doing amazing things in various areas of higher education (see some of them below!).

These friends of ours (and others who we don’t even know yet [i.e., hopefully YOU!]) will be adding their perspective in the coming weeks.

We’d love for you to add your voice and fill in the gaps that we are missing. If you’re interested in adding to the assessment conversation we’ve started, let us know by filling out the form below.

Sending you much assessment power, 

Abby and Mark

Should We Assess Mental Health?

Of the students I meet with for academic difficulty, a startling proportion are in their present situation due to factors related to mental health. The shape of the issue varies. Sometimes it's the stress of seeing everyone around them succeed — struggling students tend to be quiet about their performance. Other times it's a depression that was mild and undiagnosed in high school but starts to take hold in college. These students want to succeed and have the capacity to, but something about college life tosses a wrench into their ability to perform academically.

Because I work at a competitive Research 1 institution, almost all of my students excelled in high school. Anyone who's meeting with me for academic difficulty is seeing it for the first time, and rarely do they know how to cope.

These days, I'm hearing many in the education world discussing grit — the ability to overcome struggle and bounce back from failure. We're even using the term in our new student orientation. At the same time that we're telling our students "we want you to challenge yourself!", they know that their GPA is the first way they'll be measured for their next phase of life (grad school, med school, employers, etc.). Sure, overcoming adversity sounds cool, but it sure doesn't feel good while it's happening, and for someone who's always succeeded, that first "C" on an exam can feel like the first crack in the dam.

It's probably not much of a jump to conclude that a student experiencing mental health difficulty is more likely to struggle academically. And isn't academic success part of the role of a fair number of student affairs offices? If we could identify the students struggling with mental health, we could go a long way toward supporting them through their journey.

The challenge here is that most student affairs practitioners (myself included) are not experts at diagnosing mental health issues. Balance that against the fact that we (i.e., advising, housing, etc.) are often the first to notice a student is struggling. Aren't we also the folks charged with supporting the successful transition of our students into the college environment?

I’m not quite comfortable claiming that we can be responsible in any way for the mental health of our students, or that fewer students with mental health challenges means that we’re succeeding, but I also believe that our response and support of these students should be a part of how our success is measured.

Statistics! Part 2

So you've designed a new workshop, which you've guaranteed will bring up your students' grades by a full letter! You spent weeks preparing the workshop, you gave the workshop, and now the grades are coming in. Did your students improve by a letter grade? That's an easy calculation using descriptive statistics. Simply average last term's GPAs among the students in your group and compare that to this term's average.
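As a sketch (with invented GPAs), the descriptive version is just two means and a subtraction:

```python
from statistics import mean

# Invented GPAs for the workshop attendees, before and after.
last_term = [2.7, 3.0, 2.5, 2.9, 3.1]
this_term = [3.4, 3.6, 3.0, 3.5, 3.8]

print(f"Last term mean GPA: {mean(last_term):.2f}")  # 2.84
print(f"This term mean GPA: {mean(this_term):.2f}")  # 3.46
print(f"Change: {mean(this_term) - mean(last_term):+.2f}")
```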

But did your workshop really make a difference? Let’s say you want to know if your workshop can really be said to bring up student grades (or if you just got lucky). This is where inferential statistics come in! Remembering Abby’s post from last week, the sample in this case is the students in our workshop and the population is all of the students at the university.*

T-tests can tell you if a given experience (e.g., a workshop) impacts the mean score (e.g., grades) for a given group of students. When you hear folks describing "pre" and "post" tests, this is likely a scenario where t-tests are helpful.
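For the pre/post scenario above, a paired t-test is the usual tool. A minimal sketch using SciPy and the same invented GPAs, assuming the test's usual conditions (random sampling, roughly normal differences) hold:

```python
from scipy import stats

# Same invented pre/post GPAs as above.
last_term = [2.7, 3.0, 2.5, 2.9, 3.1]
this_term = [3.4, 3.6, 3.0, 3.5, 3.8]

# Paired t-test: did the same students' GPAs change more than luck alone would explain?
result = stats.ttest_rel(this_term, last_term)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

A small p-value (conventionally below 0.05) suggests the improvement probably wasn't just luck.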

Regression can tell you if (and the extent to which) two variables are connected. For example, say you want to know whether a student's grade in calculus can predict their grade in physics. A regression analysis will tell you if there's a relationship and how strong that relationship is. This test is appropriate when both variables are quantitative.
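Sticking with the calculus-and-physics example, SciPy's linregress returns the slope, the correlation (r), and a p-value in one call. The grade pairs below are invented:

```python
from scipy import stats

# Invented grade pairs: does a calculus grade predict a physics grade?
calculus = [2.0, 2.5, 3.0, 3.3, 3.7, 4.0]
physics  = [2.3, 2.4, 2.9, 3.5, 3.4, 3.9]

fit = stats.linregress(calculus, physics)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
```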

Writing this post, it occurs to me that 1) trying to explain this stuff gets complicated FAST, and 2) I've lost most of the details I learned in my statistics courses.

The main idea here is that we have mathematical tools which can calculate for us how likely it is that a given experience or situation can predict another experience or situation. Want to know if career counseling helps students find a career? Statistics can answer that.

The downside here is that not every statistic-based conclusion can be trusted. Much like Harry Potter’s wand,** this only works when a person knows (or at least sort-of knows) what they’re doing. I’ve noticed that it gets cold a few weeks after students arrive on campus — it’s the STUDENTS who cause winter!!!!


In most cases, assessment doesn’t require statistics (beyond mean, median, etc.). As intelligent people with a limited amount of time on our hands, it’s okay to look at some numbers, make conclusions, and update our office processes. That said, if you happen to have someone on your staff with the time and the background, you’re in luck — you can start making conclusions about the effectiveness of your department practices. This allows you to identify the practices making a difference. In a time when resources are tight, the ability to carefully prune our student affairs bonsai trees (you’re welcome for that metaphor) will become more and more important.

*This assumes your workshop was attended by a random group of students from across the university. If, for example, the workshop was only advertised to engineering students, then your population would be engineering students. In short (and probably insufficiently), your population is the group from which the sample comes.

**This is just an assumption. I haven’t read any Harry Potter, but I assume he doesn’t want other people messing around with his wand.

Statistics! Part 1

Last week, Abby opened the door to one of my favorite topics — statistics. When used properly, statistics can add a layer of justification to our assessment results by further explaining the numbers in our datasets. I thought I'd take some time in my next few posts to further explain statistics and its (potential) use in our assessments.

Descriptive vs Inferential Statistics
The vast majority of assessment results rely on descriptive statistics. Descriptive statistics merely describe what's going on in a given dataset. Mean, median, mode, maximum, minimum, and standard deviation are all descriptive statistics.

Mean – also known as the "average," this term is used to tell people they are dreadfully un-special! Mathematically, it's the sum of the values divided by the number of values. That is, if I tell 3 friends a set of 10 puns and those three friends laugh at 4, 5, and 6 of the puns, the mean is 5 (4+5+6, divided by 3). This may be the most used descriptive statistic!

Median – I have a pet peeve. One of our summer orientation presenters likes to say "only half of you can be above average!" This tears me up inside because it's not true. If we have 4 students, 3 of them with a "B" grade (3.0) and one with a "D" grade (1.0), then the average is 2.5. All three "B" students are above average, and that one "D" student is making everyone else look good. This is why the median was invented. The median is the value where half of the group falls above and half falls below. To find the median, rank all of the values from lowest to highest (1, 3, 3, 3) and take the middle value. In cases where you have an even number of values, average the two closest to the middle. For this dataset (1, 3, 3, 3) the median is 3 (the average of 3 and 3). Since modern grade distributions look less like a bell and more like a wave — with everyone squished in the mid-3 range and a tail of students performing poorly — the median can be a great way for students to compare themselves to their peers academically.

Mode – this statistic is almost useless. It tells you which value occurs the most. It's not mode's fault; we just don't often care which value shows up the most. I'm sorry, Mode, it's not you, it's me. But it's really you.

Maximum – this is the highest value in a dataset. When I'm at the gym, I often ask about the maximum amount of weight a given bar can handle. Because if I'm doing bench presses, I don't want to break the bar.

Minimum – conversely, this is the lowest value in a set of data.

Standard Deviation – this value tells you how much your data varies. It’s useful for larger datasets (i.e., more than just a handful of numbers) because it can tell you how one value compares to the dataset. Standard Deviation is in some ways a gateway into inferential statistics, which I’ll explain in my next post.
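To tie these together, here's the whole toolbox run on the grade example from the median entry above (three "B" students at 3.0 and one "D" student at 1.0), using Python's built-in statistics module:

```python
from statistics import mean, median, mode, stdev

grades = [1.0, 3.0, 3.0, 3.0]  # three "B" students and one "D" student

print(f"mean:   {mean(grades):.2f}")    # 2.50 -- three of the four are "above average"
print(f"median: {median(grades):.2f}")  # 3.00 -- half at or above, half at or below
print(f"mode:   {mode(grades):.2f}")    # 3.00 -- the most common value
print(f"max:    {max(grades):.2f}")     # 3.00
print(f"min:    {min(grades):.2f}")     # 1.00
print(f"stdev:  {stdev(grades):.2f}")   # 1.00 -- how much the grades vary
```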

This post explains the more useful descriptive statistics. You may be thinking — but Mark, my survey only covers 25% of my students (and I can't chase down the rest), does this mean I can only make conclusions about that 25% of students? Is there a way I can, using this information, make conclusions about my entire group (100%)? The answer, an annoying aspect of statistics, is sort of. I'll dive into this further in my next post!

Whistling Vivaldi

I'm in the last few days of my 2-week summer vacation and I thought now would be as good a time as any to put together a post. It seems the closer I am to an institution, the more I get thinking about higher ed. Today, I'm at a Bruegger's Bagels in Northampton, MA — home (or near-home) to a handful of colleges and universities. I'm also plagued by a very agile fly. He likes to fly around my hands. I can't seem to get him, and fellow patrons are starting to stare.

This summer, we're reading a book for professional development: Whistling Vivaldi by Claude M. Steele. I won't summarize the entire book for you — admittedly, I'm only about a third of the way into it. Thus far, he's exploring the impact of stigma on performance. Stereotype threat is the idea that our performance (in anything) is impacted by the stereotypes placed upon our identities. The expectations placed upon us by virtue of those identities affect our performance whether we'd like them to or not. Oftentimes, the fear of confirming a stereotype about one of our identities hinders our performance, regardless of whether that stereotype holds merit. We don't want to give truth to that stereotype.

Consider this situation: In graduate school, we had many conversations in class about identity. As someone with many majority identities (e.g., white, heterosexual, male, etc.), I constantly second-guessed my contributions to class conversations — afraid that everything I said would be an opportunity for a classmate to think “oh, he just doesn’t get it, he’s [straight, white, male, etc.].” You can bet this fear kept me from fully engaging in the class conversations. I didn’t want to be seen as out of touch — or worse, unable to understand.

Stereotypes blur the way we understand the world. In the book, Steele points out the difference between the "observer's perspective" and the "actor's perspective." As we're often in the observer's perspective, we're only able to focus on what we can see or notice. This perspective tends to be a view from the clouds and causes us to miss the context in which the actor (i.e., the person being studied) is making decisions.

To illustrate his point, Steele references the 1978 Seattle Supersonics basketball team. The team started the season losing at an alarming rate. Local sports analysts were able to break down, in detail, all of the reasons the team struggled. Shortly after the beginning of the season, the team hired a new coach. From there, the team started to win — and would later reach the NBA finals — despite having exactly the same players with the same skill sets ridiculed in the first few weeks of the season. When viewed through a different lens, characteristics originally seen as contributing to their struggles were now the reasons for their success.

It’s almost as though our expectations highlight the things we expect to see, and hide those we don’t expect.