Should We Assess Mental Health?

Of the students I meet with for academic difficulty, a startling proportion are in their present situation due to factors related to mental health. The shape of the issue varies. Sometimes it’s the stress of seeing everyone around them succeed — struggling students tend to be quiet about their performance. Other times it’s a depression that was mild and undiagnosed in high school but starts to take hold in college. These students want to succeed and have the capacity to, but something about college life throws a wrench into their ability to perform academically.

Because I work at a competitive Research 1 institution, almost all of my students excelled in high school. Anyone who’s meeting with me for academic difficulty is experiencing it for the first time, and rarely do they know how to cope.

These days, I’m hearing many in the education world discussing grit — the ability to overcome struggle and bounce back from failure. We’re even using the term in our new student orientation. At the same time that we’re telling our students “we want you to challenge yourself!” they know that their GPA is the first way they’ll be measured for their next phase of life (grad school, med school, employers, etc.). Sure, overcoming adversity sounds cool, but it doesn’t feel good while it’s happening, and for someone who’s always succeeded, that first “C” on an exam can feel like the first crack in the dam.

It’s probably not much of a jump to conclude that a student experiencing mental health difficulty is more likely to struggle academically. And isn’t academic success part of the role of a fair number of student affairs offices? If we could identify the students struggling with mental health, we could go a long way toward supporting them through their journey.

The challenge here is that most student affairs practitioners (myself included) are not experts at diagnosing mental health issues. Balance that against the fact that we (i.e., advising, housing, etc.) are often the first to notice a student is struggling. Aren’t we also the folks charged with supporting the successful transition of our students into the college environment?

I’m not quite comfortable claiming that we can be responsible in any way for the mental health of our students, or that fewer students with mental health challenges means that we’re succeeding, but I do believe that our response to and support of these students should be part of how our success is measured.


Data Storytelling with Fantasy Football

Before 2010, I didn’t care AT ALL about sports. But, being the extrovert (and PROUD pastime bandwagoner) that I am, I decided to get into football because that’s what people were talking about. So this Iowa girl started following the New Orleans Saints…a natural choice (former French teacher over here, remember? NOLA was the best I could do!).

In a similar vein, for the past 5 years, Mark, some of our friends from MiamiU, and I have had a fantasy football league together.**

My team = the Tenacious Trouts

Mark’s (I’m using air quotes here) “clever” team = Co-constructing PAIN (Very student development theory of him…nice, Mark!)

Anyhoo…we use Yahoo Fantasy Football (YFF) and one feature that I’ve enjoyed this year is the Game Recap. Yahoo blends together highlights from the “game”, images, and data to tell the story of (in this rare case) my amazing upset against another team, Handy Mart (no air quotes for that team – she’s won multiple years in a row!).

[Screenshot: Yahoo Fantasy Football game recap summary]

This game data recap makes reading about my fake team’s fake game much more dramatic and interesting than just the bunch of computer algorithms that it is.

[Screenshot: sections of the Yahoo Fantasy Football game recap]

It weaves the story of the game data together so accessibly that it makes even the more nuanced highlights and plays from the game exciting for a sports novice like myself. And, in thinking about collecting data and assessing learning, really, isn’t that one of the main goals? Lots of offices collect data – and while that’s by no means easy, I think the deeper challenge is what you do with that data. And how do you tell the story of your data (i.e., what students learned and were able to do as a result of your efforts) to make it accessible to important stakeholders?

Data storytelling means I need to do more than show what % of students responded “agree” or “disagree” on a survey. I need to use the data to narrate what all those survey responses mean and the overarching story that arose. Practically, here are a few simple strategies Yahoo Fantasy Football uses that can apply to us. When you have a bunch of data:

  1. Cluster information into categories – not only will categories make your data much more digestible to your audience, but the groups in and of themselves will make telling the story easier for you and the audience.
  2. Use interpretive titles – show your data but also give it a title that helps the audience understand what they’re seeing/reading and what it means (the way a headline to an article quickly and succinctly communicates the main point).
  3. Blend images, text, and data together – there’s no need to exile all the graphs to one page and text to another. Instead, put them side-by-side so they can complement and strengthen each other.
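The first two strategies are easy to prototype in code. Here’s a minimal sketch in Python, with entirely hypothetical survey items and agree rates (none of these names or numbers come from a real survey):

```python
from collections import defaultdict

# Hypothetical survey items, each mapped to a broader category (strategy 1)
categories = {
    "I know how to use office hours": "Help-Seeking",
    "I feel comfortable emailing professors": "Help-Seeking",
    "I keep a weekly study schedule": "Time Management",
    "I start assignments early": "Time Management",
}

# Hypothetical share of students answering "agree" to each item
agree_rates = {
    "I know how to use office hours": 0.62,
    "I feel comfortable emailing professors": 0.48,
    "I keep a weekly study schedule": 0.35,
    "I start assignments early": 0.30,
}

def summarize(categories, agree_rates):
    """Average the agree rates within each category, so the report can
    lead with a handful of groups instead of a wall of items."""
    grouped = defaultdict(list)
    for item, category in categories.items():
        grouped[category].append(agree_rates[item])
    return {category: sum(rates) / len(rates)
            for category, rates in grouped.items()}

# Strategy 2: pair each number with an interpretive title, not just a figure
for category, rate in summarize(categories, agree_rates).items():
    print(f"{category}: {rate:.0%} of students report confidence here")
```

The point isn’t the code itself; it’s that the grouping and labeling happen before the audience ever sees a number.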

Happy storytelling!

**I'd like to note that I have been the league champion ONE time! Again, a rare occurrence that was probably due to my opponents getting too busy to change their rosters, but I'll take it!

Statistics! Part 2

So you’ve designed a new workshop, which you’ve guaranteed will bring up your students’ grades by a full letter! You spent weeks preparing the workshop, you gave the workshop, and now the grades are coming in. Did your students improve by a letter grade? That’s an easy calculation using descriptive statistics. Simply average last term’s GPAs among the students in your group and compare that to this term’s average.
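That comparison is a one-liner with descriptive statistics. A minimal sketch, using made-up GPAs rather than data from any real workshop:

```python
from statistics import mean

# Hypothetical GPAs for five students who attended the workshop
last_term = [2.1, 2.5, 1.8, 2.9, 2.3]
this_term = [3.0, 3.1, 2.7, 3.4, 2.9]

# Descriptive statistics: just compare the two averages
change = mean(this_term) - mean(last_term)
print(f"Average GPA moved from {mean(last_term):.2f} "
      f"to {mean(this_term):.2f} ({change:+.2f})")
```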

But did your workshop really make a difference? Let’s say you want to know if your workshop can really be said to bring up student grades (or if you just got lucky). This is where inferential statistics come in! Remembering Abby’s post from last week, the sample in this case is the students in our workshop and the population is all of the students at the university.*

T-tests can tell you whether a given experience (e.g., a workshop) shifts the mean score (e.g., grades) for a group of students. When you hear folks describing “pre” and “post” tests, this is likely a scenario where t-tests are helpful.
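For the curious, a paired (“pre/post”) t-statistic is just the mean of the score differences divided by their standard error. A minimal sketch with hypothetical GPAs; the 2.776 cutoff is the two-tailed critical t value for 4 degrees of freedom at the .05 level:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-statistic: mean difference over the standard error
    of the differences."""
    diffs = [after - before for before, after in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical pre/post GPAs for five workshop attendees
pre = [2.1, 2.5, 1.8, 2.9, 2.3]
post = [3.0, 3.1, 2.7, 3.4, 2.9]

t = paired_t(pre, post)
# With n - 1 = 4 degrees of freedom, |t| > 2.776 means p < .05
print(f"t = {t:.2f}; significant at .05: {abs(t) > 2.776}")
```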

Regression can tell you if (and the extent to which) two variables are connected. For example, you might want to know whether a student’s grade in calculus predicts their grade in physics. A regression analysis will tell you if there’s a relationship and how strong that relationship is. This test is appropriate when both variables are quantitative.
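A simple one-variable linear regression boils down to a few sums. Here’s a sketch with made-up calculus and physics grades; the slope says how much physics grades move per point of calculus grade, and r-squared is the share of variation in physics grades that calculus grades account for:

```python
from statistics import mean

def simple_regression(x, y):
    """Least-squares fit of y = slope * x + intercept, plus r-squared."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r_squared = sxy ** 2 / (sxx * syy)
    return slope, intercept, r_squared

# Hypothetical grades for five students (both variables quantitative)
calculus = [2.0, 2.5, 3.0, 3.5, 4.0]
physics = [2.2, 2.4, 3.1, 3.3, 3.8]

slope, intercept, r2 = simple_regression(calculus, physics)
print(f"physics ~ {slope:.2f} * calculus + {intercept:.2f}, r2 = {r2:.2f}")
```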

Writing this post, it occurs to me that 1) trying to explain this stuff gets complicated FAST, and 2) I’ve lost most of the details I learned in my statistics courses.

The main idea here is that we have mathematical tools which can calculate for us how likely it is that a given experience or situation can predict another experience or situation. Want to know if career counseling helps students find a career? Statistics can answer that.

The downside here is that not every statistic-based conclusion can be trusted. Much like Harry Potter’s wand,** this only works when a person knows (or at least sort-of knows) what they’re doing. I’ve noticed that it gets cold a few weeks after students arrive on campus — it’s the STUDENTS who cause winter!!!!


In most cases, assessment doesn’t require statistics (beyond mean, median, etc.). As intelligent people with a limited amount of time on our hands, it’s okay to look at some numbers, make conclusions, and update our office processes. That said, if you happen to have someone on your staff with the time and the background, you’re in luck — you can start making conclusions about the effectiveness of your department practices. This allows you to identify the practices making a difference. In a time when resources are tight, the ability to carefully prune our student affairs bonsai trees (you’re welcome for that metaphor) will become more and more important.

*This assumes your workshop was attended by a random group of students from across the university. If, for example, the workshop was only advertised to engineering students, then your population would be engineering students. In short (and probably an oversimplification), your population is the group from which the sample comes.

**This is just an assumption. I haven’t read any Harry Potter, but I assume he doesn’t want other people messing around with his wand.

Statistics! Part 1

Last week, Abby opened the door to one of my favorite topics — statistics. When used properly, statistics can add a layer of justification to our assessment results by further explaining the numbers in our datasets. I thought I’d take some time in my next few posts to further explain statistics and its (potential) use in our assessments.

Descriptive vs Inferential Statistics
The vast majority of assessment results rely on descriptive statistics. Descriptive statistics merely describe what’s going on in a given dataset. Mean, median, mode, maximum, minimum, and standard deviation are all descriptive statistics.

Mean – also known as the “average,” this term is used to tell people they are dreadfully un-special! Mathematically, it’s the sum of the values divided by the number of values. That is, if I tell 3 friends a set of 10 puns and those three friends laugh at 4, 5, and 6 of the puns, the mean is 5 (4 + 5 + 6 divided by 3). This may be the most used descriptive statistic!

Median – I have a pet peeve. One of our summer orientation presenters likes to say “only half of you can be above average!” This tears me up inside because it’s not true. If we have 4 students, 3 of them have a “B” grade (3.0) and one has a “D” grade (1.0), then the average is a 2.5. All three “B” students are above average and that one “D” student is making everyone else look good. This is why the median was invented. The median is the place where half of the people are above you and half are below you. To find the median, rank all of the values from lowest to highest (1,3,3,3) and take the middle value. In cases where you have an even number of values, average the two closest to the middle. For this dataset (1,3,3,3) the median is 3 (the average of 3 and 3). Since modern grade distributions look less like a bell and more like a wave — with everyone squished in the mid-3 range and a tail of students performing poorly — the median can be a great way for students to compare themselves to their peers academically.

Mode – this statistic is almost useless. It tells you which value occurs the most. It’s not mode’s fault; we just don’t often care which value shows up the most. I’m sorry, Mode, it’s not you, it’s me. But it’s really you.

Maximum – this is the highest value in a dataset. When I’m at the gym, I often ask the maximum amount of weight a given bar can handle, because if I’m doing bench presses, I don’t want to break the bar.

Minimum – conversely, this is the lowest value in a set of data.

Standard Deviation – this value tells you how much your data varies. It’s useful for larger datasets (i.e., more than just a handful of numbers) because it can tell you how one value compares to the dataset. Standard Deviation is in some ways a gateway into inferential statistics, which I’ll explain in my next post.
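Happily, none of these require doing arithmetic by hand: Python’s standard library statistics module covers all of them. A quick sketch reusing the four-student grade example from the median discussion above:

```python
from statistics import mean, median, mode, stdev

# Three "B" students and one "D" student, as in the median example
grades = [1.0, 3.0, 3.0, 3.0]

print(mean(grades))    # 2.5 -- three of the four students are "above average"
print(median(grades))  # 3.0 -- half at or above, half at or below
print(mode(grades))    # 3.0 -- the most common value (sorry, Mode)
print(stdev(grades))   # 1.0 -- how spread out the grades are
```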

This post explains the more useful descriptive statistics. You may be thinking — but Mark, my survey only covers 25% of my students (and I can’t chase down the rest), does this mean I can only make conclusions about that 25% of students? Is there a way I can, using this information, make conclusions about my entire group (100%)? The answer, an annoying aspect of statistics, is sort of. I’ll dive into this further in my next post!

Podcast Recommendation: Show About Race



I love podcasts. My current favorite is Our National Conversation about Conversations about Race (a.k.a. “Show About Race”) with co-discussants Raquel Cepeda, Baratunde Thurston, and Tanner Colby. They describe their podcast as:

Authors Baratunde Thurston (How To Be Black), Raquel Cepeda (Bird Of Paradise: How I Became Latina) and Tanner Colby (Some Of My Best Friends Are Black) host a lively multiracial, interracial conversation about the ways we can’t talk, don’t talk, would rather not talk, but intermittently, fitfully, embarrassingly do talk about culture, identity, politics, power, and privilege in our pre-post-yet-still-very-racial America. This show is “About Race.”

WHAT AN EXCELLENT PODCAST!

I so enjoy and appreciate this show – this is important stuff (holy understatement, Batman) and their conversations inform and challenge me in the way that all people (but especially white people like me) should be informed and challenged about power, privilege, race, etc. This trio’s thoughtful and frank conversations keep these topics/issues/people’s lived experiences at the forefront of my thinking about collecting data in higher education, assessing learning, and meeting the needs of all students.

Show About Race also posts a response episode during the off-weeks called the B-side, on which they read listener feedback about the previous show as well as reflect on their conversation and clarify/expand on their comments. I like the B-side as much as the regular show because it feels like a rare opportunity to have a discussion, get feedback and time to reflect on it, and then come back and discuss it again (and in a public forum!). It also happens to cater to my enjoyment of talking…about talking (can you tell I’m an extrovert???).

Get to iTunes (or your favorite podcast app) and subscribe to Show About Race. My favorite episodes so far have been #009 about white fragility and #002 about many things and colorism. Cannot wait to hear more!

For the Love of Counting: The Response Rate Rat Race

I’m in the midst of our annual summer experiences survey – my office’s push to understand what students do over the summer and whether it’s meaningful. We know that getting ALL students to respond to our 3-12* question survey would be near impossible, but, as the assessment person on the team, it’s my job to always chase that dream (let’s be real, it’s my obsession to chase that dream!). And at a small institution like where I work, getting a response rate of 100% (~1500 students) is seemingly an attainable goal. But this raises so many questions for me.

A little bit of context about the survey: students do many valuable things over the summer that add meaning to their college experience; the particular subsection of this data that chiefly interests me (as I rep the Career Center) is the number of students who intern.

Common statistical wisdom would tell me that if I am indeed going to report on how many students intern over the summer then I need a certain response rate in order to make an accurate, broader statement about what percentage of Carleton students intern. This stats wisdom is based on a few factors: my population size, my sample size, and the margin of error with which I’m comfortable (I know, I know…ugh, statistic terms. Or maybe some of you are saying YAY! Statistic terms! Don’t let me stereotype you):

Population size = 1500 (all the upperclass students)

Sample size = 1275 (well…this is the goal…which is an 85% response rate…but do I need this # to be accurate and broad?? Hmm…better look at what margin of error I’m comfortable with…)

Margin of error = um…no error??? Baaaahhhh statistics! Sorry readers, I’m not a stats maven. But that’s ok, because SurveyMonkey greatly helped me to determine this:

[Screenshot: SurveyMonkey margin of error calculator]

Ok, so if I want to be SUPER confident (99%), then my goal of 1,275 students (or an 85% response rate) will get me a VERY small margin of error (read: this is good). But it turns out that, looking at this from the angle of sample size, I could have the same small margin of error if only 1,103 students responded (a 74% response rate).

[Screenshot: SurveyMonkey sample size calculator]

So, at this point, I could ask: Why the heck am I busting my butt to get that extra 11% of respondents??? YARG! And statistically, that would be a valid question.
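For anyone who wants to peek inside those calculators, the arithmetic is short. A minimal sketch using the standard margin-of-error formula for a proportion with a finite-population correction; z = 2.576 corresponds to 99% confidence, and p = 0.5 is the most conservative assumption, so the results will only approximately match SurveyMonkey’s (which may round differently):

```python
from math import sqrt

def margin_of_error(n, N, z=2.576, p=0.5):
    """Margin of error for a proportion from a sample of n out of a
    population of N, with the finite-population correction applied."""
    fpc = sqrt((N - n) / (N - 1))
    return z * sqrt(p * (1 - p) / n) * fpc

print(f"{margin_of_error(1275, 1500):.1%}")  # roughly 1.4%
print(f"{margin_of_error(1103, 1500):.1%}")  # roughly 2.0%
```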

But I don’t ask that question. I know I chase the 85% and 100% response rate dreams because I aim to serve ALL students. And even if, statistically, students after the first 1,103 respond consistently, there is likely an outlier…one or a few student stories that tell me something the first 1,103 couldn’t and that shape a better student experience for all.

So to all of you, regardless of whether you have a relatively small population size (like me) or a much larger one (hint, Mark, Michigan Engineering, hint), I say keep up the good work trying to reach and understand the stories of 100% of your students. It may be an impossible dream, but that doesn’t make it any less worthy a pursuit.

*3-12 question survey based on what the student did over the summer - skip logic, woot woot!

Whistling Vivaldi

I’m in the last few days of my 2-week summer vacation and I thought now would be as good a time as any to put together a post. It seems the closer I am to an institution, the more I get thinking about higher ed. Today, I’m at a Bruegger’s Bagels in Northampton, MA — home (or near-home) to a handful of colleges and universities. I’m also plagued by a very agile fly. He likes to fly around my hands. I can’t seem to get him, and fellow patrons are starting to stare.

This summer, we’re reading a book for professional development: Whistling Vivaldi by Claude M. Steele. I won’t summarize the entire book for you — admittedly, I’m only about a third of the way into it. Thus far he’s exploring the impact of stigma on performance. Stereotype threat is the idea that our performance (in anything) is affected by the stereotypes placed upon our identities. The expectations placed upon us by virtue of those identities affect our performance whether we’d like them to or not. Oftentimes, the fear of confirming a stereotype about one of our identities hinders our performance, regardless of whether that stereotype holds merit. We don’t want to give truth to that stereotype.

Consider this situation: In graduate school, we had many conversations in class about identity. As someone with many majority identities (e.g., white, heterosexual, male, etc.), I constantly second-guessed my contributions to class conversations — afraid that everything I said would be an opportunity for a classmate to think “oh, he just doesn’t get it, he’s [straight, white, male, etc.].” You can bet this fear kept me from fully engaging in the class conversations. I didn’t want to be seen as out of touch — or worse, unable to understand.

Stereotypes blur the way we understand the world. In the book, Steele points out the difference between the “observer’s perspective” and the “actor’s perspective.” As we’re often in the observer’s perspective, we’re only able to focus on what we can see or notice. This perspective tends to be a view from the clouds and causes us to miss the context in which the actor (i.e., the person studied) is making decisions.

To illustrate his point, Steele references the 1978 Seattle SuperSonics basketball team. The team started the season losing at an alarming rate. Local sports analysts were able to break down, in detail, all of the reasons the team struggled. Shortly after the beginning of the season, the team hired a new coach. From there, the team started to win — and would later reach the NBA finals — despite having exactly the same players with the same skill sets ridiculed in the first few weeks of the season. When viewed through a different lens, characteristics originally seen as contributing to their struggles were now the reasons for their success.

It’s almost as though our expectations highlight the things we expect to see, and hide those we don’t expect.