Statistics! Part 1

Last week, Abby opened the door to one of my favorite topics — statistics. When used properly, statistics can add a layer of justification to our assessment results by further explaining the numbers in our datasets. I thought I’d take some time in my next few posts to further explain statistics and its (potential) use in our assessments.

Descriptive vs Inferential Statistics
The vast majority of assessment results rely on descriptive statistics. Descriptive statistics merely describe what’s going on in a given dataset. Mean, median, mode, maximum, minimum, and standard deviation are all descriptive statistics.

Mean – also known as the “average,” this term is used to tell people they are dreadfully un-special! Mathematically, it’s the sum of the values divided by the number of values. That is, if I tell 3 friends a set of 10 puns and those three friends laugh at 4, 5, and 6 of the puns, the mean is 5 (4+5+6 divided by 3). This may be the most used descriptive statistic!

Median – I have a pet peeve. One of our summer orientation presenters likes to say “only half of you can be above average!” This tears me up inside because it’s not true. If we have 4 students, 3 of whom have a “B” grade (3.0) and one a “D” grade (1.0), then the average is a 2.5. All three “B” students are above average and that one “D” student is making everyone else look good. This is why the median was invented. The median is the place where half of the people are above you and half are below you. To find the median, rank all of the values from lowest to highest (1,3,3,3) and take the middle value. In cases where you have an even number of values, average the two closest to the middle. For this dataset (1,3,3,3) the median is 3 (the average of 3 and 3). Since modern grade distributions look less like a bell and more like a wave — with everyone squished in the mid-3 range and a tail of students performing poorly — the median can be a great way for students to compare themselves to their peers academically.

Mode – this statistic is almost useless. It tells you which value occurs the most. It’s not mode’s fault, we just don’t often care which value shows up the most. I’m sorry, Mode, it’s not you, it’s me. But it’s really you.

Maximum – this is the highest value in a dataset. When I’m at the gym, I often ask about the maximum amount of weight a given bar can handle, because if I’m doing bench presses, I don’t want to break the bar.

Minimum – conversely, this is the lowest value in a set of data.

Standard Deviation – this value tells you how much your data varies. It’s useful for larger datasets (i.e., more than just a handful of numbers) because it can tell you how one value compares to the dataset. Standard Deviation is in some ways a gateway into inferential statistics, which I’ll explain in my next post.
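For the spreadsheet-averse, every statistic above can be computed in a few lines. Here’s a quick sketch in Python’s standard library using the examples from this post (the pun counts and the four-student grade set):

```python
import statistics

laughs = [4, 5, 6]              # puns each of my three friends laughed at
grades = [1.0, 3.0, 3.0, 3.0]   # one "D" student and three "B" students

print(statistics.mean(laughs))    # 5 — the sum (15) divided by the count (3)
print(statistics.mean(grades))    # 2.5 — three of the four students beat the average
print(statistics.median(grades))  # 3.0 — half the values at or below, half at or above
print(statistics.mode(grades))    # 3.0 — the most common value (sorry again, Mode)
print(min(grades), max(grades))   # 1.0 3.0 — the lowest and highest values
print(statistics.stdev(grades))   # 1.0 — how much the grades vary around the mean
```

Note that `statistics.stdev` computes the sample standard deviation; use `statistics.pstdev` if your dataset is the whole population rather than a sample of it.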

This post explains the more useful descriptive statistics. You may be thinking — but Mark, my survey only covers 25% of my students (and I can’t chase down the rest), does this mean I can only make conclusions about that 25% of students? Is there a way I can, using this information, make conclusions about my entire group (100%)? The answer, an annoying aspect of statistics, is sort of. I’ll dive into this further in my next post!

Podcast Recommendation: Show About Race



I love podcasts. My current favorite is Our National Conversation about Conversations about Race (a.k.a. “Show About Race”) with co-discussants Raquel Cepeda, Baratunde Thurston, and Tanner Colby. They describe their podcast as:

Authors Baratunde Thurston (How To Be Black), Raquel Cepeda (Bird Of Paradise: How I Became Latina) and Tanner Colby (Some Of My Best Friends Are Black) host a lively multiracial, interracial conversation about the ways we can’t talk, don’t talk, would rather not talk, but intermittently, fitfully, embarrassingly do talk about culture, identity, politics, power, and privilege in our pre-post-yet-still-very-racial America. This show is “About Race.”

WHAT AN EXCELLENT PODCAST!

I so enjoy and appreciate this show – this is important stuff (holy understatement, Batman) and their conversations inform and challenge me in the way all people (but especially me, as a white person) should be informed and challenged about power, privilege, race, etc. This trio’s thoughtful and frank conversations keep these topics/issues/people’s lived experiences at the forefront of my thinking about collecting data in higher education, assessing learning, and meeting the needs of all students.

Show About Race also posts a response episode during the off-weeks called the B-side, on which they read listener feedback about the previous show as well as reflect on their conversation and clarify/expand on their comments. I like the B-side as much as the regular show because it feels like a rare opportunity to have a discussion, get feedback and time to reflect on it, and then come back and discuss it again (and in a public forum!). It also happens to cater to my enjoyment of talking…about talking (can you tell I’m an extrovert???).

Get to iTunes (or your favorite podcast app) and subscribe to Show About Race. My favorite episodes so far have been #009 about white fragility and #002 about many things and colorism. Cannot wait to hear more!

For the Love of Counting: The Response Rate Rat Race

I’m in the midst of our annual summer experiences survey – my office’s push to understand what students do over the summer, and whether it’s meaningful. We know that getting ALL students to respond to our 3-12* question survey would be near impossible, but, as the assessment person on the team, it’s my job to always chase that dream (let’s be real, it’s my obsession to chase that dream!). And at a small institution like where I work, getting a response rate of 100% (~1500 students) is seemingly an attainable goal. But this raises so many questions for me.

A little bit of context about the survey. Students do many valuable things over the summer that add meaning to their college experience; the particular subsection of this data that chiefly interests me (as I rep the Career Center) is the number of students who intern.

Common statistical wisdom would tell me that if I am indeed going to report on how many students intern over the summer then I need a certain response rate in order to make an accurate, broader statement about what percentage of Carleton students intern. This stats wisdom is based on a few factors: my population size, my sample size, and the margin of error with which I’m comfortable (I know, I know…ugh, statistics terms. Or maybe some of you are saying YAY! Statistics terms! Don’t let me stereotype you):

Population size = 1500 (all the upperclass students)

Sample size = 1275 (well…this is the goal…which is an 85% response rate…but do I need this # to be accurate and broad?? Hmm…better look at what margin of error I’m comfortable with…)

Margin of error = um…no error??? Baaaahhhh statistics! Sorry readers, I’m not a stats maven. But that’s ok, because SurveyMonkey greatly helped me to determine this.

Margin of Error

Ok, so if I want to be SUPER confident (99%) then my goal of 1,275 students (or an 85% response rate) will get me a VERY small margin of error (read: this is good). But, turns out if I look at this from the angle of sample size, I could have the same small margin of error if I only had 1,103 students respond (74% response rate).

Sample Size

So, at this point, I could ask: Why the heck am I busting my butt to get those extra 11% of respondents??? YARG! And statistically, that would be a valid question.
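For the curious (and remember, I’m no stats maven), the number SurveyMonkey spits out can be approximated with the standard margin-of-error formula for a proportion, plus a finite population correction for small populations like mine. A sketch in Python, assuming the conservative proportion p = 0.5 and a 99% confidence z-score of 2.576:

```python
import math

def margin_of_error(population, sample, z=2.576, p=0.5):
    """Margin of error for a proportion estimated from a finite population.

    z: z-score for the confidence level (2.576 ≈ 99% confidence).
    p: assumed proportion; 0.5 gives the widest (safest) margin.
    """
    standard_error = math.sqrt(p * (1 - p) / sample)
    # Finite population correction: surveying most of a small population
    # shrinks the margin of error considerably.
    fpc = math.sqrt((population - sample) / (population - 1))
    return z * standard_error * fpc

# My numbers: 1,500 upperclass students, two candidate response goals
for n in (1275, 1103):
    print(f"n = {n}: margin of error = {margin_of_error(1500, n):.1%}")
```

Both response goals land in the 1–2% margin-of-error range, which is exactly why those last 172 respondents buy so little, statistically speaking.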

But I don’t ask that question. I know I chase the 85% and 100% response rate dream because I aim to serve ALL students. And even if, statistically, all the students after 1,103 respond consistently, there is likely an outlier…one or a few student stories that tell me something the first 1,103 couldn’t, stories that shape a better student experience for all.

So to all of you, regardless of whether you have a relatively small population size (like me) or a much larger one (hint, Mark, Michigan Engineering, hint), I say keep up the good work trying to reach and understand the stories of 100% of your students. It may be an impossible dream, but that doesn’t make it any less worthy a pursuit.

*3-12 question survey based on what the student did over the summer - skip logic, woot woot!

Whistling Vivaldi

I’m in the last few days of my 2-week summer vacation and I thought now would be as good a time as any to put together a post. It seems the closer I am to an institution, the more I get thinking about higher ed. Today, I’m at a Bruegger’s Bagels in Northampton, MA — home (or near-home) to a handful of colleges and universities. I’m also plagued by a very agile fly. He likes to fly around my hands. I can’t seem to get him, and fellow patrons are starting to stare.

This summer, we’re reading a book for professional development: Whistling Vivaldi by Claude M. Steele. I won’t summarize the entire book for you — admittedly, I’m only about a third of the way into it. Thus far he’s exploring the impact of stigma on performance. Stereotype threat is the idea that our performance (in anything) is impacted by the stereotypes placed upon our identities. The expectations placed upon us by virtue of those identities affect our performance whether we’d like them to or not. Oftentimes, the fear of confirming a stereotype about one of our identities hinders our performance in that domain, regardless of whether that stereotype holds merit. We don’t want to give truth to that stereotype.

Consider this situation: In graduate school, we had many conversations in class about identity. As someone with many majority identities (e.g., white, heterosexual, male, etc.), I constantly second-guessed my contributions to class conversations — afraid that everything I said would be an opportunity for a classmate to think “oh, he just doesn’t get it, he’s [straight, white, male, etc.].” You can bet this fear kept me from fully engaging in the class conversations. I didn’t want to be seen as out of touch — or worse, unable to understand.

Stereotypes blur the way we understand the world. In the book, Steele points out the difference between the “observer’s perspective” and the “actor’s perspective.” As we’re often in the observer’s perspective, we’re only able to focus on what we can see or notice. This perspective tends to be a view from the clouds and causes us to miss the context in which the actor (i.e., the person studied) is making decisions.

To illustrate his point, Steele references the 1978 Seattle Supersonics basketball team. The team started out the season losing at an alarming rate. Local sports analysts were able to break down, in detail, all of the reasons the team struggled. Shortly after the beginning of the season, the team hired a new coach. From there, the team started to win — and would later reach the NBA finals — despite having exactly the same players with the same skill sets ridiculed in the first few weeks of the season. When viewed through a different lens, characteristics originally seen as contributing to their struggles were now the reasons for their success.

It’s almost as though our expectations highlight the things we expect to see, and hide those we don’t expect.

Where do I begin?


It starts when you notice the local construction projects winding down. Then you’re cut off by a Ford Escape with New Jersey plates and a back full of clothes, books, a pink rug, and one of those chairs made solely of bungee cords. That’s right, it’s back to school season.

Abby and I met for a pre-season re-vamp of Oh No, and you can look forward to two posts a week this year — Mondays and Thursdays. We’re trimming a bit because those Friday posts came upon us awful fast and we want to keep this thing valuable. So without further ado…

I met with a colleague from across campus this week. She works for a newer (and smaller) office just starting to wrap its mind around how to capture its value to the students it serves. The office focuses on developing an entrepreneurial mindset in our students and supporting student ideas from conceptualization to implementation. To further complicate its assessment process, the office is not yet on permanent funding and is thus under pressure to justify its existence.

I’ve already covered starting over in my Zero to Assessment post; however, this conversation yielded a few new questions I wanted to chew on a bit.

What if I don’t know what students are learning from this experience? In the old, dusty textbooks of assessment you’ll find a flow chart looking something like this…


oh, well hello smart art…

This flow chart is helpful if you have clear and measurable learning outcomes, but leaves out instructions for when your outcomes are a bit cloudy. My colleague proposed measuring this through a series of qualitative questions — which, despite my aversion to the labor intensive nature of properly analyzing qualitative questions, seemed appropriate given the situation. And you know what, old dusty textbook I made up to illustrate my point, if an office centered around innovation can’t build a plane while they’re flying it, can any office? That is, if we can’t get an initiative started until we have every detail (e.g., assessment) ironed out, we’ll be missing out on a good number of valuable initiatives.

While I’m complaining about the rigidity of fictitious textbooks, it’s worth acknowledging that neither she nor I was all too sure of how she would analyze the data she’s collecting. It would be great if she had the labor to code each response, but that doesn’t seem likely. I think this is okay. It takes a few cycles to get an assessment process ironed out. Even by simply reading through the responses, she’ll get a feel for what her students are learning and how to better support them.

How do I get students to reply to my surveys? If I ever figure this out, I’m leaving the profession of higher education to hang out with the inventor of Post-it notes on an island covered in boats, flat screen TVs, and Tesla convertibles. And, I guess, charging stations for the convertibles.

Very few people know how to do this well; however, I’ve come across a few strategies which seem to be working.

-Make it personal. More than half of my job is forming relationships with students. Surveys are one of the times I leverage those relationships. I’ll often send the survey link in an e-mail noting (very briefly) the importance of the data collected in this survey, letting them know that every response is a favor to me (for all of the time I spend e-mailing them with answers to questions I’ve already answered in previous e-mails, this is the least they could do). If you’re sending a survey out to thousands, you can expect a very low return rate.

-Get ‘em while they’re captive. Do you have advising meetings with students at the start of these programs? Is there an application to get into your program? Can you (easily) tie in the survey as a requirement for completing the program? I don’t mean to hint that surveys are the only means of collecting assessment data — but they’re direct, effective, and tend to be less labor intensive than other means.

Countdown to College football: 3 DAYS!

Stay the Course: Reminders for When Assessment Gets Messy

My friends for the assessment revolution! My office is gearing up to take the next step in our learning outcomes assessment efforts. I’m VERY excited! It’s going to be fun, intellectually and professionally fulfilling, and (most importantly and hopefully) provide meaningful insight into the student experience. But in addition to excitement, I am also a bit nervous, because, as you’ve likely noticed, measuring learning is messy – which is the largest part of its difficulty, but also its beauty. In my research about student learning and assessment over the past few years I’ve come to learn that it’s not just me who’s feeling this way:

In watching videos like the above and reading anything I can get my hands on, I’m hearing a few common themes (some old, some new) that I’m keeping in mind during this big year for our assessment efforts in the Career Center:

  1. Assess learning not just once, but at multiple different points and from different parts of the student experience. (read: Learning is happening all over campus, thus, assessing learning all over campus is not just a good idea, but needed.)
  2. Give students multiple opportunities to practice their learning in high-touch, intentional, reflection-centric ways. (read: It’s going to take a lot of time, there’s no quick fix, so settle in for the long haul and love the process.)
  3. Assessment tells the story of student learning, but let the student be the narrator. (read: Ask students to narrate their learning and they will tell you! Their story IS your assessment data. Now use that data to tell the larger story of student learning at large.)
  4. Set up assessment to do double duty for you – it can be a learning tool in and of itself, in addition to a data collection tool. 

    “…a really interesting kind of analytics should reveal to the learner even more possibilities for their own connected learning. The analytics shouldn’t simply be a kind of a diagnosis of what’s happening now but analytics at their best can be a doorway that suggests what else is possible.” -Gardner Campbell, Vice Provost for Learning Innovation and Student Success at Virginia Commonwealth University

  5. Follow best practices in assessment while also breaking the mold, because learning’s blessed messiness means it’ll always need more than the gold standard. (read: Follow and break the “rules” of assessment at the same time – simple, right????)

It might be a messy year in assessment, but that’s ok, because it’s a worthwhile pursuit. And as my supervisor reminded me when I was wigging out about it recently: remember, nothing ventured, nothing gained.

So commit to the adventure and just do it.

Top Tier vs Lower Tier Engineering Programs

Greetings everyone! In the past few weeks, I purchased a house. The buying process took up much of my free time (though didn’t seem to diminish my blog-ambition). Anyway, with summer in the home-stretch it’s time to get back into it. Abby and I will have our regular weekly posts returning soon. In the meantime, my brain is in the depths of “big thinking” mode. As a former engineer, and an advisor of first-year engineering students, I often find myself thinking about how we educate them.

Modern society faces complex problems. No longer is it enough for engineers to create a widget, present it to the world, and say “go forth, use this widget to make your life easier!” Beyond the question of making things faster or more-efficient, we want to know if a widget intended for a developing nation can be manufactured with local materials at low cost. Will the locals even want to use the widget?

In addition, we’re increasingly reliant on our engineers’ moral decision making. So you want to build a car that drives itself? How much will one of these cost? Can it share the road with human-driven automobiles? If the car finds itself in a certain-impact situation, should it rear-end the car in front of it or veer off onto the sidewalk? Our engineers should have the technical toolbelt needed to design the widget as well as the development required to decide how to use those tools.

This takes me to the state of today’s engineering education. Each engineering degree has a set of technical content expected of a person with that degree. The technical content takes up most of what can be covered in four years. This leaves little room for personal/moral/ethical development in the curricular portion of a student’s experience, and has us (student affairs folks) urging students to get involved outside of the classroom.

Students arrive at college with varying levels of ability and background. The most desirable students tend to have credit from Advanced Placement courses or college courses they took while in high school. Assuming they plan to stay for four years, completing college requirements while in high school leaves students with more “wiggle room” in their schedule to pursue interests in addition to the bachelor’s degree. This may mean a minor or even taking fewer credits each term to allow for more extra-curricular time.

But what about the students who do not arrive with AP credit — the students who have the grit and determination, but have not yet studied calculus and may not have stellar standardized test scores? These students are often overlooked by the more competitive institutions. When you’re looking at thousands of applications, you need a means of sorting through them quickly.

We’re left with two tiers of institutions. Students accepted into the lower tier of institutions are not coming in with much college-level coursework completed. Since those students will be expected to have a certain toolbelt upon graduation, those programs are forced to put their resources into the classroom experience. Students accepted into the upper tier of institutions – already having college credit – have the room to take a minor in philosophy or spend time working on research in a professor’s lab.

What’s the difference? Top-tier students have the time for extra-curriculars which force them to juggle the more complex questions and often lead to the personal/moral/ethical development required of today’s engineers. Lower-tier students are forced to put their time into learning technical content. They’re acquiring the tools but are not challenged to think about how to use them.

Both of these groups finish with bachelor’s degrees, the signal to employers that they’re ready for the workforce. What happens next? My gut tells me that top-tier students, equipped with experience in how to use the tools, are moving into leadership positions at a higher rate than their lower tier colleagues.

The questions I keep coming back to: Is the personal/moral/ethical development part of what is (or should be) a part of the bachelor’s degree? With the high cost of higher education, are we limiting the potential of some students by not allowing them the opportunity for that development? Does our current educational system, which nudges affluent students toward leadership, exacerbate the gap between the upper and lower socioeconomic classes?

Assessment on the Road: Boise State

Images of Idaho

My summer of travel is sadly coming to an end. I just got back from one of my last destinations: Idaho! (Boise, to be exact) It was my inaugural trip and I’m happy to report back what so many already know: Idaho is beautiful. (Thanks Wear Boise for the great beard image!)

Being the lover of colleges and campuses that I am, I had to visit Boise State University while I was there. Judging by all of the extra signage around campus, I was clearly there on a summer orientation day (which I love because I love orientation).

Boise Career Center welcome

Once again, the Career Center of this new-to-me campus grabbed my attention. I REALLY loved their graphic representation of their career learning goals. What a great way to be transparent about and engage students in the goals the Career Center has for them. I think that is a pivotal step toward students actually learning and achieving those goals!

CC make it count detail

Another data visualization from Boise State that I loved was through their admission office. Here’s what I love about it:

It’s simple. Easy graphics, consistent color palette. Clean.

BSU data brochure5

It has numbers AND text. They bring life and strength to each other.

BSU data brochure

There are lots of different kinds of data. I think more and more the public wants LOTS of data all at once so that they can quickly skim and find what is most meaningful to them.

BSU dad brochure2

It’s a nice size. Folds up as a small brochure (6″x3″) and folds out as a small poster (18″x12″).

BSU data brochure3

It’s a nice paper weight. I know, this sounds weird, but if you’re going to be printing nice graphics, you want to have nice paper on which to put them.

Thanks for the great visit Boise State U! Until next time!

Assessment on the Go!

Summer is busy! Between attending conferences, catching up on planning items, and (hopefully) a little R&R, people are here, there, and everywhere. What’s a data head to do?

As you know, my summer goals include all things data visualizations. In my search for learning and inspiration, I stumbled across the podcast Data Stories. With my 32-minute commute to work every day, listening to hosts Moritz and Enrico on Data Stories is perfect. I first got hooked on an episode with Miriah Meyer about exploratory data viz tools. Imagine talking about data visualizations, data tools, and methods with your friends – that’s Data Stories. Was Miriah’s research waaaaay beyond me?? Yes. BUT it was so enjoyable to listen to their banter AND it gave me lots of ideas, so I was drawn back to DS for more.


Here’s why I like it:

  • Data + friendship = my favorite
  • Cataloged points in each episode, so you can pick and choose the parts of an individual episode you want to listen to – just visit their website. Some of the episode topics are over my head, so this is a useful tool!
  • Learn, get inspired, and brainstorm ideas all while on your way to work
  • Resources galore – they always provide related and discussed links from each episode on their website – very handy!
  • You can subscribe (read: automatic updates! no thinking involved!)

Go ahead, peruse the Data Stories archives, get onto your favorite podcast app, and binge! Next in my queue are episodes about data art and data journalism. The perfect complement to a summer on the go.

See you next week!

The Dog Days of Summer Orientation

Somewhere around the time I mentioned our college’s honor code, I looked up at the students sitting in a “U” around the room. Several of them sat with their hands to their cheeks supporting their heads. One was leaning so far back he was nearly sleeping. Our Peer Advisors (undergraduate student staff) were trying their best to stay awake. I thought to myself “self, this is painful.”

I’m a fairly energetic presenter. One of my favorite crowd tricks is to ask them a question: By a round of applause, how many of you are excited to be here? The initial response varies, but it doesn’t matter. I then place both arms out, palms up, waist height; raise my eyebrows and slowly lift my hands. Even the comatose crowds tend to get a respectable clap going. If it’s a good crowd, I’ll even lower one hand while I keep another one up — about half the crowds make it that far.

I’ll then transition into one of my favorite energizers — I avoid the term “ice breakers” because of their inherent negative connotation. I explain the rules of the rock paper scissors tournament. You know the one, where if you win, you accumulate the person you beat and all of their fans (the people they beat) as your fans. By the end of the energizer you have two groups raucously cheering on their respective representative. The best energizers are the ones that are easy to understand, and hard to do while looking cool.

Then we break into smaller groups. Everyone leaves cheery and riled up. We get to a classroom, sit down, I turn on the projector, *WHAM — I’ve lost them.

*That was someone’s sleepy head bouncing off the desk

Sometimes I sympathize with Bill Murray in Groundhog Day. Only the difference is that he gets to see the same people every day. The people I meet are there for one day only before I get a completely new crowd. I try the same jokes — every day. Sometimes they work. When they don’t, I’ll say something like “well… it’s pretty clear I need to work on my jokes… or maybe my audiences!” That one works half the time.

For inspiration, I weave into my presentation little tidbits of my story. That is, the experiences I’m comfortable sharing with my advisees about my undergraduate years. You know, to give them something to look up to (yes, that was simultaneously sarcastic and completely serious). I’ll even get our Peer Advisors involved by having them discuss their experiences.

Later on, I meet with each of my advisees one on one. We chat for a few minutes about their interests and come up with a set of classes. This is the time when they’re most alive. It seems a fair number of them are too cautious to fully engage when we’re with the group of nine.

When we’re preparing for orientation, we spend SO MUCH TIME discussing the shape of orientation. How much time do we devote to energizers and ice breakers, and how much to presentations? How much to organized time, and how much to free-flowing conversations?

And the worst part is, when you ask them later on why they joined a particular extra-curricular, or how they knew about our tutoring program, some of them will say “I remembered it from orientation.” It’s as though they all get together and agree on which parts they will each individually remember. Come on guys, we can do this. Frank, you remember the first slide of the presentation. Tina, you pretend to be asleep for the first half, then at the very end ask a question that clearly indicates you were paying attention the whole time. Thomas, you pretend to sleep for the whole thing — only, actually be asleep.

When I make up names, I almost always go with Frank and Tina. I’m not sure where Thomas came from.

It’s orientation season, folks. No matter how you lay out the time, just about everything you do will be well-received by a portion of the group, but not everyone. Some students are worried about making friends. Others about whether they can handle college. Some wonder if they’ve picked the right one. While I don’t think there’s a perfect way to do orientation, it seems to me that orientation should be a time of meeting people (both students and staff), thinking deeply about the college experience, and learning just enough to make do for the first semester — they’ll pick up the rest.